Commit f784374 (merge)

2 parents: c717c15 + 90263e7

15 files changed: +599 −473 lines

README.md

Lines changed: 8 additions & 4 deletions
@@ -25,15 +25,19 @@
 ## Introduction
 
 This is a set of tutorials for the CMS Machine Learning Hands-on Advanced Tutorial Session (HATS).
-They are intended to show you how to build machine learning models in python, using `Keras`, `TensorFlow`, and `PyTorch`, and use them in your `ROOT`-based analyses.
-We will build event-level classifiers for differentiating VBF Higgs and standard model background 4 muon events and jet-level classifiers for differentiating boosted W boson jets from QCD jets using dense and convolutional neural networks.
+They are intended to show you how to build machine learning models in python, using `xgboost`, `Keras`, `TensorFlow`, and `PyTorch`, and use them in your `ROOT`-based analyses.
+We will build event-level classifiers for differentiating VBF Higgs and standard model background 4 muon events and jet-level classifiers for differentiating boosted W boson jets from QCD jets using BDTs, and dense and convolutional neural networks.
 We will also explore more advanced models such as graph neural networks (GNNs), variational autoencoders (VAEs), and generative adversarial networks (GANs) on simple datasets.
 
 ## Setup
 
-### Vanderbilt Jupyterhub (Recommended!)
+### Purdue Analysis Facility (New and recommended!)
 
-The recommended method for running the tutorials live is the Vanderbilt Jupyterhub, follow the instructions [here](https://fnallpc.github.io/machine-learning-hats/setup/vanderbilt-jupyterhub/vanderbilt.html).
+The recommended method for running the tutorials live is the Purdue AF, follow the instructions [here](https://fnallpc.github.io/machine-learning-hats/setup/purdue/purdue.html).
+
+### Vanderbilt Jupyterhub
+
+Another option is the Vanderbilt Jupyterhub, instructions [here](https://fnallpc.github.io/machine-learning-hats/setup/vanderbilt-jupyterhub/vanderbilt.html).
 
 ### FNAL LPC
 

machine-learning-hats/_toc.yml

Lines changed: 10 additions & 8 deletions
@@ -6,6 +6,7 @@ root: index
 parts:
   - caption: Setup
     chapters:
+      - file: setup/purdue/purdue
       - file: setup/vanderbilt-jupyterhub/vanderbilt
       - file: setup/lpc
       - file: setup-libraries
@@ -14,12 +15,13 @@ parts:
     maxdepth: 2
     chapters:
       - file: notebooks/1-datasets-uproot
-      - file: notebooks/2-dense
+      - file: notebooks/2-boosted-decision-tree
+      - file: notebooks/3-dense
         sections:
-          - file: notebooks/2.1-dense-keras
-          - file: notebooks/2.2-dense-pytorch
-          - file: notebooks/2.3-dense-bayesian-optimization
-      - file: notebooks/3-conv2d
-      - file: notebooks/4-gnn-cora
-      - file: notebooks/5-vae-mnist
-      - file: notebooks/6-gan-mnist
+          - file: notebooks/3.1-dense-keras
+          - file: notebooks/3.2-dense-pytorch
+          - file: notebooks/3.3-dense-bayesian-optimization
+      - file: notebooks/4-conv2d
+      - file: notebooks/5-gnn-cora
+      - file: notebooks/6-vae-mnist
+      - file: notebooks/7-gan-mnist

machine-learning-hats/notebooks/1-datasets-uproot.ipynb

Lines changed: 15 additions & 453 deletions
Large diffs are not rendered by default.

machine-learning-hats/notebooks/2-boosted-decision-tree.ipynb

Lines changed: 556 additions & 0 deletions
Large diffs are not rendered by default.
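The new boosted-decision-tree notebook is not rendered in this diff. For orientation only, here is a minimal sketch of the kind of `xgboost` BDT classifier the updated README now advertises; the dataset, feature names, and hyperparameters below are illustrative assumptions, not taken from the notebook.

```python
# Illustrative only: a tiny xgboost BDT binary classifier on synthetic data.
# Dataset, features, and hyperparameters are assumptions, not the notebook's content.
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(42)
n = 10_000
X = rng.normal(size=(n, 4))                       # four toy "kinematic" features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Simple train/test split without extra dependencies.
split = int(0.7 * n)
X_train, y_train = X[:split], y[:split]
X_test, y_test = X[split:], y[split:]

model = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                          eval_metric="logloss")
model.fit(X_train, y_train)

acc = (model.predict(X_test) == y_test).mean()
print(f"Test accuracy: {acc:.3f}")
```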

machine-learning-hats/notebooks/3-conv2d.ipynb renamed to machine-learning-hats/notebooks/4-conv2d.ipynb

Lines changed: 5 additions & 6 deletions
@@ -55,14 +55,13 @@
     "### Convolution Operation\n",
     "Two-dimensional convolutional layer for image height $H$, width $W$, number of input channels $C$, number of output kernels (filters) $N$, and kernel height $J$ and width $K$ is given by:\n",
     "\n",
-    "\\begin{align}\n",
-    "\\label{convLayer}\n",
-    "\\boldsymbol{Y}[v,u,n] &= \\boldsymbol{\\beta}[n] + \\sum_{c=1}^{C} \\sum_{j=1}^{J} \\sum_{k=1}^{K} \\boldsymbol{X}[v+j,u+k,c]\\, \\boldsymbol{W}[j,k,c,n]\\,,\n",
-    "\\end{align}\n",
+    "$$\n",
+    "\\boldsymbol{Y}[v,u,n] = \\boldsymbol{\\beta}[n] + \\sum_{c=1}^{C} \\sum_{j=1}^{J} \\sum_{k=1}^{K} \\boldsymbol{X}[v+j,u+k,c]\\, \\boldsymbol{W}[j,k,c,n]\\,,\n",
+    "$$\n",
     "\n",
     "where $Y$ is the output tensor of size $V \\times U \\times N$, $W$ is the weight tensor of size $J \\times K \\times C \\times N$ and $\\beta$ is the bias vector of length $N$.\n",
     "\n",
-    "The example below has $C=1$ input channel and $N=1$ ($J\\times K=3\\times 3$) kernel [credit](https://towardsdatascience.com/types-of-convolution-kernels-simplified-f040cb307c37):\n",
+    "The example below has $C=1$ input channel and $N=1$ ($J\\times K=3\\times 3$) kernel ([credit](https://towardsdatascience.com/types-of-convolution-kernels-simplified-f040cb307c37)):\n",
     "\n",
     "![convolution](https://miro.medium.com/v2/resize:fit:780/1*Eai425FYQQSNOaahTXqtgg.gif)"
   ]
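The convolution equation reformatted in the hunk above can be checked with a small NumPy sketch (not part of the notebook; all shapes and names below are illustrative). It evaluates the sum directly, with 0-based indices and no padding or stride.

```python
# Direct (loop-based) evaluation of the 2D convolutional layer equation above.
# Shapes are illustrative; "valid" convolution, 0-based indexing, no stride.
import numpy as np

H, W_, C = 6, 6, 1            # input height, width, channels
J, K, N = 3, 3, 1             # kernel height, width, number of filters
V, U = H - J + 1, W_ - K + 1  # output height and width

rng = np.random.default_rng(0)
X = rng.normal(size=(H, W_, C))       # input image X
Wgt = rng.normal(size=(J, K, C, N))   # weight tensor W
beta = rng.normal(size=(N,))          # bias vector beta

Y = np.zeros((V, U, N))
for v in range(V):
    for u in range(U):
        for n in range(N):
            # Y[v,u,n] = beta[n] + sum_{c,j,k} X[v+j, u+k, c] * W[j, k, c, n]
            Y[v, u, n] = beta[n] + np.sum(X[v:v + J, u:u + K, :] * Wgt[:, :, :, n])

print(Y.shape)  # (4, 4, 1)
```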
@@ -84,7 +83,7 @@
    "source": [
     "### Pooling\n",
     "\n",
-    "We also add pooling layers to reduce the image size between layers. For example, max pooling: (also from [here]([page](https://cs231n.github.io/convolutional-networks/))\n",
+    "We also add pooling layers to reduce the image size between layers. For example, max pooling (also from [here]([page](https://cs231n.github.io/convolutional-networks/))):\n",
     "\n",
     "![maxpool](https://cs231n.github.io/assets/cnn/maxpool.jpeg)"
   ]
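The max-pooling step mentioned in the hunk above can likewise be sketched in a few lines of NumPy (not from the notebook; the 4×4 toy feature map is an assumption). A 2×2 max pool with stride 2 halves each spatial dimension by keeping only the largest value in every 2×2 block.

```python
# 2x2 max pooling with stride 2 on a single-channel feature map, using a
# reshape trick; assumes even height and width, values are illustrative.
import numpy as np

x = np.arange(16, dtype=float).reshape(4, 4)      # toy 4x4 feature map
pooled = x.reshape(2, 2, 2, 2).max(axis=(1, 3))   # max over each 2x2 block -> 2x2

print(x)
print(pooled)
```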

machine-learning-hats/notebooks/4-gnn-cora.ipynb renamed to machine-learning-hats/notebooks/5-gnn-cora.ipynb

Lines changed: 3 additions & 1 deletion
Large diffs are not rendered by default.
Two image files changed (47.6 KB and 31.9 KB); previews not rendered.

requirements.txt

Lines changed: 2 additions & 1 deletion
@@ -10,4 +10,5 @@ pandas
 torch
 ipykernel
 tqdm
-jupyter
+jupyter
+xgboost
