The **Neptune-PyTorch** integration simplifies experiment tracking by automatically logging PyTorch model internals, including activations, gradients, and parameters, to Neptune.
**Logged data in Neptune:**

- **Model architecture**: Visual diagram and summary of the neural network
- **Training metrics**: Loss curves and epoch progress
- **Layer activations**: Mean, std, norm, and histograms for each layer
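The per-layer statistics above can be reproduced with plain PyTorch forward hooks. The sketch below is illustrative only, not the integration's implementation; the `stats` helper, the `activations` dict, and the toy model are assumptions for the example:

```python
import torch
import torch.nn as nn

def stats(t):
    # Summary statistics similar to those logged for each layer.
    return {"mean": t.mean().item(), "std": t.std().item(), "norm": t.norm().item()}

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        # Detach so logging never interferes with autograd.
        activations[name] = stats(output.detach())
    return hook

for name, module in model.named_modules():
    if isinstance(module, nn.Linear):  # track only Linear layers
        module.register_forward_hook(make_hook(name))

_ = model(torch.randn(16, 4))
print(sorted(activations))  # → ['0', '2']
```

In the real integration these values would be sent to Neptune each step rather than kept in a local dict.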
**Features demonstrated:**

- **Layer filtering**: Only track Conv2d and Linear layers (reduces overhead)
- **Custom statistics**: Use mean, std, and hist instead of all 8 statistics
- **Phase-specific tracking**: Different tracking strategies for train/validation
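Phase-specific tracking boils down to logging the same statistic under a phase-prefixed path. A minimal sketch of that idea, where the `log_stat` helper and the `logs` dict are hypothetical stand-ins for Neptune logging calls:

```python
# Collected metrics, keyed by their full namespace path.
logs = {}

def log_stat(phase, layer, stat, value):
    # The same layer statistic lands under a different prefix per phase.
    logs[f"model/internals/{phase}/activations/{layer}/{stat}"] = value

log_stat("train", "conv/1", "mean", 0.12)
log_stat("validation", "conv/1", "mean", 0.09)

print(sorted(logs))
# → ['model/internals/train/activations/conv/1/mean',
#    'model/internals/validation/activations/conv/1/mean']
```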
## Features

### Model monitoring

- **Layer activations**: Track activation patterns across all layers with 8 different statistics
- **Gradient analysis**: Monitor gradient flow and detect vanishing/exploding gradients
- **Parameter tracking**: Log parameter statistics and distributions for model analysis
- **Custom statistics**: Choose from mean, std, norm, min, max, var, abs_mean, and hist

### Configuration options

- **Layer filtering**: Track only specific layer types (Conv2d, Linear, etc.)
- **Phase organization**: Separate tracking for training/validation phases with custom prefixes
- **Custom namespaces**: Organize experiments with custom folder structures

### Visualizations

- **Model architecture**: Automatic model diagram generation with torchviz
- **Distribution histograms**: 50-bin histograms for all tracked metrics
- **Real-time monitoring**: Live tracking during training with Neptune
- **Comparative analysis**: Easy comparison across experiments and runs

### Integration

- **Minimal setup**: Simple integration with existing code
- **PyTorch native**: Works with existing PyTorch workflows
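Gradient analysis of the kind described above can be sketched with standard PyTorch alone: after `backward()`, per-parameter gradient norms reveal vanishing (near-zero) or exploding (very large) gradients. The model, data, and `grad_norms` dict here are illustrative assumptions:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.Tanh(), nn.Linear(8, 1))
x, y = torch.randn(32, 4), torch.randn(32, 1)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()

# Per-parameter L2 gradient norms; dots in parameter names are
# replaced with slashes, matching the namespace convention below.
grad_norms = {
    name.replace(".", "/"): p.grad.norm().item()
    for name, p in model.named_parameters()
    if p.grad is not None
}
for key in sorted(grad_norms):
    print(key, grad_norms[key])
```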
The integration organizes all logged data under a clear, hierarchical, and customizable namespace structure.

**Example namespaces:**

With `base_namespace="my_experiment"`:

- `my_experiment/batch/loss` - Training loss
- `my_experiment/model/summary` - Model architecture
- `my_experiment/model/internals/activations/conv/1/mean` - Mean activation (no prefix)
- `my_experiment/model/internals/train/activations/conv/1/mean` - Mean activation (with "train" prefix)
- `my_experiment/model/internals/validation/gradients/linear1/norm` - L2 norm of gradients (with "validation" prefix)

With `base_namespace=None`:

- `batch/loss` - Training loss
- `model/summary` - Model architecture
- `model/internals/activations/conv/1/mean` - Mean activation (no prefix)
- `model/internals/train/activations/conv/1/mean` - Mean activation (with "train" prefix)
- `model/internals/validation/gradients/linear1/norm` - L2 norm of gradients (with "validation" prefix)

**Layer name handling:**

- Dots in layer names are automatically replaced with forward slashes for proper namespace organization
- Example: `seq_model.0.weight` becomes `seq_model/0/weight` in the namespace
- Example: `module.submodule.layer` becomes `module/submodule/layer` in the namespace
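The naming rules above can be sketched as a small helper. `to_namespace` is a hypothetical function written for this example, not part of the integration's API; it only mirrors the documented dot-to-slash replacement and base/prefix composition:

```python
def to_namespace(layer_name, base=None, prefix=None):
    # Dots become slashes; optional base namespace and phase prefix
    # are prepended when set, skipped when None.
    parts = [base, "model/internals", prefix, layer_name.replace(".", "/")]
    return "/".join(p for p in parts if p)

print(to_namespace("seq_model.0.weight"))
# → model/internals/seq_model/0/weight
print(to_namespace("conv.1", base="my_experiment", prefix="train"))
# → my_experiment/model/internals/train/conv/1
```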