
Conversation

@SiddhantSadangi (Member) commented on Sep 19, 2025

Summary by Sourcery

Upgrade neptune-pytorch to support Neptune 3.x via neptune_scale and introduce a new TorchWatcher-based model internals tracking system

New Features:

  • Add support for Neptune 3.x using neptune_scale.Run and drop older Neptune client support
  • Implement TorchWatcher to log layer activations, gradients, and parameters with configurable statistics
  • Introduce track_layers and tensor_stats options along with a new log_model_internals API (see the usage sketch after this list)
  • Enable automatic model diagram generation and upload with assign_files and wait_for_processing
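
A minimal usage sketch of the new API, assuming the constructor and method signatures described in this summary; parameter values and defaults shown here are illustrative:

```python
import torch
import torch.nn as nn
from neptune_scale import Run
from neptune_pytorch import NeptuneLogger

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
run = Run(experiment_name="neptune-pytorch-demo")  # assumes project/API token are set via environment variables

npt_logger = NeptuneLogger(
    run=run,
    model=model,
    base_namespace="training",   # metrics land under training/...
    track_layers=[nn.Linear],    # restrict internals tracking to Linear layers
    tensor_stats=["mean"],       # statistics computed for each tracked tensor
)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
for step in range(10):
    x, y = torch.randn(32, 8), torch.randn(32, 1)
    loss = nn.functional.mse_loss(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    run.log_metrics(data={"training/batch/loss": loss.item()}, step=step)
    # Logs internals under training/model/internals/.../{metric_type}/{layer}/{statistic}
    npt_logger.log_model_internals(step=step)

run.close()
```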

Enhancements:

  • Refactor NeptuneLogger to simplify constructor, remove legacy hooks, and delegate internals logging to TorchWatcher
  • Redesign namespace structure for model internals under {base_namespace}/model/internals/{prefix}/{metric_type}/{layer}/{statistic}
  • Switch to importlib.metadata for versioning and require Python ≥3.10, PyTorch ≥1.11, NumPy ≥1.20

Build:

  • Raise minimum Python requirement to 3.10 and bump dependencies (torch ≥1.11, numpy ≥1.20, neptune_scale ≥0.14.0) in pyproject.toml

CI:

  • Expand CI matrix to run on Python 3.10 and 3.13
  • Update pre-commit configuration to latest hook versions and add a pytest hook

Documentation:

  • Overhaul README with installation instructions, quick start guide, advanced configuration, feature list, API reference, and namespace examples
  • Update CHANGELOG for version 3.0.0 to document breaking changes and new capabilities
  • Refresh pyproject.toml metadata, classifiers, and dependency declarations

Tests:

  • Add a comprehensive test suite for _HookManager and _TorchWatcher functionality
  • Update end-to-end tests to mock neptune_scale.Run and verify log_model_internals, log_configs, and file uploads

@SiddhantSadangi self-assigned this on Sep 19, 2025
@SiddhantSadangi added the `enhancement` (New feature or request) label on Sep 19, 2025
@sourcery-ai (bot) commented on Sep 19, 2025

Reviewer's Guide

This PR upgrades the Neptune-PyTorch integration to work with Neptune 3.x via neptune_scale.Run, introduces a new TorchWatcher mechanism for comprehensive model-internals tracking, refactors the NeptuneLogger API and the diagram upload flow, expands the documentation and examples, rewrites the end-to-end tests around a mocked Run, adds extensive TorchWatcher tests, and bumps dependencies and CI/tooling configurations.

File-Level Changes

Change: Upgrade to Neptune 3.x compatibility
Details:
  • Replace imports and types to use neptune_scale.Run instead of the legacy neptune-client
  • Simplify version lookup with importlib.metadata and update the version file
  • Remove legacy Handler, verify_type, and neptune-client references
Files: src/neptune_pytorch/impl/__init__.py, src/neptune_pytorch/impl/version.py, pyproject.toml

Change: Introduce TorchWatcher-based internals tracking and refactor the NeptuneLogger API (see the hook sketch after this section)
Details:
  • Add _torchwatcher.py with HookManager and TorchWatcher classes
  • Modify NeptuneLogger to delegate activation/gradient/parameter logging to TorchWatcher
  • Change the constructor signature to use base_namespace, track_layers, and tensor_stats, and update log_model_internals
Files: src/neptune_pytorch/impl/_torchwatcher.py, src/neptune_pytorch/impl/__init__.py

Change: Refactor the model diagram upload logic
Details:
  • Replace safe_upload_visualization with _safe_upload_diagram using run.assign_files and wait_for_processing
  • Update the forward hook to handle diagram rendering and file cleanup
Files: src/neptune_pytorch/impl/__init__.py

Change: Overhaul the documentation and README
Details:
  • Expand the installation, quick start, advanced configuration, and API reference sections
  • Update badges, examples, and namespace diagrams to reflect Neptune 3.x and TorchWatcher support
Files: README.md

Change: Refactor and extend the tests
Details:
  • Update test_e2e.py to use a mocked Run and verify the new log_metrics/log_configs/assign_files behavior
  • Add comprehensive tests for HookManager and TorchWatcher in tests/test_torchwatcher.py
Files: tests/test_e2e.py, tests/test_torchwatcher.py

Change: Bump dependencies and update CI/tooling
Details:
  • Require Python >=3.10, torch>=1.11, numpy>=1.20, neptune_scale>=0.14
  • Update pyproject.toml, .pre-commit-config.yaml, and the CI matrix to include modern versions and tools
Files: pyproject.toml, .pre-commit-config.yaml, .github/workflows/ci.yml
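
To make the TorchWatcher mechanism concrete, here is a minimal sketch of the hook-based capture it relies on. The class and attribute names below are illustrative, not the package's actual implementation:

```python
import torch
import torch.nn as nn

class MinimalHookManager:
    """Illustration only: capture per-layer activations and gradients via PyTorch hooks."""

    def __init__(self, model: nn.Module):
        self.model = model
        self.activations: dict[str, torch.Tensor] = {}
        self.gradients: dict[str, torch.Tensor] = {}
        self.handles = []

    def register(self) -> None:
        for name, module in self.model.named_modules():
            if not name:  # skip the root module itself
                continue
            self.handles.append(
                module.register_forward_hook(
                    lambda mod, inp, out, n=name: self.activations.__setitem__(n, out.detach())
                )
            )
            self.handles.append(
                module.register_full_backward_hook(
                    lambda mod, gin, gout, n=name: self.gradients.__setitem__(n, gout[0].detach())
                )
            )

    def remove(self) -> None:
        for handle in self.handles:
            handle.remove()
        self.handles.clear()


model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
hm = MinimalHookManager(model)
hm.register()
loss = model(torch.randn(2, 4)).sum()
loss.backward()
print(sorted(hm.activations))  # layer names, e.g. ['0', '1', '2']
hm.remove()
```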

@sourcery-ai (bot) left a comment

Hey there - I've reviewed your changes - here's some feedback:

  • Relying on `__del__` for hook cleanup may leave hooks attached if garbage collection doesn't run immediately; consider providing an explicit close() or context-manager API to deterministically remove hooks.
  • The tensor_stats type annotation using Literal[tuple(TENSOR_STATS.keys())] won't actually enforce valid keys; either generate a proper union of string literals from TENSOR_STATS or validate the list at runtime (see the sketch after this list).
  • The _safe_upload_diagram function synchronously calls run.wait_for_processing, which may block; consider batching uploads or making the call asynchronous to prevent performance hiccups during training.
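
For the second point above, a minimal runtime-validation sketch, assuming TENSOR_STATS maps statistic names to reduction callables (the dict contents below are illustrative):

```python
from typing import Callable, Optional

import torch

# Assumed shape of TENSOR_STATS: statistic name -> reduction over a tensor
TENSOR_STATS: dict[str, Callable[[torch.Tensor], float]] = {
    "mean": lambda t: t.mean().item(),
    "std": lambda t: t.std().item(),
    "norm": lambda t: t.norm().item(),
}

def _validate_tensor_stats(tensor_stats: Optional[list[str]]) -> list[str]:
    """Fall back to a default and reject unknown statistic names at runtime."""
    if tensor_stats is None:
        return ["mean"]
    invalid = set(tensor_stats) - TENSOR_STATS.keys()
    if invalid:
        raise ValueError(f"Invalid statistics requested: {sorted(invalid)}")
    return list(tensor_stats)
```
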
## Individual Comments

### Comment 1
<location> `src/neptune_pytorch/impl/__init__.py:218` </location>
<code_context>
-            if handler is not None:
-                handler.remove()
+        # Clean up TorchWatcher
+        self._torch_watcher.hm.remove_hooks()

-        if self._vis_hook_handler is not None:
</code_context>

<issue_to_address>
**suggestion:** Calling remove_hooks in __del__ may not guarantee hook removal if object deletion order is unpredictable.

Instead of relying on __del__, add an explicit cleanup method (e.g., close()) to ensure hooks are removed reliably, and document its usage.

Suggested implementation:

```python
    def close(self):
        """
        Clean up resources and remove all hooks.

        Call this method when you are done with the object to ensure hooks are removed reliably.
        """
        # Remove hooks
        if self._diagram_hook_handler is not None:
            self._diagram_hook_handler.remove()

        # Clean up TorchWatcher
        self._torch_watcher.hm.remove_hooks()

```

1. You will need to call `close()` explicitly when you are done with the object, instead of relying on `__del__`.
2. If there is a `__del__` method that currently calls this cleanup logic, you should remove the hook removal from `__del__` and update documentation/comments to instruct users to call `close()` explicitly.
3. Consider updating any usage examples or tests to call `close()` as part of cleanup.
</issue_to_address>
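
For the context-manager variant mentioned in the overall comments, a standalone illustrative pattern (not the library's actual class) that pairs close() with deterministic cleanup:

```python
class HookedLoggerExample:
    """Illustration only: pair an explicit close() with a context manager."""

    def __init__(self) -> None:
        self._hooks_active = True  # stand-in for registered forward/backward hooks

    def close(self) -> None:
        # In the real logger this would remove the TorchWatcher hooks.
        self._hooks_active = False

    def __enter__(self) -> "HookedLoggerExample":
        return self

    def __exit__(self, exc_type, exc, tb) -> None:
        self.close()  # hooks are removed even if the training loop raises


with HookedLoggerExample() as logger:
    pass
assert logger._hooks_active is False
```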

### Comment 2
<location> `src/neptune_pytorch/impl/version.py:21` </location>
<code_context>
     raise PackageNotFoundError(msg)

-try:
+with contextlib.suppress(PackageNotFoundError):
     __version__ = version("neptune-pytorch")
-except PackageNotFoundError:
</code_context>

<issue_to_address>
**issue (bug_risk):** Suppressing PackageNotFoundError may hide installation issues.

Setting a default value for __version__ or raising a clear error will help prevent silent failures in dependent code.
</issue_to_address>
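
A small sketch of the suggested fallback; the sentinel string is illustrative:

```python
# version.py sketch: fall back to an explicit sentinel instead of silently lacking __version__
from importlib.metadata import PackageNotFoundError, version

try:
    __version__ = version("neptune-pytorch")
except PackageNotFoundError:
    __version__ = "0.0.0+unknown"  # dependent code never hits a missing attribute
```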

### Comment 3
<location> `tests/test_e2e.py:59-61` </location>
<code_context>
+            )

-        npt_logger.log_checkpoint()
+    # Verify that the correct logging methods were called
+    # Check that log_metrics was called for batch loss
+    assert run.log_metrics.call_count >= 1

-    # Save final model
</code_context>

<issue_to_address>
**suggestion (testing):** Missing negative test cases for NeptuneLogger error handling.

Add tests to confirm NeptuneLogger raises exceptions for invalid 'run' or 'model' arguments to improve error handling coverage.
</issue_to_address>

### Comment 4
<location> `tests/test_e2e.py:102-111` </location>
<code_context>
+    assert batch_loss_calls, "Batch loss should be logged"
+    assert model_internals_calls, "Model internals should be logged"
+
+    # Verify that the model internals calls contain the expected namespace structure
+    for call in model_internals_calls:
+        call_kwargs = call.kwargs
+        if "data" in call_kwargs:
+            metrics = call_kwargs["data"]
+            for metric_name in metrics.keys():
+                assert metric_name.startswith(
+                    f"{npt_logger.base_namespace}/model/internals/"
+                ), f"Metric {metric_name} should start with correct namespace"
+                assert any(
+                    metric_type in metric_name for metric_type in ["activations", "gradients", "parameters"]
+                ), f"Metric {metric_name} should contain one of: activations, gradients, parameters"
</code_context>

<issue_to_address>
**suggestion (testing):** Test does not verify behavior when no layers are tracked or tensor_stats is empty.

Add a test where 'track_layers' or 'tensor_stats' is empty to confirm NeptuneLogger handles these cases without errors or unexpected logging.

Suggested implementation:

```python
    # The TorchWatcher should have logged metrics for activations, gradients, and parameters
    log_metrics_calls = run.log_metrics.call_args_list

    # Check that we have calls for both batch loss and model internals
    batch_loss_calls = [call for call in log_metrics_calls if f"{npt_logger.base_namespace}/batch/loss" in str(call)]
    model_internals_calls = [call for call in log_metrics_calls if "model/internals" in str(call)]

    assert batch_loss_calls, "Batch loss should be logged"
    assert model_internals_calls, "Model internals should be logged"

    # Verify that the model internals calls contain the expected namespace structure
    for call in model_internals_calls:
        call_kwargs = call.kwargs
        if "data" in call_kwargs:
            metrics = call_kwargs["data"]
            for metric_name in metrics.keys():
                assert metric_name.startswith(
                    f"{npt_logger.base_namespace}/model/internals/"
                ), f"Metric {metric_name} should start with correct namespace"
                assert any(
                    metric_type in metric_name for metric_type in ["activations", "gradients", "parameters"]
                ), f"Metric {metric_name} should contain one of: activations, gradients, parameters"

def test_neptune_logger_handles_empty_track_layers(monkeypatch):
    """
    Test that NeptuneLogger does not log model internals when track_layers is empty.
    """
    from my_module import NeptuneLogger  # Adjust import as needed

    # Mock the Neptune run; MagicMock records calls so call_args_list is populated
    from unittest.mock import MagicMock

    run = MagicMock()
    npt_logger = NeptuneLogger(run=run, track_layers=[], tensor_stats={"activations": True, "gradients": True, "parameters": True})

    # Simulate logging
    try:
        npt_logger.log_model_internals()
    except Exception as e:
        assert False, f"NeptuneLogger raised an exception with empty track_layers: {e}"

    # Should not log any model internals
    log_metrics_calls = run.log_metrics.call_args_list
    model_internals_calls = [call for call in log_metrics_calls if "model/internals" in str(call)]
    assert not model_internals_calls, "No model internals should be logged when track_layers is empty"

def test_neptune_logger_handles_empty_tensor_stats(monkeypatch):
    """
    Test that NeptuneLogger does not log model internals when tensor_stats is empty.
    """
    from my_module import NeptuneLogger  # Adjust import as needed

    # Mock the Neptune run; MagicMock records calls so call_args_list is populated
    from unittest.mock import MagicMock

    run = MagicMock()
    npt_logger = NeptuneLogger(run=run, track_layers=["layer1", "layer2"], tensor_stats={})

    # Simulate logging
    try:
        npt_logger.log_model_internals()
    except Exception as e:
        assert False, f"NeptuneLogger raised an exception with empty tensor_stats: {e}"

    log_metrics_calls = run.log_metrics.call_args_list
    model_internals_calls = [call for call in log_metrics_calls if "model/internals" in str(call)]
    assert not model_internals_calls, "No model internals should be logged when tensor_stats is empty"

```

- Adjust the import path for `NeptuneLogger` to match your codebase.
- If `log_model_internals` is not the correct method to trigger logging, replace it with the appropriate method.
- If your logger uses a different mechanism for tracking calls, update the dummy/mocking logic accordingly.
- If you use pytest fixtures for mocking, you may want to refactor the dummy run setup.
</issue_to_address>

### Comment 5
<location> `tests/test_torchwatcher.py:89-92` </location>
<code_context>
+        hm = _HookManager(test_model, track_layers)
+        assert hm.track_layers == track_layers
+
+    def test_hook_manager_invalid_model(self):
+        """Test HookManager with invalid model type."""
+        with pytest.raises(TypeError, match="The model must be a PyTorch model"):
+            _HookManager("not_a_model")
+
+    def test_hook_manager_invalid_layer_types(self, test_model):
</code_context>

<issue_to_address>
**suggestion (testing):** Missing test for _HookManager with None as model.

Add a test that passes None as the model to _HookManager and checks for a TypeError to validate error handling for null inputs.

```suggestion
    def test_hook_manager_invalid_model(self):
        """Test HookManager with invalid model type."""
        with pytest.raises(TypeError, match="The model must be a PyTorch model"):
            _HookManager("not_a_model")

    def test_hook_manager_none_model(self):
        """Test HookManager with None as model."""
        with pytest.raises(TypeError, match="The model must be a PyTorch model"):
            _HookManager(None)
```
</issue_to_address>

### Comment 6
<location> `tests/test_torchwatcher.py:187-194` </location>
<code_context>
+        with pytest.raises(TypeError, match="The model must be a PyTorch model"):
+            _TorchWatcher(model="not_a_model", run=mock_run, base_namespace="test")
+
+    def test_torch_watcher_invalid_tensor_stats(self, mock_run, test_model):
+        """Test TorchWatcher with invalid tensor statistics."""
+        with pytest.raises(ValueError, match="Invalid statistics requested"):
+            _TorchWatcher(
+                model=test_model,
+                run=mock_run,
+                base_namespace="test",
+                tensor_stats=["invalid_stat", "mean"],
+            )
+
</code_context>

<issue_to_address>
**suggestion (testing):** Missing test for empty tensor_stats list in _TorchWatcher.

Please add a test to ensure _TorchWatcher correctly handles an empty tensor_stats list without logging metrics.

Suggested implementation:

```python
    def test_torch_watcher_invalid_tensor_stats(self, mock_run, test_model):
        """Test TorchWatcher with invalid tensor statistics."""
        with pytest.raises(ValueError, match="Invalid statistics requested"):
            _TorchWatcher(
                model=test_model,
                run=mock_run,
                base_namespace="test",
                tensor_stats=["invalid_stat", "mean"],
            )

    def test_torch_watcher_empty_tensor_stats(self, mock_run, test_model):
        """Test TorchWatcher with empty tensor_stats list does not log metrics."""
        tw = _TorchWatcher(
            model=test_model,
            run=mock_run,
            base_namespace="test",
            tensor_stats=[],
        )
        assert tw.tensor_stats == {}
        # If the watcher logs metrics during initialization or forward, you may want to check:
        # mock_run.log_metric.assert_not_called()

```

If `_TorchWatcher` logs metrics during initialization or during a forward pass, you may want to add a forward pass and assert that `mock_run.log_metric` is not called. Adjust the assertion as needed based on your logging implementation.
</issue_to_address>

### Comment 7
<location> `tests/test_torchwatcher.py:308-317` </location>
<code_context>
+    def test_histogram_processing(self, mock_run, test_model, test_data):
</code_context>

<issue_to_address>
**suggestion (testing):** Missing test for histogram edge cases (NaN/Inf values).

Add a test with tensors containing NaN or Inf to confirm that such histograms are skipped without errors.
</issue_to_address>
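
A sketch of such a test; the _TorchWatcher construction and watch(step=...) call follow the surrounding test suggestions and may need adjusting to the actual API:

```python
def test_histogram_skips_non_finite_values(self, mock_run, test_model):
    """Histograms for tensors containing NaN/Inf should be skipped without raising."""
    import torch

    from neptune_pytorch.impl._torchwatcher import _TorchWatcher

    tw = _TorchWatcher(model=test_model, run=mock_run, base_namespace="test")

    # Inject non-finite values into a parameter so its histogram cannot be computed
    with torch.no_grad():
        next(test_model.parameters()).fill_(float("inf"))

    # Tracking should complete without raising on the non-finite tensor
    tw.watch(step=0)
```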

### Comment 8
<location> `tests/test_torchwatcher.py:414-415` </location>
<code_context>
+        assert logger._torch_watcher is not None
+        mock_run.log_metrics.assert_called()
+
+    def test_neptune_logger_without_tracking(self, mock_run, test_model):
+        """Test NeptuneLogger with tracking disabled via log_model_internals."""
+        from neptune_pytorch.impl import NeptuneLogger
+
+        logger = NeptuneLogger(
+            run=mock_run,
+            model=test_model,
+        )
+
+        # Should have TorchWatcher
+        assert logger._torch_watcher is not None
+
+        # log_model_internals with all tracking disabled
+        logger.log_model_internals(step=0, track_activations=False, track_gradients=False, track_parameters=False)
+        # Should still call log_metrics but with no data
+        mock_run.log_metrics.assert_called()
+
+
</code_context>

<issue_to_address>
**suggestion (testing):** Missing test for NeptuneLogger with invalid base_namespace values.

Add a test initializing NeptuneLogger with invalid or edge-case base_namespace values (such as empty string, None, or special characters) to verify correct metric logging and safe namespace handling.

```suggestion
        assert logger._torch_watcher is not None
        mock_run.log_metrics.assert_called()

    @pytest.mark.parametrize("base_namespace", [None, "", "!@#$%", "metrics/namespace"])
    def test_neptune_logger_with_invalid_base_namespace(self, mock_run, test_model, base_namespace):
        """Test NeptuneLogger with invalid or edge-case base_namespace values."""
        from neptune_pytorch.impl import NeptuneLogger

        logger = NeptuneLogger(
            run=mock_run,
            model=test_model,
            base_namespace=base_namespace,
        )

        # Should have TorchWatcher
        assert logger._torch_watcher is not None

        # log_model_internals should not crash and should call log_metrics
        logger.log_model_internals(step=0)
        mock_run.log_metrics.assert_called()
```
</issue_to_address>

### Comment 9
<location> `src/neptune_pytorch/impl/__init__.py:140` </location>
<code_context>
-        self._params_hook_handler = None
-        if self.log_parameters:
-            self._add_hooks_for_params()
+            summary_key = f"{self._base_namespace}/model/summary" if self._base_namespace else "model/summary"
+            self.run.log_configs({summary_key: str(model)})
+            self._add_diagram_hook()
</code_context>

<issue_to_address>
**issue (complexity):** Consider introducing a helper for namespaced keys and centralizing file upload and cleanup logic to reduce boilerplate.

You can trim down the remaining namespace‐ and upload‐boilerplate by:

1. Introducing a small helper for building namespaced keys instead of inlining `f"{self._base_namespace}/…"` everywhere.  
2. Factoring out your “assign & cleanup” logic into one utility, instead of having almost the same in `_safe_upload_diagram` and any future uploads.

For example:

```python
class NeptuneLogger:
    def __init__(…):
        …
        # instead of inline f-strings later:
        summary_key = self._ns("model", "summary")
        self.run.log_configs({summary_key: str(model)})
        …

    def _ns(self, *parts: str) -> str:
        # joins ["training","model","diagram"] -> "training/model/diagram"
        if self._base_namespace:
            return "/".join((self._base_namespace, *parts))
        return "/".join(parts)

    def _add_diagram_hook(self):
        def hook(module, inp, output):
            if not self._is_diagram_saved:
                viz_name = f"{uuid.uuid4()}.png"
                try:
                    dot.render(outfile=viz_name, cleanup=True)
                    _upload_and_cleanup(self.run, self._ns("model", "diagram"), viz_name)
                except ExecutableNotFound:
                    warnings.warn("Skipping diagram—no dot binary.")
                self._is_diagram_saved = True

        self._diagram_hook_handler = self.model.register_forward_hook(hook)
```

And a file‐upload helper:

```python
def _upload_and_cleanup(run: Run, key: str, file_name: str):
    try:
        run.assign_files({key: file_name})
        run.wait_for_processing()
    finally:
        for fn in (file_name, file_name.replace(".png", ".gv")):
            if os.path.exists(fn):
                os.remove(fn)
```

This removes the repeated f-strings in `__init__` and in your hook, and centralizes cleanup logic in one place.
</issue_to_address>

### Comment 10
<location> `src/neptune_pytorch/impl/_torchwatcher.py:95` </location>
<code_context>
+
+        return hook
+
+    def register_hooks(self, track_activations: bool = True, track_gradients: bool = True):
+        """
+        Register hooks for the model with configurable tracking.
</code_context>

<issue_to_address>
**issue (complexity):** Consider refactoring hook registration, metric tracking, and histogram conversion into unified functions to reduce code duplication and nesting.

You can significantly reduce duplication and nesting by:

1. **Merging forward/backward hook registration into one pass**  
   Instead of two loops over `named_modules()`, do both in one:
   ```python
   def register_hooks(self, track_activations=True, track_gradients=True):
       self.remove_hooks()
       for name, module in self.model.named_modules():
           if not name or (self.track_layers and not isinstance(module, tuple(self.track_layers))):
               continue
           if track_activations:
               self.hooks.append(module.register_forward_hook(self.save_activation(name)))
           if track_gradients:
               self.hooks.append(module.register_full_backward_hook(self.save_gradient(name)))
   ```
   - Cuts one full traversal of the model.

2. **Unify all three `track_*` methods into one**  
   Introduce a single `track()` that accepts a list of metric‐types:
   ```python
   def track(self, metrics: List[str], prefix: Optional[str] = None):
       data = {
           "activations": self.hm.get_activations(),
           "gradients":    self.hm.get_gradients(),
           "parameters":   {n: p.data for n,p in self.model.named_parameters()},
       }
       for metric in metrics:
           self._track_metric(metric, data[metric], prefix)
   ```
   Then in `watch()`:
   ```python
   to_track = []
   if track_activations: to_track.append("activations")
   if track_gradients:    to_track.append("gradients")
   if track_parameters:   to_track.append("parameters")

   self.debug_metrics.clear()
   self.track(to_track, prefix)
   ```

3. **Extract histogram conversion to a helper**  
   ```python
   def _convert_hist(self, h: torch.return_types.histogram) -> Histogram:
       # force=True is needed to convert CUDA tensors if any
       counts = h.hist.numpy(force=True).astype(int).tolist()
       edges  = h.bin_edges.numpy(force=True).astype(float).tolist()
       return Histogram(bin_edges=edges, counts=counts)
   ```
   Then in `watch()` you can partition once:
   ```python
   hist_data = {
       k: self._convert_hist(v)
       for k, v in self.debug_metrics.items()
       if k.endswith("/hist")
   }
   metric_data = {
       k: v
       for k, v in self.debug_metrics.items()
       if not k.endswith("/hist")
   }

   self.run.log_metrics(data=metric_data, step=step)
   if hist_data:
       self.run.log_histograms(histograms=hist_data, step=step)
   ```
This preserves all functionality while removing repeated loops, deep nesting, and branching.
</issue_to_address>

### Comment 11
<location> `tests/test_e2e.py:74-81` </location>

<issue_to_address>
**issue (code-quality):** Avoid loops in tests. ([`no-loop-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-loop-in-tests))

<details><summary>Explanation</summary>Avoid complex code, like loops, in test functions.

Google's software engineering guidelines says:
"Clear tests are trivially correct upon inspection"
To reach that avoid complex code in tests:
* loops
* conditionals

Some ways to fix this:

* Use parametrized tests to get rid of the loop.
* Move the complex logic into helpers.
* Move the complex part into pytest fixtures.

> Complexity is most often introduced in the form of logic. Logic is defined via the imperative parts of programming languages such as operators, loops, and conditionals. When a piece of code contains logic, you need to do a bit of mental computation to determine its result instead of just reading it off of the screen. It doesn't take much logic to make a test more difficult to reason about.

Software Engineering at Google / [Don't Put Logic in Tests](https://abseil.io/resources/swe-book/html/ch12.html#donapostrophet_put_logic_in_tests)
</details>
</issue_to_address>

### Comment 12
<location> `tests/test_e2e.py:76-81` </location>

<issue_to_address>
**issue (code-quality):** Avoid conditionals in tests. ([`no-conditionals-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-conditionals-in-tests))

<details><summary>Explanation</summary>Avoid complex code, like conditionals, in test functions.

Google's software engineering guidelines says:
"Clear tests are trivially correct upon inspection"
To reach that avoid complex code in tests:
* loops
* conditionals

Some ways to fix this:

* Use parametrized tests to get rid of the loop.
* Move the complex logic into helpers.
* Move the complex part into pytest fixtures.

> Complexity is most often introduced in the form of logic. Logic is defined via the imperative parts of programming languages such as operators, loops, and conditionals. When a piece of code contains logic, you need to do a bit of mental computation to determine its result instead of just reading it off of the screen. It doesn't take much logic to make a test more difficult to reason about.

Software Engineering at Google / [Don't Put Logic in Tests](https://abseil.io/resources/swe-book/html/ch12.html#donapostrophet_put_logic_in_tests)
</details>
</issue_to_address>

### Comment 13
<location> `tests/test_e2e.py:103-113` </location>

<issue_to_address>
**issue (code-quality):** Avoid loops in tests. ([`no-loop-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-loop-in-tests))
</issue_to_address>

### Comment 14
<location> `tests/test_e2e.py:105-113` </location>

<issue_to_address>
**issue (code-quality):** Avoid conditionals in tests. ([`no-conditionals-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-conditionals-in-tests))
</issue_to_address>

### Comment 15
<location> `tests/test_e2e.py:107-113` </location>

<issue_to_address>
**issue (code-quality):** Avoid loops in tests. ([`no-loop-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-loop-in-tests))
</issue_to_address>

### Comment 16
<location> `tests/test_torchwatcher.py:208-210` </location>

<issue_to_address>
**issue (code-quality):** Avoid loops in tests. ([`no-loop-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-loop-in-tests))
</issue_to_address>

### Comment 17
<location> `tests/test_torchwatcher.py:225-227` </location>

<issue_to_address>
**issue (code-quality):** Avoid loops in tests. ([`no-loop-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-loop-in-tests))
</issue_to_address>

### Comment 18
<location> `tests/test_torchwatcher.py:242-244` </location>

<issue_to_address>
**issue (code-quality):** Avoid loops in tests. ([`no-loop-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-loop-in-tests))
</issue_to_address>

### Comment 19
<location> `tests/test_torchwatcher.py:281-288` </location>

<issue_to_address>
**issue (code-quality):** Avoid loops in tests. ([`no-loop-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-loop-in-tests))
</issue_to_address>

### Comment 20
<location> `tests/test_torchwatcher.py:305-306` </location>

<issue_to_address>
**issue (code-quality):** Avoid loops in tests. ([`no-loop-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-loop-in-tests))
</issue_to_address>

### Comment 21
<location> `tests/test_torchwatcher.py:330-336` </location>

<issue_to_address>
**issue (code-quality):** Avoid loops in tests. ([`no-loop-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-loop-in-tests))
</issue_to_address>


sourcery-ai bot previously requested changes on Sep 19, 2025
@sourcery-ai (bot) left a comment

New security issues found

@SiddhantSadangi (Member, Author) commented:

@sourcery-ai dismiss
@sourcery-ai review

@SiddhantSadangi added this to the 3.0.0 milestone on Oct 10, 2025
Co-authored-by: Sabine Ståhlberg <[email protected]>
@SiddhantSadangi merged commit 2c4d7b5 into master on Oct 13, 2025
9 checks passed
@SiddhantSadangi deleted the ss/neptune-scale-support branch on October 13, 2025 at 12:38