
MONet Bundle Integration into MONAI Deploy #574

Merged
chezhia merged 18 commits into Project-MONAI:main from SimoneBendazzoli93:main
Apr 30, 2026

Conversation


SimoneBendazzoli93 (Contributor) commented Dec 12, 2025

This PR introduces support for the MONet Bundle (an nnUNet wrapper for the MONAI Bundle) into MONAI Deploy.

Key Features:

  • Added a new operator: MONetBundleInferenceOperator, extending MonaiBundleInferenceOperator

  • Included an example application demonstrating spleen segmentation using the MONetBundleInferenceOperator

Summary by CodeRabbit

  • New Features
    • Added MONetBundleInferenceOperator — a specialized inference operator with nnUNet-style model support, multimodal input handling, automatic predictor setup, and streamlined prediction behavior.
  • Bug Fixes
    • Fixed YAML extension handling when reading bundle configurations.
    • Normalized missing input metadata to ensure consistent validation and processing.
  • Chores
    • Exported MONetBundleInferenceOperator in the package public API.

@sonarqubecloud

Quality Gate failed

Failed conditions
7.3% Duplication on New Code (required ≤ 3%)

See analysis details on SonarQube Cloud


coderabbitai Bot commented Feb 17, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Adds a new MONetBundleInferenceOperator for nnUNet-style multimodal inference, exposes it in the public API, and fixes bundle YAML suffix handling plus metadata initialization in MonaiBundleInferenceOperator.

Changes

  • New MONet operator (monai/deploy/operators/monet_bundle_inference_operator.py): Adds the MONetBundleInferenceOperator class: initializes an internal nnUNet predictor, restricts/validates accepted model types, assembles multimodal inputs (resampling + concatenation), ensures a batch dim, runs the predictor, and propagates metadata. Exported via __all__.
  • Bundle inference fixes (monai/deploy/operators/monai_bundle_inference_operator.py): Fixes bundle config suffix handling to include the leading dot for .yml, and defaults missing meta_data to an empty dict before type validation and meta pruning.
  • Public API export (monai/deploy/operators/__init__.py): Imports and exposes MONetBundleInferenceOperator in the module autosummary and __all__, adding it to the package public API.
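The suffix fix can be illustrated with a short sketch (the config base name is a hypothetical placeholder): without the leading dot, concatenating the base name and suffix produces a file name with no extension separator.

```python
config_name_base = "inference"  # hypothetical bundle config base name

buggy_suffixes = (".json", ".yaml", "yml")   # "yml" lacks the leading dot
fixed_suffixes = (".json", ".yaml", ".yml")  # corrected tuple

# With the buggy tuple, the constructed filename has no extension separator:
buggy_name = f"{config_name_base}{buggy_suffixes[2]}"  # "inferenceyml"
fixed_name = f"{config_name_base}{fixed_suffixes[2]}"  # "inference.yml"
```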

Sequence Diagram

sequenceDiagram
    participant Client
    participant MONetOp as MONetBundleInferenceOperator
    participant Transform as ResampleToMatch / ConcatItemsd
    participant Predictor as nnUNet_Predictor

    Client->>MONetOp: predict(data, **kwargs)
    MONetOp->>MONetOp: ensure _nnunet_predictor and model network
    alt extra modalities provided
        MONetOp->>Transform: resample modalities to match image
        Transform-->>MONetOp: resampled modalities
        MONetOp->>Transform: concat modalities into "image" tensor
        Transform-->>MONetOp: multimodal input tensor
    end
    MONetOp->>MONetOp: ensure batch dimension
    MONetOp->>Predictor: run predictor(input)
    Predictor-->>MONetOp: prediction
    MONetOp->>MONetOp: copy data.meta -> prediction.meta (if present)
    MONetOp-->>Client: return prediction

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

🐇 I hop through bundles, dots in place,
I stitch modalities, align each space.
Metadata tidy, predictors hum,
Together we make the outputs come. 🎉

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: docstring coverage is 71.43%, which is below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.
✅ Passed checks (4 passed)
  • Description Check ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: the title clearly and specifically summarizes the main change, integrating MONet Bundle support into MONAI Deploy, which is reflected in the three modified files (one new operator, one updated config handler, and one public export).
  • Linked Issues Check ✅ Passed: check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes Check ✅ Passed: check skipped because no linked issues were found for this pull request.

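As an illustration of what such a docstring-coverage check measures, here is a hypothetical sketch (the helper and its use of the 80% threshold are assumptions, not CodeRabbit's actual implementation):

```python
import inspect

def docstring_coverage(objs):
    """Fraction of the given functions/classes that define a docstring."""
    documented = sum(1 for obj in objs if inspect.getdoc(obj))
    return documented / len(objs)

def documented_fn():
    """Has a docstring."""

def undocumented_fn():
    pass

coverage = docstring_coverage([documented_fn, undocumented_fn])  # 0.5
passes = coverage >= 0.80  # the 80.00% threshold the check enforces
```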


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
monai/deploy/operators/monai_bundle_inference_operator.py (1)

149-149: ⚠️ Potential issue | 🔴 Critical

Bug: Missing leading dot on "yml" suffix in _read_directory_bundle_config.

bundle_suffixes here has "yml" without a leading dot, so constructing f"{config_name_base}{suffix}" at Line 170 would produce e.g. "inferenceyml" instead of "inference.yml". The archive-based reader at Line 189 was correctly fixed to ".yml", but this directory-based reader was missed.

🐛 Proposed fix
-    bundle_suffixes = (".json", ".yaml", "yml")  # The only supported file ext(s)
+    bundle_suffixes = (".json", ".yaml", ".yml")  # The only supported file ext(s)
🧹 Nitpick comments (2)
monai/deploy/operators/monet_bundle_inference_operator.py (2)

90-95: Non-MetaTensor kwargs (e.g. from base class) are silently dropped from multimodal data.

The base class compute passes **other_inputs to predict, which may include non-tensor entries. The if len(kwargs) > 0 guard enters the multimodal path for any kwargs, but only MetaTensor values are added to multimodal_data. Non-MetaTensor kwargs are silently ignored. Consider filtering kwargs more explicitly — e.g. only enter multimodal path if there are actually MetaTensor values:

Proposed fix
-        if len(kwargs) > 0:
-            multimodal_data = {"image": data}
-            for key in kwargs.keys():
-                if isinstance(kwargs[key], MetaTensor):
-                    multimodal_data[key] = ResampleToMatch(mode="bilinear")(kwargs[key], img_dst=data)
-            data = ConcatItemsd(keys=list(multimodal_data.keys()), name="image")(multimodal_data)["image"]
+        meta_tensor_kwargs = {k: v for k, v in kwargs.items() if isinstance(v, MetaTensor)}
+        if meta_tensor_kwargs:
+            multimodal_data = {"image": data}
+            for key, value in meta_tensor_kwargs.items():
+                multimodal_data[key] = ResampleToMatch(mode="bilinear")(value, img_dst=data)
+            data = ConcatItemsd(keys=list(multimodal_data.keys()), name="image")(multimodal_data)["image"]

17-17: Hard import of monai.transforms breaks the optional_import pattern used elsewhere.

The base operator and this file use optional_import for torch and MetaTensor, but ConcatItemsd and ResampleToMatch are imported directly. If monai is not installed (or partially installed), this will raise ImportError at module load time rather than deferring it to usage.

Proposed fix
-from monai.transforms import ConcatItemsd, ResampleToMatch
+ConcatItemsd, _ = optional_import("monai.transforms", name="ConcatItemsd")
+ResampleToMatch, _ = optional_import("monai.transforms", name="ResampleToMatch")
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@monai/deploy/operators/monet_bundle_inference_operator.py`:
- Line 1: Update the file header in monet_bundle_inference_operator.py to
correct the copyright year: replace the incorrect "2002" with "2025" in the
top-of-file comment so the copyright line accurately reflects the current year.
- Around line 26-46: Remove the duplicated sentence in the module/class
docstring that repeats "A specialized operator for performing inference using
the MONet bundle"; edit the docstring (above MonetBundleInferenceOperator / the
class definition containing _init_config and predict) to keep a single, coherent
opening sentence, preserve the rest of the docstring content and formatting
(attributes/methods sections), and ensure the triple-quoted string remains
properly closed and PEP257-style spacing is preserved.
- Around line 58-64: The _init_config implementation is re-parsing the bundle
and overwriting self._parser after calling super()._init_config, which causes
double I/O and a mismatch with objects the parent initialized (e.g.,
self._device, self._inferer, self._preproc, self._postproc); remove the extra
get_bundle_config call and instead reuse the parser the parent already created
(use self._parser) to obtain network_def via
self._parser.get_parsed_content("network_def") and assign that to
self._nnunet_predictor without reassigning self._parser.
- Around line 75-81: The runtime type-check block for model_network is using
torch.jit.isinstance (meant for TorchScript refinement) which is incorrect for
eager Python; replace torch.jit.isinstance(model_network,
torch.jit.ScriptModule) with the standard isinstance(model_network,
torch.jit.ScriptModule) in the validation that checks model_network in the
MonetBundleInferenceOperator (the block referencing model_network,
torch.jit.ScriptModule, TorchScriptModel, TritonModel) so the condition uses
only Python isinstance checks and the TypeError remains unchanged.


Project-MONAI deleted a comment from coderabbitai Bot, Mar 19, 2026

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (2)
monai/deploy/operators/monet_bundle_inference_operator.py (2)

57-63: ⚠️ Potential issue | 🟠 Major

Avoid reparsing and overwriting self._parser after super()._init_config.

Line 60 and Line 61 reinitialize parser state already built by the base class. This duplicates parsing work and can desync parser-dependent fields initialized in MonaiBundleInferenceOperator._init_config.

Proposed fix
     def _init_config(self, config_names):
 
         super()._init_config(config_names)
-        parser = get_bundle_config(str(self._bundle_path), config_names)
-        self._parser = parser
-
-        self._nnunet_predictor = parser.get_parsed_content("network_def")
+        self._nnunet_predictor = self._parser.get_parsed_content("network_def")

74-80: ⚠️ Potential issue | 🟠 Major

Use isinstance for eager runtime checks and align the error message with accepted types.

Line 76 uses torch.jit.isinstance, which is intended for TorchScript type refinement, not regular Python runtime validation. Also, Line 80’s message omits accepted TorchScriptModel and TritonModel.

Proposed fix
         if (
             not isinstance(model_network, torch.nn.Module)
-            and not torch.jit.isinstance(model_network, torch.jit.ScriptModule)
+            and not isinstance(model_network, torch.jit.ScriptModule)
             and not isinstance(model_network, TorchScriptModel)
             and not isinstance(model_network, TritonModel)
         ):
-            raise TypeError("model_network must be an instance of torch.nn.Module or torch.jit.ScriptModule")
+            raise TypeError(
+                "model_network must be an instance of torch.nn.Module, "
+                "torch.jit.ScriptModule, TorchScriptModel, or TritonModel"
+            )
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@monai/deploy/operators/monet_bundle_inference_operator.py`:
- Around line 89-94: The current loop in the inference operator only adds kwargs
entries when isinstance(..., MetaTensor), silently dropping others; update the
handling in the method (where multimodal_data, ResampleToMatch, and ConcatItemsd
are used) to validate kwargs: iterate items in kwargs and for each key either
resample and add it to multimodal_data if it's a MetaTensor, or raise a clear
TypeError/ValueError that includes the offending key name and its actual type so
callers know they passed an unsupported modality type (do not silently ignore
non-MetaTensor values).
- Line 98: The assignment prediction.meta = data.meta can raise if either
prediction or data lack a .meta attribute; update the
MonetBundleInferenceOperator where this line occurs to guard the propagation by
checking attributes (e.g., using hasattr(prediction, "meta") and hasattr(data,
"meta") or isinstance checks) and only copy data.meta when both objects expose
.meta, otherwise skip or attach a safe metadata container; ensure you reference
the variables prediction and data in the conditional so behavior remains
unchanged for tensor-like outputs.
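The guarded metadata copy suggested above can be sketched as follows (WithMeta is a hypothetical stand-in for a MetaTensor-like object; the helper name is an assumption, not the operator's actual code):

```python
class WithMeta:
    """Hypothetical stand-in for a MetaTensor-like object exposing .meta."""
    def __init__(self, meta=None):
        self.meta = meta if meta is not None else {}

def propagate_meta(prediction, data):
    """Copy data.meta onto prediction only when both objects expose .meta,
    so outputs without metadata no longer raise AttributeError."""
    if hasattr(prediction, "meta") and hasattr(data, "meta"):
        prediction.meta = data.meta
    return prediction

source = WithMeta({"affine": "identity"})
pred = propagate_meta(WithMeta(), source)   # metadata is copied
plain = propagate_meta(object(), source)    # no .meta on prediction: safe no-op
```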


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 32044423-f81a-4a86-a66f-992e420065a8

📥 Commits

Reviewing files that changed from the base of the PR and between 0743b3e and 67d24d6.

📒 Files selected for processing (1)
  • monai/deploy/operators/monet_bundle_inference_operator.py


chezhia commented Mar 19, 2026

@SimoneBendazzoli93 It looks like the DCO (Developer Certificate of Origin) check is failing. To fix this, please ensure all your commits are signed off.

You can do this by amending your previous commits using:
git commit --amend --signoff

Or, if you have multiple commits, you can perform an interactive rebase:
git rebase -i main --signoff

Then, force-push the changes to the branch. This is required for the PR to be merged.

@chezhia chezhia left a comment


A couple of items identified by Copilot need review from the author. Accepted a few minor suggestions.

SimoneBendazzoli93 and others added 16 commits April 17, 2026 12:29
- Included MONetBundleInferenceOperator in the __init__.py file for operator registration.
- Updated import statements to reflect the addition of the new operator.

Signed-off-by: Simone Bendazzoli <simben@kth.se>
- Corrected the bundle suffixes tuple to include a period before 'yml'.
- Fixed a method call to ensure casefold() is invoked correctly.
- Initialized meta_data to an empty dictionary if not provided.

These changes enhance code clarity and prevent potential runtime errors.

Signed-off-by: Simone Bendazzoli <simben@kth.se>
- Introduced a new operator, MONetBundleInferenceOperator, for performing inference using the MONet bundle.
- Extended functionality from MonaiBundleInferenceOperator to support nnUNet-specific configurations.
- Implemented methods for initializing configurations and performing predictions with multimodal data handling.

This addition enhances the inference capabilities within the MONAI framework.

Signed-off-by: Simone Bendazzoli <simben@kth.se>
- Introduced a new file containing the implementation of the MONetBundleInferenceOperator.
- This operator extends the MonaiBundleInferenceOperator to facilitate inference with nnUNet-specific configurations.
- Implemented methods for configuration initialization and multimodal data prediction, enhancing the MONAI framework's inference capabilities.

Signed-off-by: Simone Bendazzoli <simben@kth.se>
- Registered MONetBundleInferenceOperator in the __init__.py file to ensure it is included in the module's public API.
- This change facilitates easier access to the operator for users of the MONAI framework.

Signed-off-by: Simone Bendazzoli <simben@kth.se>
… tested alone (Project-MONAI#573)

* Added saving of decoded pixels for in-depth review if needed

Signed-off-by: M Q <mingmelvinq@nvidia.com>

* Fixed linting complaints

Signed-off-by: M Q <mingmelvinq@nvidia.com>

* Fixed the code and improve the tests with failed tests to be addressed.

Signed-off-by: M Q <mingmelvinq@nvidia.com>

* Force YBR for JPEG baseline, and test nvimgcodec without any default decoders

Signed-off-by: M Q <mingmelvinq@nvidia.com>

* Critical changes to make uncompressed images match pydicom default decoders.

Signed-off-by: M Q <mingmelvinq@nvidia.com>

* Removed support for 12bit "JPEG Extended, Process 2+4"

Signed-off-by: M Q <mingmelvinq@nvidia.com>

* Address review comments including from AI agent

Signed-off-by: M Q <mingmelvinq@nvidia.com>

* Added reason for ignoring dcm files known to fail to uncompress

Signed-off-by: M Q <mingmelvinq@nvidia.com>

* Updated the notes on perf test results

Signed-off-by: M Q <mingmelvinq@nvidia.com>

* Explicitly minimized lazy loading impact and added comments on it.

Signed-off-by: M Q <mingmelvinq@nvidia.com>

* Updated doc sentences

Signed-off-by: M Q <mingmelvinq@nvidia.com>

* Editorial changes made to comments

Signed-off-by: M Q <mingmelvinq@nvidia.com>

---------

Signed-off-by: M Q <mingmelvinq@nvidia.com>
Signed-off-by: Simone Bendazzoli <simben@kth.se>
* Release v3.5.0

Signed-off-by: M Q <mingmelvinq@nvidia.com>

* Bump version: 3.4.0 → 3.5.0

Signed-off-by: M Q <mingmelvinq@nvidia.com>

---------

Signed-off-by: M Q <mingmelvinq@nvidia.com>
Signed-off-by: Simone Bendazzoli <simben@kth.se>
…mplementation of the MONetBundleInferenceOperator. This deletion simplifies the codebase by eliminating unused or redundant components.

Signed-off-by: Simone Bendazzoli <simben@kth.se>
- Enhanced the docstring for MONetBundleInferenceOperator to include a reference to the MONet bundle repository and provide additional context on its functionality.
- This update improves clarity for users regarding the operator's purpose and usage.

Signed-off-by: Simone Bendazzoli <simben@kth.se>
- Improved the type checking for the model_network parameter to enhance readability and maintainability.
- Adjusted formatting in the predict method for better clarity and consistency in multimodal data handling.
- These changes contribute to cleaner code and improved functionality within the MONAI framework.

Signed-off-by: Simone Bendazzoli <simben@kth.se>
- Integrated TritonModel type checking into the MONetBundleInferenceOperator to enhance model compatibility.
- Updated the predict method to retain metadata from input data, improving the output structure for predictions.

These changes improve the operator's functionality and usability within the MONAI framework.

Signed-off-by: Simone Bendazzoli <simben@kth.se>
Minor typos

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Signed-off-by: Elanchezhian <chezhipower@gmail.com>
Signed-off-by: Simone Bendazzoli <simben@kth.se>
Applying minor patch to docs

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Signed-off-by: Elanchezhian <chezhipower@gmail.com>
Signed-off-by: Simone Bendazzoli <simben@kth.se>
minor change for stability

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Signed-off-by: Elanchezhian <chezhipower@gmail.com>
Signed-off-by: Simone Bendazzoli <simben@kth.se>
protection for meta attribute - added safety

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Signed-off-by: Elanchezhian <chezhipower@gmail.com>
Signed-off-by: Simone Bendazzoli <simben@kth.se>
- Updated the _init_config method to directly use the parser instance for retrieving the network definition, improving code clarity and reducing redundancy.
- This change enhances the maintainability of the MONetBundleInferenceOperator within the MONAI framework.

Signed-off-by: Simone Bendazzoli <simben@kth.se>

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (4)
monai/deploy/operators/monet_bundle_inference_operator.py (3)

55-55: Type annotation contradicts the initial value.

self._nnunet_predictor: torch.nn.Module = None annotates as torch.nn.Module but assigns None. Use Optional[torch.nn.Module] (or drop the annotation) for accuracy and to avoid tripping type checkers.

-        self._nnunet_predictor: torch.nn.Module = None
+        self._nnunet_predictor: Optional[torch.nn.Module] = None

(Also requires Optional in the typing import on Line 12.)


15-19: Import ordering nit.

from monai.transforms import ... (third-party) is placed after monai.deploy.* (first-party) imports, and TorchScriptModel/TritonModel imports are interleaved with it. Regrouping into standard/third-party/first-party blocks would be cleaner, though this is cosmetic.


86-94: Batch-dim handling assumes a fixed rank.

Line 92 unconditionally adds a batch dim when data.ndim == 4. After ConcatItemsd on the multimodal path, the result may already be (C, H, W, D) (4D) or channel-first 3D depending on inputs, and directly-passed data may already include batch. For 5D inputs this is a no-op (good), but for a 3D input this silently skips the batch add. Consider normalizing to the nnU-Net expected rank explicitly (e.g., ensure (B, C, H, W, D)) rather than keying only on ndim == 4.

Also: on the multimodal branch (Lines 86-91), if data is not a MetaTensor, ConcatItemsd still runs over {"image": data, ...}; verify ConcatItemsd behaves correctly with a plain tensor as one of the items.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/deploy/operators/monet_bundle_inference_operator.py` around lines 86 -
94, The code assumes a fixed rank (checks only ndim == 4) and may produce wrong
shapes after multimodal ConcatItemsd; normalize inputs to nnU-Net's expected
(B,C,H,W,D) rank explicitly: after building multimodal_data (using
ResampleToMatch and MetaTensor checks) ensure each value is a MetaTensor or
torch.Tensor with a channel dim, convert plain tensors to MetaTensor or add a
channel dimension where missing, then after ConcatItemsd inspect data.ndim and
repeatedly unsqueeze a leading batch dimension until data.ndim == 5 (or use a
helper like ensure_batch_dim to guarantee shape (B,C,H,W,D)); also add a guard
to coerce non-MetaTensor multimodal entries before calling ConcatItemsd or
confirm ConcatItemsd supports plain torch.Tensor and wrap otherwise, and finally
pass the normalized 5D tensor to self._nnunet_predictor.
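The rank normalization suggested in the prompt above can be sketched generically; this is a minimal illustration using NumPy arrays as stand-ins for MetaTensors, and `ensure_batch_dim` is the hypothetical helper name from the review, not an existing function in the codebase:

```python
import numpy as np

def ensure_batch_dim(data, target_ndim=5):
    """Unsqueeze leading dims until data has rank 5, i.e. (B, C, H, W, D).

    Keys on the target rank rather than a single ndim == 4 check, so both a
    channel-first volume (C, H, W, D) and a bare 3D volume (H, W, D) get
    normalized, while an already-batched 5D input passes through unchanged.
    """
    if data.ndim > target_ndim:
        raise ValueError(f"expected at most {target_ndim} dims, got {data.ndim}")
    while data.ndim < target_ndim:
        data = np.expand_dims(data, axis=0)  # add a leading (batch/channel) dim
    return data
```

With this, a `(C, H, W, D)` volume becomes `(1, C, H, W, D)` and a `(H, W, D)` volume becomes `(1, 1, H, W, D)`, whereas the original `ndim == 4` check would silently skip the 3D case.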
monai/deploy/operators/monai_bundle_inference_operator.py (1)

746-751: Defaulting meta_data to {} is reasonable, but note downstream effect.

meta_data or {} also replaces a falsy-but-valid object (e.g., an empty dict-subclass that evaluates false, or 0/[] if upstream misbehaves) with a fresh {}. That's fine in practice here since the very next check enforces isinstance(meta_data, dict), but a stricter meta_data if meta_data is not None else {} would be marginally safer.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/deploy/operators/monai_bundle_inference_operator.py` around lines 746 -
751, The code currently replaces any falsy meta_data with {} using "meta_data or
{}", which can incorrectly replace legitimate falsy-but-valid objects; update
the assignment to only default when meta_data is None (e.g., set meta_data =
meta_data if meta_data is not None else {}) before the isinstance check and
before calling MetaTensor.ensure_torch_and_prune_meta; locate the meta_data
handling around the _receive_input, convert_to_dst_type, and
MetaTensor.ensure_torch_and_prune_meta calls to make this change.
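The difference between the two defaulting idioms is easy to demonstrate; `FalsyDict` below is a contrived dict subclass for illustration, not anything from the codebase:

```python
class FalsyDict(dict):
    """A dict subclass that evaluates as falsy even when it holds data."""
    def __bool__(self):
        return False

meta = FalsyDict(affine="identity")

# `meta_data or {}` discards the object because it is falsy ...
replaced = meta or {}
assert replaced == {} and replaced is not meta

# ... while an explicit None check preserves it.
kept = meta if meta is not None else {}
assert kept is meta
```

In practice the subsequent `isinstance(meta_data, dict)` check masks the difference, which is why the comment calls the stricter form only "marginally safer".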
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@monai/deploy/operators/monet_bundle_inference_operator.py`:
- Around line 62-78: In _set_model_network: ensure you unwrap accepted wrapper
types before assigning to self._nnunet_predictor.predictor.network—if
model_network is a TorchScriptModel extract the underlying nn.Module (e.g.,
model_network.predictor) and assign that; if model_network is a TritonModel,
explicitly reject it (raise TypeError) or add alternate handling (do not assign
a non-nn.Module) because TritonRemoteModel cannot be used as an nn.Module;
finally update the TypeError text to list all accepted inputs (torch.nn.Module,
torch.jit.ScriptModule, TorchScriptModel) so the message matches the isinstance
checks.
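The unwrap-then-reject dispatch described in that prompt can be sketched as follows; the stand-in classes are hypothetical placeholders for `torch.nn.Module`, `TorchScriptModel` (which holds its scripted module in `.predictor`, per the review text), and `TritonModel`:

```python
class NnModule:
    """Stand-in for torch.nn.Module / torch.jit.ScriptModule."""

class TorchScriptModel:
    """Stand-in wrapper exposing the scripted module via .predictor."""
    def __init__(self, scripted):
        self.predictor = scripted

class TritonModel:
    """Stand-in for the remote-inference wrapper; not an nn.Module."""

def unwrap_model_network(model_network):
    """Return an object usable as predictor.network, or raise TypeError."""
    if isinstance(model_network, TritonModel):
        # A Triton remote model cannot serve as an nn.Module network.
        raise TypeError("TritonModel cannot be used as the predictor network")
    if isinstance(model_network, TorchScriptModel):
        return model_network.predictor  # unwrap the underlying module
    if isinstance(model_network, NnModule):
        return model_network
    raise TypeError(
        "model_network must be torch.nn.Module, torch.jit.ScriptModule, "
        "or TorchScriptModel"
    )
```

The error message now enumerates exactly the types the isinstance checks accept, matching the last point in the prompt.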

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: ad451207-3fcc-4f42-a3ba-9ecd8fdbd502

📥 Commits

Reviewing files that changed from the base of the PR and between 8d074e5 and f3c62e7.

📒 Files selected for processing (3)
  • monai/deploy/operators/__init__.py
  • monai/deploy/operators/monai_bundle_inference_operator.py
  • monai/deploy/operators/monet_bundle_inference_operator.py
✅ Files skipped from review due to trivial changes (1)
  • monai/deploy/operators/__init__.py

Comment thread monai/deploy/operators/monet_bundle_inference_operator.py
@sonarqubecloud

@chezhia chezhia merged commit 37d319d into Project-MONAI:main Apr 30, 2026
5 of 6 checks passed
