
Added instance-wise segmentation metrics #998


Draft · wants to merge 16 commits into `master`
2 changes: 1 addition & 1 deletion .github/workflows/black.yml
@@ -7,7 +7,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [3.9]
python-version: ["3.11"]
steps:
- uses: actions/checkout@v4

2 changes: 1 addition & 1 deletion .github/workflows/python-install-check.yml
@@ -7,7 +7,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.9", "3.10", "3.11"]
python-version: ["3.10", "3.11", "3.12"]
steps:
- uses: actions/checkout@v4

1 change: 1 addition & 0 deletions .gitignore
@@ -4,6 +4,7 @@ __pycache__
*.egg-info*
*/__pycache__/*
.vscode
.vscode/*
*.py.*
*.pkl
*.swp
6 changes: 5 additions & 1 deletion .spelling/.spelling/expect.txt
@@ -730,4 +730,8 @@ autograd
cudagraph
kwonly
torchscript
hann
hann
ASSD
listmetric
panoptica
RVAE
14 changes: 7 additions & 7 deletions Dockerfile-CPU
@@ -6,20 +6,20 @@ LABEL version=1.0
# Install fresh Python and dependencies for build-from-source
RUN apt-get update && apt-get install -y software-properties-common
RUN add-apt-repository ppa:deadsnakes/ppa
RUN apt-get update && apt-get install -y python3.9 python3-pip libjpeg8-dev zlib1g-dev python3-dev libpython3.9-dev libffi-dev libgl1
RUN python3.9 -m pip install --upgrade pip==24.0
RUN apt-get update && apt-get install -y python3.11 libpython3.11-dev python3-pip libjpeg8-dev zlib1g-dev python3-dev libffi-dev libgl1
RUN python3.11 -m pip install --upgrade pip==24.0
# EXPLICITLY install cpu versions of torch/torchvision (not all versions have +cpu modes on PyPI...)
RUN python3.9 -m pip install torch==2.5.0 torchvision==0.20.0 torchaudio==2.5.0 --index-url https://download.pytorch.org/whl/cpu
RUN python3.9 -m pip install openvino-dev==2023.0.1 opencv-python-headless mlcube_docker
RUN python3.11 -m pip install torch==2.5.0 torchvision==0.20.0 torchaudio==2.5.0 --index-url https://download.pytorch.org/whl/cpu
RUN python3.11 -m pip install openvino-dev==2023.0.1 opencv-python-headless mlcube_docker

# Do some dependency installation separately here to make layer caching more efficient
COPY ./setup.py ./setup.py
RUN python3.9 -c "from setup import requirements; file = open('requirements.txt', 'w'); file.writelines([req + '\n' for req in requirements]); file.close()" \
&& python3.9 -m pip install -r ./requirements.txt
RUN python3.11 -c "from setup import requirements; file = open('requirements.txt', 'w'); file.writelines([req + '\n' for req in requirements]); file.close()" \
&& python3.11 -m pip install -r ./requirements.txt

COPY . /GaNDLF
WORKDIR /GaNDLF
RUN python3.9 -m pip install -e .
RUN python3.11 -m pip install -e .
# Entrypoint forces all commands given via "docker run" to go through python, CMD forces the default entrypoint script argument to be gandlf run
# If a user calls "docker run gandlf:[tag] anonymize", it will resolve to running "gandlf anonymize" instead.
# CMD is inherently overridden by args to "docker run", entrypoint is constant.
16 changes: 8 additions & 8 deletions Dockerfile-CUDA11.8
@@ -7,22 +7,22 @@ LABEL version=1.0
# Note that to do this on a Windows host you need experimental feature "CUDA on WSL" -- not yet stable.
ENV DEBIAN_FRONTEND=noninteractive

# Explicitly install python3.9 (this uses 11.1 for now, as PyTorch LTS 1.8.2 is built against it)
# Explicitly install python3.11
RUN apt-get update && apt-get install -y software-properties-common
RUN add-apt-repository ppa:deadsnakes/ppa
RUN apt-get update && apt-get install -y python3.9 python3-pip libjpeg8-dev zlib1g-dev python3-dev libpython3.9-dev libffi-dev libgl1
RUN python3.9 -m pip install --upgrade pip==24.0
RUN python3.9 -m pip install torch==2.5.0 torchvision==0.20.0 torchaudio==2.5.0 --index-url https://download.pytorch.org/whl/cu118
RUN python3.9 -m pip install openvino-dev==2023.0.1 opencv-python-headless mlcube_docker
RUN apt-get update && apt-get install -y python3.11 libpython3.11-dev python3-pip libjpeg8-dev zlib1g-dev python3-dev libffi-dev libgl1
RUN python3.11 -m pip install --upgrade pip==24.0
RUN python3.11 -m pip install torch==2.5.0 torchvision==0.20.0 torchaudio==2.5.0 --index-url https://download.pytorch.org/whl/cu118
RUN python3.11 -m pip install openvino-dev==2023.0.1 opencv-python-headless mlcube_docker

# Do some dependency installation separately here to make layer caching more efficient
COPY ./setup.py ./setup.py
RUN python3.9 -c "from setup import requirements; file = open('requirements.txt', 'w'); file.writelines([req + '\n' for req in requirements]); file.close()" \
&& python3.9 -m pip install -r ./requirements.txt
RUN python3.11 -c "from setup import requirements; file = open('requirements.txt', 'w'); file.writelines([req + '\n' for req in requirements]); file.close()" \
&& python3.11 -m pip install -r ./requirements.txt

COPY . /GaNDLF
WORKDIR /GaNDLF
RUN python3.9 -m pip install -e .
RUN python3.11 -m pip install -e .

# Entrypoint forces all commands given via "docker run" to go through python, CMD forces the default entrypoint script argument to be gandlf run
# If a user calls "docker run gandlf:[tag] anonymize", it will resolve to running "gandlf anonymize" instead.
16 changes: 8 additions & 8 deletions Dockerfile-CUDA12.1
@@ -7,22 +7,22 @@ LABEL version=1.0
# Note that to do this on a Windows host you need experimental feature "CUDA on WSL" -- not yet stable.
ENV DEBIAN_FRONTEND=noninteractive

# Explicitly install python3.9 (this uses 11.1 for now, as PyTorch LTS 1.8.2 is built against it)
# Explicitly install python3.11
RUN apt-get update && apt-get install -y software-properties-common
RUN add-apt-repository ppa:deadsnakes/ppa
RUN apt-get update && apt-get install -y python3.9 python3-pip libjpeg8-dev zlib1g-dev python3-dev libpython3.9-dev libffi-dev libgl1
RUN python3.9 -m pip install --upgrade pip==24.0
RUN python3.9 -m pip install torch==2.5.0 torchvision==0.20.0 torchaudio==2.5.0 --index-url https://download.pytorch.org/whl/cu121
RUN python3.9 -m pip install openvino-dev==2023.0.1 opencv-python-headless mlcube_docker
RUN apt-get update && apt-get install -y python3.11 libpython3.11-dev python3-pip libjpeg8-dev zlib1g-dev python3-dev libffi-dev libgl1
RUN python3.11 -m pip install --upgrade pip==24.0
RUN python3.11 -m pip install torch==2.5.0 torchvision==0.20.0 torchaudio==2.5.0 --index-url https://download.pytorch.org/whl/cu121
RUN python3.11 -m pip install openvino-dev==2023.0.1 opencv-python-headless mlcube_docker

# Do some dependency installation separately here to make layer caching more efficient
COPY ./setup.py ./setup.py
RUN python3.9 -c "from setup import requirements; file = open('requirements.txt', 'w'); file.writelines([req + '\n' for req in requirements]); file.close()" \
&& python3.9 -m pip install -r ./requirements.txt
RUN python3.11 -c "from setup import requirements; file = open('requirements.txt', 'w'); file.writelines([req + '\n' for req in requirements]); file.close()" \
&& python3.11 -m pip install -r ./requirements.txt

COPY . /GaNDLF
WORKDIR /GaNDLF
RUN python3.9 -m pip install -e .
RUN python3.11 -m pip install -e .

# Entrypoint forces all commands given via "docker run" to go through python, CMD forces the default entrypoint script argument to be gandlf run
# If a user calls "docker run gandlf:[tag] anonymize", it will resolve to running "gandlf anonymize" instead.
21 changes: 21 additions & 0 deletions GANDLF/cli/generate_metrics.py
@@ -20,6 +20,7 @@
mean_squared_log_error,
mean_absolute_error,
ncc_metrics,
generate_instance_segmentation,
)
from GANDLF.losses.segmentation import dice
from GANDLF.metrics.segmentation import (
@@ -259,6 +260,26 @@ def generate_metrics_dict(
"volumeSimilarity_" + str(class_index)
] = label_overlap_filter.GetVolumeSimilarity()

elif problem_type == "segmentation_brats":
for _, row in tqdm(input_df.iterrows(), total=input_df.shape[0]):
current_subject_id = row["SubjectID"]
overall_stats_dict[current_subject_id] = {}
label_image = torchio.LabelMap(row["Target"])
pred_image = torchio.LabelMap(row["Prediction"])
label_tensor = label_image.data
pred_tensor = pred_image.data
spacing = label_image.spacing
if label_tensor.data.shape[-1] == 1:
spacing = spacing[0:2]
# add dimension for batch
parameters["subject_spacing"] = torch.Tensor(spacing).unsqueeze(0)
label_array = label_tensor.unsqueeze(0).numpy()
pred_array = pred_tensor.unsqueeze(0).numpy()

overall_stats_dict[current_subject_id] = generate_instance_segmentation(
prediction=pred_array, target=label_array
)

elif problem_type == "synthesis":

def __fix_2d_tensor(input_tensor):
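For orientation, the new branch reduces to the following per-subject flow; a minimal sketch, where `target.nii.gz` and `prediction.nii.gz` are placeholder paths to a pair of co-registered label maps:

```python
# Minimal sketch of the per-subject flow in the `segmentation_brats` branch.
# The file paths are placeholders; any pair of co-registered label maps works.
import torchio

from GANDLF.metrics import generate_instance_segmentation

label_image = torchio.LabelMap("target.nii.gz")
pred_image = torchio.LabelMap("prediction.nii.gz")

# torchio yields (C, H, W, D) tensors; prepend a batch dimension, as the branch does
label_array = label_image.data.unsqueeze(0).numpy()
pred_array = pred_image.data.unsqueeze(0).numpy()

results = generate_instance_segmentation(prediction=pred_array, target=label_array)
print(results.keys())  # one entry per label group defined in the Panoptica config
```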
1 change: 1 addition & 0 deletions GANDLF/metrics/__init__.py
@@ -40,6 +40,7 @@
)
import GANDLF.metrics.classification as classification
import GANDLF.metrics.regression as regression
from .segmentation_panoptica import generate_instance_segmentation


# global defines for the metrics
50 changes: 50 additions & 0 deletions GANDLF/metrics/panoptica_config_brats.yaml
@@ -0,0 +1,50 @@
!Panoptica_Evaluator
decision_metric: null
decision_threshold: null
edge_case_handler: !EdgeCaseHandler
  empty_list_std: !EdgeCaseResult NAN
  listmetric_zeroTP_handling:
    !Metric DSC: !MetricZeroTPEdgeCaseHandling {empty_prediction_result: !EdgeCaseResult ZERO,
      empty_reference_result: !EdgeCaseResult ZERO, no_instances_result: !EdgeCaseResult NAN,
      normal: !EdgeCaseResult ZERO}
    !Metric clDSC: !MetricZeroTPEdgeCaseHandling {empty_prediction_result: !EdgeCaseResult ZERO,
      empty_reference_result: !EdgeCaseResult ZERO, no_instances_result: !EdgeCaseResult NAN,
      normal: !EdgeCaseResult ZERO}
    !Metric IOU: !MetricZeroTPEdgeCaseHandling {empty_prediction_result: !EdgeCaseResult ZERO,
      empty_reference_result: !EdgeCaseResult ZERO, no_instances_result: !EdgeCaseResult NAN,
      normal: !EdgeCaseResult ZERO}
    !Metric ASSD: !MetricZeroTPEdgeCaseHandling {empty_prediction_result: !EdgeCaseResult INF,
      empty_reference_result: !EdgeCaseResult INF, no_instances_result: !EdgeCaseResult NAN,
      normal: !EdgeCaseResult INF}
    !Metric RVD: !MetricZeroTPEdgeCaseHandling {empty_prediction_result: !EdgeCaseResult NAN,
      empty_reference_result: !EdgeCaseResult NAN, no_instances_result: !EdgeCaseResult NAN,
      normal: !EdgeCaseResult NAN}
    !Metric RVAE: !MetricZeroTPEdgeCaseHandling {empty_prediction_result: !EdgeCaseResult NAN,
      empty_reference_result: !EdgeCaseResult NAN, no_instances_result: !EdgeCaseResult NAN,
      normal: !EdgeCaseResult NAN}
expected_input: !InputType SEMANTIC
global_metrics: [!Metric DSC]
instance_approximator: !ConnectedComponentsInstanceApproximator {cca_backend: null}
instance_matcher: !NaiveThresholdMatching {allow_many_to_one: false, matching_metric: !Metric IOU,
  matching_threshold: 0.5}
instance_metrics: [!Metric DSC, !Metric IOU, !Metric ASSD, !Metric RVD]
log_times: false
save_group_times: false
segmentation_class_groups: !SegmentationClassGroups
  groups:
    ed: !LabelGroup
      single_instance: false
      value_labels: [2]
    et: !LabelGroup
      single_instance: false
      value_labels: [3]
    net: !LabelGroup
      single_instance: false
      value_labels: [1]
    tc: !LabelMergeGroup
      single_instance: false
      value_labels: [1, 3]
    wt: !LabelMergeGroup
      single_instance: false
      value_labels: [1, 2, 3]
verbose: false
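The group definitions follow the label convention used by this config (1 = non-enhancing tumor core, 2 = edema, 3 = enhancing tumor); a `!LabelMergeGroup` collapses the listed labels into one binary region before instances are extracted and matched, roughly as sketched below with a toy array:

```python
# Rough illustration of what a LabelMergeGroup implies: the listed labels are
# merged into a single binary mask before instance extraction and matching.
import numpy as np

label_array = np.array([0, 1, 2, 3, 1, 0])  # toy BraTS-style label values
tc_mask = np.isin(label_array, [1, 3])      # tumor core: value_labels [1, 3]
wt_mask = np.isin(label_array, [1, 2, 3])   # whole tumor: value_labels [1, 2, 3]
```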
43 changes: 43 additions & 0 deletions GANDLF/metrics/segmentation_panoptica.py
@@ -0,0 +1,43 @@
from pathlib import Path
from typing import Optional

import numpy as np

from panoptica import Panoptica_Evaluator


def generate_instance_segmentation(
    prediction: np.ndarray,
    target: np.ndarray,
    panoptica_config_path: Optional[str] = None,
) -> dict:
    """
    Evaluate a single exam using Panoptica.

    Args:
        prediction (np.ndarray): The input prediction containing objects.
        target (np.ndarray): The reference (ground truth) label array.
        panoptica_config_path (str): The path to the Panoptica configuration file.

    Returns:
        dict: The evaluation results.
    """

    cwd = Path(__file__).parent.absolute()
    panoptica_config_path = (
        cwd / "panoptica_config_brats.yaml"
        if panoptica_config_path is None
        else panoptica_config_path
    )
    evaluator = Panoptica_Evaluator.load_from_config(panoptica_config_path)

    # call evaluate
    group2result = evaluator.evaluate(prediction_arr=prediction, reference_arr=target)

    results = {k: r.to_dict() for k, r in group2result.items()}
    return results
29 changes: 22 additions & 7 deletions docs/usage.md
@@ -280,14 +280,29 @@ SubjectID,Target,Prediction
...
```

To generate image to image metrics for synthesis tasks (including for the BraTS synthesis tasks [[1](https://www.synapse.org/#!Synapse:syn51156910/wiki/622356), [2](https://www.synapse.org/#!Synapse:syn51156910/wiki/622357)]), ensure that the config has `problem_type: synthesis`, and the CSV can be in the same format as segmentation (note that the `Mask` column is optional):
### Special cases

```csv
SubjectID,Target,Prediction,Mask
001,/path/to/001/target_image.nii.gz,/path/to/001/prediction_image.nii.gz,/path/to/001/brain_mask.nii.gz
002,/path/to/002/target_image.nii.gz,/path/to/002/prediction_image.nii.gz,/path/to/002/brain_mask.nii.gz
...
```
1. BraTS Segmentation Metrics

To generate annotation-to-annotation metrics for BraTS segmentation tasks [[ref](https://www.synapse.org/brats)], ensure that the config has `problem_type: segmentation_brats`; the CSV uses the same format as for standard segmentation (see the invocation sketch after this list):

```csv
SubjectID,Target,Prediction
001,/path/to/001/target_image.nii.gz,/path/to/001/prediction_image.nii.gz
002,/path/to/002/target_image.nii.gz,/path/to/002/prediction_image.nii.gz
...
```

2. BraTS Synthesis Metrics

To generate image-to-image metrics for synthesis tasks (including the BraTS synthesis tasks [[1](https://www.synapse.org/#!Synapse:syn51156910/wiki/622356), [2](https://www.synapse.org/#!Synapse:syn51156910/wiki/622357)]), ensure that the config has `problem_type: synthesis`; the CSV uses the same format as for segmentation (note that the `Mask` column is optional):

```csv
SubjectID,Target,Prediction,Mask
001,/path/to/001/target_image.nii.gz,/path/to/001/prediction_image.nii.gz,/path/to/001/brain_mask.nii.gz
002,/path/to/002/target_image.nii.gz,/path/to/002/prediction_image.nii.gz,/path/to/002/brain_mask.nii.gz
...
```
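In either case, metrics generation can also be driven from Python; a minimal sketch using `generate_metrics_dict` (the entry point exercised by the test suite), with placeholder paths:

```python
# Minimal sketch: compute the metrics for the CSVs above from Python.
# `input.csv`, `config.yaml`, and `output.json` are placeholder paths; the
# config must set the appropriate `problem_type` (e.g. `segmentation_brats`).
from GANDLF.cli.generate_metrics import generate_metrics_dict

generate_metrics_dict("input.csv", "config.yaml", "output.json")
```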


## Parallelize the Training
11 changes: 7 additions & 4 deletions setup.py
@@ -4,8 +4,6 @@


import sys, re, os


from setuptools import setup, find_packages


@@ -33,7 +31,8 @@
]

# Any extra files should be located at `GANDLF` module folder (not in repo root)
extra_files = ["logging_config.yaml"]
extra_files_root = ["logging_config.yaml"]
extra_files_metrics = ["panoptica_config_brats.yaml"]
toplevel_package_excludes = ["testing*"]

# specifying version for `black` separately because it is also used to [check for lint](https://github.com/mlcommons/GaNDLF/blob/master/.github/workflows/black.yml)
@@ -88,6 +87,7 @@
"openslide-python==1.4.1",
"lion-pytorch==0.2.2",
"pydantic==2.10.6",
"panoptica>=1.3.2",
]

if __name__ == "__main__":
@@ -139,7 +139,10 @@
long_description=readme,
long_description_content_type="text/markdown",
include_package_data=True,
package_data={"GANDLF": extra_files},
package_data={
"GANDLF": extra_files_root,
"GANDLF.metrics": extra_files_metrics,
},
keywords="semantic, segmentation, regression, classification, data-augmentation, medical-imaging, clinical-workflows, deep-learning, pytorch",
zip_safe=False,
)
38 changes: 38 additions & 0 deletions testing/test_full.py
@@ -3143,6 +3143,44 @@ def test_generic_cli_function_metrics_cli_rad_nd():

    sanitize_outputDir()

    # this is for the brats segmentation metrics test
    problem_type = "segmentation_brats"
    reference_image_file = os.path.join(
        inputDir, "metrics", "brats", "reference.nii.gz"
    )
    prediction_image_file = os.path.join(
        inputDir, "metrics", "brats", "prediction.nii.gz"
    )
    subject_id = "brats_subject_1"
    # write to a temporary CSV file
    df = pd.DataFrame(
        {
            "SubjectID": [subject_id],
            "Prediction": [prediction_image_file],
            "Target": [reference_image_file],
        }
    )
    temp_infer_csv = os.path.join(outputDir, "temp_csv.csv")
    df.to_csv(temp_infer_csv, index=False)

    # read and initialize parameters for specific data dimension
    parameters = ConfigManager(
        testingDir + "/config_segmentation.yaml", version_check_flag=False
    )
    parameters["modality"] = "rad"
    parameters["patch_size"] = patch_size["3D"]
    parameters["model"]["dimension"] = 3
    parameters["verbose"] = False
    temp_config = write_temp_config_path(parameters)

    output_file = os.path.join(outputDir, "output_single-csv.json")
    generate_metrics_dict(temp_infer_csv, temp_config, output_file)

    assert os.path.isfile(
        output_file
    ), "Metrics output file was not generated for single-csv input"

    sanitize_outputDir()


def test_generic_deploy_metrics_docker():
    print("50: Testing deployment of a metrics generator to Docker")