Add remap labels kwarg to RandomFlip #409

Open
fepegar opened this issue Jan 1, 2021 · 6 comments
@fepegar (Owner) commented Jan 1, 2021

🚀 Feature

Add remap labels kwarg to RandomFlip.

Motivation

When an image is flipped around the sagittal plane, anatomical labels in a label map lose meaning. For example, if a label map represents "left eye" with 2 and "right eye" with 3, the labels will be misleading if the image is flipped.

Pitch

I think it would be nice to add a kwarg that applies some remapping using the new RemapLabels transform only if the flipping happens.

Alternatives

The way I would do this now is to use a Compose with p=flip_probability that contains a RandomFlip with flip_probability=1 and a RemapLabels.
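A minimal sketch of that Compose-based workaround, using the left eye (2) and right eye (3) labels from the example above (the flip probability is just an illustrative value):

import torchio as tio

flip_probability = 0.5
flip_and_remap = tio.Compose(
    (
        tio.RandomFlip(flip_probability=1),  # always flips when the Compose fires
        tio.RemapLabels({2: 3, 3: 2}),       # swap left eye (2) and right eye (3)
    ),
    p=flip_probability,  # either both transforms are applied, or neither
)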


Maybe one more kwarg is needed to specify the name of the label map that would be affected by this. Or the kwarg could be either a dict (applied to all label maps) or a dict of dicts.

What do you think, @efirdc?

@fepegar added the enhancement label Jan 1, 2021

@efirdc (Contributor) commented Jan 1, 2021

It would be good to support this in a convenient way. The method of using a Compose is nice, but it's a bit verbose for a behavior that you almost certainly want in this situation.

If it's included in the RandomFlip, you would have to specify which LabelMap the remapping should be applied to as you mentioned. Also you may have multiple LabelMaps with different left/right labels that need to be swapped. It could be parameterized as a dictionary:

sagittal_remap = {
    "label_map_name_a": [(left_id0, right_id0), (left_id1, right_id1), ...],
    "label_map_name_b": [...],
}

Another direction is to have some official way to associate the integer ids in a label map with their names and optionally a left/right specifier. A method LabelMap.add_id(id, name, side=None) could be used to add associations. The internal representation would be a list of dictionaries: [{"id": 1, "name": "Hippocampus", "side": LEFT}, {"id": 2, "name": "Hippocampus", "side": RIGHT}]. It should be possible to provide that list directly to the constructor as well, instead of using the add_id method.

The advantage is that it would be possible to figure out the left <-> right remapping automatically within the Flip. The behavior could be toggled with a sagittal_label_swap flag which defaults to True. These associations would need to be modified by the other label transformations so they stay up to date. I think it would be nice to have this regardless, so the names can be used for plotting and logging as well.
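None of this exists in TorchIO yet, but as a rough sketch of the proposal (the Side enum, the associations list, and the pairing logic below are all hypothetical):

from enum import Enum

class Side(Enum):
    LEFT = 'left'
    RIGHT = 'right'

# What LabelMap.add_id could build internally: one entry per label id
associations = [
    {'id': 1, 'name': 'Hippocampus', 'side': Side.LEFT},
    {'id': 2, 'name': 'Hippocampus', 'side': Side.RIGHT},
]

def sagittal_remapping(associations):
    """Build the left <-> right remapping that Flip could apply automatically."""
    remapping = {}
    for a in associations:
        for b in associations:
            same_structure = a['name'] == b['name']
            if same_structure and a['side'] is Side.LEFT and b['side'] is Side.RIGHT:
                remapping[a['id']] = b['id']
                remapping[b['id']] = a['id']
    return remapping

print(sagittal_remapping(associations))  # {1: 2, 2: 1}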

Definitely more complicated to go that route, though. I think a sagittal_remap dictionary would be fine too.

@fepegar (Owner, Author) commented Jan 2, 2021

If it's included in the RandomFlip, you would have to specify which LabelMap the remapping should be applied to as you mentioned

Actually, shouldn't there already be a kwarg for this in RemapLabels?

Also you may have multiple LabelMaps with different left/right labels that need to be swapped. It could be parameterized as a dictionary

This is what I meant by the dict of dicts: a dict whose keys are label map names and whose values are remappings. I think it would be more intuitive than a dict with lists of tuples as values.
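For example (the label map names and ids below are made up):

remapping_per_label_map = {
    'label_map_name_a': {1: 2, 2: 1, 3: 4, 4: 3},
    'label_map_name_b': {16: 17, 17: 16},
}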

Another direction is to have some official way to associate the integer ids in a label map with their names and optionally a left/right specifier.

Yeah I think this would be nice. Might as well use a new Label class to store those, I suppose.

@fepegar (Owner, Author) commented Jan 2, 2021

Maybe the label map could optionally take a path to a CSV file like this. The laterality would ideally be inferred automatically, so the format of the table stays consistent. That format is what 3D Slicer uses, and I think MITK as well. Probably others, too.
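For reference, such a color table is space-separated with columns label id, name, R, G, B, A; the rows below are only illustrative values in that format:

24 Right-Hippocampus 220 216 20 255
25 Left-Hippocampus 220 216 20 255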

@fepegar (Owner, Author) commented Jan 2, 2021

Interestingly, I've been trying to use the above approach (Flip + Remap) on the brain parcellation notebook on Colab, but it didn't work:

import pandas as pd

num_augmentations = 10
results = []

# Build a left <-> right remapping from the GIF color table
df = pd.read_csv('GIFNiftyNet.ctbl', sep=' ', names=['Label', 'Name', *'RGBA'])
mapping = {}
for row in df.itertuples():
    if 'Left' in row.Name:
        mapping[row.Label] = df[df.Name == f'Right-{row.Name[5:]}'].Label.values[0]
    elif 'Right' in row.Name:
        mapping[row.Label] = df[df.Name == f'Left-{row.Name[6:]}'].Label.values[0]

# Flip and remap together with probability 0.5
flip = tio.Compose(
    (
        tio.RandomFlip(flip_probability=1),
        tio.RemapLabels(mapping),
    ),
    p=0.5,
)
resample = tio.OneOf({
    tio.RandomAffine(image_interpolation='nearest'): 0.75,
    tio.RandomElasticDeformation(image_interpolation='nearest'): 0.25,
})
augment = tio.Compose((flip, resample))

# Augment, predict, and map the predictions back to the original space
for _ in trange(num_augmentations):
    augmented = augment(preprocessed)
    input_tensor = augmented.t1.data[None].to(device)
    with torch.cuda.amp.autocast():
        logits = model(input_tensor)
    output_tensor = logits.argmax(dim=1, keepdim=True).cpu()
    augmented.t1.set_data(output_tensor[0])
    back = augmented.apply_inverse_transform(warn=True)
    results.append(back.t1.data)
result = torch.stack(results).long()
print(result[..., 100, 108, 120])

I get about 50% predictions of left gray matter and 50% right on that voxel, meaning that something is not working.

Anyway, I'll need to debug that. Also, the current way to do this is a bit awkward; we should have some higher-level features. But that's a different issue.

@efirdc (Contributor) commented Jan 2, 2021

Actually, shouldn't there already be a kwarg for this in RemapLabels?

Yes, since it's compatible with the include and exclude system from Transform.

Maybe a list of dictionaries would be a better parameterization to fully support inclusion/exclusion:

label_swap = [
    {"pairs": [(1, 2), (3, 4)], "include": ["label_a", "label_b"]},
    {"pairs": [(16, 17)], "exclude": ["label_b", "label_c"]},
]

It would be better in cases where you need that level of control, although I'm not sure this is any less complicated than composing a Flip and a RemapLabels.

Maybe the compose approach would be a bit clearer if there were a SwapLabels(swap=[(1, 2), (3, 4)]) transform? It would essentially just wrap a RemapLabels(remapping={1: 2, 2: 1, 3: 4, 4: 3}).
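A rough sketch of what that could look like, written as a small helper function rather than a full Transform subclass (swap_labels below is hypothetical; only RemapLabels exists):

import torchio as tio

def swap_labels(swap):
    """Build a symmetric remapping from label pairs and wrap it in RemapLabels."""
    remapping = {}
    for a, b in swap:
        remapping[a] = b
        remapping[b] = a
    return tio.RemapLabels(remapping)

transform = swap_labels([(1, 2), (3, 4)])  # same as RemapLabels({1: 2, 2: 1, 3: 4, 4: 3})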

I think I ran into the same issue from your code snippet. The problem is that augmented.t1 is a ScalarImage, and the RemapLabels is only applied to a LabelMap. To apply the transformation to an integer tensor I do label_transform(tio.LabelMap(tensor=integer_tensor)).data. Although there is an issue with that approach if the label_transform has an include or exclude list, since the label map passed in has no identity. My work-around was to modify the transforms directly to remove the include and exclude lists.

This is a bit off topic, but I think the way the DataParser handles Image and tensor inputs could be improved. At the moment tensor inputs are always converted to a ScalarImage, but this could be changed to a LabelMap instead if they have an integer data type. Also Transform.__call__ could take an optional image_name parameter which is passed to the DataParser. Then include and exclude lists are easier to support.
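A rough sketch of that dtype-based dispatch, purely hypothetical (this is not what the current DataParser does):

import torch
import torchio as tio

def tensor_to_image(tensor: torch.Tensor) -> tio.Image:
    # Integer tensors would become label maps; floating-point ones stay scalar images
    if not tensor.is_floating_point():
        return tio.LabelMap(tensor=tensor)
    return tio.ScalarImage(tensor=tensor)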

@fepegar (Owner, Author) commented Jan 4, 2021

Yes, since it's compatible with the include and exclude system from Transform.

Ah, sure. Do you mean there's no need for a new kwarg because we can already use include/exclude?

It would be better in cases where you need that level of control, although I'm not sure this is any less complicated than composing a Flip and a RemapLabels.

Yeah I'd say let's add support for something simple and modify it for more complex use cases later, if necessary.

Maybe the compose approach would be a bit clearer if there were a SwapLabels(swap=[(1, 2), (3, 4)]) transform? It would essentially just wrap a RemapLabels(remapping={1: 2, 2: 1, 3: 4, 4: 3})

SwapLabels sounds nice, but it would indeed be a very thin wrapper around RemapLabels. Not sure it would add much value.

I think I ran into the same issue from your code snippet. The problem is that augmented.t1 is a ScalarImage, and the RemapLabels is only applied to a LabelMap.

Exactly, that was my issue. Thanks!


This is a bit off topic, but I think the way the DataParser handles Image and tensor inputs could be improved.

Yes, this should be discussed in a new issue. I think inferring the image type from the data type is not a good idea. See #375. I'm not sure I understand the motivation for adding an argument to __call__. We can discuss it in a new issue if you like.
