[FEATURE] Training in 3D - How to process 2D training data? #866

Open
klingann opened this issue Feb 19, 2024 · 4 comments
Labels
enhancement New feature or request

Comments

@klingann

Dear Cellpose Community,

We have drawn 3D ROIs and now want to use them to train our own model in Cellpose.
Following issue #745, we want to extract XY, XZ, and YZ slices from the 3D labelled data and use them for 2D training.
However, the question now arises as to how to annotate the specific slices as xy, xz, and yz, while considering the different aspect ratios (anisotropy of the 3D data).
Can Cellpose be instructed to identify xy, xz, and yz slices separately?
Is it necessary to resample (interpolate) xz and yz slices to have the same voxel size in all dimensions?
Alternatively, is it preferable to train using only xy slices?

Looking forward to your suggestions. Thank you!
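For reference, extracting the three orthogonal slice sets from a 3D label volume is straightforward with numpy. This is only a sketch (the axis order ZYX and the helper name are my own assumptions, not Cellpose conventions); in practice you would save each label slice together with the matching image slice as a 2D training pair, and likely skip slices that contain no labels.

```python
import numpy as np

def extract_orthogonal_slices(labels_3d):
    """Extract 2D label slices along each axis of a (Z, Y, X) label volume.

    Returns a dict of lists of 2D slices, keyed by plane.
    Note that the XZ/YZ slices keep the raw (anisotropic) Z sampling.
    """
    return {
        "xy": [labels_3d[z] for z in range(labels_3d.shape[0])],
        "zx": [labels_3d[:, y, :] for y in range(labels_3d.shape[1])],
        "zy": [labels_3d[:, :, x] for x in range(labels_3d.shape[2])],
    }

# toy example: a 4x8x8 volume containing one labelled cube
vol = np.zeros((4, 8, 8), dtype=np.uint16)
vol[1:3, 2:6, 2:6] = 1
planes = extract_orthogonal_slices(vol)
print(len(planes["xy"]), planes["xy"][0].shape)  # 4 (8, 8)
print(len(planes["zx"]), planes["zx"][0].shape)  # 8 (4, 8)
```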

@klingann klingann added the enhancement New feature or request label Feb 19, 2024
@AndreZeug

I have the same problem. How can I use 3D ROI-labelled objects for training when Cellpose only accepts 2D training data and my XYZ stacks have a particular anisotropy?
Providing XY, XZ, and YZ slices for training requires the anisotropy information during training as well, not only during prediction.
Is there a HowTo available?

@carsen-stringer
Member

If you have different anisotropy and training on all planes together is not working well, you can train a separate model on XY and another on XZ/YZ. If you want to train one model on XY, XZ, and YZ together, then yes, you need to interpolate so the pixels/µm are the same in all dimensions.
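Such an interpolation can be done with `scipy.ndimage.zoom`, stretching the Z axis of each XZ/YZ slice by the anisotropy factor. A minimal sketch, assuming hypothetical voxel sizes of 1 µm in XY and 4 µm in Z (anisotropy = 4); for label masks use nearest-neighbour interpolation (`order=0`) so integer labels are not blended:

```python
import numpy as np
from scipy.ndimage import zoom

# hypothetical voxel sizes: 1 um in XY, 4 um in Z
anisotropy = 4.0

def make_isotropic(slice_2d, axis0_is_z=True, order=1):
    """Resample an XZ or YZ slice so pixels/um match the XY planes.

    Stretches the Z axis by the anisotropy factor. Use order=1
    (linear) for intensity images, order=0 (nearest) for label masks.
    """
    factors = (anisotropy, 1.0) if axis0_is_z else (1.0, anisotropy)
    return zoom(slice_2d, factors, order=order)

# an XZ label slice with Z along axis 0: 10 z-planes x 64 pixels
xz_labels = np.zeros((10, 64), dtype=np.uint16)
xz_labels[3:6, 10:30] = 7
iso = make_isotropic(xz_labels, order=0)  # nearest-neighbour for labels
print(iso.shape)  # (40, 64)
```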

@AndreZeug

I understand that training on XY, XZ, and YZ together does not work well in the presence of anisotropy. Interpolating XZ/YZ would be a workaround, but shapes look substantially different in XZ/YZ compared to XY, so the best option seems to be training one model on XY and a second model on XZ/YZ. But how do I combine both models for prediction? The first predicting in XY and the second for the orthogonal views?
Or did I understand something wrong?
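Combining two per-view models is not built into Cellpose as far as I know (Cellpose's own `do_3D` mode runs a single 2D model on all three orthogonal views and combines the predicted flows). One simple sketch of the idea, with stand-in predictors in place of the two trained models and a plain majority vote on binary foreground instead of Cellpose's flow averaging, so everything here is an assumption, not Cellpose's actual method:

```python
import numpy as np

def predict_xy(slice_2d):
    # stand-in for the XY model's per-slice mask prediction
    return (slice_2d > 0).astype(np.uint8)

def predict_ortho(slice_2d):
    # stand-in for the XZ/YZ model's per-slice mask prediction
    return (slice_2d > 0).astype(np.uint8)

def combine_views(volume):
    """Majority-vote foreground from three orthogonal slicing passes."""
    z, y, x = volume.shape
    fg_xy = np.stack([predict_xy(volume[i]) for i in range(z)], axis=0)
    fg_zx = np.stack([predict_ortho(volume[:, j, :]) for j in range(y)], axis=1)
    fg_zy = np.stack([predict_ortho(volume[:, :, k]) for k in range(x)], axis=2)
    votes = fg_xy.astype(np.int8) + fg_zx + fg_zy
    return votes >= 2  # foreground where at least two views agree

vol = np.zeros((4, 8, 8), dtype=np.float32)
vol[1:3, 2:6, 2:6] = 1.0
fg = combine_views(vol)
print(fg.shape, int(fg.sum()))  # (4, 8, 8) 32
```

A voted foreground map still needs a labelling step (e.g. connected components or a flow-based reconstruction) to recover individual objects, which is the part Cellpose's 3D pipeline handles via the averaged flows.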

@pawlowska

@AndreZeug I stumbled upon the exact same questions some months ago when attempting 3D training. Have you found a good solution?
