First of all, I want to express my gratitude towards ZeroCostDL4Mic. It is an amazing tool that I use for almost every image analysis task in my work. However, I am currently facing an issue with training a 3D U-Net to recognize bright cell clones in light-sheet microscopy images.
To train the model, I provided 22 labeled 128×128×128 pixel images. I have varied the validation split (20–50%), batch size (1–5), and patch size (64–128 × 64–128 × 8–32), but the validation loss does not converge. As seen in the attached image, the validation set performs much worse than the training set.
![Unknown-11](https://private-user-images.githubusercontent.com/105531636/237878520-9f0a0c07-a3d8-46f2-ad9c-3e3d48140eb0.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3Mzk1NzczMDMsIm5iZiI6MTczOTU3NzAwMywicGF0aCI6Ii8xMDU1MzE2MzYvMjM3ODc4NTIwLTlmMGEwYzA3LWEzZDgtNDZmMi1hZDljLTNlM2Q0ODE0MGViMC5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjE0JTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxNFQyMzUwMDNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT1jYmM3MmI0Yjk5ZmNjOTI1ZWJkNTljOGE4MjFjY2U2YjY0NWMyYjAwNDcwMjQ3Y2U2N2IwYzRhNjg0NTM3MTcwJlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.1irwP1CISlbEFl0jQTRx3vNUH6hIViIGjXv94pHEwZs)
I also came across a similar issue reported on the image.sc forum:
https://forum.image.sc/t/u-net-3d-zerocostdl4mic-training/79457
To investigate the issue further, I ran a sanity check on the dataset by repeating the same labeled 128×128×128 image 10 times and using a 50% validation fraction (a sketch of this setup follows the figure below). Even in this case, the validation set behaved differently from the training set:
![Unknown-13](https://private-user-images.githubusercontent.com/105531636/237879883-3dc94c9a-8a38-473a-bb24-360fb6a63c40.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3Mzk1NzczMDMsIm5iZiI6MTczOTU3NzAwMywicGF0aCI6Ii8xMDU1MzE2MzYvMjM3ODc5ODgzLTNkYzk0YzlhLThhMzgtNDczYS1iYjI0LTM2MGZiNmE2M2M0MC5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjE0JTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxNFQyMzUwMDNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT05ZDIyOTljN2UzOTAyOTUzMTc2MGM5MTFlY2UwODljMGMzZWQyZTZiMTI2N2E3ZDRlZWI5Y2EwMjU4NGJmMjIwJlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.F7NcLtm-CdOsPvrXsAQQ-GDzQGFqN0EBdJiO8z33CWg)
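For clarity, this is roughly how I set up the sanity check. The paths and filenames below are placeholders of my own, not the exact ZeroCostDL4Mic notebook code:

```python
# Sanity-check dataset: copy one labeled 128x128x128 volume 10 times so that,
# with a 50% validation fraction, training and validation contain the same data.
# Paths and filenames are hypothetical placeholders.
import shutil
from pathlib import Path

src_img = Path("source/image_0001.tif")   # assumed single source volume
src_lbl = Path("source/label_0001.tif")   # assumed matching label volume
out_img = Path("sanity_check/images")
out_lbl = Path("sanity_check/labels")
out_img.mkdir(parents=True, exist_ok=True)
out_lbl.mkdir(parents=True, exist_ok=True)

for i in range(10):
    shutil.copy(src_img, out_img / f"image_{i:04d}.tif")
    shutil.copy(src_lbl, out_lbl / f"label_{i:04d}.tif")
# With a 0.5 validation fraction, the notebook should hold out 5 of the 10
# identical copies, so train and validation metrics ought to match closely.
```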
I understand that the loss values might differ because of regularization and the point at which the loss is measured, but shouldn't the Dice coefficient be identical in this test case? This suggests there might be a bug in the script.
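For reference, this is the standard soft Dice coefficient I have in mind. It is my own minimal Keras/TensorFlow sketch, not necessarily the exact metric implemented in the notebook:

```python
# Minimal soft Dice coefficient, Keras/TensorFlow style (my own sketch,
# not necessarily the notebook's implementation).
import tensorflow as tf

def dice_coefficient(y_true, y_pred, smooth=1.0):
    # Flatten both volumes and compute 2*|A∩B| / (|A| + |B|), smoothed.
    y_true_f = tf.reshape(tf.cast(y_true, tf.float32), [-1])
    y_pred_f = tf.reshape(tf.cast(y_pred, tf.float32), [-1])
    intersection = tf.reduce_sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true_f) + tf.reduce_sum(y_pred_f) + smooth
    )
```

If the training and validation patches really are identical, a metric like this should agree between the two sets, apart from the usual caveats that training values are averaged over batches while the weights are still changing and that dropout or augmentation are active only during training. A persistently large gap therefore looks suspicious to me.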
I would appreciate any help or suggestions to resolve this issue. Thank you in advance!