
Performance Does Not Improve #16

Open
AuliaRizky opened this issue Jan 21, 2019 · 5 comments

Comments

@AuliaRizky

Hello, I'm using segcapsbasic and segnetR3 as the models for stroke segmentation of brain images using the ISLES 2017 dataset. I use 3D MRI data from 25 patients, which I sliced (527 2D images in total) and adjusted to work with the SegCaps implementation. The problem is that the training performance never exceeds 0.04 out_seg_dice_hard. Here is the latest training run, which stopped because the learning rate had already become very small:

Epoch 00037: val_out_seg_dice_hard did not improve from 0.03707
Epoch 38/100
475/475 [==============================] - 61s 128ms/step - loss: 0.9797 - out_seg_loss: 0.9708 - out_recon_loss: 0.0090 - out_seg_dice_hard: 0.0283 - val_loss: 0.9765 - val_out_seg_loss: 0.9667 - val_out_recon_loss: 0.0099 - val_out_seg_dice_hard: 0.0326
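For reference, a "hard" Dice metric like the out_seg_dice_hard reported above binarizes the prediction before computing Dice, so a network whose raw outputs all sit below the cut scores near zero even if they weakly separate lesion from background. A minimal NumPy sketch (the 0.5 threshold and the epsilon are assumptions for illustration, not the repo's exact code):

```python
import numpy as np

def dice_hard(y_true, y_pred, threshold=0.5, eps=1e-7):
    """Hard Dice: binarize predictions at `threshold`, then compute
    2*|A ∩ B| / (|A| + |B|)."""
    y_pred_bin = (np.asarray(y_pred) >= threshold).astype(np.float64)
    y_true = np.asarray(y_true, dtype=np.float64)
    intersection = np.sum(y_true * y_pred_bin)
    return (2.0 * intersection + eps) / (np.sum(y_true) + np.sum(y_pred_bin) + eps)

# A prediction that weakly separates lesion (0.44-0.46) from background
# (0.30-0.32) still scores ~0 once everything falls below the 0.5 cut:
gt = np.array([0, 0, 1, 1], dtype=np.float64)
pred = np.array([0.30, 0.32, 0.44, 0.46])
print(dice_hard(gt, pred))            # near 0.0 at threshold 0.5
print(dice_hard(gt, pred, threshold=0.4))  # 1.0 once the cut is lowered
```

This is why a stuck hard-Dice value by itself does not distinguish "no learning" from "outputs compressed into a narrow range below the threshold".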

The stroke lesion region is determined based on intensity. Since I used ADC images, the lesion appears hypointense. The task is binary segmentation.

Is there any incompatibility that prevents this algorithm from working on brain MRI?
If not, do you have any suggestions to improve the performance?

Thanks

@lalonderodney
Owner

Hello @AuliaRizky ,
Have you tried U-Net and Tiramisu to see if all of them are failing? You can check the 'figs' folder to make sure the ground truth and images look correct. Try also turning on debug inside load_3d_data.py to see if the real-time data augmentation is functioning properly (perhaps adjust the parameters of the elastic deformation augmentation). If the images and ground truths look correct going into the network, feel free to come back and ask for further advice.

@AuliaRizky
Author

Hello @lalonderodney ,

Thanks for your response.
I've found some mistakes in the preprocessing and image-feeding procedure. I ran full-dataset training using U-Net only, and it shows a constant dice hard value (whether I use bce, mar, or dice). Currently, I am not using augmentation. I also posted an issue on the SegCaps implementation by Cheng Lin Li. She suggested that I do an overfit test by feeding only a single image, to check whether the model is strong enough for the task.
The overfit test shows a good DC between 0.8 and 0.9 using segcapsbasic and capsnetR3. The problem I found is that the raw output from testing (the output before applying Otsu thresholding) shows a background value of 0.47 (it is supposed to be 0), while the ROI value is > 0.65. There are no values lower than 0.47 or higher than 0.77.
After training on the full dataset (using segcapsbasic), the model does not seem able to do the task. I tested it on an image taken from the dataset (one that I know has an ROI), and the raw output highlights the entire brain area; it cannot distinguish anything inside the brain. I think this is an effect of the very narrow value range between the ROI and the background in the raw output (as the single-image overfit test showed).
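On the narrow-range point: an Otsu threshold is data-dependent, so it can still split a bimodal output even when both modes sit well above zero. A self-contained NumPy sketch of Otsu's method, with synthetic values chosen to mimic the 0.47 background / 0.65-0.77 ROI described above (the histogram-based implementation here is generic, not the repo's):

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's method: pick the cut that maximizes between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(np.float64) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                     # weight of the low class per cut
    w1 = 1.0 - w0                         # weight of the high class
    cum_mean = np.cumsum(p * centers)
    mu0 = cum_mean / np.where(w0 > 0, w0, 1)
    mu1 = (cum_mean[-1] - cum_mean) / np.where(w1 > 0, w1, 1)
    between = w0 * w1 * (mu0 - mu1) ** 2  # zero where a class is empty
    return centers[np.argmax(between)]

# Narrow-range raw output: background clustered at ~0.47, ROI at ~0.70.
raw = np.concatenate([np.random.default_rng(0).normal(0.47, 0.01, 900),
                      np.random.default_rng(1).normal(0.70, 0.03, 100)])
t = otsu_threshold(raw)   # lands between the two modes
```

So a narrow but bimodal output can still be thresholded; the failure mode described above is when the output stops being bimodal at all.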

Do you have any advice on how to solve this? Thank you very much.

@Luchixiang

Hello @AuliaRizky ,
I've met the same problem on the BraTS dataset. Have you solved it?

@pavanbaloju

pavanbaloju commented Jun 26, 2020


I have the same problem too. If you have a solution, please help me. Thanks in advance!

@pavanbaloju


Hello, U-Net is doing well on the dataset, but segcapsbasic is not. The loss isn't decreasing at all, and the prediction is the same for every pixel in the binary segmentation.
