I have tested the UNet 2D and UNet 3D notebooks and both run successfully. Here is another question related to the notebook and my dataset.
My data is denoised CryoET data, and the original dimensions are 1024 × 1024 × 151.
For UNet 2D: you mentioned that
the number of steps is equivalent to the number of samples in the training set divided by the batch size.
I calculated the number of steps this way (see below). Is that correct, or should we just use 604 / 10?
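To make the question concrete, this is the calculation I mean (just the standard formula; the sample count and batch size are the ones from my 2D setup):

```python
import math

# Steps per epoch = number of training samples / batch size.
num_train_samples = 604  # 2D training slices in my dataset
batch_size = 10

# 604 / 10 = 60.4; rounding up covers every sample once per epoch.
# (Flooring to 60 instead would drop the last partial batch.)
steps_per_epoch = math.ceil(num_train_samples / batch_size)
print(steps_per_epoch)  # 61
```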
For UNet 3D, I split the original 1024 × 1024 × 151 volume into 512 × 512 × 151 sub-volumes, added one more depth level, and changed the initial number of filters from 32 to 64, but the model still failed to capture the information in the data (the dataset size is 24). I also enabled mixed precision to reduce GPU memory load when training on high-resolution patches:

```python
import tensorflow as tf
from tensorflow.keras.mixed_precision import set_global_policy

# Enable mixed precision BEFORE creating the model.
set_global_policy('mixed_float16')
```

I tried two training approaches: (1) resizing to a patch size of 224 × 224 × 16 (batch size = 1), and (2) random-cropping to a patch size of 64 × 64 × 64 (batch size = 2 or 3; a sketch of this step follows below). The model still failed to capture the signal even after I changed the depth and other parameters, yet UNet 2D captures the signal on the same data (both your original 4-level notebook and a 5-level version I modified).
Is the training set too small for UNet 3D (24 volumes, versus 604 slices for 2D)? Do you have any other suggestions?