
The predictions by Command line and GUI are different #972

Open
ycwei0321 opened this issue Jul 15, 2024 · 22 comments
@ycwei0321

Hello,

I've recently been using Cellpose to do cell counting. However, I found that the predictions from the GUI and the command line are very different: the GUI gave me a better prediction. Can you give me some suggestions?

Here is the command I used:
cellpose --dir "D:/user/CellPose/input" --image_path "D:/user/CellPose/Input" --add_model "C:/Users/winshah/.cellpose/models/cyto2_cp3.npy" --chan 0 --use_gpu --flow_threshold 0.4 --cellprob_threshold 0 --save_rois --save_png --save_outlines --savedir "D:/user/CellPose/Output"

Here is the screenshot of the GUI I use:
image

If I want to get a similar result from the command line, what kind of parameters should I use?

I found that if I use the pretrained model named cyto3, it gives me the opposite result: the result from the command line is much better than the one from the GUI.

Thanks for your help.

@ReallyCoolBean

I also have the issue of different output in the GUI and from the command line using Cellpose 2.0, and we are not alone: I found others reporting it, though with no good explanation or solution to the problem: #843 and #758

I trained a model using the command line, and now when I evaluate it from Python, I get good results. Here is the code snippet:
image
and here is one of my images, a z-stack of dimensions (41, 2, 2045, 2046), visualised here in napari (15th slice in the stack) with the predicted masks. I am using 2 channels: green (2) as the main channel labelling cells, and red (1) with nuclei.
image

Now this is what happens when I run the model with the same parameters in GUI:
image
The model does a poorer job: it sometimes predicts many tiny masks, e.g. in the top right corner, instead of one normal mask.

To make things even more confusing, this is what happens when I try to run the same model on the same image in Arivis with the same parameters (again showing the 15th slice from the z-stack):
image

The prediction is different from the Cellpose GUI's and different from the command line's. Everything is run in the same environment. Does anyone have any idea what's going on? I don't know which result to trust. I get the best results from the command line, which is also where I evaluated the different parameters. Now I want to give the model to another person so they can work with it in Arivis, and we can't replicate the results.

@carsen-stringer
Member

Can you please check if this is the case in Cellpose v3? This is the old GUI. It would also be helpful if you could provide the image that has discrepancies in segmentation, then I can debug it. Thanks!

@ReallyCoolBean

Hey, thank you for looking into this! I cloned our environment and upgraded to Cellpose 3, and the problem persists in the new GUI (screenshot attached). What is the best way to provide you with the image? It's 680 MB, so I can't upload it here.
image

@carsen-stringer
Member

if you can provide a google drive link that would be great thanks, and the CLI command you are using

@ycwei0321
Author

if you can provide a google drive link that would be great thanks, and the CLI command you are using

Hi

Thanks for your response. I can also provide some example images I tried before. Please see them in this drive link: https://drive.google.com/drive/folders/1UKnCFPk5G3RQkn65VJcfyzjkuvu7y-I4?usp=sharing

Below is the command I used for the cell count; I tried to keep the parameters the same as in the GUI.

cellpose --dir "D:/Yichao/CellPose/input" --image_path "D:/Yichao/CellPose/Input" --add_model "C:/Users/winshah/.cellpose/models/cyto2_cp3.npy" --chan 0 --use_gpu --flow_threshold 0.4 --cellprob_threshold 0 --save_rois --save_png --save_outlines --savedir "D:/Yichao/CellPose/Output"

If you have any questions, please let me know. Thanks for your kind help.

@carsen-stringer
Member

carsen-stringer commented Sep 13, 2024

you've added ".npy" to the end of the model name, I think the command you want is

python -m cellpose --verbose --dir /path/to/images/ --pretrained_model cyto2_cp3 --chan 0 --flow_threshold 0.4 --cellprob_threshold 0 --save_rois --save_png --save_outlines

with this command and using the "cyto2_cp3" model in the GUI I got the exact same result (using CPU, got 1012 cells for img1t.tif in both cases).

going to close this issue for now, but let me know if you have more questions

@ReallyCoolBean

Here is a folder with my example image and the model I trained:
https://drive.google.com/drive/folders/1ptaGPKS77ihv1Ewwte1eVuVbX1r-_wPg?usp=sharing
Perhaps I didn't describe correctly what I'm doing: I am actually not using the cellpose CLI, but the cellpose API, running code in PyCharm. I was inspired by the colab notebook you posted here. Here is the code that I am using:

masks = model.eval(test_data,
                   channels=[2, 1],
                   diameter=123,
                   cellprob_threshold=0,
                   flow_threshold=0.4,
                   stitch_threshold=0.2)[0]

where model is the model that I provided in the Google Drive and test_data is the image. The same model and the same image were loaded in the GUI, and I believe the parameters in the GUI were also the same as in my code (see screenshots).

@carsen-stringer
Member

Since this is 3D, can you please check if the normalization parameters are the same in both cases? They are printed in the GUI as a dictionary, and you can pass that dictionary as the normalize argument in eval.

Regarding the Arivis outputs: their code is closed-source and I don't have a license, so we can't verify whether it works the same way.
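As a sketch of how this suggestion translates to the API (assuming cellpose v3, where model.eval accepts a dict for normalize; model and img are placeholders, and the dict below copies the one printed by the GUI):

```python
# Hypothetical sketch: pass the normalization dict printed by the GUI
# verbatim into the API so both code paths normalize identically.
# `model` and `img` are placeholders; assumes cellpose v3.
normalize_params = {
    'lowhigh': None, 'percentile': [1.0, 99.0], 'normalize': True,
    'norm3D': True, 'sharpen_radius': 0, 'smooth_radius': 0,
    'tile_norm_blocksize': 0, 'tile_norm_smooth3D': 1, 'invert': False,
}

def run_eval(model, img):
    # same segmentation parameters as used elsewhere in this thread
    return model.eval(img, channels=[2, 1], diameter=123.9,
                      flow_threshold=0.4, cellprob_threshold=0,
                      stitch_threshold=0.2,
                      normalize=normalize_params)[0]
```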

@ReallyCoolBean

Thank you for the advice! Sadly, I still get different results: the GUI predicts 871 ROIs while the Python script predicts 149. I ran the model again in the Cellpose 3 GUI and got the same result as before (here showing plane 14):
image
I checked the output in the terminal and the normalization parameters:
image
I then ran the following Python script:
image
and this is my result:
image

@carsen-stringer
Member

Can you include what is printed when the script runs? I think you're missing the do_3D=True flag in model.eval

@carsen-stringer
Member

Oh I see, you are stitching; please do include what the script is printing.
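For context on the difference: with stitch_threshold, each XY plane is segmented in 2D and masks in adjacent planes are merged into one 3D label when their intersection-over-union exceeds the threshold, whereas do_3D=True runs the network on YX, ZY and ZX sections and computes 3D flows. A small illustrative sketch of the IoU criterion (not cellpose internals):

```python
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-union of two boolean masks."""
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / union) if union else 0.0

# two overlapping 4x4 square masks in adjacent z-planes
plane0 = np.zeros((8, 8), dtype=bool); plane0[1:5, 1:5] = True
plane1 = np.zeros((8, 8), dtype=bool); plane1[2:6, 2:6] = True

# overlap 3x3 = 9, union 16 + 16 - 9 = 23, IoU ~ 0.39 > 0.2,
# so with stitch_threshold=0.2 these would be stitched into one label
print(iou(plane0, plane1) > 0.2)  # True
```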

@ReallyCoolBean

ReallyCoolBean commented Oct 2, 2024

Well, it doesn't print much, just that the images from my test set were loaded and that the model finished running for the one image that I selected. I'm not sure why it says 40/40 when the image has 41 planes.
image

@carsen-stringer
Member

Hmm, I am confused, because in the GUI it looks like you have cell fragments that result from running the 2.5D model, not the stitching model. Are you sure that the screenshot is from running with stitch_threshold=0.2?

@ReallyCoolBean

I loaded the image, set the settings as shown in the screenshot, and then pressed 'run', and that's the result. I repeated that several times, on different days, just to make sure I didn't click something wrong by accident, and it's always the same result.
What is the 2.5D model?

@carsen-stringer
Member

carsen-stringer commented Oct 28, 2024

The 2.5D model is described in the Cellpose 1 paper (it runs on YX, ZY and ZX). Can you please post the command-line info that prints when you run in the GUI vs. in the API?

@ReallyCoolBean

ReallyCoolBean commented Oct 28, 2024

That's what I posted above, but I think the screenshot didn't capture everything from the GUI, so here it is again:
image
image
and from running my script:
image
and here again is the line of the script that produced the output above:
masks = model.eval(test_data[0],
                   channels=[2, 1],
                   diameter=123.9,
                   cellprob_threshold=0,
                   flow_threshold=0.4,
                   stitch_threshold=0.2,
                   normalize={'lowhigh': None, 'percentile': [1.0, 99.0],
                              'normalize': True, 'norm3D': True,
                              'sharpen_radius': 0, 'smooth_radius': 0,
                              'tile_norm_blocksize': 0, 'tile_norm_smooth3D': 1,
                              'invert': False})[0]

@carsen-stringer
Member

Thanks! Can you run the script with io.logger_setup() declared before you run cellpose, so we can see the loaded model and the output of the script?
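For anyone following along: io.logger_setup() makes cellpose write INFO-level output (including which model file is loaded) to the console and to ~/.cellpose/run.log, as in the logs posted later in similar reports. A rough stdlib stand-in, for illustration only:

```python
import io
import logging

def demo_logger_setup(stream):
    # minimal imitation of cellpose's logger: INFO level, timestamped lines
    logger = logging.getLogger("cellpose_demo")
    logger.setLevel(logging.INFO)
    handler = logging.StreamHandler(stream)
    handler.setFormatter(
        logging.Formatter("%(asctime)s [%(levelname)s] %(message)s"))
    logger.addHandler(handler)
    return logger

buf = io.StringIO()
log = demo_logger_setup(buf)
log.info(">>>> loading model /path/to/model")  # what the real logger surfaces
print("[INFO]" in buf.getvalue())  # True
```

In the actual script it is just `from cellpose import io; io.logger_setup()` before calling model.eval.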

@carsen-stringer
Member

It looks like the script is running many images, and you may be comparing the output from two different tiffs (indeed, it should say 41 if the image it loads has 41 planes).

@ReallyCoolBean

Yes, I initially load everything that is in the test-set directory (3 images), but then I run prediction only on the first image (test_data[0]). When I was running the script in IPython I visualised the output with napari, and that's what I posted in the screenshots above; it's definitely the same image that I load in the GUI.

thanks can you run the script with io.logger_setup() declared before you run cellpose so we can see the loaded model and the output of the script.

image

One difference between the output from the script and the GUI is that the GUI says it loads a tiff image of 41 planes and 3 channels, whereas the script correctly reads it as 41 planes with 2 channels. I'm not sure why that is; it is definitely the same image analysed in both cases.

@HaofeiGe

I have found the reason. I don't know why, but when I stopped using a "for" loop and instead used a function plus list(map(...)), it works the same as in the GUI.
This is my code:
import tifffile
from cellpose import models, io, denoise
import os
from pathlib import Path

input_dir = Path('F:/温敏分子行为/D218/2024-10-31/Control_MIP/')
output_dir = Path('F:/温敏分子行为/D218/2024-10-31/Segmentation/')

output_dir.mkdir(parents=True, exist_ok=True)

model = models.Cellpose(model_type='nuclei', gpu=True)
dn_model = denoise.DenoiseModel(model_type="denoise_nuclei", gpu=True)

def process_file(file_path):
    file_name = file_path.name
    print(f"Processing file: {file_name}")
    data = tifffile.imread(file_path)
    print(f"Shape: {data.shape}")
    print("Performing denoising...")
    data_denoised = dn_model.eval(data, channels=None, do_3D=True, diameter=140)
    print("Performing segmentation...")
    masks, flows, styles, diams = model.eval(
        data_denoised,
        diameter=140,
        channels=[0, 0],
        normalize={
            'lowhigh': None,
            'percentile': [1.0, 99.0],
            'normalize': True,
            'norm3D': True,
            'sharpen_radius': 0,
            'smooth_radius': 0,
            'tile_norm_blocksize': 0,
            'tile_norm_smooth3D': 1,
            'invert': False
        },
        flow_threshold=0.4,
        cellprob_threshold=0,
        do_3D=True,
        anisotropy=2.5,
        min_size=100
    )
    # build the output name from the stem so '.tif' inside '.tiff' is not replaced twice
    output_path = output_dir / (file_path.stem + '_masks.tiff')
    io.save_masks(data_denoised, masks, flows, file_names=output_path, png=False, tif=True, channels=[0, 0])
    print(f"Saved masks to {output_path}")

# the glob patterns need '*' wildcards to actually match files
tiff_files = list(input_dir.glob('*.tiff')) + list(input_dir.glob('*.tif'))
list(map(process_file, tiff_files))

@carsen-stringer
Member

@ReallyCoolBean the GUI adds a channel for viewing in RGB. In the logger output from your script I don't see your custom model being loaded; is that the issue?

@HaofeiGe we'll discuss on #1026 since this is probably a different issue
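If the custom model is indeed not being loaded, one hedged sketch of loading custom weights explicitly through the API (so the path shows up in the logger output mentioned above) is below; the path is a placeholder, and models.CellposeModel is assumed to be the class for user-trained weights:

```python
from pathlib import Path

# placeholder path to a user-trained model (adjust to your own)
model_path = Path.home() / ".cellpose" / "models" / "my_custom_model"

def load_custom_model(path: Path):
    # assumption: models.CellposeModel (not models.Cellpose) loads custom
    # weights via pretrained_model; the log should then print this path
    from cellpose import models
    return models.CellposeModel(pretrained_model=str(path), gpu=False)

print(model_path.name)  # my_custom_model
```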

@pamaiuri

Dear All,

I have the same issue and, I'm sorry, I didn't understand how to implement the solution suggested by @HaofeiGe .
This happens when processing a single image, so, if I understood correctly, it is independent of the for loop mentioned by @HaofeiGe , right?

Here is the command I run on the command line:
python -m cellpose --do_3D --verbose --image_path ./D0_1_Pos1_C3.tif --chan 0 --pretrained_model 'nuclei' --diameter 100 --restore_type 'oneclick_nuclei' --flow_threshold 0.1 --cellprob_threshold -2 --flow3D_smooth 5 --anisotropy 10 --min_size 500 --save_tif --no_npy

If I use the same parameters in the GUI the result is very different.

Thank you for your kind help.

LOG from command line:
2025-02-12 11:56:06,015 [INFO] WRITING LOG OUTPUT TO /home/paolo/.cellpose/run.log
2025-02-12 11:56:06,015 [INFO]
cellpose version: 3.1.1.dev95+g31cac77
platform: linux
python version: 3.12.3
torch version: 2.5.1+cu124
2025-02-12 11:56:06,015 [INFO] >>>> using CPU
2025-02-12 11:56:06,015 [INFO] >>>> using CPU
2025-02-12 11:56:06,015 [INFO] >>>> running cellpose on 1 images using chan_to_seg GRAY and chan (opt) NONE
2025-02-12 11:56:06,015 [INFO] >> oneclick_nuclei << model set to be used
2025-02-12 11:56:06,227 [INFO] >>>> model diam_mean = 17.000 (ROIs rescaled to this size during training)
2025-02-12 11:56:06,227 [INFO] >> nuclei << model set to be used
2025-02-12 11:56:06,269 [INFO] >>>> loading model /home/paolo/.cellpose/models/nucleitorch_0
2025-02-12 11:56:06,350 [INFO] >>>> model diam_mean = 17.000 (ROIs rescaled to this size during training)
2025-02-12 11:56:06,350 [INFO] >>>> using diameter 100.000 for all images
2025-02-12 11:56:06,352 [INFO] 0%| | 0/1 [00:00<?, ?it/s]
2025-02-12 11:56:06,357 [INFO] multi-stack tiff read in as having 5 planes 1 channels
2025-02-12 11:56:06,504 [INFO]
2025-02-12 11:56:06,505 [INFO] 0%| | 0/1 [00:00<?, ?it/s]
2025-02-12 11:56:06,925 [INFO] 100%|##########| 1/1 [00:00<00:00, 2.38it/s]
2025-02-12 11:56:06,931 [INFO] imgs denoised in 0.54s
2025-02-12 11:56:06,937 [INFO] multi-stack tiff read in as having 5 planes 1 channels
2025-02-12 11:56:07,033 [INFO] resizing 3D image with rescale=0.17 and anisotropy=10.0
2025-02-12 11:56:07,037 [INFO] running YX: 8 planes of size (174, 174)
2025-02-12 11:56:07,038 [INFO]
2025-02-12 11:56:07,038 [INFO] 0%| | 0/1 [00:00<?, ?it/s]
2025-02-12 11:56:07,583 [INFO] 100%|##########| 1/1 [00:00<00:00, 1.84it/s]
2025-02-12 11:56:07,585 [INFO] running ZY: 174 planes of size (8, 174)
2025-02-12 11:56:07,586 [INFO]
2025-02-12 11:56:07,586 [INFO] 0%| | 0/22 [00:00<?, ?it/s]
2025-02-12 11:56:08,358 [INFO] 100%|##########| 22/22 [00:00<00:00, 28.53it/s]
2025-02-12 11:56:08,360 [INFO] running ZX: 174 planes of size (8, 174)
2025-02-12 11:56:08,361 [INFO]
2025-02-12 11:56:08,361 [INFO] 0%| | 0/22 [00:00<?, ?it/s]
2025-02-12 11:56:09,039 [INFO] 100%|##########| 22/22 [00:00<00:00, 32.48it/s]
2025-02-12 11:56:09,041 [INFO] resizing 3D flows and cellprob to original image size
2025-02-12 11:56:09,172 [INFO] network run in 2.14s
2025-02-12 11:57:53,878 [INFO] masks created in 104.62s
2025-02-12 11:57:54,179 [INFO] 100%|##########| 1/1 [01:47<00:00, 107.83s/it]
2025-02-12 11:57:54,179 [INFO] 100%|##########| 1/1 [01:47<00:00, 107.83s/it]
2025-02-12 11:57:54,179 [INFO] >>>> completed in 108.164 sec

LOG from GUI (please note the model set for denoise)
2025-02-12 12:00:48,119 [INFO] WRITING LOG OUTPUT TO /home/paolo/.cellpose/run.log
2025-02-12 12:00:48,120 [INFO]
cellpose version: 3.1.1.dev95+g31cac77
platform: linux
python version: 3.12.3
torch version: 2.5.1+cu124
qt.qpa.xcb: X server does not support XInput 2
qt.qpa.gl: QXcbConnection: Failed to initialize GLX
2025-02-12 12:00:48,372 [INFO] Neither TORCH CUDA nor MPS version not installed/working.
GUI_INFO: loading image: /data01/Paolo/Cantone/Xist_7d_F1/DATA_Tif/D0_1_Pos1_C3.tif
GUI_INFO: image shape: (5, 1024, 1024, 1)
GUI_INFO: converted to float and normalized values to 0.0->255.0
GUI_INFO: normalization checked: computing saturation levels (and optionally filtered image)
{'lowhigh': None, 'percentile': [1.0, 99.0], 'normalize': True, 'norm3D': True, 'sharpen_radius': 0, 'smooth_radius': 0, 'tile_norm_blocksize': 0, 'tile_norm_smooth3D': 1, 'invert': False}
[0, 255.0]
GUI_INFO: selected model nuclei, loading now
2025-02-12 12:01:09,794 [INFO] >> nuclei << model set to be used
2025-02-12 12:01:09,795 [INFO] >>>> using CPU
2025-02-12 12:01:09,795 [INFO] >>>> using CPU
2025-02-12 12:01:09,900 [INFO] >>>> loading model /home/paolo/.cellpose/models/nucleitorch_0
2025-02-12 12:01:10,058 [INFO] >>>> model diam_mean = 17.000 (ROIs rescaled to this size during training)
GUI_INFO: diameter set to 30.00 (but can be changed)
GUI_INFO: clearing restored image
one-click_nuclei
2025-02-12 12:01:58,135 [INFO] >> denoise_cyto3 << model set to be used
2025-02-12 12:01:58,135 [INFO] >>>> using CPU
2025-02-12 12:01:58,136 [INFO] >>>> using CPU
2025-02-12 12:01:58,351 [INFO] >>>> model diam_mean = 30.000 (ROIs rescaled to this size during training)
GUI_INFO: channels: [0, 0]
GUI_INFO: normalize_params: {'lowhigh': None, 'percentile': [1.0, 99.0], 'normalize': True, 'norm3D': True, 'sharpen_radius': 0, 'smooth_radius': 0, 'tile_norm_blocksize': 0, 'tile_norm_smooth3D': 1, 'invert': False}
GUI_INFO: diameter (before upsampling): 100.0
(5, 1024, 1024, 1)
2025-02-12 12:01:58,362 [INFO] multi-stack tiff read in as having 5 planes 1 channels
2025-02-12 12:01:58,599 [INFO] 0%| | 0/3 [00:00<?, ?it/s]
2025-02-12 12:02:00,123 [INFO] 100%|##########| 3/3 [00:01<00:00, 1.97it/s]
2025-02-12 12:02:00,129 [INFO] imgs denoised in 1.72s
(5, 1024, 1024, 1)
2025-02-12 12:02:00,136 [INFO] one-click_nuclei finished in 2.004 sec
2025-02-12 12:02:04,001 [INFO] >>>> using CPU
2025-02-12 12:02:04,002 [INFO] >>>> using CPU
2025-02-12 12:02:04,077 [INFO] >>>> loading model /home/paolo/.cellpose/models/nucleitorch_0
2025-02-12 12:02:04,152 [INFO] >>>> model diam_mean = 17.000 (ROIs rescaled to this size during training)
{'lowhigh': None, 'percentile': [1.0, 99.0], 'normalize': True, 'norm3D': True, 'sharpen_radius': 0, 'smooth_radius': 0, 'tile_norm_blocksize': 0, 'tile_norm_smooth3D': 1, 'invert': False}
2025-02-12 12:02:04,164 [INFO] multi-stack tiff read in as having 5 planes 1 channels
2025-02-12 12:02:04,257 [INFO] resizing 3D image with rescale=0.17 and anisotropy=10.0
2025-02-12 12:02:04,261 [INFO] running YX: 8 planes of size (174, 174)
2025-02-12 12:02:04,262 [INFO] 0%| | 0/1 [00:00<?, ?it/s]
2025-02-12 12:02:04,809 [INFO] 100%|##########| 1/1 [00:00<00:00, 1.83it/s]
2025-02-12 12:02:04,810 [INFO] running ZY: 174 planes of size (8, 174)
2025-02-12 12:02:04,811 [INFO] 0%| | 0/22 [00:00<?, ?it/s]
2025-02-12 12:02:05,646 [INFO] 100%|##########| 22/22 [00:00<00:00, 26.38it/s]
2025-02-12 12:02:05,648 [INFO] running ZX: 174 planes of size (8, 174)
2025-02-12 12:02:05,649 [INFO] 0%| | 0/22 [00:00<?, ?it/s]
2025-02-12 12:02:06,454 [INFO] 100%|##########| 22/22 [00:00<00:00, 27.33it/s]
2025-02-12 12:02:06,461 [INFO] resizing 3D flows and cellprob to original image size
2025-02-12 12:02:06,594 [INFO] network run in 2.34s
2025-02-12 12:02:06,595 [INFO] smoothing flows with sigma=3.0
2025-02-12 12:03:41,645 [INFO] masks created in 94.33s
2025-02-12 12:03:41,922 [INFO] 25 cells found with model in 97.951 sec
GUI_INFO: 25 masks found
GUI_INFO: plane 0 outlines processed
GUI_INFO: creating cellcolors and drawing masks
set denoised/filtered view
{'lowhigh': None, 'percentile': [1.0, 99.0], 'normalize': True, 'norm3D': True, 'sharpen_radius': 0, 'smooth_radius': 0, 'tile_norm_blocksize': 0, 'tile_norm_smooth3D': 1, 'invert': False}
[0, 255.0]
