ZeroCostDL4Mic is a toolbox for the training and implementation of common Deep Learning approaches to microscopy imaging. It exploits the ease of use and access to GPUs provided by Google Colab.
Training data can be uploaded to Google Drive, from where it can be used to train models with the provided Colab notebooks in a web browser. Inference (predictions) on unseen data can then be performed within the same notebook, so no local hardware or software set-up is required.
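All of this relies on the notebook having access to your Google Drive. For reference, this is the standard way a Colab notebook mounts a Drive (the notebooks guide you through this step); the folder path below is only a hypothetical example.

```python
# Standard Colab call to make your Google Drive visible to the notebook.
from google.colab import drive

drive.mount('/content/gdrive')

# Example (hypothetical) location of a training dataset on the mounted Drive.
training_source = '/content/gdrive/MyDrive/ZeroCostDL4Mic/Training_source'
```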
Video resources:
- Running a ZeroCostDL4Mic notebook
- Example data in ZeroCostDL4Mic
- Romain's talk @ Aurox conference
- Talk @ SPAOM
- NEUBIAS webinar
ZeroCostDL4Mic provides fully annotated, Google Colab-optimised Jupyter notebooks for popular pre-existing networks. These cover a range of important image analysis tasks (e.g. segmentation, denoising, restoration, label-free prediction). There are three types of implemented networks:
- Fully supported - considered mature and extensively tested by our team.
- Under beta-testing - early prototypes that may not yet be stable.
- Contributed - networks that follow the ZeroCostDL4Mic guidelines and were contributed by community members. Although the core ZeroCostDL4Mic team does not maintain these networks, we work with their developers to provide researchers with a similar workflow experience and quality control. We welcome network contributions from the research community; if you wish to contribute, please read our guidelines first.
Both the fully supported and the beta-testing notebooks can be opened directly from GitHub into Colab by clicking the respective links in the tables below. You will need to save a copy to your Google Drive in order to modify the notebooks and keep your changes. Once a notebook is open in Colab, follow the instructions it contains to install the relevant packages, load the training dataset, train the model, check it on test data and perform inference (predictions) on unseen data.
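For reference, Colab opens a notebook hosted on GitHub through a URL of the form shown below; the CARE (3D) notebook, whose link appears in the denoising table, is used here purely as an example.

```python
# Sketch: the URL pattern Colab uses to open a notebook straight from GitHub.
# General form:
#   https://colab.research.google.com/github/<owner>/<repo>/blob/<branch>/<path-to-notebook>.ipynb
care_3d_notebook_url = (
    "https://colab.research.google.com/github/HenriquesLab/ZeroCostDL4Mic/"
    "blob/master/Colab_notebooks/CARE_3D_ZeroCostDL4Mic.ipynb"
)
# Opening this URL in a web browser launches the notebook in Colab;
# use "File > Save a copy in Drive" to keep an editable copy.
```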
With the exception of the U-Net training data, we provide training and test datasets that were generated by our labs; these can be downloaded from Zenodo using the links below. The U-Net data was obtained from the ISBI segmentation challenge.
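The Zenodo links below can be used directly in a browser; as an aside, records can also be fetched programmatically through Zenodo's public REST API. This is only a sketch, and the record ID below is a placeholder, not an actual ZeroCostDL4Mic dataset.

```python
# Sketch: downloading all files of a Zenodo record via the public REST API.
# The record ID is a placeholder - take the real ID from the dataset link.
import requests

RECORD_ID = "1234567"  # hypothetical example
record = requests.get(f"https://zenodo.org/api/records/{RECORD_ID}").json()

for f in record["files"]:
    name = f["key"]                 # file name within the record
    url = f["links"]["self"]        # direct download URL
    print(f"Downloading {name} ...")
    with open(name, "wb") as out:
        out.write(requests.get(url).content)
```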
Segmentation:
Network | Paper(s) | Tasks | Status | Link to example training and test dataset | Direct link to the notebook in Colab
---|---|---|---|---|---
U-Net (2D) | here and here | Segmentation | | ISBI challenge or here |
U-Net (2D) multilabel | here and here | Semantic segmentation | | here |
U-Net (3D) | here | Segmentation | | EPFL dataset |
StarDist (2D) | here and here | Nuclei segmentation | | here |
StarDist (3D) | here and here | Nuclei segmentation | | from the StarDist GitHub |
Cellpose (2D) | here | Cell or nuclei segmentation | Coming soon! | |
SplineDist | here | Instance segmentation | Coming soon! | |
EmbedSeg | here | Instance segmentation | Coming soon! | |
MaskRCNN | here | Instance segmentation | Coming soon! | |
DenoiSeg | here | Joint denoising and segmentation | Available soon | |
Interactive Segmentation - Kaibu | here | Interactive instance segmentation | Coming soon! | |
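To give a flavour of what the segmentation notebooks produce at prediction time, here is a minimal sketch of StarDist (2D) inference using the stardist Python package; the pre-trained demo model shown is shipped with stardist, and the image path is a hypothetical example (the notebooks instead use the model trained on your own data).

```python
# Minimal sketch (assumes the stardist, csbdeep and tifffile packages are
# installed; the input image path is a hypothetical example).
from csbdeep.utils import normalize
from stardist.models import StarDist2D
from tifffile import imread

# Load one of the demo models distributed with StarDist.
model = StarDist2D.from_pretrained('2D_versatile_fluo')

img = imread('nuclei.tif')                                # single-channel 2D image
labels, details = model.predict_instances(normalize(img, 1, 99.8))
# "labels" is an integer label image with one ID per segmented nucleus.
```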
Denoising:
Network | Paper(s) | Tasks | Status | Link to example training and test dataset | Direct link to the notebook in Colab
---|---|---|---|---|---
Noise2Void (2D) | here | Denoising | | here |
Noise2Void (3D) | here | Denoising | | here |
CARE (2D) | here | Denoising | | here |
CARE (3D) | here | Denoising | | here | [Open in Colab](https://colab.research.google.com/github/HenriquesLab/ZeroCostDL4Mic/blob/master/Colab_notebooks/CARE_3D_ZeroCostDL4Mic.ipynb)
3D-RCAN | here | Denoising | Available soon | |
DecoNoising (2D) | here | Denoising | | here |
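As an illustration of what the denoising notebooks produce, a CARE model trained and saved by the notebook can be reloaded in Python with the csbdeep package. This is only a sketch; the model folder name and image paths are hypothetical examples.

```python
# Minimal sketch (assumes the csbdeep and tifffile packages are installed;
# the model name/folder and the input image path are hypothetical examples).
from csbdeep.models import CARE
from tifffile import imread, imwrite

# Reload a model previously trained and saved by the CARE notebook.
model = CARE(config=None, name='my_CARE_2D_model', basedir='models')

noisy = imread('noisy_image.tif')            # 2D image, axes Y, X
restored = model.predict(noisy, axes='YX')   # apply the trained network

imwrite('restored_image.tif', restored)
```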
Image reconstruction (SMLM):
Network | Paper(s) | Tasks | Status | Link to example training and test dataset | Direct link to the notebook in Colab
---|---|---|---|---|---
Deep-STORM | here | Single Molecule Localization Microscopy (SMLM) image reconstruction from high-density emitter data | | Training data simulated in the notebook or available from here |
Object detection:
Network | Paper(s) | Tasks | Status | Link to example training and test dataset | Direct link to the notebook in Colab
---|---|---|---|---|---
YOLOv2 | here | Object detection (bounding boxes) | | here |
Detectron2 | here | Object detection (bounding boxes) | | here |
RetinaNet | here | Object detection (bounding boxes) | | here |
Image-to-image translation and artificial labelling:
Network | Paper(s) | Tasks | Status | Link to example training and test dataset | Direct link to the notebook in Colab
---|---|---|---|---|---
Label-free prediction (fnet) 2D | here | Artificial labelling | | here |
Label-free prediction (fnet) 3D | here | Artificial labelling | | here |
CycleGAN | here | Unpaired image-to-image translation | | here |
pix2pix | here | Paired image-to-image translation | | here |
Image registration:
Network | Paper(s) | Tasks | Status | Link to example training and test dataset | Direct link to the notebook in Colab
---|---|---|---|---|---
DRMIME | here | Affine or perspective image registration | Coming soon! | |
BioImage.IO export:
The following networks are compatible with BioImage.IO and can be used in ImageJ via deepImageJ.
Network | Paper(s) | Task | Link to example training and test dataset | Direct link to notebook in Colab |
---|---|---|---|---|
StarDist (2D) with DeepImageJ export | StarDist: here and here, and DeepImageJ | Nuclei segmentation | here | |
Deep-STORM with DeepImageJ export | Deep-STORM and DeepImageJ | Single Molecule Localization Microscopy (SMLM) image reconstruction from high-density emitter data | Training data simulated in the notebook or available from here | |
U-Net (2D) with DeepImageJ export | U-Net and DeepImageJ | Segmentation | ISBI challenge or here | |
U-Net (3D) with DeepImageJ export | 3D U-Net and DeepImageJ | Segmentation | EPFL dataset |
Tools:
Network | Paper(s) | Tasks | Status | Link to example training and test dataset | Direct link to the notebook in Colab
---|---|---|---|---|---
Augmentor | here | Image augmentation | | None |
Quality Control | Available soon | Error mapping and quality metrics estimation | | None |
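The Augmentor notebook enlarges a training set by applying simple geometric transforms to matching image/target pairs. Below is a generic, numpy-only sketch of that idea (illustrative, not the notebook's actual implementation; all file paths are hypothetical examples).

```python
# Generic data-augmentation sketch: rotations and mirrored copies applied
# identically to an image and its target (file paths are hypothetical).
import os
import numpy as np
from tifffile import imread, imwrite

image = imread('Training_source/img_0001.tif')
target = imread('Training_target/img_0001.tif')

augmented = []
for k in range(4):                       # 0, 90, 180 and 270 degree rotations
    rot_img = np.rot90(image, k)
    rot_tgt = np.rot90(target, k)
    augmented.append((rot_img, rot_tgt))
    augmented.append((np.fliplr(rot_img), np.fliplr(rot_tgt)))  # mirrored copy

os.makedirs('Augmented_source', exist_ok=True)
os.makedirs('Augmented_target', exist_ok=True)
for i, (img_aug, tgt_aug) in enumerate(augmented):
    imwrite(f'Augmented_source/img_0001_{i}.tif', img_aug)
    imwrite(f'Augmented_target/img_0001_{i}.tif', tgt_aug)
```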
Contributors:
- Lucas von Chamier
- Johanna Jukkala
- Christoph Spahn
- Martina Lerche
- Sara Hernández-Pérez
- Pieta K. Mattila
- Eleni Karinou
- Seamus Holden
- Ahmet Can Solak
- Alexander Krull
- Tim-Oliver Buchholz
- Florian Jug
- Loïc A Royer
- Mike Heilemann
- Romain F. Laine
- Guillaume Jacquemet
- Ricardo Henriques