Predicting Major Subcellular Structures from a Transmitted Light Image using Diffusion Models


Fluorescence microscopy has many applications, especially in healthcare, but it is often expensive, time-consuming, and damaging to cells. A potential alternative is transmitted light imaging, which is relatively cheap to obtain and label/dye free, but which lacks clear, structure-specific contrast. This work explores the use of diffusion models to translate transmitted light images into fluorescence images. Unlike traditional methods such as a U-Net regression, this approach tries to predict the entire target image distribution, so it can also produce variance maps.
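Conceptually, the translation follows standard conditional DDPM sampling: start from Gaussian noise and iteratively denoise, passing the transmitted light image to the network at every step as the conditioning signal. A minimal NumPy sketch of the reverse process, assuming a trained noise-prediction network `model(x_t, t, cond)` and a linear beta schedule (all names here are illustrative, not taken from this repo):

```python
import numpy as np

def make_schedule(T=50, beta_start=1e-4, beta_end=0.02):
    """Linear beta schedule and the derived alpha terms."""
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    return betas, alphas, alpha_bars

def sample(model, cond, shape, T=50, seed=0):
    """Reverse DDPM sampling conditioned on the transmitted-light image `cond`."""
    rng = np.random.default_rng(seed)
    betas, alphas, alpha_bars = make_schedule(T)
    x = rng.standard_normal(shape)  # start from pure Gaussian noise
    for t in reversed(range(T)):
        # Network predicts the noise component, given the conditioning image.
        eps = model(x, t, cond)
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / np.sqrt(alphas[t])
        if t > 0:  # add fresh noise at every step except the last
            x = x + np.sqrt(betas[t]) * rng.standard_normal(shape)
    return x
```

Because the reverse process is stochastic, repeated calls with different seeds give different plausible fluorescence images for the same input.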

Diffusion Sampling Output - Example - Transmitted Light Image to Fluorescent Target (TOM20 labeled with Alexa Fluor 594):

Dataset credit: Spahn, C., & Heilemann, M. (2020). ZeroCostDL4Mic - Label-free prediction (fnet) example training and test dataset (Version v2) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.3748967
Input Transmitted Light Image | Ground Truth Fluorescent Target (TOM20) | Diffusion Model Sampling Process

Diffusion Sampling Output - Example - lifeact-RFP to sir-DNA:

Input Conditional Signal (lifeact-RFP) | Ground Truth Target (sir-DNA) | Diffusion Model Sampling Process

Uncertainty Map for lifeact-RFP to sir-DNA:
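Because the model predicts a distribution rather than a single image, an uncertainty map falls out of repeated sampling: draw several fluorescence predictions for the same transmitted light input and take per-pixel statistics. A hedged sketch (function names are mine, not the repo's):

```python
import numpy as np

def uncertainty_map(sample_fn, cond, shape, n_samples=16, seed=0):
    """Run the stochastic sampler several times on the same conditioning image;
    return the per-pixel mean prediction and per-pixel standard deviation."""
    samples = np.stack([sample_fn(cond, shape, seed + i) for i in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)  # mean image, uncertainty map
```

High-variance pixels mark structures the model is unsure about, which is exactly the information a deterministic U-Net regression cannot provide.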

Architecture:

Complete Report:

https://drive.google.com/file/d/15_wCXFuqHkFsNnH8OVUgwUu_2FYEw33r/view?usp=sharing

Presentation:

https://docs.google.com/presentation/d/1GJT3Eeq-3QbhA5H54fTH7MeN7_pI-xaNqsKhT9L33c8/edit?usp=sharing

Credits:

  1. Prafulla Dhariwal and Alex Nichol (2021). Diffusion Models Beat GANs on Image Synthesis. CoRR, abs/2105.05233. (https://arxiv.org/abs/2105.05233). Source code: https://github.com/openai/guided-diffusion
  2. https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix - For the discriminator network architecture
  3. Weng, Lilian. (Jul 2021). What are diffusion models? Lil’Log. https://lilianweng.github.io/posts/2021-07-11-diffusion-models/
  4. Many thanks to my supervisor, Dr. Iain Styles (https://www.cs.bham.ac.uk/~ibs/), without whom this work would not have been possible; his constant assistance and guidance at every stage of the research was immensely helpful. Thanks also to Dr. Carl Wilding (https://uk.linkedin.com/in/carl-wilding-4048a5101) for providing constructive feedback on my project.

For complete credits, please refer to my report. For any issues, please feel free to contact me and I'll try to respond.
