
Commit

Merge branch 'main' of github.com:Tonks684/dlmbl_material
Tonks684 committed Aug 15, 2024
2 parents e941705 + 18e58d8 commit 1545555
Showing 3 changed files with 83 additions and 54 deletions.
4 changes: 1 addition & 3 deletions .github/workflows/build-notebooks.yml
Original file line number Diff line number Diff line change
@@ -1,9 +1,7 @@
name: Build Notebooks
on:
push:
paths:
- './solution.py'



jobs:
run:
Expand Down
59 changes: 37 additions & 22 deletions exercise.ipynb
Original file line number Diff line number Diff line change
Expand Up @@ -88,16 +88,26 @@
"cell_type": "code",
"execution_count": null,
"id": "74051a44",
"metadata": {},
"outputs": [],
"source": [
"# TO DO: Change the path to the directory where the data and code is stored is stored.\n",
"import os\n",
"import sys\n",
"parent_dir = os.path.abspath(\"~/data/06_image_translation/part2/\")\n",
"sys.path.append(parent_dir)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3712f6b8",
"metadata": {
"lines_to_next_cell": 0
},
"outputs": [],
"source": [
"from pathlib import Path\n",
"import os\n",
"import sys\n",
"parent_dir = os.path.abspath(os.path.join(os.getcwd(), '..'))\n",
"sys.path.append(parent_dir)\n",
"import torch\n",
"import numpy as np\n",
"import pandas as pd\n",
Expand Down Expand Up @@ -305,19 +315,15 @@
"\n",
"## A heads up of what to expect from the training...\n",
"<br><br>\n",
"**Visualise Phase, Fluorescence and Virtual Stain for Validation Examples**<br>\n",
"- We can observe how the performance improves over time using the images tab and the sliding window.\n",
"**- Visualise results**: We can observe how the performance improves over time using the images tab and the sliding window.\n",
"<br><br>\n",
"**Discriminator Predicted Probabilities**<br>\n",
"- We plot the discriminator's predicted probabilities that the phase with fluorescence is phase and fluorescence and that the phase with virtual stain is phase with virtual stain. It is typically trained until the discriminator can no longer classify whether or not the generated images are real or fake better than a random guess (p(0.5)). We plot this for both the training and validation datasets.\n",
"**- Discriminator Predicted Probabilities**: We plot the discriminator's predicted probabilities that the phase with fluorescence is phase and fluorescence and that the phase with virtual stain is phase with virtual stain. It is typically trained until the discriminator can no longer classify whether or not the generated images are real or fake better than a random guess (p(0.5)). We plot this for both the training and validation datasets.\n",
"<br><br>\n",
"**Adversarial Loss**<br>\n",
"- We can formulate the adversarial loss as a Least Squared Error Loss in which for real data the discriminator should output a value close to 1 and for fake data a value close to 0. The generator's goal is to make the discriminator output a value as close to 1 for fake data. We plot the least squared error loss.\n",
"**- Adversarial Loss**: We can formulate the adversarial loss as a Least Squared Error Loss in which for real data the discriminator should output a value close to 1 and for fake data a value close to 0. The generator's goal is to make the discriminator output a value as close to 1 for fake data. We plot the least squared error loss.\n",
"<br><br>\n",
"**Feature Matching Loss**<br>\n",
"- Both networks are also trained using the generator feature matching loss which encourages the generator to produce images that contain similar statistics to the real images at each scale. We also plot the feature matching L1 loss for the training and validation sets together to observe the performance and how the model is fitting the data.<br><br>\n",
"**- Feature Matching Loss**: Both networks are also trained using the generator feature matching loss which encourages the generator to produce images that contain similar statistics to the real images at each scale. We also plot the feature matching L1 loss for the training and validation sets together to observe the performance and how the model is fitting the data.<br><br>\n",
"<br><br>\n",
"This implementation allows for the turning on/off of the least-square loss term by setting the opt.no_lsgan flag to the model options. As well as the turning off of the feature matching loss term by setting the opt.no_ganFeat_loss flag to the model options. Something you might want to explore in the next section!<br><br>\n",
"- This implementation allows for the turning on/off of the least-square loss term by setting the opt.no_lsgan flag to the model options. As well as the turning off of the feature matching loss term by setting the opt.no_ganFeat_loss flag to the model options. Something you might want to explore in the next section!<br><br>\n",
"</div>"
]
},
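The least-squares adversarial objective and the feature-matching term described in the cell above are compact enough to sketch directly. Below is a minimal, illustrative PyTorch version; `d_real`, `d_fake`, `feats_real`, and `feats_fake` are assumed names for the discriminator's outputs and intermediate features, not pix2pixHD's actual API:

```python
import torch
import torch.nn.functional as F

def lsgan_d_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    # Discriminator: push outputs toward 1 for real pairs and toward 0 for fake pairs.
    return F.mse_loss(d_real, torch.ones_like(d_real)) + \
           F.mse_loss(d_fake, torch.zeros_like(d_fake))

def lsgan_g_loss(d_fake: torch.Tensor) -> torch.Tensor:
    # Generator: make the discriminator output values close to 1 for fake pairs.
    return F.mse_loss(d_fake, torch.ones_like(d_fake))

def feature_matching_loss(feats_real, feats_fake) -> torch.Tensor:
    # L1 distance between real and fake discriminator features at each scale;
    # real features are detached so only the generator receives gradients.
    return sum(F.l1_loss(f_fake, f_real.detach())
               for f_real, f_fake in zip(feats_real, feats_fake))
```

Passing opt.no_lsgan or opt.no_ganFeat_loss, as described above, corresponds to dropping the respective term from the total objective.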
Expand Down Expand Up @@ -522,7 +528,7 @@
"test_data_loader = CreateDataLoader(opt)\n",
"test_dataset = test_data_loader.load_data()\n",
"visualizer = Visualizer(opt)\n",
"\n",
"print(f\"Total Test Images = {len(test_data_loader)}\")\n",
"# Load pre-trained model\n",
"model = create_model(opt)"
]
Expand Down Expand Up @@ -573,21 +579,30 @@
"cell_type": "markdown",
"id": "0dbb8ea0",
"metadata": {
"lines_to_next_cell": 1,
"cell_marker": "\"\"\"",
"lines_to_next_cell": 0,
"tags": []
},
"source": [
"\"\"\"\n",
"<div class=\"alert alert-info\">\n",
"\n",
"## Task 3.1 Visualise the results of the model on the test set.\n",
"### Task 3.1 Visualise the results of the model on the test set.\n",
"\n",
"Create a matplotlib plot that visalises random samples of the phase images, target stains, and virtual stains.\n",
"If you can incorporate the crop function below to zoom in on the images that would be great!\n",
"</div>\n",
"\"\"\"\n",
"\n",
"Define a function to crop the images so we can zoom in.\n",
"</div>"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cb768fe1",
"metadata": {
"lines_to_next_cell": 1
},
"outputs": [],
"source": [
"# Define a function to crop the images so we can zoom in.\n",
"def crop(img, crop_size, type=None):\n",
" \"\"\"\n",
" Crop the input image.\n",
Expand Down Expand Up @@ -719,9 +734,9 @@
" column=[\"psnr_nuc\"],\n",
" rot=30,\n",
")\n",
"test_pixel_metrics.head()\n",
"#%%[markdown]\n",
"\"\"\"\n",
"########## TODO ##############\n",
"- What do these metrics tells us about the performance of the model?\n",
"- How do the pixel-level metrics compare to the regression-based approach?\n",
"- Could these metrics be skewed by the presence of hallucinations or background pilxels in the virtual stains?\n",
Expand Down
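For Task 3.1, a possible starting point is sketched below. It is only a sketch: it assumes `phase_images`, `target_stains`, and `virtual_stains` are (N, H, W) numpy arrays, that the `crop(img, crop_size, type)` helper defined in the notebook is in scope, and that `"center"` is a supported crop type.

```python
import numpy as np
import matplotlib.pyplot as plt

def visualise_samples(phase_images, target_stains, virtual_stains,
                      n_samples=3, crop_size=None):
    # Pick random test images and show phase / target / virtual stain side by side.
    fig, axes = plt.subplots(n_samples, 3, figsize=(9, 3 * n_samples))
    indices = np.random.choice(len(phase_images), n_samples, replace=False)
    for row, idx in enumerate(indices):
        images = (phase_images[idx], target_stains[idx], virtual_stains[idx])
        titles = ("Phase", "Target Stain", "Virtual Stain")
        for col, (img, title) in enumerate(zip(images, titles)):
            if crop_size is not None:
                img = crop(img, crop_size, "center")  # zoom in on the centre
            axes[row, col].imshow(img, cmap="gray")
            axes[row, col].set_title(title)
            axes[row, col].axis("off")
    plt.tight_layout()
    plt.show()
```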
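The pixel-level questions in the TODO above are easier to reason about with the metric in hand. A hedged sketch of how a per-image PSNR value such as `psnr_nuc` could be computed with scikit-image (`target_nuclei` and `predicted_nuclei` are assumed names for a single-channel target/prediction pair):

```python
from skimage.metrics import peak_signal_noise_ratio

# target_nuclei and predicted_nuclei are single-channel images on the same scale.
psnr_nuc = peak_signal_noise_ratio(
    target_nuclei, predicted_nuclei,
    data_range=target_nuclei.max() - target_nuclei.min(),
)
```

Because PSNR averages over every pixel, large well-predicted background regions can inflate the score even when the stained structures themselves are hallucinated, which is exactly the caveat the last question raises.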
74 changes: 45 additions & 29 deletions solution.ipynb
Original file line number Diff line number Diff line change
Expand Up @@ -88,16 +88,26 @@
"cell_type": "code",
"execution_count": null,
"id": "08152e85",
"metadata": {},
"outputs": [],
"source": [
"# TO DO: Change the path to the directory where the data and code is stored is stored.\n",
"import os\n",
"import sys\n",
"parent_dir = os.path.abspath(\"~/data/06_image_translation/part2/\")\n",
"sys.path.append(parent_dir)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "021ca571",
"metadata": {
"lines_to_next_cell": 0
},
"outputs": [],
"source": [
"from pathlib import Path\n",
"import os\n",
"import sys\n",
"parent_dir = os.path.abspath(os.path.join(os.getcwd(), '..'))\n",
"sys.path.append(parent_dir)\n",
"import torch\n",
"import numpy as np\n",
"import pandas as pd\n",
Expand Down Expand Up @@ -305,19 +315,15 @@
"\n",
"## A heads up of what to expect from the training...\n",
"<br><br>\n",
"**Visualise Phase, Fluorescence and Virtual Stain for Validation Examples**<br>\n",
"- We can observe how the performance improves over time using the images tab and the sliding window.\n",
"**- Visualise results**: We can observe how the performance improves over time using the images tab and the sliding window.\n",
"<br><br>\n",
"**Discriminator Predicted Probabilities**<br>\n",
"- We plot the discriminator's predicted probabilities that the phase with fluorescence is phase and fluorescence and that the phase with virtual stain is phase with virtual stain. It is typically trained until the discriminator can no longer classify whether or not the generated images are real or fake better than a random guess (p(0.5)). We plot this for both the training and validation datasets.\n",
"**- Discriminator Predicted Probabilities**: We plot the discriminator's predicted probabilities that the phase with fluorescence is phase and fluorescence and that the phase with virtual stain is phase with virtual stain. It is typically trained until the discriminator can no longer classify whether or not the generated images are real or fake better than a random guess (p(0.5)). We plot this for both the training and validation datasets.\n",
"<br><br>\n",
"**Adversarial Loss**<br>\n",
"- We can formulate the adversarial loss as a Least Squared Error Loss in which for real data the discriminator should output a value close to 1 and for fake data a value close to 0. The generator's goal is to make the discriminator output a value as close to 1 for fake data. We plot the least squared error loss.\n",
"**- Adversarial Loss**: We can formulate the adversarial loss as a Least Squared Error Loss in which for real data the discriminator should output a value close to 1 and for fake data a value close to 0. The generator's goal is to make the discriminator output a value as close to 1 for fake data. We plot the least squared error loss.\n",
"<br><br>\n",
"**Feature Matching Loss**<br>\n",
"- Both networks are also trained using the generator feature matching loss which encourages the generator to produce images that contain similar statistics to the real images at each scale. We also plot the feature matching L1 loss for the training and validation sets together to observe the performance and how the model is fitting the data.<br><br>\n",
"**- Feature Matching Loss**: Both networks are also trained using the generator feature matching loss which encourages the generator to produce images that contain similar statistics to the real images at each scale. We also plot the feature matching L1 loss for the training and validation sets together to observe the performance and how the model is fitting the data.<br><br>\n",
"<br><br>\n",
"This implementation allows for the turning on/off of the least-square loss term by setting the opt.no_lsgan flag to the model options. As well as the turning off of the feature matching loss term by setting the opt.no_ganFeat_loss flag to the model options. Something you might want to explore in the next section!<br><br>\n",
"- This implementation allows for the turning on/off of the least-square loss term by setting the opt.no_lsgan flag to the model options. As well as the turning off of the feature matching loss term by setting the opt.no_ganFeat_loss flag to the model options. Something you might want to explore in the next section!<br><br>\n",
"</div>"
]
},
Expand Down Expand Up @@ -522,7 +528,7 @@
"test_data_loader = CreateDataLoader(opt)\n",
"test_dataset = test_data_loader.load_data()\n",
"visualizer = Visualizer(opt)\n",
"\n",
"print(f\"Total Test Images = {len(test_data_loader)}\")\n",
"# Load pre-trained model\n",
"model = create_model(opt)"
]
Expand Down Expand Up @@ -573,21 +579,30 @@
"cell_type": "markdown",
"id": "d168855d",
"metadata": {
"lines_to_next_cell": 1,
"cell_marker": "\"\"\"",
"lines_to_next_cell": 0,
"tags": []
},
"source": [
"\"\"\"\n",
"<div class=\"alert alert-info\">\n",
"\n",
"## Task 3.1 Visualise the results of the model on the test set.\n",
"### Task 3.1 Visualise the results of the model on the test set.\n",
"\n",
"Create a matplotlib plot that visalises random samples of the phase images, target stains, and virtual stains.\n",
"If you can incorporate the crop function below to zoom in on the images that would be great!\n",
"</div>\n",
"\"\"\"\n",
"\n",
"Define a function to crop the images so we can zoom in.\n",
"</div>"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "915ddf15",
"metadata": {
"lines_to_next_cell": 1
},
"outputs": [],
"source": [
"# Define a function to crop the images so we can zoom in.\n",
"def crop(img, crop_size, type=None):\n",
" \"\"\"\n",
" Crop the input image.\n",
Expand Down Expand Up @@ -759,9 +774,9 @@
" column=[\"psnr_nuc\"],\n",
" rot=30,\n",
")\n",
"test_pixel_metrics.head()\n",
"#%%[markdown]\n",
"\"\"\"\n",
"########## TODO ##############\n",
"- What do these metrics tells us about the performance of the model?\n",
"- How do the pixel-level metrics compare to the regression-based approach?\n",
"- Could these metrics be skewed by the presence of hallucinations or background pilxels in the virtual stains?\n",
Expand Down Expand Up @@ -963,7 +978,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "0a8ccbd2",
"id": "6665fe93",
"metadata": {
"lines_to_next_cell": 1,
"tags": [
Expand All @@ -978,15 +993,16 @@
"##########################\n",
"\n",
"def visualise_both_methods(\n",
" phase_images: np.array, target_stains: np.array, pix2pixHD_results: np.array, viscy_results: np.array,crop_size=None\n",
" phase_images: np.array, target_stains: np.array, pix2pixHD_results: np.array, viscy_results: np.array,crop_size=None,crop_type='center'\n",
"):\n",
" fig, axes = plt.subplots(5, 4, figsize=(15, 15))\n",
" sample_indices = np.random.choice(len(phase_images), 5)\n",
" if crop is not None:\n",
" phase_images = phase_images[:,:crop_size,:crop_size]\n",
" target_stains = target_stains[:,:crop_size,:crop_size]\n",
" pix2pixHD_results = pix2pixHD_results[:,:crop_size,:crop_size]\n",
" viscy_results = viscy_results[:,:crop_size,:crop_size]\n",
" if crop_size is not None:\n",
" phase_image = crop(phase_image, crop_size, crop_type)\n",
" target_stain = crop(target_stain, crop_size, crop_type)\n",
" target_label = crop(target_label, crop_size, crop_type)\n",
" pred_stain = crop(pred_stain, crop_size, crop_type)\n",
" pred_label = crop(pred_label, crop_size, crop_type)\n",
"\n",
" for i, idx in enumerate(sample_indices):\n",
" axes[i, 0].imshow(phase_images[idx], cmap=\"gray\")\n",
Expand Down
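Assuming the four stacks have already been collected from the two models (the names below are illustrative, not variables defined in the notebook), the corrected helper above could be driven like this:

```python
# phase, target, p2p, viscy: (N, H, W) numpy arrays gathered from the test set.
visualise_both_methods(phase, target, p2p, viscy,
                       crop_size=256, crop_type="center")
```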
