
amyloid-yolo-paper

author: Daniel Wong ([email protected])

Open access image data

DOI: 10.17605/OSF.IO/FCPMW
Please download the zip file called data.zip and place it in the amyloid-yolo-paper/ directory

Installation Instructions:

We've included an example conda environment file in this repository called YOLOv3_.yml. To install the necessary packages, first install conda (https://conda.io/projects/conda/en/latest/user-guide/install/index.html), then run `conda env create -f YOLOv3_.yml -n YOLOv3` to create a new conda environment (called YOLOv3) from the .yml file. Installation should take just a few minutes. Alternatively, the Python packages and their version numbers are listed in requirements.txt

Hardware and Software Specifications:

All deep learning models were trained on Nvidia GeForce GTX 1080 GPUs in a 64-CPU machine running CentOS Linux (version 7).

Content:

checkpoints:
contains different PyTorch models saved at each epoch during training of model version 2. The model "yolov3_ckpt_105.pth" was the final one used for prospective validation.

checkpoints_modelv1:
contains different PyTorch models saved at each epoch during training of model version 1. The model "yolov3_ckpt_157.pth" was used for making the CAA training labels for training model version 2.

config
contains original configuration files from the repo: https://github.com/eriklindernoren/PyTorch-YOLOv3

core.py
contains most of the core method and class definitions of the study

crop.py
is the script used to crop the whole-slide images (WSIs) into smaller 1536 x 1536 pixel tiles
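Tiling a WSI amounts to sliding a fixed 1536 x 1536 window across the slide. A minimal sketch of the tile-coordinate computation (the helper name and the drop-partial-edge-tile policy are assumptions for illustration, not necessarily the script's actual behavior):

```python
def tile_coordinates(width, height, tile_size=1536):
    """Yield (left, top) corners of non-overlapping tile_size x tile_size
    tiles that fit entirely inside a width x height image.
    Partial tiles at the right/bottom edges are dropped in this sketch."""
    for top in range(0, height - tile_size + 1, tile_size):
        for left in range(0, width - tile_size + 1, tile_size):
            yield (left, top)

# e.g. a 4000 x 3200 pixel region yields a 2 x 2 grid of full tiles
corners = list(tile_coordinates(4000, 3200))
```

Each (left, top) corner would then be passed to an image library (e.g. pyvips, per pyvips.yml) to extract the actual pixel data.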

data:
This folder contains the image dataset and labels:
        amyloid_test: contains all of the raw test set images
        amyloid_train: contains all of the raw training set images
        CERAD: contains all of the image data pertaining to the CERAD validation analysis. The dataset is pulled from https://zenodo.org/record/1470797#.YapievHMK3I
        custom: contains labels, the training validation split, and raw images used for training model version 2
        MRPI_tiles: contains all of the 1536 x 1536 pixel tiles in the MRPI grant. Only the tiles used for prospective validation are released in this study. The full dataset will be released in a subsequent publication.

detect.py
is the script used to run the object detector on images and save the resulting boxed output images to output/. For example, to run detection on the prospective validation images:

```
python3 detect.py --image_folder prospective_validation_images/ --class_path data/custom/classes.names --model_def config/yolov3-custom.cfg --checkpoint_model yolov3_ckpt_105.pth --conf_thres 0.8 --weights_path checkpoints/yolov3_ckpt_105.pth --img_size 416 --merge_boxes True --filter_CAA_detections_by_model True
```

figures
is a destination directory for saving figure images

models.py
contains method and class definitions relevant to the model architecture

original_data
contains the original labels used for training model version 1. These labels are the raw bounding box annotations from a consensus of 2 experts, where any overlapping boxes of the same class are merged into a super box. Contrast this with the labels found in the data/custom/ directory, in which only the training set labels are modified such that CAA predictions from model version 1 serve as label data (to train model version 2).
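The merge rule described above (overlapping same-class boxes collapse into the smallest enclosing "super box") can be sketched as follows; this is an illustrative implementation under stated assumptions, not the repository's actual code:

```python
def overlaps(a, b):
    """Axis-aligned overlap test for boxes given as (x1, y1, x2, y2)."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def merge_boxes(boxes):
    """Repeatedly merge any two overlapping boxes of the same class into
    the smallest box enclosing both, until no overlaps remain.
    Boxes are (x1, y1, x2, y2, cls) tuples."""
    boxes = list(boxes)
    merged = True
    while merged:
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                a, b = boxes[i], boxes[j]
                if a[4] == b[4] and overlaps(a[:4], b[:4]):
                    super_box = (min(a[0], b[0]), min(a[1], b[1]),
                                 max(a[2], b[2]), max(a[3], b[3]), a[4])
                    boxes[j] = super_box  # keep the enclosing box
                    del boxes[i]          # drop one of the merged pair
                    merged = True
                    break
            if merged:
                break
    return boxes
```

Iterating until no overlaps remain matters because merging two boxes can create a super box that newly overlaps a third.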

output/
is a temporary destination directory for writing different image outputs for inspection

pickles
contains different pickle files

PRC_tables
contains various intermediate CSVs used for calculating precision-recall metrics during the prospective validation phase of the study
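As a reminder of the arithmetic these tables feed into, precision and recall are computed from true-positive, false-positive, and false-negative counts. A generic sketch (not the repository's actual CSV schema or code):

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN).
    Returns (0.0, 0.0) components when a denominator is zero."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# e.g. 80 true positives, 20 false positives, 40 missed detections
p, r = precision_recall(80, 20, 40)  # p = 0.8, r = 80/120
```

Sweeping the detection confidence threshold and recomputing these values at each setting traces out the precision-recall curve.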

prospective_annotations
contains raw expert annotations for the prospective validation phase of the study

prospective.py
contains the method definitions and runner code for the prospective validation phase of the study

prospective_validation_images
contains the raw images used in the prospective validation phase of the study

pyvips.yml
is an example conda environment file that can be used to install the packages needed for cropping the WSI images

test.py
is the script used to evaluate the model. For example:

```
python3 test.py --model_def config/yolov3-custom.cfg --data_config config/custom.data --weights_path checkpoints/yolov3_ckpt_105.pth --img_size 416
```

train.py
is the script used to train the model; checkpoints are saved to the checkpoints/ directory

unit_test.py
contains various unit tests

utils
contains various utility scripts, originally pulled from https://github.com/eriklindernoren/PyTorch-YOLOv3

validation.py
contains method definitions for analysis post-training and pre-prospective validation.

weights
contains pre-trained Darknet weights

YOLOv3.yml
is an example conda environment that can be used for installing the necessary packages.
