
LazySlide Benchmark

This project benchmarks the usage simplicity of LazySlide compared to other whole slide image (WSI) processing libraries:

  • LazySlide - Accessible and interoperable whole slide image analysis
  • CLAM - Clustering-constrained Attention Multiple-instance Learning
  • TRIDENT - Toolkit for large-scale whole-slide image processing.
  • PathML - Tools for computational pathology
  • Tiatoolbox - Computational Pathology Toolbox developed by TIA Centre, University of Warwick.
  • Histolab - Library for Digital Pathology Image Processing
  • Slideflow - Powerful, open-source AI tools for digital pathology.

Benchmark Overview

Tasks:

  • Preprocessing (tissue segmentation, tiling)
  • Feature extraction with ResNet50
  • Build a torch dataset that fetches tile images
  • Store the extracted features together with tile coordinates in an AnnData object

Each library will be used to achieve the above tasks.
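For context, the target output of the last task can be sketched with numpy alone. This is a minimal stand-in with hypothetical sizes (100 tiles, 2048-dim ResNet50 features); the actual benchmarks build an `anndata.AnnData` with the coordinates stored in `.obsm`:

```python
import numpy as np

# Hypothetical sizes: 100 tiles, 2048-dim ResNet50 features.
n_tiles, n_features = 100, 2048
rng = np.random.default_rng(0)

# One feature vector per tile ...
features = rng.standard_normal((n_tiles, n_features)).astype(np.float32)
# ... paired with the (x, y) pixel coordinate of each tile on the slide.
coords = rng.integers(0, 50_000, size=(n_tiles, 2))

# Equivalent AnnData construction (assuming anndata is installed):
#   adata = anndata.AnnData(X=features)
#   adata.obsm["spatial"] = coords
print(features.shape, coords.shape)
```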

The code for each library is placed under ./benchmarks. The following metrics are used to profile code complexity:

  • Number of tokens
  • Lines of code
  • Entropy of API calls
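As an illustration of these metrics, here is a minimal sketch using only the Python standard library. The exact rules in script_stats.py may differ; in this sketch an "API call" is approximated as a name token immediately followed by an opening parenthesis, and entropy is the Shannon entropy (in bits) of the call-name distribution:

```python
import io
import math
import tokenize
from collections import Counter

def code_stats(source: str) -> dict:
    """Token count, non-blank line count, and entropy of called names."""
    tokens = list(tokenize.generate_tokens(io.StringIO(source).readline))
    # Count meaningful tokens, skipping layout-only tokens.
    skip = (tokenize.NL, tokenize.NEWLINE, tokenize.INDENT,
            tokenize.DEDENT, tokenize.ENDMARKER)
    n_tokens = sum(1 for t in tokens if t.type not in skip)
    n_lines = sum(1 for line in source.splitlines() if line.strip())
    # Approximate "API calls" as NAME tokens directly followed by "(".
    calls = Counter(
        a.string
        for a, b in zip(tokens, tokens[1:])
        if a.type == tokenize.NAME and b.string == "("
    )
    total = sum(calls.values())
    entropy = (
        -sum((c / total) * math.log2(c / total) for c in calls.values())
        if total else 0.0
    )
    return {"tokens": n_tokens, "lines": n_lines, "entropy": entropy}

sample = "import math\nx = math.sqrt(4)\nprint(x)\n"
print(code_stats(sample))  # two distinct calls (sqrt, print) -> entropy 1.0
```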

Project Structure

lazyslide-benchmark/
├── benchmarks/           # Individual benchmark scripts for each library
│   ├── lazyslide_benchmark.py
│   ├── clam_benchmark.py
│   ├── pathml_benchmark.py
│   ├── tiatoolbox_benchmark.py
│   ├── histolab_benchmark.py
│   └── slideflow_benchmark.py
├── data/                 # Directory for benchmark data
│   └── download_data.py  # Script to download the benchmark WSI
├── script_stats.py       # Script to run all benchmark summaries
└── README.md             # This file

Running the Benchmark

```shell
git clone https://github.com/rendeirolab/lazyslide-benchmark.git
cd lazyslide-benchmark
uv run python script_stats.py
```

This produces two files: script_stats.csv and benchmark_results.pdf.

Running the task scripts for each library

To execute the scripts under ./benchmarks, follow the instructions below.

Run LazySlide/CLAM/TRIDENT/Histolab/PathML

```shell
lib=lazyslide # Replace 'lazyslide' with clam, trident, histolab, or pathml as needed

docker build --pull --rm -t $lib:latest -f docker/$lib/Dockerfile .

docker run -it --rm --gpus device=0 -v ./data:/bench/data -v ./benchmarks/${lib}_benchmark.py:/bench/main.py $lib:latest

# in the container
uv run python main.py
```

Run TiaToolbox

```shell
docker pull ghcr.io/tissueimageanalytics/tiatoolbox:latest

docker build --pull --rm -t tiatoolbox:latest -f docker/tiatoolbox/Dockerfile .

docker run -it --rm --gpus device=0 -v ./data:/bench/data -v ./benchmarks/tiatoolbox_benchmark.py:/bench/main.py -w /bench ghcr.io/tissueimageanalytics/tiatoolbox:latest

# in the container
pip install anndata
python main.py
```

Run Slideflow

```shell
docker pull jamesdolezal/slideflow:latest-torch

docker run -it --rm --gpus device=0 -v ./data:/bench/data -v ./benchmarks/slideflow_benchmark.py:/bench/main.py -w /bench jamesdolezal/slideflow:latest-torch

# in the container
pip install anndata
python main.py
```
