diff --git a/environment.yml b/environment.yml index 9beec7ea..ed8c0da9 100644 --- a/environment.yml +++ b/environment.yml @@ -4,14 +4,8 @@ channels: - anaconda dependencies: - python=3.9 - - matplotlib - - numpy - - scipy - - pandas - - fsspec - - pyyaml - - requests - - aiohttp - - intake - - intake-xarray - - jupyter \ No newline at end of file + - jupyter + - pip + - pip: + - -r requirements.txt + - git+https://github.com/ots22/intake-xarray@feature/exif \ No newline at end of file diff --git a/examples/README.md b/examples/README.md index 6dc17e23..f201e769 100644 --- a/examples/README.md +++ b/examples/README.md @@ -5,9 +5,9 @@ This directory contains: To run any of the notebooks in this directory locally, do the following, from the top level of this repo: -1. Install scivision: `pip install -v -e .` -2. Create the environment for the notebooks: `conda env create -f environment.yml` -3. Activate it: `conda activate scivision` +1. Create the environment for the notebooks: `conda env create -f environment.yml` +2. Activate it: `conda activate scivision` +3. Install scivision: `pip install -e .` 4. Open the notebook in `/examples` with `jupyter notebook` **Visit the [Scivision Gallery](https://github.com/scivision-gallery) to see more examples and use-cases.** diff --git a/examples/scivision-core-functionality.ipynb b/examples/scivision-core-functionality.ipynb index 62a55ca0..f8a765db 100644 --- a/examples/scivision-core-functionality.ipynb +++ b/examples/scivision-core-functionality.ipynb @@ -8,23 +8,17 @@ "\n", "In this notebook, we will:\n", "\n", - "1. Demonstrate using the scivision [Python API](https://scivision.readthedocs.io/en/latest/api.html) to load a pretrained (ImageNet) model, which we previously added to the scivision catalog with the name \"scivision-test-plugin\", as per [this guide](https://scivision.readthedocs.io/en/latest/contributing.html#extending-the-scivision-catalog)\n", + "1. 
Demonstrate using the scivision [Python API](https://scivision.readthedocs.io/en/latest/api.html) to load several pretrained image classification models\n", "2. Use the scivision catalog to find a matching dataset, which the model can be run on\n", - "3. Run the model on the data, performing simple model inference" + "3. Run the model on the data, performing simple model inference\n", + "4. Use the scivision catalog to find another model that can be run on the same dataset" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "Note: The model repository follows the strcuture specified in [this template](https://scivision.readthedocs.io/en/latest/model_repository_template.html), including a `scivision` [model config file](https://github.com/alan-turing-institute/scivision-test-plugin/blob/main/.scivision/model.yml)." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We first import some things from scivision: `default_catalog` is a scivision **catalog** that will let us discover models and datasets, and `load_pretrained_model` provides a convenient way to load and run a model." + "First let's import some things from scivision: `default_catalog` is a scivision **catalog** that will let us discover models and datasets, and `load_pretrained_model` provides a convenient way to load and run a model." ] }, { @@ -44,7 +38,7 @@ "\n", "A scivision catalog is a collection of **models** and **datasources**.\n", "\n", - "For this example, we want to find datasources compatible with \"scivision-test-plugin\". But first, let's first let's use the catalog to retrive the \"scivision-test-plugin\" repository url, take a look at the other models in the *default catalog* (the built-in catalog, distributed as part of scivision) and see how this catalog is structured." + "For this example, we want to find datasources compatible with the model catalog entry \"scivision_classifier\". But first, let's use the catalog to retrieve the \"scivision_classifier\" repository url and take a look at the data contained in the *default catalog* (the built-in catalog, distributed as part of scivision) and see how this is structured." ] }, { @@ -55,7 +49,7 @@ { "data": { "text/plain": [ - "AnyUrl('https://github.com/alan-turing-institute/scivision-test-plugin', scheme='https', host='github.com', tld='com', host_type='domain', path='/alan-turing-institute/scivision-test-plugin')" + "AnyUrl('https://github.com/alan-turing-institute/scivision_classifier', scheme='https', host='github.com', tld='com', host_type='domain', path='/alan-turing-institute/scivision_classifier')" ] }, "execution_count": 2, @@ -66,8 +60,8 @@ "source": [ "# Get the model repo url\n", "models_catalog = default_catalog.models.to_dataframe()\n", - "stp_repo = models_catalog[models_catalog.name == \"scivision-test-plugin\"].url.item()\n", - "stp_repo # Why not paste the repo link into your browser and see how it looks?" + "model_repo = models_catalog[models_catalog.name == \"scivision_classifier\"].url.item()\n", + "model_repo # Why not paste the repo link into your browser and see how it looks?" ] }, { @@ -110,116 +104,34 @@ " \n",
[notebook output: `models_catalog` displayed as a DataFrame]

|   | name | description | tasks | url | pkg_url | format | pretrained | labels_required | institution | tags |
|---|------|-------------|-------|-----|---------|--------|------------|-----------------|-------------|------|
| 0 | model-000 | None | (TaskEnum.object_detection, TaskEnum.segmentat... | https://github.com/stardist/stardist | git+https://github.com/stardist/stardist.git@main | image | True | True | (epfl,) | (2D, 3D, optical-microscopy, xray, microtomogr... |
| 1 | model-001 | None | (TaskEnum.segmentation, TaskEnum.thresholding,... | https://github.com/danforthcenter/plantcv | git+https://github.com/danforthcenter/plantcv@... | image | True | True | (danforthcenter,) | (2D, hyperspectral, multispectral, near-infrar... |
| 4 | scivision-test-plugin | None | (TaskEnum.object_detection,) | https://github.com/alan-turing-institute/scivi... | git+https://github.com/alan-turing-institute/s... | image | True | False | (alan-turing-institute,) | (dummy,) |
| 5 | mapreader-plant | automated detection of plant patches in images... | (TaskEnum.classificiation, TaskEnum.object_det... | https://github.com/alan-turing-institute/mapre... | git+https://github.com/alan-turing-institute/m... | image | True | False | (alan-turing-institute,) | (2D, plant, phenotype, rgb, biology, agriculture) |
| 6 | resnet50-plantkton | automated classification of plankton images co... | (TaskEnum.classificiation,) | https://github.com/alan-turing-institute/plank... | git+https://github.com/alan-turing-institute/p... | image | True | False | (alan-turing-institute, cefas, plankton-analyt... | (2D, plankton, ecology, environmental-science) |
| 8 | scivision_classifier | None | (TaskEnum.classificiation,) | https://github.com/alan-turing-institute/scivi... | git+https://github.com/alan-turing-institute/s... | image | True | False | (alan-turing-institute,) | (classification, 2D, image) |
| 9 | scivision_huggingface | None | (TaskEnum.classificiation,) | https://github.com/alan-turing-institute/scivi... | git+https://github.com/alan-turing-institute/s... | image | True | False | (alan-turing-institute,) | (classification, 2D, image) |

[other cell outputs: array([ 0, 1, 2, ..., 237, 238, 239]), array([ 0, 1, 2, ..., 259, 260, 261]), array([0, 1, 2])]
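For readers skimming the patch, the lookup pattern used by the updated notebook cell can be sketched outside scivision. This is a minimal sketch: a toy `DataFrame` stands in for the real `default_catalog.models.to_dataframe()` (only the two relevant rows and columns are included here), but the boolean-mask filter and `.item()` call mirror the cell shown above:

```python
import pandas as pd

# Toy stand-in for default_catalog.models.to_dataframe();
# the real catalog ships with scivision and has more rows/columns.
models_catalog = pd.DataFrame(
    {
        "name": ["scivision-test-plugin", "scivision_classifier"],
        "url": [
            "https://github.com/alan-turing-institute/scivision-test-plugin",
            "https://github.com/alan-turing-institute/scivision_classifier",
        ],
    }
)

# Same pattern as the notebook cell: filter rows by catalog entry name,
# then extract the single matching url with Series.item()
model_repo = models_catalog[models_catalog.name == "scivision_classifier"].url.item()
print(model_repo)  # -> https://github.com/alan-turing-institute/scivision_classifier
```

`Series.item()` raises a `ValueError` if the mask matches zero or multiple rows, which is a useful sanity check that the catalog entry name was unique.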