@@ -1,9 +1,10 @@
 <div align="center" style="display: flex; justify-content: space-between;">
   <div style="flex: 1; padding: 10px;">
-    <a href="https://github.com/encord-team/text-to-image-eval/actions?query=workflow%3ATests" target="_blank" style="text-decoration:none"><img src="https://github.com/encord-team/text-to-image-eval/actions/workflows/tests.yml/badge.svg?branch=main"></a>
+    <a href="https://colab.research.google.com/github/encord-team/text-to-image-eval/blob/main/notebooks/tti_eval_CLI_Quickstart.ipynb" target="_blank" style="text-decoration:none"><img alt="CLI Quickstart In Colab" src="https://img.shields.io/badge/Try_It_In_Colab-blue?logo=googlecolab&labelColor=555"></a>
     <a href="https://www.python.org/downloads/release/python-3119/" target="_blank" style="text-decoration:none"><img src="https://img.shields.io/badge/python-3.11%2B-blue" alt="Python Versions"></a>
-    <a target="_blank" style="text-decoration:none"><img alt="PRs Welcome" src="https://img.shields.io/badge/PRs-Welcome-blue"></a>
     <img alt="License" src="https://img.shields.io/github/license/encord-team/text-to-image-eval">
+    <a href="https://github.com/encord-team/text-to-image-eval/actions?query=workflow%3ATests" target="_blank" style="text-decoration:none"><img src="https://github.com/encord-team/text-to-image-eval/actions/workflows/tests.yml/badge.svg?branch=main"></a>
+    <a target="_blank" style="text-decoration:none"><img alt="PRs Welcome" src="https://img.shields.io/badge/PRs-Welcome-blue"></a>
   </div>
   <div style="flex: 1; padding: 10px;">
     <a href="https://github.com/encord-team/encord-notebooks" target="_blank" style="text-decoration:none"><img alt="Encord Notebooks" src="https://img.shields.io/badge/Encord_Notebooks-blue?logo=github&label=&labelColor=181717"></a>
@@ -59,7 +60,11 @@ You can easily benchmark different models and datasets against each other. Here
 export ENCORD_SSH_KEY_PATH=<path_to_the_encord_ssh_key_file>
 ```
 
-## CLI Usage
+## CLI Quickstart
+
+<a href="https://colab.research.google.com/github/encord-team/text-to-image-eval/blob/main/notebooks/tti_eval_CLI_Quickstart.ipynb" target="_blank" style="text-decoration:none">
+  <img alt="CLI Quickstart In Colab" src="https://img.shields.io/badge/Try_It_In_Colab-blue?logo=googlecolab&labelColor=555">
+</a>
 
 ### Embeddings Generation
 
@@ -103,6 +108,13 @@ To interactively explore the animation in a temporary session, use the `--intera
   <img width="600" src="https://storage.googleapis.com/docs-media.encord.com/static/img/text-to-image-eval/embeddings.gif">
 </div>
 
+> ℹ️ You can also carry out these operations using Python. Explore our Python Quickstart guide for more details.
+>
+> <a href="https://colab.research.google.com/github/encord-team/text-to-image-eval/blob/main/notebooks/tti_eval_Python_Quickstart.ipynb" target="_blank" style="text-decoration:none">
+>   <img alt="Python Quickstart In Colab" src="https://img.shields.io/badge/Python_Quickstart_In_Colab-blue?logo=googlecolab&labelColor=555">
+> </a>
+
+
 ## Some Example Results
 
 One example of where this `tti-eval` is useful is to test different open-source models against different open-source datasets within a specific domain.
@@ -185,6 +197,10 @@ The models are evaluated against four different medical datasets. Note, Further
 
 ## Datasets
 
+<a href="https://colab.research.google.com/github/encord-team/text-to-image-eval/blob/main/notebooks/tti_eval_Bring_Your_Own_Dataset_From_Encord_Quickstart.ipynb" target="_blank" style="text-decoration:none">
+  <img alt="Datasets Quickstart In Colab" src="https://img.shields.io/badge/Quickstart_In_Colab-blue?logo=googlecolab&labelColor=555">
+</a>
+
 This repository contains classification datasets sourced from [Hugging Face](https://huggingface.co/datasets) and [Encord](https://app.encord.com/projects).
 
 > ⚠️ Currently, only image and image groups datasets are supported, with potential for future expansion to include video datasets.
@@ -258,6 +274,10 @@ However, all embeddings previously built on that dataset will remain intact and
 
 ## Models
 
+<a href="https://colab.research.google.com/github/encord-team/text-to-image-eval/blob/main/notebooks/tti_eval_Bring_Your_Own_Model_From_Hugging_Face_Quickstart.ipynb" target="_blank" style="text-decoration:none">
+  <img alt="Models Quickstart In Colab" src="https://img.shields.io/badge/Quickstart_In_Colab-blue?logo=googlecolab&labelColor=555">
+</a>
+
 This repository contains models sourced from [Hugging Face](https://huggingface.co/models), [OpenCLIP](https://github.com/mlfoundations/open_clip) and local implementations based on OpenCLIP models.
 
 _TODO_: Some more prose about what's the difference between implementations.