Commit 36f1bf2

Fix library top-level imports (#1296)
Users are currently running into install issues. After a clean install of `outlines` they get an error message that asks for `transformers` to be installed. This should not be the case, as the library is not required for every integration. In this PR we remove the `transformers` and `datasets` top-level imports, and add per-integration optional dependencies.

## TODO

- [x] Test `import outlines` from clean install
- [x] Test installing outlines with vLLM optional dependencies
- [x] Test installing outlines with MLX optional dependencies
- [x] Test installing outlines with transformers optional dependencies
- [x] Test installing outlines with llama-cpp optional dependencies
- [x] Test installing outlines with exllamav2 optional dependencies
- [x] Test installing outlines with openai optional dependencies
- [x] Update the documentation

Supersedes #1295. Fixes #1263.
1 parent e9485cf commit 36f1bf2

File tree

9 files changed: +37 −13 lines changed

docs/reference/models/llamacpp.md
Lines changed: 5 additions & 1 deletion

@@ -4,7 +4,11 @@ Outlines provides an integration with [Llama.cpp](https://github.com/ggerganov/l
 !!! Note "Installation"

-    You need to install the `llama-cpp-python` library to use the llama.cpp integration. See the [installation section](#installation) for instructions to install `llama-cpp-python` with CUDA, Metal, ROCm and other backends.
+    You need to install the `llama-cpp-python` library to use the llama.cpp integration. See the [installation section](#installation) for instructions to install `llama-cpp-python` with CUDA, Metal, ROCm and other backends. To get started quickly you can also run:
+
+    ```bash
+    pip install "outlines[llamacpp]"
+    ```

 ## Load the model

docs/reference/models/mlxlm.md
Lines changed: 5 additions & 1 deletion

@@ -4,7 +4,11 @@ Outlines provides an integration with [mlx-lm](https://github.com/ml-explore/mlx
 !!! Note "Installation"

-    You need to install the `mlx` and `mlx-lm` libraries on a device which [supports Metal](https://support.apple.com/en-us/102894) to use the mlx-lm integration.
+    You need to install the `mlx` and `mlx-lm` libraries on a device which [supports Metal](https://support.apple.com/en-us/102894) to use the mlx-lm integration. To get started quickly you can also run:
+
+    ```bash
+    pip install "outlines[mlxlm]"
+    ```

 ## Load the model

docs/reference/models/openai.md
Lines changed: 5 additions & 1 deletion

@@ -2,7 +2,11 @@
 !!! Installation

-    You need to install the `openai` library to be able to use the OpenAI API in Outlines.
+    You need to install the `openai` library to be able to use the OpenAI API in Outlines. Or alternatively:
+
+    ```bash
+    pip install "outlines[openai]"
+    ```

 ## OpenAI models

docs/reference/models/transformers.md
Lines changed: 2 additions & 2 deletions

@@ -3,10 +3,10 @@
 !!! Installation

-    You need to install the `transformer`, `datasets` and `torch` libraries to be able to use these models in Outlines:
+    You need to install the `transformer`, `datasets` and `torch` libraries to be able to use these models in Outlines, or alternatively:

     ```bash
-    pip install torch transformers datasets
+    pip install "outlines[transformers]"
     ```

docs/reference/models/vllm.md
Lines changed: 5 additions & 1 deletion

@@ -3,7 +3,11 @@
 !!! Note "Installation"

-    You need to install the `vllm` library to use the vLLM integration. See the [installation section](#installation) for instructions to install vLLM for CPU or ROCm.
+    You need to install the `vllm` library to use the vLLM integration. See the [installation section](#installation) for instructions to install vLLM for CPU or ROCm. To get started you can also run:
+
+    ```bash
+    pip install "outlines[vllm]"
+    ```

 ## Load the model

outlines/models/transformers.py
Lines changed: 2 additions & 2 deletions

@@ -2,8 +2,6 @@
 import inspect
 from typing import TYPE_CHECKING, Iterator, List, Optional, Tuple, Union

-from datasets.fingerprint import Hasher
-
 from outlines.generate.api import GenerationParameters, SamplingParameters
 from outlines.models.tokenizer import Tokenizer

@@ -116,6 +114,8 @@ def __eq__(self, other):
         return NotImplemented

     def __hash__(self):
+        from datasets.fingerprint import Hasher
+
         return hash(Hasher.hash(self.tokenizer))

     def __getstate__(self):

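The `__hash__` change above follows a standard lazy-import pattern for optional dependencies: move the import from module level into the function that needs it, so the base package imports cleanly. A minimal sketch of the same pattern, assuming a hypothetical `lazy_hash` helper (the commit itself only relocates the import; the error message here is an illustrative addition):

```python
import importlib


def lazy_hash(obj, module_name="datasets.fingerprint"):
    """Hash `obj` via an optional dependency, imported only on use.

    Keeping the import inside the function means a top-level
    `import outlines` succeeds even when `datasets` is absent;
    the ImportError surfaces only when the feature is exercised.
    """
    try:
        mod = importlib.import_module(module_name)
    except ImportError as exc:
        raise ImportError(
            f"`{module_name}` is required for hashing; install the "
            "matching optional dependency group to enable it."
        ) from exc
    return hash(mod.Hasher.hash(obj))
```

Deferring the import trades a tiny per-call lookup (cached in `sys.modules` after the first call) for a clean top-level import path.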
outlines/models/vllm.py
Lines changed: 4 additions & 3 deletions

@@ -1,11 +1,10 @@
 import dataclasses
 from typing import TYPE_CHECKING, List, Optional, Union

-from transformers import SPIECE_UNDERLINE, PreTrainedTokenizerBase
-
 from outlines.generate.api import GenerationParameters, SamplingParameters

 if TYPE_CHECKING:
+    from transformers import PreTrainedTokenizerBase
     from vllm import LLM
     from vllm.sampling_params import SamplingParams

@@ -188,7 +187,7 @@ def vllm(model_name: str, **vllm_model_params):
     return VLLM(model)


-def adapt_tokenizer(tokenizer: PreTrainedTokenizerBase) -> PreTrainedTokenizerBase:
+def adapt_tokenizer(tokenizer: "PreTrainedTokenizerBase") -> "PreTrainedTokenizerBase":
     """Adapt a tokenizer to use to compile the FSM.

     The API of Outlines tokenizers is slightly different to that of `transformers`. In

@@ -205,6 +204,8 @@ def adapt_tokenizer(tokenizer: PreTrainedTokenizerBase) -> PreTrainedTokenizerBa
         PreTrainedTokenizerBase
         The adapted tokenizer.
     """
+    from transformers import SPIECE_UNDERLINE
+
     tokenizer.vocabulary = tokenizer.get_vocab()
     tokenizer.special_tokens = set(tokenizer.all_special_tokens)

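The `TYPE_CHECKING` guard plus quoted annotations used in the diff above keep `transformers` out of the runtime import path while preserving full type hints for static checkers. A self-contained sketch of the pattern (the dummy tokenizer stands in for a real `transformers` tokenizer):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Evaluated only by static type checkers (mypy, pyright),
    # never at runtime, so `transformers` need not be installed.
    from transformers import PreTrainedTokenizerBase


def adapt_tokenizer(tokenizer: "PreTrainedTokenizerBase") -> "PreTrainedTokenizerBase":
    # Quoted annotations stay plain strings at runtime, so they do
    # not trigger the import that TYPE_CHECKING skipped above.
    tokenizer.vocabulary = tokenizer.get_vocab()
    return tokenizer
```

At runtime `TYPE_CHECKING` is `False`, so the guarded import never executes; the string annotations are resolved only if a tool explicitly evaluates them.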
pyproject.toml
Lines changed: 8 additions & 1 deletion

@@ -36,7 +36,6 @@ dependencies = [
     "jsonschema",
     "requests",
     "tqdm",
-    "datasets",
     "typing_extensions",
     "pycountry",
     "airportsdata",

@@ -46,6 +45,12 @@ dependencies = [
 dynamic = ["version"]

 [project.optional-dependencies]
+vllm = ["vllm", "transformers", "numpy<2"]
+transformers = ["transformers", "accelerate", "datasets", "numpy<2"]
+mlxlm = ["mlx-lm", "datasets"]
+openai = ["openai"]
+llamacpp = ["llama-cpp-python", "transformers", "datasets", "numpy<2"]
+exllamav2 = ["exllamav2"]
 test = [
     "pre-commit",
     "pytest",

@@ -61,10 +66,12 @@ test = [
     "mlx-lm>=0.19.2; platform_machine == 'arm64' and sys_platform == 'darwin'",
     "huggingface_hub",
     "openai>=1.0.0",
+    "datasets",
     "vllm; sys_platform != 'darwin'",
     "transformers",
     "pillow",
     "exllamav2",
+    "jax"
 ]
 serve = [
     "vllm>=0.3.0",

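Each extra defined above maps to an install command of the form `pip install "outlines[name]"`. Code that depends on an extra can fail fast with an actionable message; a sketch using a hypothetical `require_extra` helper (not part of outlines, shown only to illustrate how per-integration extras are typically surfaced to users):

```python
import importlib.util


def require_extra(module_name: str, extra: str, package: str = "outlines") -> None:
    """Raise a helpful error when an optional integration is missing.

    `find_spec` only probes whether the module can be found; it does
    not import it, so the check adds no import-time cost.
    """
    if importlib.util.find_spec(module_name) is None:
        raise ImportError(
            f"`{module_name}` is not installed. Run "
            f'`pip install "{package}[{extra}]"` to use this integration.'
        )
```

For example, `require_extra("vllm", "vllm")` would pass silently when vLLM is installed and otherwise point the user at the matching extra.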
tests/generate/test_integration_transformers_vision.py
Lines changed: 1 addition & 1 deletion

@@ -23,7 +23,7 @@ def img_from_url(url):
 @pytest.fixture(scope="session")
 def model(tmp_path_factory):
     return transformers_vision(
-        "trl-internal-testing/tiny-random-LlavaForConditionalGeneration",
+        "trl-internal-testing/tiny-LlavaForConditionalGeneration",
         model_class=LlavaForConditionalGeneration,
         device="cpu",
     )
