
Commit 8239e8a

[Doc] add a tutorial for inference (#538)
1 parent db7c3b4 commit 8239e8a

File tree

3 files changed: +121 −5 lines changed


docs/index.rst

Lines changed: 1 addition & 0 deletions
@@ -19,6 +19,7 @@ Alpa can automatically generate distributed execution plans that unify data, oper
    tutorials/pipeshard_parallelism.rst
    tutorials/alpa_vs_pmap.rst
    tutorials/perf_tuning_guide.rst
+   tutorials/opt_serving.rst
 
 .. toctree::
    :maxdepth: 1

docs/tutorials/opt_serving.rst

Lines changed: 113 additions & 0 deletions
@@ -0,0 +1,113 @@
Serving OPT-175B using Alpa
===========================

This tutorial provides a guide to setting up a serving system for OPT-175B, the largest available pretrained language model.

As a serving system, Alpa provides the following unique advantages:

- **Support commodity hardware**: With Alpa, you can serve OPT-175B using your in-house GPU cluster, without needing the latest generations of A100 80GB GPUs or fancy InfiniBand connections -- no hardware constraints!

- **Flexible parallelism strategies**: Alpa will automatically figure out the appropriate model-parallelism strategies based on your cluster setup.

In this example, we use Alpa to serve the open-source OPT model, supporting all sizes ranging from 125M to 175B.
Specifically, Alpa provides:

- A **backend** to perform model-parallel distributed inference for the large OPT models;

- A **web frontend** to collect and batch inference requests from users.

.. note::

    The trained OPT model weights can be obtained from the `Metaseq download page <https://github.com/facebookresearch/metaseq/tree/main/projects/OPT>`_. Use of
    the pretrained model weights is subject to their `license <https://github.com/facebookresearch/metaseq/blob/main/projects/OPT/MODEL_LICENSE.md>`_.

.. note::

    You will need at least 350 GB of total GPU memory to serve the OPT-175B model. You can also follow this guide to set up a serving system for smaller versions of OPT,
    such as OPT-66B, OPT-30B, etc. Pick an appropriate size from the `OPT weight release page <https://github.com/facebookresearch/metaseq/tree/main/projects/OPT>`_ based on
    your available resources.

Requirements
------------
1. Install Alpa following the `installation guide <https://alpa-projects.github.io/install.html>`_.

2. Install additional requirements for serving:

.. code:: bash

    pip3 install transformers flask cython

    # Install torch corresponding to your CUDA version, e.g., for CUDA 11.3:
    pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113

3. Compile several Cython files for faster data processing:

.. code:: bash

    cd examples/opt_serving && bash build.sh

Get OPT Weights
---------------
There are two ways to obtain the pretrained OPT weights.

1. Download the original OPT weights released by `Metaseq <https://github.com/facebookresearch/metaseq/tree/main/projects/OPT>`_,
then use our script `convert_to_numpy_weights.py <scripts/convert_to_numpy_weights.py>`_ to convert them into Alpa-compatible formats, as sketched after this list.

2. We provide links to download the preprocessed 125M and 2.7B models below. For other sizes of OPT, please join the `Alpa slack <https://forms.gle/YEZTCrtZD6EAVNBQ7>`_ to request a copy from the Alpa developer team.

- `OPT-125M weights <https://drive.google.com/file/d/1Ps7DFD80wNO7u2t39YCYcBX-9XwypGzl/view?usp=sharing>`_
- `OPT-2.7B weights <https://drive.google.com/file/d/1ayIaKRhxF9osZWgcFG-3vSkjcepSWdQd/view?usp=sharing>`_
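
A minimal sketch of the conversion step for option 1, assuming the script takes the downloaded Metaseq checkpoint directory and an output directory as arguments (the argument names are assumptions, not the script's confirmed interface; consult the script itself before running):

.. code:: bash

    # Hypothetical invocation; check scripts/convert_to_numpy_weights.py for the actual arguments.
    python3 scripts/convert_to_numpy_weights.py [PATH_TO_METASEQ_CHECKPOINT] [OUTPUT_DIR]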

Run Generation in Command Line
------------------------------

For a small model that fits into one GPU, such as OPT-125M, we can run single-GPU generation using either the PyTorch backend or the JAX backend.
For example:

1. Run generation using the 125M OPT model with the PyTorch/HuggingFace backend:

.. code:: bash

    cd benchmark
    python3 benchmark_text_gen.py --model facebook/opt-125m

2. Run generation using the OPT-125M model with the JAX backend in debug mode to output the generated text:

.. code:: bash

    python3 benchmark_text_gen.py --model jax/opt-125m --path [PATH_TO_WEIGHT] --debug

3. Run model-parallel generation using the 2.7B model with Alpa:

.. code:: bash

    ray start --head
    python3 benchmark_text_gen.py --model alpa/opt-2.7b --path [PATH_TO_WEIGHT] --debug

4. Run distributed generation with the 175B model using Alpa. Note that you will need >350 GB of total GPU memory in the entire cluster to successfully run the inference (see the Ray startup sketch after this step):

.. code:: bash

    # Remember to start ray on the entire cluster before running the generation
    python3 benchmark_text_gen.py --model alpa/opt-175b --path [PATH_TO_WEIGHT] --debug
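
As a minimal sketch of starting Ray across a multi-node cluster before step 4 (assuming the default Ray port 6379; replace ``[HEAD_NODE_IP]`` with your head node's address):

.. code:: bash

    # On the head node
    ray start --head --port=6379

    # On every worker node, join the cluster
    ray start --address=[HEAD_NODE_IP]:6379

    # Verify that all nodes and GPUs have registered
    ray status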

Launch a web server to serve the OPT models
-------------------------------------------

Launch the web server:

.. code:: bash

    # Serve the OPT-175B model at port 10001
    python3 interactive_hosted.py --model alpa/opt-175b --port 10001 --path [PATH_TO_WEIGHT]

Then open ``https://[IP-ADDRESS]:10001`` in your browser to try out the model!
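
You can also query the server programmatically. Below is a minimal sketch with ``curl``, assuming a Metaseq-style ``/completions`` endpoint that accepts a JSON prompt (the endpoint path and field names are assumptions, not confirmed by this tutorial):

.. code:: bash

    # Hypothetical request; adjust the endpoint and fields to match the server's actual API.
    curl -k -X POST https://[IP-ADDRESS]:10001/completions \
        -H "Content-Type: application/json" \
        -d '{"prompt": "Paris is the capital city of", "max_tokens": 32}'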

License
-------

Use of the OPT pretrained weights is subject to the `Model License <https://github.com/facebookresearch/metaseq/blob/main/projects/OPT/MODEL_LICENSE.md>`_ by Metaseq.

examples/opt_serving/README.md

Lines changed: 7 additions & 5 deletions
@@ -2,7 +2,6 @@
 As a serving system, Alpa provides the following unique advantages:
 - **Support commodity hardware**: With Alpa, you can serve OPT-175B using your in-house GPU cluster, without needing the latest generations of A100 80GB GPUs nor fancy InfiniBand connections -- no hardware constraints!
 - **Flexible parallelism strategies**: Alpa will automatically figure out the appropriate model-parallelism strategies based on your cluster setup.
-- **Serve with arbitrary numbers of GPUs, from 0 - 100s**: No matter how many GPUs you have, you can serve OPT as long as your total memory is sufficient.
 
 In this example, we use Alpa to serve the open-source OPT model, supporting all sizes ranging from 125M to 175B.
 
@@ -41,12 +40,12 @@ then use our script [convert_to_numpy_weight.py](scripts/convert_to_numpy_weight
 
 For a small model, we can run single-GPU generation using either PyTorch backend or JAX backend:
 
-Run generation using the 125M OPT model with PyTorch/HuggingFace backend
+Run generation using the 125M OPT model with the PyTorch/HuggingFace backend:
 ```shell
 cd benchmark
-python3 benchmark_text_gen.py --model facebook/opt-125m --path [PATH_TO_WEIGHT]
+python3 benchmark_text_gen.py --model facebook/opt-125m
 ```
-Run generation using the OPT-125M model with JAX backend in debug model to output the generated text
+Run generation using the OPT-125M model with the JAX backend in debug mode to output the generated text:
 ```shell
 python3 benchmark_text_gen.py --model jax/opt-125m --path [PATH_TO_WEIGHT] --debug
 ```
@@ -58,7 +57,7 @@ ray start --head
 python3 benchmark_text_gen.py --model alpa/opt-2.7b --path [PATH_TO_WEIGHT] --debug
 ```
 
-Run distributed generation using the 175B model with Alpa.
+Run distributed generation using the 175B model with Alpa as shown below.
 Note you will need >350Gb total GPU memory in the entire cluster to successfully run the inference.
 ```shell
 # Remember to start ray on the entire cluster before running the generation
@@ -82,3 +81,6 @@ Then open `https://[IP-ADDRESS]:10001` in your browser to try out the model!
 - [examples/opt_serving/service](service): Model serving web server.
 - [examples/opt_serving/generator.py](generator.py): Backend for web server.
 - [examples/opt_serving/interactive_hosted.py](interactive_hosted.py): Web server entry point.
+
+## License
+Use of the OPT pretrained weights is subject to the [Model License](https://github.com/facebookresearch/metaseq/blob/main/projects/OPT/MODEL_LICENSE.md) by Metaseq.
