NOTE: The FasterTransformer backend is currently undergoing restructuring. Build instructions have only been tested with Triton container versions <= 23.04.
The Triton FasterTransformer backend works as an interface that calls FasterTransformer from Triton; all of the necessary implementation actually lives in the FasterTransformer repository. The top-level `CMakeLists.txt` fetches the related repositories and organizes and compiles the project with:
- this repository itself
- Faster Transformer repository
- 3rdparty
- cutlass
- Megatron
- etc...
To see how FasterTransformer supports LLaMA, and how Triton supports LLaMA, here is the structure:
```
Faster Transformer Library
├── examples
│   └── cpp
│       └── llama
│           ├── CMakeLists.txt
│           ├── llama_config.ini
│           ├── llama_example.cc
│           └── llama_triton_example.cc
└── src
    └── fastertransformer
        ├── models
        │   └── llama
        │       ├── CMakeLists.txt
        │       ├── Llama.h
        │       ├── LlamaContextDecoder.h
        │       ├── LlamaDecoder.h
        │       ├── LlamaDecoderLayerWeight.h
        │       └── LlamaWeight.h
        └── triton_backend
            └── llama
                ├── CMakeLists.txt
                ├── LlamaTritonModel.h
                └── LlamaTritonModelInstance.h
```
```
Faster Transformer Backend
├── all_models
│   └── llama
│       ├── ensemble
│       ├── fastertransformer
│       ├── postprocessing
│       └── preprocessing
└── src
    └── libfastertransformer.cc
```
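As context for `src/libfastertransformer.cc`: a Triton backend is a shared library that implements the entry points of the Triton backend C API, and that file is where the FasterTransformer backend does so. The skeleton below only shows those standard entry points with stub bodies; it is not the actual FasterTransformer backend code, and the comments merely describe what the real implementation does at each step.

```cpp
// Skeleton of the Triton backend C API entry points that a backend such as
// libfastertransformer.cc implements.  The stub bodies below are placeholders;
// the comments describe what the real FasterTransformer backend does here.
#include "triton/core/tritonbackend.h"

extern "C" {

// Called once when Triton loads the backend shared library.
TRITONSERVER_Error*
TRITONBACKEND_Initialize(TRITONBACKEND_Backend* backend)
{
  return nullptr;  // nullptr signals success
}

// Called once per model: parse the model's config.pbtxt and create the shared
// FasterTransformer model object (e.g. the LLaMA triton model).
TRITONSERVER_Error*
TRITONBACKEND_ModelInitialize(TRITONBACKEND_Model* model)
{
  return nullptr;
}

// Called once per model instance: set up one FasterTransformer model instance
// per GPU / rank, including weight loading and NCCL setup for multi-GPU runs.
TRITONSERVER_Error*
TRITONBACKEND_ModelInstanceInitialize(TRITONBACKEND_ModelInstance* instance)
{
  return nullptr;
}

// Called for every batch of requests: convert Triton input tensors to
// FasterTransformer tensors, run generation, and send the outputs back.
// A real implementation must create and send a TRITONBACKEND_Response for
// every request; this stub only shows the signature.
TRITONSERVER_Error*
TRITONBACKEND_ModelInstanceExecute(
    TRITONBACKEND_ModelInstance* instance, TRITONBACKEND_Request** requests,
    const uint32_t request_count)
{
  return nullptr;
}

}  // extern "C"
```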
The FasterTransformer repository works as a library that supports different models. `examples/cpp/your_model` is essential if you want to run your model on FasterTransformer. `src/fastertransformer/models/your_model` is essential because it holds the model implementation, along with files such as `your_model_config.ini` and `bad_words.csv` that your model needs to work correctly. `src/fastertransformer/triton_backend/your_model` is optional: you only need to implement it when you want to deploy your model on a Triton server with the FasterTransformer backend (a simplified sketch of this layer is shown below).
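To make the optional triton_backend layer more concrete, here is a deliberately simplified, hypothetical sketch of the pattern behind `LlamaTritonModel.h` and `LlamaTritonModelInstance.h`: a model object that owns the checkpoint directory and hands out one runnable instance per GPU, and an instance object that maps named input tensors to named output tensors. All class and member names below are illustrative, not the real FasterTransformer API, whose interfaces take additional arguments such as CUDA streams, NCCL parameters, and shared weights.

```cpp
// Hypothetical, heavily simplified illustration of the LlamaTritonModel /
// LlamaTritonModelInstance pattern.  Names and signatures are illustrative
// only and do not match the real FasterTransformer headers.
#include <memory>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

// Stand-in for FasterTransformer's tensor type (illustrative).
struct Tensor {
  std::vector<size_t> shape;
  void* data = nullptr;
};
using TensorMap = std::unordered_map<std::string, Tensor>;

// One instance is bound to a single GPU / rank and runs inference there.
class LlamaTritonModelInstanceSketch {
 public:
  explicit LlamaTritonModelInstanceSketch(int device_id) : device_id_(device_id) {}

  // Maps request tensors (e.g. "input_ids", "request_output_len") to output
  // tensors (e.g. "output_ids"); the real code runs the LLaMA decoder here.
  TensorMap forward(const TensorMap& inputs) {
    TensorMap outputs;
    // ... launch the FasterTransformer LLaMA kernels on device_id_ ...
    return outputs;
  }

 private:
  int device_id_;
};

// The model owns the checkpoint/config and creates per-GPU instances; this is
// roughly what the Triton backend asks the triton_backend layer to do.
class LlamaTritonModelSketch {
 public:
  explicit LlamaTritonModelSketch(std::string model_dir)
      : model_dir_(std::move(model_dir)) {}

  std::unique_ptr<LlamaTritonModelInstanceSketch> createModelInstance(int device_id) {
    // ... load or share the weights for this rank, set up NCCL if needed ...
    return std::make_unique<LlamaTritonModelInstanceSketch>(device_id);
  }

 private:
  std::string model_dir_;
};
```

With this split, `libfastertransformer.cc` only needs to create one instance per GPU at model-instance initialization time and call its forward method whenever requests arrive.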
We have deployed LLaMA-7B to Triton Inference Server; see the llama_guide to speed up your own deployment and get familiar with NVIDIA Triton Inference Server.