- Open standard and SDK for AI apps, pack your code, inference pipelines, model files, dependencies, and runtime configurations in a [Bento](https://docs.bentoml.com/en/latest/concepts/bento.html).
- Auto-generate API servers, supporting REST API, gRPC, and long-running inference jobs.
- Auto-generate Docker container images.
### 🏄 Freedom to build with any AI models
- Import from any model hub or bring your own models built with frameworks like PyTorch, TensorFlow, Keras, Scikit-Learn, XGBoost and [many more](https://docs.bentoml.com/en/latest/frameworks/index.html).
- Native support for [LLM inference](https://github.com/bentoml/openllm/#bentoml), [generative AI](https://github.com/bentoml/stable-diffusion-bentoml), [embedding creation](https://github.com/bentoml/CLIP-API-service), and [multi-modal AI apps](https://github.com/bentoml/Distributed-Visual-ChatGPT).
- Run and debug your BentoML apps locally on Mac, Windows, or Linux.
### 🍭 Simplify modern AI application architecture
- Python-first! Effortlessly scale complex AI workloads.
- Enable GPU inference [without the headache](https://docs.bentoml.com/en/latest/guides/gpu.html).
- [Compose multiple models](https://docs.bentoml.com/en/latest/guides/graph.html) to run concurrently or sequentially, over [multiple GPUs](https://docs.bentoml.com/en/latest/guides/scheduling.html) or [on a Kubernetes Cluster](https://github.com/bentoml/yatai).
- Natively integrates with [MLFlow](https://docs.bentoml.com/en/latest/integrations/mlflow.html), [LangChain](https://github.com/ssheng/BentoChain), [Kubeflow](https://www.kubeflow.org/docs/external-add-ons/serving/bentoml/), [Triton](https://docs.bentoml.com/en/latest/integrations/triton.html), [Spark](https://docs.bentoml.com/en/latest/integrations/spark.html), [Ray](https://docs.bentoml.com/en/latest/integrations/ray.html), and many more to complete your production AI stack.
BentoML supports billions of model runs per day and is used by thousands of organizations around the globe.

Join our [Community Slack 💬](https://l.bentoml.com/join-slack), where thousands of AI application developers contribute to the project and help each other.
To report a bug or suggest a feature request, use [GitHub Issues](https://github.com/bentoml/BentoML/issues/new/choose).

- Report bugs and "Thumbs up" on issues that are relevant to you.
- Investigate issues and review other developers' pull requests.
- Contribute code or documentation to the project by submitting a GitHub pull request.
- Check out the [Contributing Guide](https://github.com/bentoml/BentoML/blob/main/CONTRIBUTING.md) and [Development Guide](https://github.com/bentoml/BentoML/blob/main/DEVELOPMENT.md) to learn more.
- Share your feedback and discuss roadmap plans in the `#bentoml-contributors` channel [here](https://l.bentoml.com/join-slack).
If you use BentoML in your research, please cite using the following [citation](./CITATION.cff):

```bibtex
@software{Yang_BentoML_The_framework,
  author  = {Yang, Chaoyu and Sheng, Sean and Pham, Aaron and Zhao, Shenyang and Lee, Sauyon and Jiang, Bo and Dong, Fog and Guan, Xipeng and Ming, Frost},
  license = {Apache-2.0},
  title   = {{BentoML: The framework for building reliable, scalable and cost-efficient AI application}},
}
```