Releases: roboflow/inference
v0.11.2
What's Changed
- Add YOLOv10 Object Detection Support by @NickHerrig and @probicheaux in #431
New Contributors
- @NickHerrig made their first contribution in #431
Full Changelog: v0.11.1...v0.11.2
v0.11.1
🔨 Fixed
❗ setuptools>=70.0.0 breaks CLIP and YOLOWorld models in inference
Using setuptools in version 70.0.0 and above breaks usage of the CLIP and YOLOWorld models. This impacts historical versions of the inference package installed in Python environments with the newest setuptools. The problem may affect clients using inference as a Python package in their environments; Docker builds are not impacted.
Symptoms of the problem:
- ImportError while attempting from inference.models import YOLOWorld, despite a previous pip install inference[yolo-world]
- ImportError while attempting from inference.models import Clip
We released a change pinning setuptools to compatible versions. This is not the ultimate solution for the problem (at some point in the future it may be necessary to unblock setuptools), which is why we will need to take further action in future releases - stay tuned.
As a solution for now, we recommend enforcing setuptools<70.0.0 in all environments using inference, so if you are impacted, restrict setuptools in your build (quoting the specifier so the shell does not treat < and > as redirections):
pip install "setuptools>=65.5.1,<70.0.0"
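If you are unsure whether your environment is affected, a quick diagnostic is to check the installed setuptools version with the standard library. This is a sketch for convenience, not part of inference itself:

```python
from importlib.metadata import version

# Read the installed setuptools version and compare its major component.
setuptools_version = version("setuptools")
major = int(setuptools_version.split(".")[0])

if major >= 70:
    print(f"setuptools {setuptools_version} is affected - pin to <70.0.0")
else:
    print(f"setuptools {setuptools_version} is not affected")
```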
🏗️ Docker image for Jetson with JetPack 4.5 is now fixed
We had issues with builds for JetPack 4.5, which should be solved now. Details: #393
🌱 Changed
- In workflows, one can now define selectors to runtime inputs ($inputs.<name>) in output definitions, making it possible to pass input data through the workflow.
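For illustration, a minimal sketch of a workflow definition that passes a runtime input straight through to the outputs - the field and type names below follow the general workflows schema, but treat the exact structure as an assumption and consult the workflows docs for authoritative schemas:

```python
# Illustrative workflow definition: the output "echoed_confidence" selects
# the runtime input "confidence" directly via an $inputs.<name> selector.
workflow_definition = {
    "version": "1.0",
    "inputs": [
        {"type": "InferenceImage", "name": "image"},
        {"type": "InferenceParameter", "name": "confidence"},
    ],
    "steps": [],
    "outputs": [
        # New in v0.11.1: outputs may reference runtime inputs,
        # so input data can flow through the workflow unchanged.
        {"type": "JsonField", "name": "echoed_confidence", "selector": "$inputs.confidence"},
    ],
}
```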
Full Changelog: v0.11.0...v0.11.1
v0.11.0
🚀 Added
🎉 PaliGemma in inference! 🎉
You've probably heard about the new PaliGemma model, right? We now support it in this release of inference, thanks to @probicheaux.
To run the model, you need to build and run an inference server on your GPU machine using the following commands:
# clone the inference repo
git clone https://github.com/roboflow/inference.git
# navigate into repository root
cd inference
# build inference server with PaliGemma dependencies
docker build -t roboflow/roboflow-inference-server-paligemma -f docker/dockerfiles/Dockerfile.paligemma .
# run server
docker run -p 9001:9001 roboflow/roboflow-inference-server-paligemma
👉 To prompt the model visit our examples 📖 or use the following code snippet:
import base64
import requests
import os
PORT = 9001
API_KEY = os.environ["ROBOFLOW_API_KEY"]
IMAGE_PATH = "<PATH-TO-YOUR>/image.jpg"
def encode_base64(image_path: str) -> str:
    # Read the image and encode it as an ASCII base64 string.
    with open(image_path, "rb") as image:
        image_bytes = image.read()
    return base64.b64encode(image_bytes).decode("ascii")

def do_paligemma_request(image_path: str, prompt: str) -> dict:
    infer_payload = {
        "image": {
            "type": "base64",
            "value": encode_base64(image_path),
        },
        "api_key": API_KEY,
        "prompt": prompt,
    }
    response = requests.post(
        f"http://localhost:{PORT}/llm/paligemma",
        json=infer_payload,
    )
    return response.json()

print(do_paligemma_request(
    image_path=IMAGE_PATH,
    prompt="Describe the image",
))
🌱 Changed
- documentation updates:
- document source_id parameter of VideoFrame by @sberan in #395
- fix workflows specification URL and other docs updates by @SolomonLake in #398
- add link to Roboflow licensing by @capjamesg in #403
🔨 Fixed
- Bug introduced into InferencePipeline.init_with_workflow(...) in v0.10.0, causing import errors that yielded a misleading error message about broken dependencies:
inference.core.exceptions.CannotInitialiseModelError: Could not initialise workflow processing due to lack of dependencies required. Please provide an issue report under https://github.com/roboflow/inference/issues
Fixed with PR #407
Full Changelog: v0.10.0...v0.11.0
v0.10.0
🚀 Added
🎊 Core modules of workflows are Apache-2.0 now
We're excited to announce that the core of workflows is now open-source under the Apache-2.0 license! We invite the community to explore the workflows ecosystem and contribute to its growth. We have plenty of ideas for improvements and would love to hear your feedback.
Feel free to check out our examples and docs 📖 .
🏗️ Roboflow workflows are changing before our eyes
We've undergone a major refactor of the workflows Execution Engine to make it more robust:
- blocks can now be stand-alone modules, which makes them separate from the Execution Engine
- blocks now expose OpenAPI manifests for automatic parsing and validation
- custom plugins with blocks can be created, installed via pip, and integrated with our core library blocks
Thanks to @SkalskiP and @stellasphere, we've made the documentation much better. Relying on the new blocks' self-describing capabilities, we can now automatically generate workflows docs - you can now see exactly how to connect different blocks and what JSON definitions should look like.
Visit our docs 📖 to discover more
❗ There are minor breaking changes in the manifests of some steps (DetectionsFilter, DetectionsConsensus, ActiveLearningDataCollector), as we needed to fix shortcuts made in the initial version. Migration requires plugging the output of another step into the image_metadata and prediction_type fields of the mentioned blocks.
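As an illustrative sketch of the migration: the step name "detect" and the upstream step's output field names below are hypothetical, $steps.<step>.<field> is the standard workflows selector for step outputs, and other required fields of the block are omitted:

```python
# Hypothetical sketch of a migrated DetectionsFilter manifest.
# "detect" is an assumed name of an upstream object-detection step;
# its output field names are illustrative, not taken from the docs.
detections_filter_step = {
    "type": "DetectionsFilter",
    "name": "filter",
    "predictions": "$steps.detect.predictions",
    # Previously filled implicitly - now wired from the upstream step's outputs:
    "image_metadata": "$steps.detect.image",
    "prediction_type": "$steps.detect.prediction_type",
}
```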
🔧 inference --version
Thanks to @Griffin-Sullivan, we now have a new command available in inference-cli showing details on which versions of the inference* packages are installed.
inference --version
🌱 Changed
- Huge general docs upgrade by @LinasKo (#385, #378, #372), fixing broken links, the general structure, and aliases for keypoints COCO models
🔨 Fixed
- Inconsistency in builds due to a release of the fastapi package, by @grzegorz-roboflow in #374
- Middleware error in the inference server - turning every response that was not HTTP 2xx into HTTP 500 😢 - introduced in v0.9.23 - thanks @probicheaux for taking the effort to fix it
- Bug present in post-processing of all instance-segmentation models, making batch inference faulty when some image yields zero predictions - huge kudos to @grzegorz-roboflow for spotting the problem and fixing it
🏅 New Contributors
- @Griffin-Sullivan made their first contribution in #339
Full Changelog: v0.9.23...v0.10.0
v0.9.23
What's Changed
- Improve benchmark output; fix exception handling by @grzegorz-roboflow in #354
- Minor docs update, API key in InferenceHTTPClient by @LinasKo in #357
- Add api key fallback for model monitoring by @hansent in #366
- Downgrade transformers to avoid faulty release of that package by @PawelPeczek-Roboflow in #363
- Upped skypilot version by @bigbitbus in #367
- Lock Grounding DINO package version to 0.2.0 by @skylargivens in #368
Full Changelog: v0.9.22...v0.9.23
v0.9.22
What's Changed
- Add new endpoints for workflows and prepare for future deprecation by @PawelPeczek-Roboflow in #336
- Update description for workflows steps by @grzegorz-roboflow in #345
- Add error status code to benchmark output by @grzegorz-roboflow in #351
- Add more test cases to cover tests/inference/unit_tests/core/utils/test_postprocess.py::post_process_polygons by @grzegorz-roboflow in #352
- Inference TensorRT execution provider container revival by @probicheaux in #347
- Bugfix for gaze detection (batch request) by @PacificDou in #358
- Allow alternate video sources by @sberan in #348
- Skip encode image as jpeg if no-resize is specified by @PacificDou in #359
New Contributors
- @grzegorz-roboflow made their first contribution in #345
Full Changelog: v0.9.20...v0.9.22
v0.9.20
v0.9.19
GroundingDINO bugfixes and enhancements!
- Allows users to pass custom box_threshold and text_threshold params to the Grounding DINO core model.
- Updates docs to reflect the box_threshold and text_threshold params.
- Fixes an error by filtering out detections where text similarity is lower than text_threshold and Grounding DINO returns None for the class ID.
- Fixes images passed to the Grounding DINO model being loaded as RGB instead of BGR.
- Adds NMS to Grounding DINO, optionally using class-agnostic NMS via the CLASS_AGNOSTIC_NMS env var.
Try it out:
from inference.models.grounding_dino import GroundingDINO

model = GroundingDINO(api_key="")

results = model.infer(
    {
        "image": {
            "type": "url",
            "value": "https://media.roboflow.com/fruit.png",
        },
        "text": ["apple"],
        # Optional params
        "box_threshold": 0.5,
        "text_threshold": 0.5,
    }
)

print(results.json())
Full Changelog: v0.9.18...v0.9.19
v0.9.18
🚀 Added
🎥 🎥 Multiple video sources 🤝 InferencePipeline
Previous versions of the InferencePipeline could only support a single video source. However, from now on, you can pass multiple videos into a single pipeline and have all of them processed! Here is a demo:
demo_short.mp4
Here's how to achieve the result:
from inference import InferencePipeline
from inference.core.interfaces.stream.sinks import render_boxes
pipeline = InferencePipeline.init(
    video_reference=["your_video.mp4", "your_other_video.mp4"],
    model_id="yolov8n-640",
    on_prediction=render_boxes,
)
pipeline.start()
pipeline.join()
There were a lot of internal changes made, but the majority of users should not experience any breaking changes. Please visit our 📖 documentation to discover all the differences. If you are affected by the changes we needed to introduce, here is the 🔧 migration guide.
Barcode detector in workflows
Thanks to @chandlersupple, we now have the ability to detect and read barcodes in workflows.
Visit our 📖 documentation to see how to bring this step into your workflow.
🌱 Changed
Easier data collection in inference 🔥
We've introduced a new parameter handled by the inference server (including hosted inference on the Roboflow platform). This parameter, called active_learning_target_dataset, can now be added to requests to specify the Roboflow project where collected data should be stored.
Thanks to this change, you can now collect datasets while using Universe models. We've also updated the Active Learning 📖 docs.
from inference_sdk import InferenceHTTPClient, InferenceConfiguration

# prepare and set configuration
configuration = InferenceConfiguration(
    active_learning_target_dataset="my_dataset",
)
client = InferenceHTTPClient(
    api_url="https://detect.roboflow.com",
    api_key="<YOUR_ROBOFLOW_API_KEY>",
).configure(configuration)

# run normal request and have your data sampled 🤯
client.infer(
    "./path_to/your_image.jpg",
    model_id="yolov8n-640",
)
Other changes
- Add inference_id to batches created by AL by @robiscoding in #319
- Improvements in 📖 documentation regarding the RGB vs BGR topic by @probicheaux in #330
🔨 Fixed
Thanks to contributions from @hvaria 🏅, two problems were solved:
- Ensured graceful interruption of the benchmark process, fixing bug #313: in #325
- Better error handling in the inference CLI: in #328
New Contributors
- @chandlersupple made their first contribution in #311
Full Changelog: v0.9.17...v0.9.18
v0.9.17
🚀 Added
YOLOWorld - new versions and Roboflow hosted inference 🤯
The inference package now supports 5 new versions of the YOLOWorld model. We've added versions x, v2-s, v2-m, v2-l and v2-x. Versions with the v2 prefix have better performance than the previously published ones.
To use YOLOWorld in inference, use the following model_id: yolo_world/<version>, substituting <version> with one of [s, m, l, x, v2-s, v2-m, v2-l, v2-x].
You can use the models in different contexts:
Roboflow hosted inference - the easiest way to get your predictions 💥
💡 Please make sure you have inference-sdk installed
If you do not have the whole inference package installed, you will need to install at least inference-sdk:
pip install inference-sdk
💡 You need a Roboflow account to use our hosted platform
import cv2
from inference_sdk import InferenceHTTPClient
client = InferenceHTTPClient(api_url="https://infer.roboflow.com", api_key="<YOUR_ROBOFLOW_API_KEY>")
image = cv2.imread("<path_to_your_image>")
results = client.infer_from_yolo_world(
    image,
    ["person", "backpack", "dog", "eye", "nose", "ear", "tongue"],
    model_version="s",  # <-- you do not need to provide `yolo_world/` prefix here
)
Self-hosted inference server
💡 Please remember to clean up the old version of the Docker image
If you ever used the inference server before, please run:
docker rmi roboflow/roboflow-inference-server-cpu:latest
# or, if you have GPU on the machine
docker rmi roboflow/roboflow-inference-server-gpu:latest
to make sure the newest version of the image is pulled.
💡 Please make sure you run the server and have the SDK installed
If you do not have the whole inference package installed, you will need to install at least inference-cli and inference-sdk:
pip install inference-sdk inference-cli
Make sure you start a local instance of the inference server before running the code:
inference server start
import cv2
from inference_sdk import InferenceHTTPClient
client = InferenceHTTPClient(api_url="http://127.0.0.1:9001")
image = cv2.imread("<path_to_your_image>")
results = client.infer_from_yolo_world(
    image,
    ["person", "backpack", "dog", "eye", "nose", "ear", "tongue"],
    model_version="s",  # <-- you do not need to provide `yolo_world/` prefix here
)
In the inference Python package
💡 Please remember to install inference with the yolo-world extras
pip install "inference[yolo-world]"
import cv2
from inference.models import YOLOWorld
image = cv2.imread("<path_to_your_image>")
model = YOLOWorld(model_id="yolo_world/s")
results = model.infer(
    image,
    ["person", "backpack", "dog", "eye", "nose", "ear", "tongue"],
)
🌱 Changed
- Track source for remote execution flows by @tonylampada in #320
- Improved documentation by @capjamesg in #321
New Contributors
- @tonylampada made their first contribution in #320 🥇
Full Changelog: v0.9.16...v0.9.17