
v0.22.0

@PawelPeczek-Roboflow PawelPeczek-Roboflow released this 04 Oct 14:33
· 585 commits to main since this release
c70cc32

🚀 Added

🔥 YOLOv11 in inference 🔥

We’re excited to announce that YOLOv11 has been added to inference! 🚀 You can now use both inference and the inference server to get predictions from the latest YOLOv11 model. 🔥

All thanks to @probicheaux and @SolomonLake 🏅

skateboard_yolov11.mov
Try the model in the inference Python package
import cv2
from inference import get_model

image = cv2.imread("<your-image>")
model = get_model("yolov11n-640")
predictions = model.infer(image)

print(predictions)
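If you run the inference server instead, you can request the same model through the inference_sdk client. A minimal sketch, mirroring the local example above (the image path and API key are placeholders you need to fill in):

```python
from inference_sdk import InferenceHTTPClient

# Point the client at an inference server (hosted or self-hosted).
client = InferenceHTTPClient(
    api_url="https://detect.roboflow.com",
    api_key="<YOUR-API-KEY>",
)

# Request YOLOv11 predictions by model id.
result = client.infer("<your-image>", model_id="yolov11n-640")
print(result)
```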

💪 Workflows update

Google Vision OCR in workflows

Thanks to an open-source contribution from @brunopicinin, we have Google Vision OCR integrated into the Workflows ecosystem. Great to see open-source community contributions 🏅

google_vision_ocr.mp4

See the 📖 documentation of the new block to explore its capabilities.
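A Workflow definition using the block might look like the following sketch. The block type identifier and field names here are assumptions based on the usual `roboflow_core` naming convention, not a confirmed schema - check the documentation for the exact fields:

```python
# Sketch of a Workflow definition wiring in the Google Vision OCR block.
# NOTE: the block type string and its inputs are assumptions; consult
# the block's documentation for the actual schema.
GOOGLE_VISION_OCR_WORKFLOW = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
        {"type": "WorkflowParameter", "name": "api_key"},
    ],
    "steps": [
        {
            "type": "roboflow_core/google_vision_ocr@v1",  # assumed identifier
            "name": "ocr",
            "image": "$inputs.image",
            "api_key": "$inputs.api_key",
        },
    ],
    "outputs": [
        {"type": "JsonField", "name": "text", "selector": "$steps.ocr.text"},
    ],
}
```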

Images stitch Workflow block

📷 Is your camera unable to cover the whole area you want to observe? Don't worry! @grzegorz-roboflow just added a Workflow block that combines the views of multiple cameras into a single image, which can then be processed further in your Workflow.

image 1 | image 2 | stitched image
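Conceptually, stitching merges overlapping camera views into one wide frame. The real block likely aligns frames via feature matching; the naive sketch below only concatenates two same-height frames side by side to illustrate the input/output shapes involved:

```python
import numpy as np

def stitch_side_by_side(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Naively join two same-height frames into one wide image.

    A real stitcher would align the frames and blend the overlap;
    this sketch only demonstrates the shape of the result.
    """
    if left.shape[0] != right.shape[0]:
        raise ValueError("frames must share the same height")
    return np.hstack([left, right])

# Two fake 480x640 RGB frames -> one 480x1280 frame.
cam_1 = np.zeros((480, 640, 3), dtype=np.uint8)
cam_2 = np.zeros((480, 640, 3), dtype=np.uint8)
panorama = stitch_side_by_side(cam_1, cam_2)
print(panorama.shape)  # (480, 1280, 3)
```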

📏 Size measurement block

Thanks to @chandlersupple, we can now measure the actual size of objects with Workflows! Take a look at the 📖 documentation to discover how the block works.

image
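The underlying idea is simple: once one detection has a known real-world dimension, its pixel size yields a scale factor that converts any other detection's pixel measurements into real units. A minimal sketch of that calculation (the function and numbers are illustrative, not the block's API):

```python
def pixels_to_units(pixel_length: float, reference_pixels: float, reference_units: float) -> float:
    """Convert a pixel measurement to real units via a known reference.

    reference_pixels / reference_units gives pixels-per-unit; dividing
    another pixel length by that scale yields its real-world size.
    """
    scale = reference_pixels / reference_units  # pixels per unit
    return pixel_length / scale

# A credit card (85.6 mm wide) spans 214 px in the image, so an object
# spanning 428 px is twice as wide: 171.2 mm.
width_mm = pixels_to_units(428.0, reference_pixels=214.0, reference_units=85.6)
print(round(width_mm, 1))  # 171.2
```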

Workflows profiler and Execution Engine speedup 🏇

We've added the Workflows Profiler - an ecosystem extension that profiles the execution of your Workflow. It works for inference server requests (both self-hosted and on the Roboflow platform) as well as for InferencePipeline.

image

The cool thing about the profiler is that its output is compatible with chrome://tracing - so you can easily grab the profiler output and render it in the Google Chrome browser.

To profile your Workflow execution, use the following code snippet - traces are saved in the ./inference_profiling directory by default.

from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="https://detect.roboflow.com",
    api_key="<YOUR-API-KEY>"
)
results = client.run_workflow(
    workspace_name="<your-workspace>",
    workflow_id="<your-workflow-id>",
    images={
        "image": "<YOUR-IMAGE>",
    },
    enable_profiling=True,
)
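Traces follow the Chrome tracing event format (a JSON list of events with a name, phase, and timestamps), which is why chrome://tracing can render them. As a sketch of post-processing such a file, the snippet below summarizes a synthetic event list standing in for a real trace from ./inference_profiling (the event names here are made up for illustration):

```python
import json
from collections import Counter

def summarize_trace(events: list) -> Counter:
    """Count how often each event name appears in a chrome-tracing list."""
    return Counter(event["name"] for event in events)

# Synthetic stand-in for a profiler trace file's contents; real traces
# from ./inference_profiling will have different event names.
trace_json = json.dumps([
    {"name": "workflow_run", "ph": "X", "ts": 0, "dur": 1500},
    {"name": "block:model", "ph": "X", "ts": 100, "dur": 900},
    {"name": "block:model", "ph": "X", "ts": 1050, "dur": 300},
])
summary = summarize_trace(json.loads(trace_json))
print(summary["block:model"])  # 2
```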

See the detailed report on speed optimisations in PR #710

❗ Important note

As part of the speed optimisation, we enabled server-side caching for Workflow definitions saved on the Roboflow Platform. If you change your Workflow frequently and want to see the results immediately, specify the use_cache=False parameter of the client.run_workflow(...) method.

🏅 New Contributors

We want to honor @brunopicinin, who made their first contribution to inference in #709 as part of Hacktoberfest 2024. We invite other open-source community members to contribute 😄

Full Changelog: v0.21.1...v0.22.0