
Add /stats and /latest_frame endpoints#2118

Open

alexnorell wants to merge 1 commit into main from feat/stats-and-latest-frame-endpoints

Conversation

@alexnorell
Contributor

Summary

  • Adds GET /stats endpoint that returns aggregated camera_fps, inference_fps, and stream_count across all active pipelines. Reuses existing list_pipelines/get_status IPC, no new commands needed.
  • Adds GET /inference_pipelines/{pipeline_id}/latest_frame endpoint that returns the most recent frame as base64-encoded JPEG with metadata (frame_id, frame_timestamp, source_id). Uses a new LATEST_FRAME IPC command that peeks at the buffer non-destructively.

Both endpoints are gated behind ENABLE_STREAM_API.

Test plan

  • pytest tests/inference/unit_tests/core/interfaces/stream_manager/test_stats_and_latest_frame.py -v
  • Manual: start server with ENABLE_STREAM_API=true, call GET /stats and verify JSON shape
  • Manual: start a pipeline, call GET /inference_pipelines/{id}/latest_frame and verify base64 JPEG decodes
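The last manual check can be done offline once the response body is saved. A minimal sketch, assuming the frame arrives as a plain base64 string (the exact field name in the response is not shown in this PR); the JPEG magic-byte check is my addition, not part of the test plan:

```python
import base64

def looks_like_jpeg(b64_frame: str) -> bool:
    """Decode the base64 payload and check the JPEG magic bytes (FF D8 ... FF D9)."""
    raw = base64.b64decode(b64_frame)
    return raw[:2] == b"\xff\xd8" and raw[-2:] == b"\xff\xd9"

# Stand-in payload with a JPEG header/footer, in place of a real captured frame:
fake_jpeg = b"\xff\xd8" + b"\x00" * 8 + b"\xff\xd9"
print(looks_like_jpeg(base64.b64encode(fake_jpeg).decode()))  # True
```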

Adds two new HTTP endpoints behind the ENABLE_STREAM_API flag:

- GET /stats: returns aggregated camera_fps, inference_fps, and
  stream_count across all active pipelines. Reuses existing
  list_pipelines/get_status IPC -- no new commands needed.

- GET /inference_pipelines/{pipeline_id}/latest_frame: returns the most
  recent frame as a base64-encoded JPEG with metadata. Adds a new
  LATEST_FRAME IPC command that peeks at the buffer non-destructively.
@alexnorell force-pushed the feat/stats-and-latest-frame-endpoints branch from 82e1800 to 5c2050f on March 13, 2026 at 22:56

@app.get(
"/stats",
summary="Aggregated pipeline statistics",
Contributor

description + response_model please ;)
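A hedged sketch of what the reviewer is asking for: the field names mirror the PR summary, but the `StatsResponse` model name and the decorator wiring are my assumptions, not the actual code (the decorator is left commented so the snippet stands alone without FastAPI).

```python
from pydantic import BaseModel

class StatsResponse(BaseModel):
    # Field names taken from the PR summary; types are assumed.
    camera_fps: float
    inference_fps: float
    stream_count: int

# @app.get(
#     "/stats",
#     summary="Aggregated pipeline statistics",
#     description="Aggregated camera FPS, inference FPS and stream count "
#                 "across all active pipelines.",
#     response_model=StatsResponse,
# )
```

With `response_model` set, the schema also shows up in the generated OpenAPI docs.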

pipeline_ids = pipelines_resp.pipelines
stream_count = len(pipeline_ids)
for pid in pipeline_ids:
status_resp = await self.stream_manager_client.get_status(
Contributor

tasks = [
    self.stream_manager_client.get_status(pid)
    for pid in pipeline_ids
]

responses = await asyncio.gather(*tasks, return_exceptions=True)

? Not sure how much it would matter, though.
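A runnable sketch of the concurrent-status idea, with one detail worth noting: `return_exceptions=True` means failed calls come back as exception objects, so they should be filtered out rather than treated as reports. `get_status` here is a stand-in coroutine, not the real client method.

```python
import asyncio

async def get_status(pid: str):
    # Stand-in for self.stream_manager_client.get_status(pid).
    if pid == "bad":
        raise RuntimeError("pipeline gone")
    return {"pipeline_id": pid, "inference_throughput": 25.0}

async def fetch_all(pipeline_ids):
    tasks = [get_status(pid) for pid in pipeline_ids]
    responses = await asyncio.gather(*tasks, return_exceptions=True)
    # Drop pipelines that failed mid-flight instead of failing the whole /stats call.
    return [r for r in responses if not isinstance(r, Exception)]

reports = asyncio.run(fetch_all(["a", "bad", "b"]))
print(len(reports))  # 2
```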

pid
)
report = status_resp.report
throughput = report.get("inference_throughput", 0.0)
Contributor

Couldn't we split this into something like:

async def get_stats():
  reports = await self.fetch_pipeline_reports()
  return compute_stats(reports)

I think we are pushing too much into the endpoint function bodies. This is not the place for business logic.
Additionally, writing it this way would let us optimize the work easily later. I know the stats calculation is trivial, but if it were more complicated, it would be a blocking operation. Having it as a separate compute_stats would let us fix that quickly, e.g. by running it in a separate thread or something like that.
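The suggested split could look like this. `fetch_pipeline_reports` and `compute_stats` are the reviewer's proposed names; the bodies below are illustrative stubs, not the PR's code.

```python
import asyncio

def compute_stats(reports):
    # Pure, synchronous aggregation: easy to unit-test, and easy to move
    # off the event loop later if it ever becomes expensive.
    throughputs = [r.get("inference_throughput", 0.0) for r in reports]
    return {
        "stream_count": len(reports),
        "inference_fps": sum(throughputs),
    }

async def fetch_pipeline_reports():
    # Stand-in for the IPC round-trips (list_pipelines + get_status per pipeline).
    return [{"inference_throughput": 25.0}, {"inference_throughput": 30.0}]

async def get_stats():
    reports = await fetch_pipeline_reports()
    # If aggregation grows heavy, swap this line for:
    #   return await asyncio.to_thread(compute_stats, reports)
    # without touching the endpoint signature.
    return compute_stats(reports)

stats = asyncio.run(get_stats())
print(stats)  # {'stream_count': 2, 'inference_fps': 55.0}
```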

self._responses_queue.put((request_id, response_payload))
return None
_, jpeg_bytes = cv.imencode(
".jpg", frame.image, [cv.IMWRITE_JPEG_QUALITY, 70]
Contributor

Wouldn't we want to allow parametrizing this through the request? Not sure about its usefulness at this point because I don't know the full context, but I just wanted to point this out. In that case, though, I would provide an enum with reasonable values, low, medium, high, where medium is 70 for example. Otherwise people would probably skew to typing 100 all the time.
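A hypothetical enum along the lines the reviewer suggests. The three levels and their JPEG quality values are my assumptions (only medium = 70 matches the hardcoded value in the PR):

```python
from enum import Enum

class JpegQuality(str, Enum):
    low = "low"
    medium = "medium"
    high = "high"

    @property
    def opencv_quality(self) -> int:
        # Mapped to cv.IMWRITE_JPEG_QUALITY values; medium matches the PR's 70.
        return {"low": 40, "medium": 70, "high": 90}[self.value]

# In the endpoint this could become a query parameter, e.g.
#   quality: JpegQuality = JpegQuality.medium
# and be passed as [cv.IMWRITE_JPEG_QUALITY, quality.opencv_quality].
print(JpegQuality.medium.opencv_quality)  # 70
```

Subclassing `str` keeps the enum JSON-serializable and lets FastAPI validate the query value against the three names automatically.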
