This repository has been archived by the owner on Feb 25, 2021. It is now read-only.

Can we abstract the keyframe descriptor extraction stage from the pipeline? #508

Open · sputtagunta opened this issue Dec 23, 2020 · 0 comments


sputtagunta commented Dec 23, 2020

Imagine a scenario with 24TB of video data. I would like to extract descriptors from each keyframe in a pre-processing stage that can be heavily parallelized via map/reduce across a cluster of networked GPU servers on AWS. We would then use the collated keyframe descriptors as input to the actual SLAM pipeline, thereby skipping the video-decode step. So fundamentally, how would we:

  1. Extract keyframe landmarks from 2000+ 4K videos in parallel and save those descriptors to disk, so we can massively parallelize the processing of video data? Are there any sequential dependencies in terms of state management in the descriptor-creation process?
  2. Input pre-processed keyframe descriptors into the SLAM pipeline, bypassing video decoding and passing the descriptors directly to the tracker and pose-estimation modules?
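For question 1, a minimal sketch of the map-stage fan-out I have in mind (everything here is hypothetical: `extract_descriptors` is a hash-based stand-in for a real ORB-style feature extractor, since this repo's actual extraction code is C++ and not reproduced here; the key assumption is that per-frame descriptor extraction is stateless, so videos shard cleanly across workers):

```python
import hashlib
import pickle
from multiprocessing import Pool
from pathlib import Path


def extract_descriptors(video_path):
    """Hypothetical stand-in for keyframe descriptor extraction.

    A real worker would decode keyframes and run a feature extractor
    (e.g. ORB) on each one; here we just derive a few fixed-size
    digests from the file bytes to keep the sketch self-contained.
    """
    data = Path(video_path).read_bytes()
    return [hashlib.sha256(data + bytes([i])).digest() for i in range(4)]


def map_one(job):
    """Process one video and write its descriptors to one shard file."""
    video_path, out_dir = job
    out_file = Path(out_dir) / (Path(video_path).stem + ".pkl")
    with open(out_file, "wb") as f:
        pickle.dump(extract_descriptors(video_path), f)
    return str(out_file)


def map_stage(videos, out_dir, workers=4):
    """Fan extraction out over a local process pool.

    On AWS the same per-video function body would run as one map task
    per video; there is no shared state between tasks.
    """
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    with Pool(workers) as pool:
        return pool.map(map_one, [(v, out_dir) for v in videos])
```

Because each shard depends only on its own video file, this stage scales linearly with the number of workers; the only shared resource is the output directory.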

Goals:

  1. Map/reduce descriptor extraction from 24TB of 4K video data
    (processing this sequentially is not an option due to time and scale...)
  2. The reduce stage uses the collated descriptor dataset to run the SLAM pipeline
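For the reduce side, the collation step is straightforward to sketch. This assumes each map worker wrote one pickle file of descriptors per video (a format invented here for illustration); the tracker/pose-estimation entry point that would consume descriptors instead of decoded frames is the part that doesn't exist today and would need changes to the pipeline:

```python
import pickle
from pathlib import Path


def reduce_stage(shard_dir):
    """Collate per-video descriptor shards into one ordered sequence.

    Sorting by shard filename gives a deterministic ordering, which
    matters if the downstream tracker assumes temporal order. The
    consumer of this sequence (a descriptor-level tracker entry point)
    is hypothetical and not part of the stock pipeline.
    """
    descriptors = []
    for shard in sorted(Path(shard_dir).glob("*.pkl")):
        with open(shard, "rb") as f:
            descriptors.extend(pickle.load(f))
    return descriptors
```

The open question is whether tracking and pose estimation can accept this sequence directly, or whether they hold state (e.g. map/keyframe databases) that forces parts of the pipeline to stay sequential.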