[Autonomy] Vision-based Obstacle Segmentation & Detection #7
Things we should look into right away:
Other possible approaches to investigate:
We have begun experimenting with the Xbox Kinect (v1, model 1414). The Kinect should let us gather depth images for finding obstacles, and the goal is to turn that data into a 3D occupancy grid. We are currently considering the OctoMap library for this purpose: it uses octrees with probabilistic occupancy estimation to represent occupied, free, and unknown space, which should let us recognize and map the locations of obstacles in three dimensions.
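To make the "probabilistic occupancy estimation" idea concrete, here is a minimal sketch of a log-odds occupancy update over a dense 3D numpy grid. This is not OctoMap's actual API (OctoMap stores the grid in an octree and is used through its C++/ROS interfaces); the grid shape, sensor-model probabilities, and thresholds below are illustrative assumptions.

```python
import numpy as np

# Illustrative parameters (assumptions, not OctoMap defaults).
GRID_SHAPE = (100, 100, 50)   # x, y, z cells
P_HIT, P_MISS = 0.7, 0.4      # inverse sensor model probabilities

LOGODDS_HIT = np.log(P_HIT / (1.0 - P_HIT))
LOGODDS_MISS = np.log(P_MISS / (1.0 - P_MISS))

# Log-odds grid; 0.0 corresponds to p = 0.5, i.e. "unknown".
log_odds = np.zeros(GRID_SHAPE, dtype=np.float32)

def update_cell(idx, hit):
    """Fuse one measurement into a cell by adding log-odds of hit or miss."""
    log_odds[idx] += LOGODDS_HIT if hit else LOGODDS_MISS

def occupancy_probability(idx):
    """Convert a cell's log-odds back to a probability."""
    return 1.0 - 1.0 / (1.0 + np.exp(log_odds[idx]))

def classify(idx, occupied_thresh=0.65, free_thresh=0.35):
    """Label a cell as occupied, free, or unknown (the three states we want)."""
    p = occupancy_probability(idx)
    if p > occupied_thresh:
        return "occupied"
    if p < free_thresh:
        return "free"
    return "unknown"

# Usage sketch: a depth return lands in cell (10, 20, 5).
update_cell((10, 20, 5), hit=True)
print(classify((10, 20, 5)))
```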
With our focus shifting to the Kinect and OctoMap, camera-based image processing is de-prioritized and may not be necessary. That said, we wrote a basic Python script that analyzes a list of similar images and sets the grayscale value of any noisy pixel to white, leaving behind only pixels whose grayscale value is consistent across every image. In a disparity image, such consistent pixels imply a well-defined, non-repeating structure indicative of a protrusion or obstacle. In other words, we should still be able to filter out noise and find obstacles from disparity images if necessary (a sketch of this filter follows the next comment).
"Perceived noise" used rather generously; this script compares the standard deviation of a pixel over a set of images and sets any that don't have a standard deviation of 0 to be white (grayscale value 255).
We are no longer pursuing OctoMap and are instead looking into the ANYbotics elevation_mapping library. Ideally it will let us convert PointCloud2 data from Freenect directly into an elevation map visualization that we can use. We have made progress running a simulated demo with elevation_mapping and its associated dependencies, but are currently stuck on an error involving an unknown publisher for a 'map' variable.
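As a first debugging step for that pipeline, a minimal rospy node can confirm that Freenect is actually publishing PointCloud2 data, and in which frame, before elevation_mapping consumes it. The topic name below (`/camera/depth/points`, the usual freenect_launch default) and the node name are assumptions; adjust them to the actual launch configuration.

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import PointCloud2

def callback(msg):
    # Log the frame_id and cloud size so we can check that the TF frames
    # elevation_mapping expects actually appear on the incoming data.
    rospy.loginfo("cloud in frame '%s': %d x %d points",
                  msg.header.frame_id, msg.width, msg.height)

if __name__ == "__main__":
    rospy.init_node("kinect_cloud_check")
    # Assumed freenect_launch topic; change if remapped.
    rospy.Subscriber("/camera/depth/points", PointCloud2, callback)
    rospy.spin()
```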
WIP #7: First go at node for mapping occupancy grid info from Kinect to a map of the field
Rotates the local map so it fits into the field map properly. Replaced explicit loops with numpy operations for efficiency. We still need a way to deal with the junk values that arise in the area surrounding the rotated map (one possible fix is sketched below).
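One way to handle those surrounding junk values: if the rotation is done with scipy.ndimage.rotate, the cells that fall outside the original local map can be filled with a sentinel "unknown" value instead of whatever padding the rotation produces, and only known cells get copied into the field map. This is a sketch of that idea under assumed conventions (-1 for unknown, 100 for occupied, 0 for free, as in a ROS OccupancyGrid); it is not the actual node code, and the function and variable names are hypothetical.

```python
import numpy as np
from scipy.ndimage import rotate

UNKNOWN = -1  # assumed sentinel, matching ROS OccupancyGrid conventions

def rotate_local_map(local_map, angle_deg):
    """Rotate a local occupancy grid, marking padding cells as unknown."""
    # order=0 keeps cell values discrete (no interpolation between free and
    # occupied); cval=UNKNOWN fills the area around the rotated map.
    return rotate(local_map, angle_deg, reshape=True, order=0,
                  mode="constant", cval=UNKNOWN)

def insert_into_field_map(field_map, rotated, origin_row, origin_col):
    """Copy only the known cells of the rotated map into the field map."""
    rows, cols = rotated.shape
    region = field_map[origin_row:origin_row + rows,
                       origin_col:origin_col + cols]
    known = rotated != UNKNOWN
    region[known] = rotated[known]

# Usage sketch: a 10x10 occupied local map rotated 30 degrees into a 50x50 field map.
field = np.zeros((50, 50), dtype=np.int8)
local = np.full((10, 10), 100, dtype=np.int8)
insert_into_field_map(field, rotate_local_map(local, 30.0), 20, 20)
```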
Which section of robot code is this for?
Vision (Obstacle Detection)
Description of feature