Software Design
Goals of the main module:
- Calculate Current Position (Gyro and Odometer classes)
- Calculate Desired Path (Curve and Spline classes)
  - Uses Computer Vision data
  - For now, output a straight line.
- Calculate Correction Vector from the Desired Path and Current Position (see the sketch below)
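A minimal sketch of how these three steps could fit together; the class and method names (`current_position`, `next_waypoint`) are hypothetical stand-ins for the Gyro/Odometer and Curve/Spline interfaces, not decided APIs:

```python
import numpy as np

class MainModule:
    """Sketch of the main loop: current position -> desired path -> correction."""

    def __init__(self, localization, path_planner):
        self.localization = localization  # wraps the Gyro/Odometer output
        self.path_planner = path_planner  # wraps the Curve/Spline output

    def correction_vector(self):
        # Current position (x, y, z) from the calibrated origin.
        current = np.array(self.localization.current_position())
        # Desired position on the planned path (a straight line for now).
        desired = np.array(self.path_planner.next_waypoint())
        # The correction points from where we are to where we want to be.
        return desired - current
```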
https://docs.python.org/3/tutorial/
http://wiki.ros.org/ROS/Tutorials
https://www.atlassian.com/dam/jcr:8132028b-024f-4b6b-953e-e68fcce0c5fa/atlassian-git-cheatsheet.pdf
All code should be written using Python 3.6.x; C# is used for the Unity visualization.
Use IMU accelerometer data and integrate it twice (once to velocity, again to position) to get the distance travelled in 3D space.
Should operate at the highest sampling frequency supported by the IMU and the communication interface, and should carefully track the time of each measurement. Should consider filtering, such as a Kalman filter, to correct noise if one is not already implemented in the IMU. Should consider the gyroscope orientation if the acceleration values are not absolute. Calibrates the sensor and sets the current position to (0,0,0); always calibrate at program start-up. Sensors: IMU acceleration, gyroscope, magnetometer.
IMU acceleration vector (x,y,z) from the reference API
Should return a 3D vector (x,y,z): an absolute vector of the distance traveled from the calibrated point (0,0,0)
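A minimal dead-reckoning sketch of the double integration described above, assuming the IMU driver provides timestamped acceleration samples already expressed in the world frame (the sample format here is an assumption, not the real API):

```python
import numpy as np

def integrate_position(samples):
    """Dead-reckon displacement from timestamped acceleration samples.

    samples: iterable of (t_seconds, np.array([ax, ay, az])) in the world frame,
             starting at rest at the calibration point (0, 0, 0).
    Returns the absolute (x, y, z) displacement from the calibration point.
    """
    velocity = np.zeros(3)
    position = np.zeros(3)
    prev_t = None
    for t, accel in samples:
        if prev_t is not None:
            dt = t - prev_t
            # First integration: acceleration -> velocity.
            velocity += accel * dt
            # Second integration: velocity -> position.
            position += velocity * dt
        prev_t = t
    return position
```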
Travel with the AUV underwater for a pre-specified distance and see how well localization works
Since we know the dimensions of each competition obstacle, once we detect them with CV it should be trivial to find how far they are from the AUV. To account for distortion created by the AUV hood, we will create a calibration function using a known object. Should return an absolute 3D vector of the distance from the AUV to the object.
https://www.pyimagesearch.com/2015/01/19/find-distance-camera-objectmarker-using-python-opencv/
Object’s detected hitbox location by computer vision, with probability
3D vector that includes the angles of the object's center from the AUV, the distance, and the probability, e.g. (x,y,z), d, p
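A minimal sketch of the triangle-similarity approach from the PyImageSearch article above: calibrate an apparent focal length from a known object at a known distance (which also folds in hood distortion at that range), then invert the relation for new detections. The object sizes and calibration values below are placeholders, not measured for our camera or hood:

```python
def calibrate_focal_length(known_distance_m, known_width_m, pixel_width):
    """Apparent focal length (in pixels) from a reference object at a known distance."""
    return (pixel_width * known_distance_m) / known_width_m

def distance_to_object(known_width_m, focal_length_px, pixel_width):
    """Estimate distance to an object of known real-world width
    from its detected hitbox width in pixels."""
    return (known_width_m * focal_length_px) / pixel_width

# Hypothetical calibration: a 1.5 m wide gate seen 400 px wide at 3.0 m.
focal_px = calibrate_focal_length(known_distance_m=3.0, known_width_m=1.5, pixel_width=400)
# Later detection: the same gate now appears 250 px wide.
print(distance_to_object(1.5, focal_px, 250))  # ~4.8 m
```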
Detect objects at a predefined distance to see how well the model performs
Improve location accuracy by combining Computer Vision and IMU data. The concept is that there is inherent error in the IMU localization data as well as inherent error in the Computer Vision distance estimates; our goal is to combine both sources to reduce error and improve the accuracy of our localization module. We don't know in advance whether the Computer Vision data or the IMU data will be more accurate, so part of this project is to calibrate for that and find out.
- IMU Data: A 3d vector. (x,y,z) which is an absolute vector of the distance traveled from the calibrated point (0,0,0)
- Computer Vision Data: 3d Vector that includes the angles of the object’s center from the AUV, the distance and probability, e.g. (x,y,z), d, p
Relative location of the Observed Object, in the format of Angles, distance and probability (x,y,z), d, p
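A minimal sketch of one way to combine the two estimates, inverse-variance weighting of the IMU and CV position estimates per axis; the variances below are placeholders to be replaced by whatever the calibration runs show:

```python
import numpy as np

def fuse_estimates(imu_pos, imu_var, cv_pos, cv_var):
    """Combine two independent (x, y, z) position estimates by weighting each
    axis inversely to its variance; the lower-variance source dominates."""
    imu_pos, cv_pos = np.asarray(imu_pos, float), np.asarray(cv_pos, float)
    imu_var, cv_var = np.asarray(imu_var, float), np.asarray(cv_var, float)
    w_imu = 1.0 / imu_var
    w_cv = 1.0 / cv_var
    fused = (w_imu * imu_pos + w_cv * cv_pos) / (w_imu + w_cv)
    fused_var = 1.0 / (w_imu + w_cv)
    return fused, fused_var

# Placeholder variances until calibration tells us which source to trust more.
pos, var = fuse_estimates(imu_pos=(1.0, 0.2, -0.5), imu_var=(0.04, 0.04, 0.09),
                          cv_pos=(1.2, 0.1, -0.4), cv_var=(0.09, 0.09, 0.04))
```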
Underwater run, with objects placed at pre-defined distances
Given location information from the probability localization module and the current depth, initiate task planning for that specific task. Keep track of the current task being solved and the tasks already solved. Keep track of the world's state and surroundings.
Create a task routine for each of the tasks, e.g. see the gate, center the robot towards the gate, drive towards the gate, determine whether we passed through the gate or not, then initiate the next task, etc.
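A minimal sketch of such a task routine as a small state machine for the gate task; the state names and the helper calls (`gate_visible`, `centered_on_gate`, `passed_gate`) are hypothetical placeholders for the real perception and odometry interfaces:

```python
# States for the hypothetical gate routine.
SEARCH, CENTER, DRIVE, DONE = "SEARCH", "CENTER", "DRIVE", "DONE"

def gate_routine_step(state, world):
    """Advance the gate task by one step and return (new_state, motor_command)."""
    if state == SEARCH:
        # Rotate slowly until computer vision reports the gate.
        return (CENTER, "FWD") if world.gate_visible() else (SEARCH, "RIGHT")
    if state == CENTER:
        # Yaw until the gate's center is roughly straight ahead.
        return (DRIVE, "FWD") if world.centered_on_gate() else (CENTER, "LEFT")
    if state == DRIVE:
        # Keep driving forward until odometry says we passed the gate plane.
        return (DONE, "BKWD") if world.passed_gate() else (DRIVE, "FWD")
    return (DONE, "BKWD")
```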
Relative location of the Observed Object, in the format of Angles, distance and probability (x,y,z), d, p
- Stop Autonomous Mode
- Start Autonomous Mode
- Execute Motor Command (UP, DOWN, FWD, BKWD, LEFT, RIGHT)
Logging current world state and AUV state to a file: (Timestamp, X, Y, Z, Probability, Object Type) and (Timestamp, Sensor, Value).
Transmit current world state and AUV state as a live feed through ROS to the Unity app, in the same format as above.
Motor commands:
- Go up/down by meters
- Go forward/backward using a power value from 0 to 1 and a duration in seconds
- Rotate left/right by angles
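A minimal sketch of the logging and ROS live-feed side, assuming the world state is serialized as comma-separated lines in the formats above and published as a plain string; the node and topic names (`world_state_logger`, `/auv/world_state`) are assumptions, not decided interfaces:

```python
import time
import rospy
from std_msgs.msg import String

rospy.init_node('world_state_logger')  # hypothetical node name
# Assumed topic name for the live feed to the Unity app.
pub = rospy.Publisher('/auv/world_state', String, queue_size=10)

def log_world_state(log_file, x, y, z, probability, object_type):
    """Append one (Timestamp, X, Y, Z, Probability, Object Type) record
    and publish the same line on the live feed."""
    line = "{:.3f},{:.2f},{:.2f},{:.2f},{:.2f},{}".format(
        time.time(), x, y, z, probability, object_type)
    log_file.write(line + "\n")
    pub.publish(line)

def log_sensor(log_file, sensor, value):
    """Append one (Timestamp, Sensor, Value) record."""
    log_file.write("{:.3f},{},{}\n".format(time.time(), sensor, value))
```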
Place an obstacle, e.g. the gate, in front of the AUV and judge its ability to steer towards the gate and pass through it.
Implement the ability to get a data feed either from ROS or from a locally stored file.
Current world state and AUV state from a file (Timestamp, X, Y, Z, Probability, Object Type)
Current world state and AUV state on a live feed through ROS to the Unity app, same format as above
GUI elements to switch between the file and live-feed modes
Data structure of elements to display on the visualization engine.
Visualize current world state and output AUV current sensor state.
Current world state and AUV state (Timestamp, X,Y,Z, Probability, Object Type)
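A minimal sketch of that record as a data structure on the feed side, written in Python for brevity (the Unity side would mirror it in C#); the class and field names are only illustrative:

```python
from collections import namedtuple

# One displayed element of the world state, matching the log/live-feed format.
WorldStateRecord = namedtuple(
    "WorldStateRecord", ["timestamp", "x", "y", "z", "probability", "object_type"])

def parse_record(line):
    """Parse one comma-separated feed line into a WorldStateRecord."""
    ts, x, y, z, p, obj = line.strip().split(",")
    return WorldStateRecord(float(ts), float(x), float(y), float(z), float(p), obj)
```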
Rotate the AUV based on sensor data. Place the AUV at a depth based on pressure sensor data. Place obstacles and their locations in the Unity engine.
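A minimal sketch of the pressure-to-depth conversion behind the depth placement, assuming fresh water and a gauge-pressure reading; the constants are standard values, not measured for the competition pool:

```python
RHO_FRESH_WATER = 1000.0  # kg/m^3, approximate density of fresh water
G = 9.81                  # m/s^2, standard gravity

def depth_from_pressure(gauge_pressure_pa):
    """Convert gauge pressure (pressure above atmospheric, in pascals) to depth in meters."""
    return gauge_pressure_pa / (RHO_FRESH_WATER * G)

# e.g. 19620 Pa above atmospheric -> about 2.0 m deep
```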
Remote control of the AUV through keyboard keys: ability to stop the AUV's autonomous mode by sending a command to the Main Module, to restart autonomous mode, and to drive the AUV directly from the keyboard.
Keyboard inputs
ROS commands directed to AUV’s Main Module
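A minimal sketch of the Main Module side of this interface, subscribing to a command topic and toggling autonomous mode; the topic name `/auv/remote_cmd` and the command strings are assumptions about an interface still to be defined:

```python
import rospy
from std_msgs.msg import String

autonomous_enabled = True  # flag checked by the Main Module control loop

def execute_motor_command(command):
    """Placeholder for the real motor interface."""
    rospy.loginfo("motor command: %s", command)

def handle_remote_command(msg):
    """React to remote-control commands sent from the Unity app over ROS."""
    global autonomous_enabled
    if msg.data == "STOP_AUTONOMOUS":
        autonomous_enabled = False
    elif msg.data == "START_AUTONOMOUS":
        autonomous_enabled = True
    else:
        # Anything else is treated as a direct motor command, e.g. "FWD" or "LEFT".
        execute_motor_command(msg.data)

rospy.init_node('main_module_remote')
rospy.Subscriber('/auv/remote_cmd', String, handle_remote_command)
rospy.spin()
```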
Pass commands and see if they are executed
Implement GUI elements that will communicate with the C# functions for remote control, switching between the different modes (autonomous vs. controlled), and switching between the different views.
Keyboard shortcuts, mouse clicks
C# Function call to remote module
Test use during wet test.
Computer Vision Object Detection
Implement TensorFlow on the Jetson to detect competition objects. Aim for 1 frame per second processed at all times.
Camera Data from USB https://www.bluerobotics.com/store/electronics/cam-usb-low-light-r1/
Hitbox location of the object, (x1, x2, y1, y2), and probability
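A minimal sketch of the capture-and-detect loop pacing, reading frames from the USB camera with OpenCV and throttling to roughly one processed frame per second; `detect_objects` is a placeholder for the TensorFlow model, not an existing function:

```python
import time
import cv2

def detect_objects(frame):
    """Placeholder for the TensorFlow detector: returns a list of
    ((x1, x2, y1, y2), probability) hitboxes for competition objects."""
    return []

cap = cv2.VideoCapture(0)  # USB low-light camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    start = time.time()
    detections = detect_objects(frame)
    # Pad out the loop so we process roughly one frame per second.
    time.sleep(max(0.0, 1.0 - (time.time() - start)))
cap.release()
```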
Place objects of different shapes and similar shapes at different distances, under different water clarity and lighting conditions, and see how well the model performs.