Software Design

Goals of the main module:

  1. Calculate Current Position (Gyro and Odometer classes)

  2. Calculate Desired Path (Curve and Spline classes)

     • Uses Computer Vision data
     • For now, output a straight line.

  3. Calculate Correction Vector from the Desired Path and the Current Position

Resources

Python tutorial

https://docs.python.org/3/tutorial/

ROS tutorial

http://wiki.ros.org/ROS/Tutorials

GIT Cheatsheet

https://www.atlassian.com/dam/jcr:8132028b-024f-4b6b-953e-e68fcce0c5fa/atlassian-git-cheatsheet.pdf

Competition Rules

https://www.robonation.org/sites/default/files/2018%20RoboSub_2018%20Mission%20and%20Scoring_v01.50.pdf

Environment Requirements

All code should be written in Python 3.6.x, except the Unity visualization, which is written in C#.

High Level Module Requirements

IMU localization

Integrate IMU accelerometer data twice to get the distance travelled in 3D space.

Concept/Equations:

https://www.dummies.com/education/science/physics/how-to-calculate-time-and-distance-from-acceleration-and-velocity/

  • Should operate at the highest frequency available from the IMU and the communication interface.
  • Should carefully account for the timestamp of each measurement.
  • Should consider filtering, such as a Kalman filter, to correct noise if it is not already implemented in the IMU.
  • Should consider the gyroscope orientation if the acceleration values are not absolute.
  • Calibrates the sensor and sets the current position to (0, 0, 0); always calibrate at program start-up.
  • Sensors: IMU acceleration, gyroscope, magnetometer.
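A minimal sketch of the double integration, assuming the acceleration samples have already been gravity-compensated and rotated into the world frame via the gyroscope; the function name and sample format below are placeholders, not the NGIMU API:

```python
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

def integrate_position(samples: List[Tuple[float, Vec3]]) -> Vec3:
    """Double-integrate timestamped world-frame accelerations
    (t_seconds, (ax, ay, az)) into a displacement from the
    calibrated origin (0, 0, 0)."""
    velocity = [0.0, 0.0, 0.0]
    position = [0.0, 0.0, 0.0]
    prev_t, prev_a = samples[0]
    for t, a in samples[1:]:
        dt = t - prev_t
        for i in range(3):
            # Trapezoidal rule for velocity, then an Euler step for position.
            velocity[i] += 0.5 * (prev_a[i] + a[i]) * dt
            position[i] += velocity[i] * dt
        prev_t, prev_a = t, a
    return (position[0], position[1], position[2])
```

Note that double integration drifts quickly with any accelerometer bias, which is why the Kalman filtering above and the Computer Vision fusion below matter.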

IMU API:

http://x-io.co.uk/ngimu/

Inputs

IMU acceleration vector (x, y, z); see the reference API above.

Outputs

Should return a 3D vector (x, y, z): the absolute displacement travelled from the calibrated point (0, 0, 0).

Validation Test

Travel with the AUV underwater for a pre-specified distance and see how well the localization works.

Computer Vision Localization

Since we know the dimensions of each competition obstacle, after detecting one with CV it should be trivial to find how far it is from the AUV. To account for distortion created by the AUV hood, we will create a calibration function for a known object. Should return an absolute 3D vector of the distance from the AUV to the object.

Idea and concept:

https://www.pyimagesearch.com/2015/01/19/find-distance-camera-objectmarker-using-python-opencv/
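A sketch of the triangle-similarity method from the article above. The focal length is measured once against a known object at a known distance, which also folds the hood distortion into the calibration; names and units are assumptions:

```python
def calibrate_focal_length(known_width_m: float, known_distance_m: float,
                           perceived_width_px: float) -> float:
    # Triangle similarity: F = (P * D) / W, measured once during the
    # calibration run against a known object.
    return (perceived_width_px * known_distance_m) / known_width_m

def distance_to_object(known_width_m: float, focal_length_px: float,
                       perceived_width_px: float) -> float:
    # D' = (W * F) / P for any later detection of the same object class.
    return (known_width_m * focal_length_px) / perceived_width_px
```

For example, a 1 m wide marker seen 240 px wide at 3 m gives F = 720 px; a later 180 px detection of the same marker then implies a distance of 4 m.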

Inputs

The object's hitbox location as detected by computer vision, with a detection probability.

Outputs

A 3D vector of the angles from the AUV to the object's center, plus the distance and probability, e.g. (x, y, z), d, p.
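One possible concrete form for this output (also reused as the fusion-module input below), sketched as a Python 3.6 typing.NamedTuple; the field names and angle convention are assumptions, not part of the spec:

```python
from typing import NamedTuple

class ObjectFix(NamedTuple):
    """One CV localization result: angles from the AUV to the
    object's center, plus range and detection confidence."""
    angle_x: float      # angles in radians (axis convention assumed)
    angle_y: float
    angle_z: float
    distance: float     # meters
    probability: float  # detection confidence in [0, 1]
```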

Validation Test

Detect objects at a predefined distance to see how well the model performs

Localization Probability Module

Improve location accuracy by combining Computer Vision and IMU data. The concept is that there is inherent error in the IMU localization data as well as inherent error in the Computer Vision distance estimates; our goal is to combine both sources to reduce error and improve the accuracy of our localization module. We do not yet know which will be more accurate, the computer vision data or the IMU data, so part of this project is to calibrate for that and find out.
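A minimal sketch of one way to combine the two position estimates, using inverse-variance weighting as a simple stand-in for a full Kalman update; the per-source variances are assumed to come out of the calibration runs described above:

```python
from typing import Sequence, Tuple

def fuse_positions(imu_pos: Sequence[float], imu_var: float,
                   cv_pos: Sequence[float], cv_var: float) -> Tuple[float, ...]:
    """Weight each source by the inverse of its (calibrated) variance;
    the noisier source contributes proportionally less on each axis."""
    w_imu, w_cv = 1.0 / imu_var, 1.0 / cv_var
    return tuple((w_imu * i + w_cv * c) / (w_imu + w_cv)
                 for i, c in zip(imu_pos, cv_pos))
```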

Inputs

  • IMU Data: a 3D vector (x, y, z), the absolute displacement travelled from the calibrated point (0, 0, 0)
  • Computer Vision Data: a 3D vector of the angles from the AUV to the object's center, plus the distance and probability, e.g. (x, y, z), d, p

Outputs

Relative location of the observed object, in the format of angles, distance, and probability: (x, y, z), d, p

Validation Test

Underwater run, with objects placed at pre-defined distances

Main Module

Given location information from the Localization Probability Module and the current depth, initiate task planning for the specific task. Keep track of the task currently being solved and of tasks already solved. Keep track of the world state and surroundings.

Create a task routine for each task, e.g.: see the gate, center the robot towards the gate, drive towards the gate, determine whether the AUV passed through the gate, then initiate the next task, and so on.
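A sketch of what the gate routine could look like as a small state machine; the states, thresholds, and the fused observation format (reusing the hypothetical ObjectFix above) are illustrative assumptions:

```python
from enum import Enum

class GateTask(Enum):
    SEARCH = 0  # no gate detected yet
    CENTER = 1  # gate detected, align heading towards it
    DRIVE = 2   # gate centered, move forward
    DONE = 3    # judged to have passed through

def step_gate_task(state: GateTask, fix) -> GateTask:
    """One control tick. `fix` is the fused observation for the gate
    (angles, distance, probability) or None if nothing was detected."""
    if state is GateTask.SEARCH:
        return GateTask.CENTER if fix is not None else GateTask.SEARCH
    if state is GateTask.CENTER:
        if fix is None:
            return GateTask.SEARCH  # lost sight of the gate
        # Placeholder threshold: "centered" when the horizontal angle is small.
        return GateTask.DRIVE if abs(fix.angle_y) < 0.05 else GateTask.CENTER
    if state is GateTask.DRIVE:
        # Assume passage once the gate is very close or leaves the view.
        if fix is None or fix.distance < 1.0:
            return GateTask.DONE
        return GateTask.DRIVE
    return GateTask.DONE
```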

Inputs

Relative location of the observed object, in the format of angles, distance, and probability: (x, y, z), d, p

ROS command:

  • Stop Autonomous Mode
  • Start Autonomous Mode
  • Execute Motor Command (UP, DOWN, FWD, BKWD, LEFT, RIGHT)

Outputs

  • Log the current world state and AUV state to a file: (Timestamp, X, Y, Z, Probability, Object Type) and (Timestamp, Sensor, Value)
  • Transmit the current world state and AUV state as a live feed through ROS to the Unity app, in the same format as above
  • Motor commands:
      • Go up/down by meters
      • Go forward/backward using a power value from 0 to 1 and a duration in seconds
      • Rotate left/right by an angle
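A sketch of the file-logging half, writing one comma-separated line per record in the two formats above; the file handling and field precision are assumptions:

```python
import time

def log_world_state(log_file, x, y, z, probability, object_type):
    # (Timestamp, X, Y, Z, Probability, Object Type)
    log_file.write("{:.3f},{:.3f},{:.3f},{:.3f},{:.3f},{}\n".format(
        time.time(), x, y, z, probability, object_type))

def log_sensor(log_file, sensor, value):
    # (Timestamp, Sensor, Value)
    log_file.write("{:.3f},{},{}\n".format(time.time(), sensor, value))
```

The same lines could be published over ROS for the live feed, so the Unity app parses a single format in both modes.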

Validation Test

Place obstacle e.g. Gate in front of AUV and judge ability to steer towards gate and pass it.

Unity App - ROS and File Data Feed

Implement the ability to get a data feed from ROS and a locally stored file.

Inputs

  • Current world state and AUV state from a file: (Timestamp, X, Y, Z, Probability, Object Type)
  • Current world state and AUV state as a live feed through ROS, in the same format as above
  • GUI elements to switch between the file and live-feed modes

Outputs

Data Structure of elements to display on the visualization Engine.

Validation Test

Unity App - Visualization

Visualize current world state and output AUV current sensor state.

Inputs

Current world state and AUV state (Timestamp, X,Y,Z, Probability, Object Type)

Outputs

  • Rotate the AUV based on sensor data
  • Place the AUV at a depth based on pressure sensor data
  • Place obstacles and their locations in the Unity engine

Validation Test

Unity App - Remote Control

Remote-control the AUV through keyboard keys. Ability to stop the AUV's autonomous mode by sending a command to the Main Module, and to restart it.

Inputs

Keyboard inputs

Outputs

ROS commands directed to AUV’s Main Module

Validation Test

Pass commands and see if they are executed

Unity App - GUI

Implement GUI elements that communicate with the C# functions for remote control, for switching between modes (autonomous vs. controlled), and for switching between views.

Inputs

  • Keyboard shortcuts
  • Mouse clicks

Outputs

C# function calls to the Remote Control module

Validation Test

Test use during a wet test.

Computer Vision Object Detection

Implement TensorFlow on the Jetson to detect competition objects. Aim for at least 1 frame per second processed at all times.
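A sketch of the capture-and-detect loop, assuming OpenCV is available on the Jetson for frame grabbing; detect_fn stands in for the TensorFlow model, which is not shown here:

```python
import cv2  # assumes OpenCV is installed on the Jetson

def run_detection(detect_fn, device_index=0):
    """Grab frames from the USB camera and run the detector on each.
    `detect_fn(frame)` is a placeholder expected to return a list of
    (x1, x2, y1, y2, probability) hitboxes."""
    cap = cv2.VideoCapture(device_index)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break  # camera disconnected or stream ended
            for (x1, x2, y1, y2, prob) in detect_fn(frame):
                print("hitbox", (x1, x2, y1, y2), "p =", prob)
    finally:
        cap.release()
```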

Inputs

Camera data over USB: https://www.bluerobotics.com/store/electronics/cam-usb-low-light-r1/

Outputs

Hitbox location of the object, (x1, x2, y1, y2), and probability

Validation Test

Place objects of different and similar shapes at different distances, under varying water clarity and lighting conditions, and see how well the model performs.