This repository has been archived by the owner on Mar 5, 2021. It is now read-only.

Alignment Function / Vision Assisted Driving #6

Open
chrisblammo123 opened this issue Feb 1, 2019 · 3 comments

Labels: Feature Request, help wanted, major, Vision Processing

Comments

@chrisblammo123
Contributor

Is your feature request related to a problem? Please describe.
Drivers might have a hard time lining up the robot so that we can place hatches.

Describe the solution you'd like
A function that would use vision processing to align with the reflective tape above the target.

Describe alternatives you've considered
We could just use a camera to let the drivers see, but that still relies on them being good at driving and the camera functioning well.

Additional context
Example of how we could use the limelight:
[screenshot from the Limelight docs]
(http://docs.limelightvision.io/en/latest/software_change_log.html)

chrisblammo123 added the help wanted, major, Feature Request, and Vision Processing labels on Feb 1, 2019
@zthorson

zthorson commented Feb 1, 2019

Even with the limelight helping, we are going to need to do this in a number of stages:

Step 1 Calibration
Calibrate the camera (if the limelight does not already do this), either with lens calibration targets or with a very simple object of a known size and some trig. This will let us determine the angle of an object from the camera.
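
To make the trig concrete, here is a rough sketch of the pinhole-camera math (the class and the numbers are placeholders; the real focal length would come from measuring a known-size object at a known distance):

```java
/**
 * Rough pinhole-camera math for Step 1. Everything here is a placeholder --
 * the actual focal length comes from measuring a known-size object
 * (e.g. a sheet of paper) at a known distance from the camera.
 */
public class CameraCalibration {
    /** Estimated focal length in pixels: f = (pixelWidth * distance) / realWidth. */
    public static double focalLengthPixels(double pixelWidth, double distanceInches, double realWidthInches) {
        return (pixelWidth * distanceInches) / realWidthInches;
    }

    /** Horizontal angle (radians) from the camera axis to a given pixel column. */
    public static double angleToPixel(double pixelX, double imageCenterX, double focalLengthPixels) {
        return Math.atan2(pixelX - imageCenterX, focalLengthPixels);
    }
}
```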

Step 2 Target Acquisition
Figure out the parameters needed to reliably recognize a vision target. For retroreflective targets we will likely be using the ring light and a very underexposed image, similar to what this clip shows and what we have been playing with on the Axis camera.
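
If we do go with the limelight, its docs describe publishing the target values (tv/tx/ty) over NetworkTables, so reading them from robot code might look roughly like this (a sketch based on the limelight docs, not tested code):

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

/** Thin wrapper around the limelight's NetworkTables entries (tv/tx/ty per its docs). */
public class LimelightReader {
    private final NetworkTable table = NetworkTableInstance.getDefault().getTable("limelight");

    /** True when the current pipeline sees a valid target. */
    public boolean hasTarget() {
        return table.getEntry("tv").getDouble(0.0) >= 1.0;
    }

    /** Horizontal offset from crosshair to target, in degrees. */
    public double targetXDegrees() {
        return table.getEntry("tx").getDouble(0.0);
    }

    /** Vertical offset from crosshair to target, in degrees. */
    public double targetYDegrees() {
        return table.getEntry("ty").getDouble(0.0);
    }
}
```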

Step 3 Path Planning
Now that we can find the target, we have to figure out how to align to it to place the hatch. There are a number of approaches we could try; I'll list a few from easiest to hardest.

  1. Side-to-side and forward control ONLY (see the sketch after this list)
  • The driver drives up and aligns the robot to be somewhat flush to the rocket
  • The vision system acquires the target, then strafes side to side until the target is centered in the camera
  • The robot then drives forward, adjusting the centering as needed until the hatch is placed
  2. Side-to-side and angle correction
  • The driver drives up until the vision target is visible
  • Using distance sensors (easy) or vision (hard), we find the angle of the wall relative to the robot
  • The trickier path planning is then to straighten the robot relative to the wall while keeping the vision target in sight
  • Once we are flush to the wall, center the target in the camera
  • Drive forward and release
  3. Full path planning
  • Lots of options here. In these cases we need to know the 3D position of the robot as well as that of the target. A gyro would also help determine robot yaw more accurately. This is likely more complex than we need, since the mecanum drive allows more freedom of motion than a standard tank drive.

In any case, we will likely want a LIDAR or ultrasonic sensor to judge our distance to the wall. While we can use the vision system to do this when we are further away, we will likely lose sight of the target as we get close.
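
For the distance check, something like WPILib's Ultrasonic class would probably be enough; a rough sketch, assuming the sensor is wired to DIO 0/1 (placeholder channels and threshold):

```java
import edu.wpi.first.wpilibj.Ultrasonic;

/** Rough distance-to-wall check; DIO channels 0/1 and the threshold are placeholders. */
public class WallDistance {
    private final Ultrasonic rangefinder = new Ultrasonic(0, 1); // ping, echo

    /** True once we are within hand-off distance of the wall. */
    public boolean closeToWall() {
        // Automatic (round-robin) pinging may need to be enabled once at startup,
        // depending on the WPILib version.
        return rangefinder.getRangeInches() < 12.0;
    }
}
```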

@Ethan-Bierlein
Contributor

We will obviously need a separate class for vision processing; that much is clear. However, when we do implement auto-alignment, where do we want to do that? Implementing it in the BjorgMecanumDrive class as a small set of separate functions seems most logical to me, but I'd be open to hearing different opinions.

@zthorson

zthorson commented Feb 4, 2019

While there really isn't a single best way to do it, in general we are going to want to split behaviors like this into their own classes. Mixing autonomous driving and auto-alignment into the same class could get confusing.

So, if we create a new component class called HatchPlacement (or whatever makes sense to you), we can put all of our control code there, then execute it when the proper button is pressed. One way to do it would be to use a command-based structure. This would let you use the same 'Place Hatch' command in both autonomous and manual control. Some details here:
https://wpilib.screenstepslive.com/s/currentCS/m/java/l/599741-converting-a-simple-autonomous-program-to-a-command-based-autonomous-program
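
A bare-bones sketch of what that command's skeleton might look like with the 2019 command framework (the subsystem hookup and stopping condition are placeholders for whatever we actually build):

```java
import edu.wpi.first.wpilibj.command.Command;

/** Skeleton of a command-based HatchPlacement command; names are placeholders. */
public class HatchPlacement extends Command {
    public HatchPlacement() {
        // requires(Robot.driveSubsystem);  // declare which subsystem this command controls
    }

    @Override
    protected void execute() {
        // Read the vision target and run the alignment loop here,
        // e.g. the strafe/forward commands from the earlier sketch.
    }

    @Override
    protected boolean isFinished() {
        // Done once the hatch is placed (distance sensor, limit switch, or timeout).
        return false;
    }

    @Override
    protected void end() {
        // Stop the drivetrain.
    }
}
```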

There are a number of ways to do things, but this guide has some good approaches and will also give a rundown on PID control with a gyro:
https://frc-pdr.readthedocs.io/en/latest/GoodPractices/structure.html
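
For the gyro piece, even a plain proportional correction gets you most of the way; something like this (gain and clamp are placeholders to tune):

```java
/** Minimal proportional heading correction from a gyro reading. */
public class HeadingHold {
    private static final double KP = 0.02; // placeholder gain, tuned on the robot

    /** Rotation command in [-1, 1] that turns the robot back toward the target heading. */
    public static double rotationCommand(double currentHeadingDeg, double targetHeadingDeg) {
        double error = targetHeadingDeg - currentHeadingDeg;
        double output = KP * error;
        return Math.max(-0.5, Math.min(0.5, output)); // clamp so we never spin at full power
    }
}
```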

If you have questions, are confused by something, or want to bounce other ideas off, feel free to contact me.
