🐷 Porky: The Real-Time Object Detecting Robot

The goal of this project is to demonstrate how to create a real-time object detecting autonomous robot with relatively inexpensive components. By training your own machine learning model and pairing Intel's Neural Compute Stick 2 with a Raspberry Pi 3 B+, you'll be able to jump-start your next real-time object detection project!

Demo GIFs: Follow the Piggy, Find the Piggy, and Robot and Porky

Table of Contents

  1. Update History
  2. Project Overview
  3. Reading This Guide
  4. Hardware List
  5. Software List
  6. Hardware Configuration
  7. Train Object Detection Model with TensorFlow
  8. Optimize Model for Intel Neural Compute Stick 2
  9. Integrate the Optimized IR Model
  10. Testing
  11. 🐷 Deploy Porky
  12. Observations
  13. Feedback
  14. References and Acknowledgements

Update History

2019/05/09: Initial Release

Project Overview

This project will guide you on how to:

  • Train your own model in TensorFlow using a transfer learning technique to save time and money
  • Optimize the resulting TensorFlow model so that it can be used with Intel's Inference Engine/Neural Compute Stick
  • Implement the optimized Intermediate Representation model into an OpenCV/Python program
  • Deploy the program with real-time performance and feedback loops

Reading this Guide

The goal of this guide is to provide as many steps as possible in order to create an identical robot project. However, not everyone will have a development environment identical to mine (a Dockerfile will be provided in the future to help alleviate this issue), nor identical hardware components.

As a result, please regard the following tips:

  • Refer to the links provided inline and the References and Acknowledgements for further explanations and examples.
  • When code/terminal examples contain expressions that start with 'Path' and continue in CamelCase, you're expected to replace the expression with your own path.

For example, PathToYourImageDirectory and PathToYourPictureLabel are intended to be changed to reflect your working environment:

pi@raspberrypi:~/Porky/dataset $ python3 image_capture.py -picture_directory=~/PathToYourImageDirectory -picture_label=PathToYourPictureLabel
  • Notice how the first section of the terminal example above provides the user information within the terminal: pi@raspberrypi:~$. This section is provided only as an example; your actual environment will probably differ.

Hardware List

Please take note of the 'Optional Hardware' list; it is provided only if you want to create a robot that is identical to the one this project demonstrates. This does not mean you are restricted to these components. Feel free to swap, subtract, and/or add components. However, for the best initial results (if your intention is to follow this guide), I highly suggest acquiring the components within the 'Required Hardware' section at the very least. This will enable you to train a customized machine learning model and perform real-time object detection with just a Raspberry Pi and the Intel Neural Compute Stick 2. My personal favorite sites for finding robotic components are Adafruit, RobotShop, eBay, and Amazon. The possibilities are endless!

Required Hardware

Optional Hardware

Disclaimer: Feel free to swap out any of these parts.

  • Development PC (Linux, Windows, macOS) Development for this project was performed on a Windows 10 platform, which this guide reflects. While it's recommended to utilize a dev PC (and not to develop directly on the Raspberry Pi) for the sake of speed, it's not necessary. I also recommend utilizing Linux (Ubuntu 16.04) because much of the machine learning documentation out there is geared towards Linux. Future updates for this project will be performed within the Linux environment to save some frustration while trying to follow along with documentation.
  • Display Monitor w/ HDMI Output and cable Helpful for debugging and testing within Raspberry Pi environment.
  • Robot Chassis Kit w/ Motors This project uses the Lynxmotion 4WD1 Rover Kit. You can purchase this kit directly from RobotShop or find a used kit on eBay.
  • Servos x2 w/ Mounting Hardware This project uses the Lynxmotion Pan and Tilt Kit.
  • PWM/Servo Controller This project uses this one from Amazon.
  • Motor Controller This project uses the Sabertooth 2X12 Regenerative Dual Channel Motor Controller which can be found at RobotShop.
  • Li-Po Battery To power the motor controller. This project uses an HRB 3S 11.1V 6000 mAh LiPo battery with an XT60 Plug. Search on Amazon or eBay for deals. These batteries are known to cause fires, so please be aware of the risks and proper handling procedures.
  • Balance Charger/Discharger To charge/discharge your Li-Po Battery safely. I use the SKYRC iMAX B6 Mini. Note: this charger requires the charging unit itself and a wall adapter/power supply, ensure you're purchasing both.
  • Portable Powerbank Be aware that not all portable chargers are compatible with Raspberry Pi projects (battery sleeping features). This project uses this RAVPower Portable Charger.
  • Mounting Arm For holding the Pan and Tilt Kit, this project uses the VideoSecu 1/4" Security Camera Mount and the SMALLRIG Super Clamp w/ 1/4" and 3/8" Thread.
  • iFixit Toolkit I use the iFixit Pro Tech Toolkit (highly recommended if you do a lot of tinkering).
  • Velcro Tape (for modular prototype mounting) I used a 2" Adhesive Black Hook and Loop Tape.
  • Assorted Electrical Components (switches, buttons, wires, breadboards, etc) Check out Adafruit for great deals on electrical components.

Software List

The following list can be determined on your own by following the OpenVINO Toolkit documentation.

Dev PC

Please visit the following link: OpenVINO Windows Toolkit for installing OpenVINO on your platform.

The following bullet points reflect the requirements based on a Windows 10 environment:

  • Python 3.6.5 with Python Libraries, 64-bit
  • Microsoft Visual Studio with C++ 2019, 2017, or 2015 with MSBuild
  • CMake 3.4+
  • OpenCV 3.4+
  • OpenVINO 2019.R1+
  • TensorFlow
PS C:\> pip install tensorflow
  • Glob for Python 3
PS C:\> pip install glob3
  • Pandas
PS C:\> pip install pandas
  • Pillow
PS C:\> pip install Pillow

Raspberry Pi

Please visit the following link: OpenVINO Toolkit for Raspberry Pi for installing OpenVINO on your Raspberry Pi.

The following bullet points reflect the basic requirements to run this project on the Raspberry Pi:

  • Python 3.5+ (included with Raspbian Stretch OS)

  • OpenVINO 2019.R1+

  • Python Libraries (non-standard):

    pi@raspberrypi:~$ pip3 install adafruit-circuitpython-servokit
    pi@raspberrypi:~$ pip3 install pysabertooth

Hardware Configuration

The wiring diagrams contained within this section were created with Fritzing, a great open-source tool.

📷 Image Capturing Setup

To train your own machine learning model, you will need to gather the data to train and validate/test your model on. The idea for this project was to train the model on images captured with the same camera that would eventually be deployed live.

This setup consists of:

Wire diagram of the button setup for the Raspberry Pi:

Note: the USB Camera and Powerbank are missing from the diagram above.

Image of the configured setup:


Please see the Capture Images with the Image Capturing Setup section to capture your own images for your dataset using this hardware configuration.
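For reference, the capture flow is simple: wait for the breadboard button, grab a frame from the USB camera, and write it to disk with an incrementing filename. Below is a minimal sketch of that idea, not the repository's image_capture.py itself; the GPIO pin number and camera index are assumptions you would adjust to your own wiring:

# button_capture_sketch.py -- hypothetical button-triggered capture loop
import cv2
from gpiozero import Button

button = Button(17)            # assumed GPIO pin for the mini-button
camera = cv2.VideoCapture(0)   # first USB camera (the PS3 Eye)

count = 0
while True:
    button.wait_for_press()    # block until the breadboard button is pressed
    ret, frame = camera.read()
    if ret:
        cv2.imwrite('piggy-{}.png'.format(count), frame)
        count += 1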

🚧 Tweak and Test Setup

This hardware configuration serves the purpose of testing your hardware components (motors, servos, etc.) and software integrations (debugging, testing, sandbox). This setup is geared towards using AC wall adapters to save batteries and keeping moving components as stationary as possible. Having a proper testing setup can potentially save lots of frustration and money. It is strongly suggested to test your own project before deploying it into the wild.

This setup consists of:

  • Raspberry Pi 3 B+ w/ MicroSD Card
  • Intel Neural Compute Stick 2 (NCS2)
  • PS3 Eye USB Camera eBay Search: ps3 eye camera Outer case has been removed to save weight and help with mounting.
  • Display Monitor w/ HDMI Output and Cable Helpful for debugging and testing within Raspberry Pi environment.
  • Robot Chassis Kit w/ Motors This project uses the Lynxmotion 4WD1 Rover Kit. You can purchase this kit directly from RobotShop or find a used kit on eBay.
  • Servos x2 w/ Mounting Hardware This project uses the Lynxmotion Pan and Tilt Kit.
  • PWM/Servo Controller This project uses this one from Amazon.
  • Mounting Arm For holding the Pan and Tilt Kit, this project uses the VideoSecu 1/4" Security Camera Mount and the SMALLRIG Super Clamp w/ 1/4" and 3/8" Thread.
  • Motor Controller This project uses the Sabertooth 2X12 Regenerative Dual Channel Motor Controller which can be found at RobotShop.
  • Li-Po Battery To power the motor controller. This project uses a 3S 11.1V 6000 mAh LiPo battery with an XT60 Plug. Search on Amazon or eBay for deals. These batteries are known to cause fires, so please be aware of the risks and proper handling procedures.
  • Wiring Harness w/ Switch To connect the Motor Controller to the Li-Po Battery. This project uses XT60 Plugs. This may come with your rover kit; if it doesn't, you'll need to pick up this or something similar and replace the installed plug with the appropriate plug type for your battery.
  • 5V 2.5A Switching Power Supply w/ MicroUSB Connector Adafruit Link. To power Raspberry Pi directly.
  • 5V 2A Power Supply w/ 2.1mm Jack Adafruit Link. To power PWM/Servo Controller directly.
  • Female DC Power Adapter - 2.1mm Jack Adafruit Link. To connect Power Supply to PWM/Servo Controller.
  • USB Adapters Amazon Link To mount the NCS2 sticks onto the Raspberry Pi. Process used: rotated the adapters into desired position and used hot glue to secure the positioning.

Robot top plate partially off to display the motor controller and Li-Po battery inside:

Robot-top Partially Open

Top-view of the robot in the testing/tweaking setup:

Robot Top View

Robot on top of books in the testing/tweaking setup to restrict base movement:

Robot On Books

Wire connection to PWM/Servo Controller:

PWM Wire Connection

Wire connection from PWM/Servo Controller to Raspberry Pi:

PWM to RPi

Sabertooth DIP switch settings (1, 2, 3 - DOWN and 4, 5, 6 - UP):

DIP Switch Settings

Full Top-View of Porky with the top plate off (shows: Sabertooth motor controller connection to Raspberry Pi):

Porky Full Top-View

Note: wire diagrams will be added in the future.

🚀 Live Deployment Setup

After performing adequate hardware and software tests, you'll be ready to release your autonomous robot without its leash. This section will show you how to configure your robot to be deployed live.

This setup consists of:

  • See Tweak and Test Setup for bulk of components (minus the wall power supplies and adapters).
  • Portable Powerbank Be aware that not all portable chargers are compatible with Raspberry Pi projects. This project uses this RAVPower Portable Charger.
  • 4 x AA Battery Holder /w On/Off Switch Adafruit Link. To power the PWM/Servo Controller.
  • 4 x AA Batteries
  • USB C to MicroUSB Cable Amazon Link To connect the Raspberry Pi to the Powerbank.

This is essentially the Tweak and Test Setup, but without the wall supplies/adapters and with the additional portable power delivery devices:

Porky without Wires

Note: wire diagrams will be added in the future.

Train Object Detection Model with TensorFlow

The goal of this section is to use TensorFlow to train your custom model using transfer learning. While creating your own machine learning model from scratch can be extremely rewarding, that process typically involves much more configuration, troubleshooting, and training/validating time... which can be a costly process (1.5 hours with my training pipeline on Google Cloud Platform cost ~$11 USD). However, with transfer learning, you can minimize all three fronts by choosing an already proven model to customize with your own dataset.

The following guides were used as reference for the machine learning sections:

Please read the above links to fill in missing gaps while this guide is updated and to get more examples of how you can use TensorFlow's Object Detection API.

Create the Dataset

First, you'll want to create your own dataset. You can do this by utilizing popular public datasets or by creating your own. I chose to create my own dataset for this project in an attempt to create a more unique classification. This process basically follows two steps: gather your data into a collection (with proper filenames to help with organization, e.g. piggy-1.png, piggy-2.png, etc.) and label/annotate your data (label the regions of interest, i.e. draw a rectangle around the object you're classifying in the image and label it appropriately).

Capture Images with the Image Capturing Setup

This step isn't absolutely necessary to follow verbatim; you can also use images from a public dataset like ImageNet. Configure the hardware as described within the Image Capturing Setup and find the image_capture.py script within the dataset folder.

  1. Navigate to the dataset directory:
pi@raspberrypi:~$ cd ./Porky/dataset
  2. Run the image capturing Python script:
pi@raspberrypi:~/Porky/dataset $ python3 image_capture.py -picture_directory=~/PathToYourImageDirectory -picture_label=PathToYourPictureLabel
  3. Capture images by pointing the camera at a subject and pressing the mini-button (which is connected to the breadboard) to take the picture. The pictures will be saved within the directory that was specified, and the image label will automatically increment based on the number of images already contained within the folder.

  4. After you're satisfied with the amount of images you've taken, create two folders, /train and /test, within your image directory and place about 80% of your total images within the /train directory and the remaining images within the /test directory. Click this StackOverflow link to find out more about the 80/20 split (a small helper script for the split is sketched below).
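If you'd rather not move the files by hand, a short helper can do the split. This is a minimal sketch under the assumption that your captured images all sit in a single directory as PNG files; it is not part of the repository:

# split_dataset_sketch.py -- hypothetical helper for the 80/20 train/test split
import os
import random
import shutil

image_dir = 'PathToYourImageDirectory'   # replace with your own path
images = [f for f in os.listdir(image_dir) if f.endswith('.png')]
random.shuffle(images)

split_index = int(len(images) * 0.8)     # 80% train, 20% test
for subdir, subset in (('train', images[:split_index]), ('test', images[split_index:])):
    os.makedirs(os.path.join(image_dir, subdir), exist_ok=True)
    for name in subset:
        shutil.move(os.path.join(image_dir, name), os.path.join(image_dir, subdir, name))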

Label the Captured Images with LabelIMG

This process consists of labelling/annotating your images in a format readable by TensorFlow (this project utilizes the Pascal VOC format).

  1. Install and launch LabelIMG. GitHub Link
  2. Click 'Change default saved annotation folder' in Menu -> File and choose the directory you want your 'train' annotations to be saved in.
  3. Click 'Open Dir' and choose the directory that contains your 'train' images.
  4. Click on an image to annotate.
  5. Click 'Create RectBox'
  6. Click and drag a rectangular box over the portion of the image you want to classify and release the mouse button when you've outlined the region of interest.
  7. A pop-up window will display that will prompt you to input a label for the region of interest that you outlined. Input the label and press the 'Ok' button or hit the 'Enter' key on your keyboard.
  8. Repeat steps 4 through 7 until you've labelled all of the images within the directory.
  9. Repeat steps 2 through 8 for the 'test' portion of your dataset.

Install TensorFlow

Once you've gathered and labelled your dataset, install TensorFlow onto your dev PC (if you haven't already).

From PowerShell (if developing from a Windows PC) install TensorFlow to your Python Environment (virtual preferred - will be updated in the future):

PS C:\> pip install tensorflow

Now install the TensorFlow models repository to your Dev PC:

PS C:\> cd PathToPreferredDirectory
PS C:\> git clone https://github.com/tensorflow/models.git

The TensorFlow models repository will contain useful configuration scripts to configure your machine learning pipeline.

Convert the Annotations to CSV

See Dat Tran's repository for the xml_to_csv.py script utilized for this step. The modified version of this script is contained within the dataset directory of this project's repository. See Gilbert Tanner's article on how those modifications came to be.

After the modifications are made, use the following command:

PS C:\Porky\dataset> py xml_to_csv.py

This will create two CSV files: train_labels.csv and test_labels.csv.
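For context, the core of that conversion is just walking the Pascal VOC XML files and flattening each bounding box into a CSV row. A minimal sketch of the idea (simplified, not the exact script from the repository; the annotation path is a placeholder):

# xml_to_csv_sketch.py -- simplified illustration of the XML-to-CSV conversion
import glob
import xml.etree.ElementTree as ET
import pandas as pd

rows = []
for xml_file in glob.glob('PathToYourAnnotations/*.xml'):   # replace with your own path
    root = ET.parse(xml_file).getroot()
    size = root.find('size')
    for member in root.findall('object'):
        box = member.find('bndbox')
        rows.append((root.find('filename').text,
                     int(size.find('width').text), int(size.find('height').text),
                     member.find('name').text,
                     int(box.find('xmin').text), int(box.find('ymin').text),
                     int(box.find('xmax').text), int(box.find('ymax').text)))

columns = ['filename', 'width', 'height', 'class', 'xmin', 'ymin', 'xmax', 'ymax']
pd.DataFrame(rows, columns=columns).to_csv('train_labels.csv', index=False)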

Create TFRecords from the Images and Annotations

This step requires two things: your captured images need to be separated into two directories (train and test), and you'll need two corresponding CSV files that contain your labels/annotations in Pascal VOC format.

First, modify the script generate_tfrecord.py (from Dat Tran's repository) to fit your labels; the usual change is the class_text_to_int function (a minimal sketch is shown below). See the modified version of this script contained within the dataset directory of this project's repository. After the script has been modified, run the commands that follow the sketch.
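As an illustration, the label mapping typically looks like this, assuming a single 'piggy' class; substitute your own labels and IDs:

# Sketch of the class_text_to_int function inside generate_tfrecord.py
def class_text_to_int(row_label):
    # Map each annotation label to a unique integer ID, starting at 1 (0 is reserved).
    if row_label == 'piggy':
        return 1
    return None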

To convert your 'train' images and labels to TFRecord format:

PS C:\Porky\dataset> py generate_tfrecord.py --csv_input=PathToLabelsCSVFile\train_labels.csv --image_dir=PathToImageDirectory\train --output_path=train.record

To convert your 'test' images and labels to TFRecord format:

PS C:\Porky\dataset> py generate_tfrecord.py --csv_input=PathToLabelsCSVFiles\test_labels.csv --image_dir=PathToImageDirectory\test --output_path=test.record

Pick a Supported Object Detection Model

To save some cost and time, you can pick out an already trained machine learning model to use for your customized dataset. The following two bullet points will help you in the process of choosing an appropriate model:

This project uses the ssd_mobilenet_v2_coco model. It's not listed as an officially supported Myriad model (which I learned after the fact), but I was lucky in that it actually worked with the Myriad plugin.

Deploy the TensorFlow Training Session

If you have access to a capable GPU, I suggest performing Machine Learning locally. However, if you're like me and don't have immediate access to a capable GPU, you can use a cloud compute service to perform your Machine Learning for you. For this project, I used the Google Cloud Platform to perform the TensorFlow training.

Using the Google Cloud Platform for Machine Learning

Please follow the following link to guide you through this process. Be aware that there are frustrations dealing with deprecated functions and bash commands within the Windows platform. In a future update, I will thoroughly detail the process I used via Windows 10 in this guide. In the meantime, feel free to provide some feedback on any issues you encounter and I will attempt to help you as best as I can.

Another useful guide from TensorFlow: Quick Start: Distributed Training on the Oxford-IIIT Pets Dataset on Google Cloud

Extract the Latest Checkpoint

Once you're satisfied with the accuracy of your machine learning session, you can kill the TensorFlow process and extract the latest checkpoint for your trained model. If you used the Google Cloud Platform, the checkpoint files will be contained within your storage bucket.

A checkpoint will typically consist of three files:

  • model.ckpt-${CHECKPOINT_NUMBER}.data-00000-of-00001
  • model.ckpt-${CHECKPOINT_NUMBER}.index
  • model.ckpt-${CHECKPOINT_NUMBER}.meta
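If you trained on the Google Cloud Platform, the checkpoint files can be pulled down with gsutil (this assumes the Google Cloud SDK is installed on your dev PC; replace the bucket path and checkpoint number with your own):

PS C:\> gsutil cp gs://PathToYourBucket/train/model.ckpt-CHECKPOINTNUMBER.* .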

Optimize Model for Intel Neural Compute Stick 2

After training the machine learning model with TensorFlow, you're now ready to prepare the model and convert it to an Intermediate Representation (IR). This will allow the model to be utilized with the MYRIAD Plugin (Intel NCS2) and therefore be deployed live with a combination of a Raspberry Pi 3 B+ and an Intel Neural Compute Stick.

Please read the following guides as a precursor to the next steps:

Export TensorFlow Model Checkpoint into a Frozen Inference Graph

  1. Copy the latest checkpoint to the cloned TensorFlow models\research directory
  2. Execute the following command within PowerShell:
PS C:\models\research> py .\object_detection\export_inference_graph.py `
>> --input_type image_tensor `
>> --pipeline_config_path C:\PathToYourPipelineConfigFile `
>> --trained_checkpoint_prefix model.ckpt-PREFIXNUMBER `
>> --output_directory PathToOutputDirectory

This command will output multiple files to your specified output directory; we will be using the frozen_inference_graph.pb file for the next step.

Install OpenVINO on Dev PC

If you haven't done so already, install OpenVINO on your Dev PC.

Convert the Frozen TensorFlow Graph to Optimized IR

Navigate to the installed Intel OpenVINO directory and execute the following command within PowerShell:

PS C:\Intel\computer_vision_sdk_2018.5.456\deployment_tools\model_optimizer> py .\mo_tf.py `
>> --input_model C:\PathToYourFrozenTFModel\frozen_inference_graph.pb `
>> --tensorflow_use_custom_operations_config C:\Intel\computer_vision_sdk_2018.5.456\deployment_tools\model_optimizer\extensions\front\tf\ssd_v2_support.json `
>> --tensorflow_object_detection_api_pipeline_config C:\PathToYourPipelineConfigFile `
>> --data_type FP16

Take note of the flag --data_type FP16: the Myriad VPU (Neural Compute Stick 2) currently only supports 16-bit precision. If the flag is left out, the converted model will not work with your compute stick(s).

Executing this script will output 3 files into the directory you ran the command from:

  • frozen_inference_graph.bin (model weights)
  • frozen_inference_graph.mapping
  • frozen_inference_graph.xml (model configuration)

We're primarily looking for the model weights (.bin) and config (.xml) files for deployment.

Integrate the Optimized IR Model

Now that you have your IR model, you can deploy it in a script using OpenCV and/or the OpenVINO SDK.
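As a preview of what that looks like, here is a minimal sketch of loading the IR model through OpenCV's DNN module and targeting the NCS2; the project's actual detection pipeline lives in the src directory and is more involved:

# ir_inference_sketch.py -- minimal illustration of running the IR model on the NCS2 via OpenCV
import cv2

# Load the optimized IR model (weights + configuration from the Model Optimizer).
net = cv2.dnn.readNet('frozen_inference_graph.bin', 'frozen_inference_graph.xml')
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)     # run inference on the Neural Compute Stick 2

frame = cv2.imread('piggy-1.png')                      # any test image
blob = cv2.dnn.blobFromImage(frame, size=(300, 300))   # SSD MobileNet expects 300x300 input
net.setInput(blob)
detections = net.forward()

# Each detection row is [image_id, class_id, confidence, xmin, ymin, xmax, ymax] (normalized coordinates).
for detection in detections.reshape(-1, 7):
    if detection[2] > 0.5:
        print('class {} detected with confidence {:.2f}'.format(int(detection[1]), detection[2]))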

Install Raspbian on Raspberry Pi

If you're unfamiliar with the Raspberry Pi platform, follow this official guide to set your Pi up. Be sure to download Raspbian for your OS.

Install OpenVINO on Raspberry Pi

The next step is to install OpenVINO on your Raspberry Pi, please follow this guide to do so.

You should see the following message within your Raspberry Pi terminal once you've completed the install:

[setupvars.sh] OpenVINO environment initialized

setupvars.sh initialized

Clone this Repository to the Raspberry Pi

Connect to your Raspberry Pi (via SSH, RealVNC, or locally) and navigate to your preferred directory to store projects in. Then perform a git clone within the terminal:

pi@raspberrypi:~$ git clone https://github.com/keith-E/Porky.git

Replace IR Model within Cloned Repository

If you've trained your own machine learning model, replace the frozen_inference_graph.bin and frozen_inference_graph.xml files within the src directory with your own. If you didn't train your own model, you can utilize the provided model... but be aware that the provided model was trained on a stuffed piggy 🐖 and may not give you the best results.

Testing

During the lifecycle of your robot project, it's a good idea to develop and maintain some sort of testing strategy. This section demonstrates how to use the provided testing scripts and their purpose.

Hardware Specific Tests

Test the Camera

To test if the camera is powering on correctly:

  1. Ensure USB cable is properly connected to the Raspberry Pi.
  2. Provide power to the Raspberry Pi.
  3. Wait a couple seconds.
  4. The blue LED on the right of the camera (facing the lens/microphone array) should light up.
  5. Type the following command within your Raspberry Pi terminal:
pi@raspberrypi:~$ lsusb
  6. You should see a USB device listing similar to the following if you're using a PS3 Eye Camera:
Bus 001 Device 006: ID 1415:2000 Nam Tai E&E Products Ltd. or OmniVision Technologies, Inc. Sony Playstation Eye

To test if the camera is providing good feedback:

  1. Ensure the USB camera is connected.
  2. Navigate to the Porky/tests/ directory.
  3. Run the following script:
pi@raspberrypi:~/Porky/tests $ python3 camera_test.py
  4. A window should eventually pop up if you're accessing the Raspberry Pi's display.
  5. Press the 'q' key on your keyboard to quit the script.
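At its core, such a camera test is just a capture-and-display loop; a minimal sketch of the idea (not necessarily identical to the repository's camera_test.py):

# camera_test_sketch.py -- minimal capture-and-display loop
import cv2

camera = cv2.VideoCapture(0)                 # first USB camera
while True:
    ret, frame = camera.read()
    if not ret:
        break
    cv2.imshow('camera test', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):    # press 'q' to quit
        break

camera.release()
cv2.destroyAllWindows()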
Test the Servos

To test the servos:

  1. Ensure the PWM/Servo Controller is connected to the Raspberry Pi properly and that external power (battery or wall) is being delivered to the controller.
  2. Navigate to the Porky/tests/ directory.
  3. Run the following script:
pi@raspberrypi:~/Porky/tests $ python3 pan_and_tilt_test.py
  4. The terminal will display the test status.
  5. While the test is running, observe the servos and ensure they are moving to the appropriate positions.
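For reference, driving the pan and tilt servos through the PWM controller with the adafruit-circuitpython-servokit library looks roughly like the sketch below; the channel numbers are assumptions, so match them to your own wiring:

# pan_tilt_sketch.py -- minimal servo sweep via the ServoKit library
import time
from adafruit_servokit import ServoKit

kit = ServoKit(channels=16)         # 16-channel PWM/Servo controller
PAN_CHANNEL, TILT_CHANNEL = 0, 1    # assumed channels; match your own wiring

for angle in (45, 90, 135, 90):     # sweep both servos through a few positions
    kit.servo[PAN_CHANNEL].angle = angle
    kit.servo[TILT_CHANNEL].angle = angle
    time.sleep(1)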
Test the Motors

To test the motors:

  1. Ensure all of the connections are properly wired.
  2. Ensure the power switch is turned in the 'on' position and the Sabertooth motor-controller LEDs are lit up.
  3. Navigate to the Porky/tests/ directory.
  4. Run the following script:
pi@raspberrypi:~/Porky/tests $ python3 motor_test.py
  5. The terminal will display the test status.
  6. While the test is running, observe the motors and ensure they are moving in the correct directions.
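For reference, commanding the motors through the Sabertooth with the pysabertooth library looks roughly like the sketch below; the serial port and packet address are assumptions, so match them to your own wiring and DIP switch settings:

# motor_test_sketch.py -- minimal forward/stop test via pysabertooth
import time
from pysabertooth import Sabertooth

saber = Sabertooth('/dev/serial0', baudrate=9600, address=128)   # assumed port and address

saber.drive(1, 30)    # motor 1 forward at 30% power
saber.drive(2, 30)    # motor 2 forward at 30% power
time.sleep(2)
saber.stop()          # stop both motors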

Unit Tests

Test the ML Model
Test the Camera Process
Test the Detection Process

Integration Tests

Test Detection with Pan and Tilt
Test Detection with Pan and Follow

🐷 Deploy Porky

Configure your robot (see: Live Deployment Setup) and ensure the following if you've built something similar:

  • All bolted connections are properly tightened.
  • The Raspberry Pi is powered on via the portable powerbank.
  • The PWM/Servo Controller is powered on via the 4xAA Battery Holder.
  • The motor controller switch is turned on and the controller is powered via the Li-Po battery.

If you've built an identical robot, issue the following command via a terminal:

pi@raspberrypi:~$ cd ~/Porky/src/
pi@raspberrypi:~/Porky/src $ python3 run.py

If your robot does not utilize a Pan and Tilt Kit and/or Motors, you can run the program without those processes:

pi@raspberrypi:~/Porky/src $ python3 run.py --pantiltstate 0 --motorstate 0
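For context, here is a simplified sketch of how such flags can gate the optional processes; this is only an illustration, see run.py for the actual implementation:

# Simplified illustration of gating optional processes with command-line flags
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--pantiltstate', type=int, default=1, help='1 enables the pan/tilt process, 0 disables it')
parser.add_argument('--motorstate', type=int, default=1, help='1 enables the motor process, 0 disables it')
args = parser.parse_args()

if args.pantiltstate:
    print('starting pan/tilt process...')   # placeholder for launching the real process
if args.motorstate:
    print('starting motor process...')      # placeholder for launching the real process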

Porky Deployed

Observations

  • Development
    • The machine learning portion was by far the most complex part of the project.
    • Frustration was mostly observed while attempting to configure and train a model on TensorFlow and the Google Cloud Platform. This is most likely due to the guides I followed being outdated and using Linux as their platform.
  • Performance
    • The object detection for the first iteration of the piggy model is not as accurate or fast as I would prefer. This is most likely due to the lack of diversity within the dataset. I'll need to capture a wider variety of lighting conditions and overall environments in the future.
    • The piggy detection model works best where more natural light is present.
    • The object detection in the current iteration is very jumpy, which causes the motors (and therefore the whole bot) to become very jittery.

Feedback

I tried my best to detail all of the processes I used to get this project off the ground, but I may have missed some key steps along the way or you may have experienced some frustrations trying to follow along. With that being said, please don't hesitate to drop me any comments, questions or concerns. I promise to do my best to address your issues.

References and Acknowledgements

Professor Becker and CS390 For guiding and permitting this class project.

leswright1977/Rpi3_NCS2: leswright1977's bottle-chasing robot introduced me to the Intel NCS2 and its ability to integrate machine learning models for real-time applications.

PINTO0309: PINTO0309's MobileNet-SSD-RealSense project introduced me to using multiprocessing with OpenCV and Intel's CNN backend in order to achieve faster results.

Fritzing - An Open Source Diagram Design Tool

TensorFlow Object Detector API Readme

How to train your own Object Detector with TensorFlow’s Object Detector API: This article provided a great, easy-to-follow reference for creating the dataset(s) for this project. The xml_to_csv.py and generate_tfrecord.py scripts were also utilized from the author's GitHub repository.

Creating your own object detector: This article provided a good reference and filled some blanks for preparing a dataset to be utilized for TensorFlow training. With a combination of the official TensorFlow documentation (Object Detector API Readme), Dat Tran's article (provided above), and this article, I was able to successfully train a customized machine learning model with TensorFlow and transfer learning.

Running TensorFlow on the Cloud

OpenCV Docs: The official documentation for OpenCV. Necessary for gaining a strong foundation of using OpenCV to build your application.

Adafruit Pixy Pet Robot: Adafruit's guide on creating a color-vision-following robot using a Pixy CMUCam-5 vision system and Zumo robot platform. This guide was very helpful for learning how to integrate a PID (Proportional-Integral-Derivative) control feedback loop for the motion mechanisms.

PyImageSearch: Adrian provides great tutorials that are in-depth and easy to follow. This website is a great resource to learn about applying computer vision to your next project.
