Obstacle Avoidance Robot

The goal of this project is to implement obstacle avoidance using machine learning and to use wheel encoders to keep track of the distance the robot has traveled. The robot uses a Jetson Nano to process images from the camera, which a trained model classifies as either a clear path or a blocked path. An Arduino Uno reads the wheel encoders and transmits the counts to the Jetson Nano, which converts them into the distance the wheels have traveled and checks whether the robot has reached the goal distance.

Video Overview


Poster Presentation


The robot uses the Jetson Nano to collect images of free and blocked paths, which are used to train an AlexNet model so the robot can execute an avoidance maneuver.

The image shows the GUI used to collect images and manually label each one as a free path or a blocked path; these labels are used to train the model.

After collecting the images, we use a pretrained AlexNet model. It was originally trained on 1000 class labels, but for this project the final layer is replaced so it has only two (free and blocked). After training the model, we wrote a small program that displays what the camera sees and shows whether the model considers the image free or blocked.

The image shows the GUI that displays the camera feed; a slider on the right side of the image indicates the value the model associates with the image, i.e., whether it is blocked or free. The scale runs from 0 to 1, and as the number approaches 1 the image is marked as blocked.
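The 0-to-1 value comes from turning the model's two raw outputs into a probability. A minimal sketch, assuming the two-class logits are ordered (free, blocked) and using a numerically stable softmax:

```python
import math

def blocked_probability(logits):
    """Convert the model's two raw outputs (free, blocked) into a
    0-1 blocked probability via a numerically stable softmax."""
    free, blocked = logits
    m = max(free, blocked)          # subtract max to avoid overflow
    e_free = math.exp(free - m)
    e_blocked = math.exp(blocked - m)
    return e_blocked / (e_free + e_blocked)
```

Equal logits give 0.5, and the value approaches 1 as the "blocked" output dominates, matching the slider behavior described above.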

We then use this value to set a threshold and command the robot to maneuver away once the value crosses it.
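The threshold check itself is a one-liner. A sketch using the 0.75 threshold from the flow chart, with hypothetical command names standing in for the robot's actual motor routines:

```python
BLOCKED_THRESHOLD = 0.75  # threshold value from the flow chart

def choose_action(blocked_prob):
    """Map the model's blocked probability to a motor command."""
    if blocked_prob >= BLOCKED_THRESHOLD:
        return "turn"      # path blocked: maneuver away from the obstacle
    return "forward"       # path free: keep driving toward the goal
```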


The robot uses wheel encoders to detect the distance it travels. The encoders keep track of how much each wheel turns; this information is captured by the Arduino and sent to the Jetson Nano. The Jetson Nano then compares the resulting distance with the goal distance. Once the wheels have crossed the goal distance, the motors stop.
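Converting the Arduino's raw encoder counts into a distance only needs the wheel circumference. A minimal sketch; the tick count per revolution and the wheel diameter below are hypothetical values for illustration, not the robot's actual specifications:

```python
import math

# Illustrative values; the real numbers depend on the encoder
# resolution and wheel size used on the robot.
TICKS_PER_REV = 360        # encoder ticks per full wheel revolution
WHEEL_DIAMETER_CM = 6.5    # wheel diameter in centimeters

def ticks_to_cm(ticks):
    """Convert raw encoder ticks (as reported by the Arduino)
    into distance traveled in centimeters."""
    revolutions = ticks / TICKS_PER_REV
    return revolutions * math.pi * WHEEL_DIAMETER_CM

def goal_reached(ticks, goal_cm):
    """True once the wheels have covered the goal distance."""
    return ticks_to_cm(ticks) >= goal_cm
```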

Flow Chart

Beginning: The user is asked to enter the goal distance the robot should travel.

Encoder: Every time the wheels move, the Jetson Nano takes the encoder values from the Arduino and converts them to centimeters. If this distance is equal to or greater than the goal distance, the wheels have traveled the distance the user entered and the motors should stop.

Process Image: An image is received from the camera, processed, and assigned a blocked-probability value to assess whether an object is present in the image.

Free Path: If the probability value is less than 0.75, an object may or may not be present, but it is not close, so the robot should continue moving forward.
Blocked Path: If the probability value is greater than 0.75, an object is present and close to the robot, so the robot should maneuver away.
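The flow chart above can be sketched as a single control loop. The reader callbacks and motor commands below are hypothetical stand-ins for the robot's actual I/O routines, injected as parameters so the loop stays self-contained:

```python
def run_robot(goal_cm, read_distance_cm, read_blocked_prob,
              drive_forward, turn, stop_motors):
    """One pass per iteration, mirroring the flow chart:
    check the traveled distance first, then the camera."""
    while True:
        # Encoder step: stop once the goal distance is covered.
        if read_distance_cm() >= goal_cm:
            stop_motors()
            return
        # Image step: avoid obstacles above the 0.75 threshold.
        if read_blocked_prob() >= 0.75:
            turn()           # blocked path
        else:
            drive_forward()  # free path
```

Passing the sensor readers in as functions also makes the loop easy to exercise with canned values before running it on the robot.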


Results of the Project

There is still much work needed to improve the project. The object-detection feature works well under good lighting conditions, but when the robot was tested in a less brightly lit environment, the camera captured more shadows and the robot treated the shadows as objects. This is due to the dataset I collected for training the model; a fix would be to expand the dataset to include images with and without objects under different lighting conditions. The robot also experiences some drift because different voltages reach the motors. The encoders are currently used only to keep track of the distance the wheels have traveled, but in the future the encoder data could be used to keep the motors rotating at the same speed and reduce the drift.
