Title
Application of Convolutional Neural Network Image Classification for a Path-Following Robot
Contributing USMA Research Unit(s)
Robotics Research Center, Electrical Engineering and Computer Science
Publication Date
Fall 10-5-2018
Publication Title
IEEE MIT Undergraduate Research Technology Conference (URTC)
Document Type
Conference Proceeding
Abstract
In discrete control systems, a variety of sensors are used to ensure that a robot completes the desired task as robustly as possible. Many tasks that could be readily completed by a human operator typically require a complex combination of sensors and code for a robot to complete autonomously. The purpose of this research was to develop a system in which a robot learned to follow a line the same way that a human would. This was accomplished by training a classification model using recorded images from tele-operation trials by a human operator. The training images were obtained from a forward-facing camera, and each image was automatically labeled as one of three steering classes (i.e., straight, left turn, or right turn) based on the button pressed on the joystick at the time the image was recorded. The convolutional neural network was built and trained using the Keras Python library with TensorFlow as its backend. The results showed that a model trained to 82% accuracy was able to navigate the course in all 40 trials that were conducted. The ground truth of each trial was recorded via a motion capture system, and a comparison of the human operator trials and the network trials showed little difference between the paths taken. The results also showed that the numerical accuracy of the network was not the best indicator of how well it would imitate human operation.
Recommended Citation
Born, W.K., Lowrance, C.J., “Application of Convolutional Neural Network Image Classification for a Path-Following Robot”, IEEE MIT Undergraduate Research Technology Conference (URTC), Oct. 5-7, Cambridge, MA, 2018.