Throughout the semester, we have been enthralled by the idea of using a 2D lidar to detect obstacles in front of robotic platforms more efficiently. It is frequently impossible or unsafe for humans to enter disaster zones, so autonomous robotic navigation in unknown environments is essential for safe and efficient search and rescue after a disaster. This is where the Machine Learning Obstacle Detection system comes in.

What it does

A Hokuyo UTM-30LX-EW 2-dimensional lidar scans a line across the ground ahead of the robot. Rocks and other objects in the way of the ground interrupt that line and show up at different points in the data. Each scan, a set of X, Y, and intensity points, is routed over UDP to a Python script running on a small single-board computer. The script converts the data into a machine-readable format and passes it to a convolutional neural network, which recognizes the obstacle patterns in the data and accurately displays the location of each obstacle onscreen.
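The receiving side of that pipeline could look like the sketch below. The exact wire format is an assumption here (three little-endian floats per point); the real packet layout depends on how the forwarding script packs the lidar data.

```python
import socket
import struct

# Hypothetical wire format: each point packed as three little-endian
# 32-bit floats (x, y, intensity). The real layout is an assumption.
POINT_FORMAT = "<fff"
POINT_SIZE = struct.calcsize(POINT_FORMAT)  # 12 bytes per point

def parse_scan(datagram: bytes):
    """Unpack one UDP datagram into a list of (x, y, intensity) tuples."""
    points = []
    for offset in range(0, len(datagram) - POINT_SIZE + 1, POINT_SIZE):
        points.append(struct.unpack_from(POINT_FORMAT, datagram, offset))
    return points

def receive_scans(port=10940):
    """Yield parsed scans from a UDP socket (port number is illustrative)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:
        datagram, _addr = sock.recvfrom(65535)
        yield parse_scan(datagram)
```

Each parsed scan can then be converted to the array format the neural network expects.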

How we built it

We started by capturing more than 150 scans of a real environment containing concrete rocks as obstacles. We used Keras, a deep learning API for Python, to construct and train our convolutional neural network. This required labeling all of our training data and importing it into Keras; we also wrote a MATLAB script to speed up the labeling process.
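The labeling step could be sketched as follows. The annotation format (index spans marking labeled obstacles) and the function name are hypothetical, standing in for the output of our MATLAB labeling tool; the idea is simply to pair each scan with a per-point binary label vector before handing it to Keras.

```python
import numpy as np

def label_scan(ranges, obstacle_spans):
    """Turn one labeled scan into a (features, labels) training pair.

    ranges         : 1-D sequence of lidar readings for one sweep.
    obstacle_spans : list of (start, end) index pairs marking where a
                     human labeler identified an obstacle (hypothetical
                     annotation format).
    Returns per-point binary labels: 1 = obstacle, 0 = ground.
    """
    labels = np.zeros(len(ranges), dtype=np.int8)
    for start, end in obstacle_spans:
        labels[start:end] = 1
    return np.asarray(ranges, dtype=np.float32), labels
```

A stack of these (features, labels) pairs forms the training set for the network.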

Challenges we ran into

Outlier points within the dataset proved to be a larger issue than we expected; until these points were properly dealt with, our deep learning algorithm was essentially non-functional. Our original goal was to run the algorithm on one of the Qualcomm DragonBoards, but compiling the external libraries we depend on turned into a huge roadblock.
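The writeup does not specify how the outliers were ultimately handled; one common approach for lidar range data is local-median rejection, sketched below. The window size and threshold are illustrative, not tuned values.

```python
import numpy as np

def reject_outliers(ranges, window=5, threshold=0.5):
    """Replace readings that deviate sharply from their local median.

    A point is treated as an outlier when it differs from the median of
    its `window`-point neighborhood by more than `threshold` (in the
    same units as the ranges); outliers are replaced by that median.
    """
    ranges = np.asarray(ranges, dtype=np.float64)
    cleaned = ranges.copy()
    half = window // 2
    for i in range(len(ranges)):
        lo, hi = max(0, i - half), min(len(ranges), i + half + 1)
        local_median = np.median(ranges[lo:hi])
        if abs(ranges[i] - local_median) > threshold:
            cleaned[i] = local_median
    return cleaned
```

Running a cleanup pass like this before inference keeps isolated bad returns from being mistaken for obstacles.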

Cross-compiling with CMake for ARM CPUs on a non-ARM host proved to be a feat we were not prepared to tackle, and it was ultimately scrapped in favor of plain Python. Our goal in cross-compiling our packages was to make our object detection code capable of running on the DragonBoard 410c, demonstrating execution on a low-power embedded system. The settings required for CMake cross-compilation are not as well documented as we would have liked, and we ended up spending far too much time on this aspect.
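For reference, the settings in question live in a CMake toolchain file. Below is a minimal sketch for targeting the DragonBoard 410c's 64-bit ARM CPU, assuming a GNU `aarch64-linux-gnu` cross-toolchain is installed; the sysroot path is a placeholder.

```cmake
# toolchain-aarch64.cmake (hypothetical file name)
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR aarch64)

set(CMAKE_C_COMPILER   aarch64-linux-gnu-gcc)
set(CMAKE_CXX_COMPILER aarch64-linux-gnu-g++)

# Where the target's libraries and headers live (placeholder path).
set(CMAKE_FIND_ROOT_PATH /path/to/aarch64/sysroot)

# Find programs on the host, but libraries/headers only in the sysroot.
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
```

It is selected at configure time with `cmake -DCMAKE_TOOLCHAIN_FILE=toolchain-aarch64.cmake ..`. Getting every dependency's build to honor these settings is where the real difficulty lies.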

Accomplishments that we're proud of

What we learned

Mean IoU. It stands for the mean intersection over union of two sets of regions, and it was the most important metric for determining how accurate our deep learning algorithm was.
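For a per-point detector like ours, the metric can be computed on binary masks: the IoU of a predicted mask and a ground-truth mask is the size of their overlap divided by the size of their union, averaged over the evaluation set. A minimal sketch:

```python
import numpy as np

def iou(pred, target):
    """Intersection over union of two binary masks."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, target).sum() / union

def mean_iou(preds, targets):
    """Average IoU over a batch of (predicted, ground-truth) mask pairs."""
    return float(np.mean([iou(p, t) for p, t in zip(preds, targets)]))
```

A mean IoU of 1.0 means every predicted obstacle region exactly matches its label; values near 0 mean the predictions barely overlap the labels.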

What's next?

As members of the University of Alabama Astrobotics team, we are interested in continually improving the efficiency of our award-winning robot. We believe the system we developed here will enable a significant improvement in our software.
