Gallery: the drone's POV with the model running; a graph showing the accuracy of our model; the math formulas we created to implement the probability model; an example of the MobileNet object detection model; an example of our data preprocessing; the architecture of our steering deep-learning CNN model; the progression of our MSE over 25 epochs of training.
Inspiration
The COVID-19 pandemic has affected all of us. Living in one of the epicenters of the outbreak in the US has been eye-opening: when we take a walk outside, no one is out on the streets, playing in parks, or living their lives. Life at home has been even harder. Buying groceries is frightening, purchasing necessities from online sites takes ages, and the entire online delivery system is strained and on the verge of collapse. People who are sick and in self-quarantine cannot even leave their houses due to the highly contagious nature of COVID-19. Likewise, the healthy and the elderly are too scared to go out and buy essentials, such as food, for fear of getting sick. Our project's goal is to incentivize everyone to stay home by making life a little easier and more convenient. We want to return to normalcy as quickly as possible, and we felt that aiding social distancing was the best way to accomplish this. Current autonomous drone navigation systems are neither robust nor adaptable due to their heavy dependence on external sensors, such as depth or infrared, so we decided to take a deep learning approach and tackle both this and the COVID-19 problem together.
What it does
Using deep learning and computer vision, we trained a drone to navigate by itself through crowded city streets. Our model achieves high accuracy and can reliably detect obstacles in the drone's surroundings and navigate safely around them. We also developed an app that allows users to communicate with the drone and send their coordinates to a real-time database, as sketched below.
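A minimal sketch of the app-to-drone coordinate hand-off. The write-up only says "real-time database", so assuming a Firebase Realtime Database here; the URL, the `delivery_requests` path, and `send_coordinates` are hypothetical placeholders, not the project's actual schema.

```python
import time
import requests

DB_URL = "https://example-project.firebaseio.com"  # placeholder, not the real database

def send_coordinates(user_id: str, lat: float, lon: float) -> None:
    """Push the user's pickup coordinates so the drone-side script can poll them."""
    payload = {"lat": lat, "lon": lon, "timestamp": time.time()}
    resp = requests.put(f"{DB_URL}/delivery_requests/{user_id}.json", json=payload, timeout=5)
    resp.raise_for_status()

send_coordinates("user_123", 40.7128, -74.0060)
```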
How I built it
To implement autonomous flight and allow drones to deliver packages swiftly, we took a machine learning approach and created a set of novel math formulas and deep learning models that imitate two key aspects of driving: steering and speed.

For the steering model, we first used Gaussian blurring, filtering, and kernel-based edge detection to preprocess the images from the drone's built-in camera. We then built a CNN-LSTM model to predict the drone's steering angle. The convolutional neural network acts as a dimensionality reduction step, outputting a feature vector that represents the camera image, which is then fed into a long short-term memory model. The LSTM learns from time-sensitive data (i.e. the video feed) to account for spatial and temporal changes, such as moving cars and walking pedestrians. Because predicted angles wrap around, the LSTM outputs sine and cosine values, from which we derive the angle to steer.

For the speed model, since a single camera cannot give us depth perception to measure the exact distance from the drone to each obstacle, we used an object detection algorithm to draw bounding boxes around all possible obstacles in an image. Then, using our novel math formulas, we define a two-dimensional probability map that assigns each pixel of a bounding box a probability of collision, and we use Fubini's theorem to integrate over the boxes and sum the results. The final output is a probability of collision, which we can predict robustly in a completely unsupervised fashion.
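A minimal sketch of the preprocessing and angle-recovery steps described above, using OpenCV; the kernel size and Canny thresholds are assumptions, not the exact values from our pipeline.

```python
import cv2
import numpy as np

def preprocess(frame_bgr: np.ndarray) -> np.ndarray:
    """Blur, filter to grayscale, and run kernel-based edge detection on a camera frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)  # kernel-based edge detection
    return edges

def angle_from_sin_cos(sin_pred: float, cos_pred: float) -> float:
    """Recover a wraparound-safe steering angle (radians) from the LSTM's two outputs."""
    return float(np.arctan2(sin_pred, cos_pred))
```

And a sketch of the unsupervised collision-probability idea: define a 2D probability density over the image (here a Gaussian centered where the drone is heading, which is an illustrative assumption rather than our exact formula), then integrate it over each detected bounding box. By Fubini's theorem, the 2D integral reduces to summing the density over the pixels inside every box.

```python
import numpy as np

def collision_probability(boxes, width: int, height: int, sigma: float = 0.25) -> float:
    """boxes: list of (x1, y1, x2, y2) pixel bounding boxes from the object detector."""
    xs = (np.arange(width) + 0.5) / width - 0.5   # normalized x, 0 at the image center
    ys = (np.arange(height) + 0.5) / height       # normalized y, 1 at the bottom
    X, Y = np.meshgrid(xs, ys)
    density = np.exp(-(X ** 2 + (1.0 - Y) ** 2) / (2 * sigma ** 2))
    density /= density.sum()                      # total mass over the frame is 1
    p = sum(density[y1:y2, x1:x2].sum() for (x1, y1, x2, y2) in boxes)
    return float(min(p, 1.0))

# Example: one box low and centered in a 640x360 frame contributes a high probability.
print(collision_probability([(280, 250, 360, 360)], width=640, height=360))
```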
Challenges I ran into
We faced many challenges while creating our project; two of the main problems were controlling our drone from a computer and optimizing the runtime of our models. As for interfacing, the Parrot Bebop 2 drone is meant to be controlled with the official mobile app designed by Parrot. There was no clear-cut way to control the drone from our computer, and finding one took much more effort and time than we had anticipated. We tried many libraries and APIs, the main ones being Bebop Autonomy, Olympe, and PyParrot. We first tried Bebop Autonomy, a ROS driver specifically for Parrot drones, and found that it wasn't compatible with either of our computers. We then discovered Olympe, a Python programming interface for controlling Parrot drones from a computer. We got it working, but its scripts took too long to run and were a bit too complex for us to handle. Finally, we found PyParrot: compared to Olympe, its scripts were much easier to write, and the API was overall far more user-friendly, with a wide range of examples. On top of that, it is open source, meaning we could directly edit built-in functions, which was extremely convenient.
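A minimal PyParrot sketch of the kind of control loop this enables; the `fly_direct` arguments here are illustrative values, not our model's actual outputs.

```python
from pyparrot.Bebop import Bebop

bebop = Bebop()
if bebop.connect(num_retries=3):          # connect over the drone's own Wi-Fi
    bebop.safe_takeoff(timeout=10)
    # roll/pitch/yaw are percentages of max tilt/rotation, duration in seconds
    bebop.fly_direct(roll=0, pitch=20, yaw=0, vertical_movement=0, duration=2)
    bebop.smart_sleep(2)                  # sleep without dropping the connection
    bebop.safe_land(timeout=10)
    bebop.disconnect()
```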
Another major challenge we faced was runtime. After compiling and running all our models and scripts, we had a runtime of roughly 120 seconds. Obviously, a runtime that long would not allow our program to be usable in real life. Before settling on the MobileNet CNN in our speed model, we started with another object detection CNN, YOLOv3. We traced most of the runtime to YOLOv3's image labeling, which sacrifices speed to predict and label exactly what each object is. That level of accuracy is unnecessary for our project: crashing into a tree or a car results in the same thing, failure. YOLOv3 also required a non-maximum suppression algorithm that ran in O(n³). After switching to MobileNet and making many math optimizations in our speed and steering models, we got the runtime down to between 0.29 seconds (lower bound) and 1.03 seconds (upper bound), with an average of 0.66 seconds and a standard deviation of 0.18 seconds over 150 trials. This means we improved our efficiency by more than 160 times.
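A sketch of the MobileNet detection step using OpenCV's DNN module with a MobileNet-SSD Caffe model; the model file names and the 0.5 confidence threshold are assumptions, and our actual pipeline may have loaded MobileNet differently.

```python
import cv2
import numpy as np

net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt", "MobileNetSSD_deploy.caffemodel")

def detect_boxes(frame_bgr: np.ndarray, conf_threshold: float = 0.5):
    """Return (x1, y1, x2, y2) boxes for every detection above the confidence threshold."""
    h, w = frame_bgr.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(frame_bgr, (300, 300)), 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()            # shape: (1, 1, N, 7)
    boxes = []
    for i in range(detections.shape[2]):
        if detections[0, 0, i, 2] > conf_threshold:
            box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
            boxes.append(tuple(box.astype(int)))
    return boxes
```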
Accomplishments that I'm proud of
We are proud of creating a working, intelligent system to address a huge problem the world is facing. Although the system definitely has its limitations, it has proven to be adaptable and relatively robust, which is a huge accomplishment given the limits of our dataset and computational resources. We are also especially proud of our probability-of-collision model, because we built it to be robust and adaptable with no training data at all.
What I learned
This project was one of the most fun and educational experiences we have ever had. Before starting, we did not have much experience connecting hardware to software. We never imagined it would be so hard just to get our program running on a drone, but despite all the failed attempts and challenges, we succeeded. We learned the basics of integrating software with hardware, and just how difficult it can be. We also gained much more experience working with CNNs, learning how different preprocessing, normalization, and post-processing methods affect the robustness and complexity of a model. We also learned to care about time complexity, as it made a huge difference in our project. Finally, we learned to persevere through the challenges we faced.
What's next for Autonomous Drone Delivery System
Despite the many challenges we faced, we were able to create a finished, working product. However, there are still a few things we can improve. To further decrease runtime, GPU acceleration or a more powerful computer would let the program run even faster, which in turn would allow the drone to fly faster and be more useful. Training the model on a larger and more varied dataset would improve the drone's flying and adaptability, making it applicable in more situations; with our current program, making the drone work in a new environment only requires finding a dataset for that environment. Currently, the drone uses its own Wi-Fi signal to communicate with the computer, so it can only fly as far as its Wi-Fi range allows. If we could move all the processing from our computer onto the drone, it would have an unlimited flight range, making it even more adaptable. Our mobile app's GUI is also fairly plain, and we hope to build a better, easier-to-use design later. Lastly, GPS integration would allow long-distance flights as well as travel through urban cities.
A self-flying drone has a nearly unlimited number of applications. In addition to autonomous delivery, we propose using our drones for conservation, data gathering, natural disaster relief, and emergency medical assistance. For conservation, our drone could gather data on animals by tracking them in their habitats without human interference. For natural disaster relief, drones could scout and take risks that volunteers cannot, due to debris and unstable infrastructure. We hope that our drone navigation program will be useful for many future applications.