Inspiration

The inspiration for this project came from a tragedy in the past of one of our members. Someone close to his family was killed in an accident caused by a distracted driver, and the emotional wound it left took a deep toll on him. He put forth the idea of doing some good for the world by trying to prevent what happened to him from happening to others. As high schoolers being slowly introduced to reality, it can be hard to look past the fortune we experience in our own lives and help the less fortunate, and we attempted to do exactly that with this project. One in every five automobile deaths is caused by distracted driving, and our goal, motivated by the lessons of the past, was to design and create a safeguard against distracted driving using current technology. Ultimately, our goal was to save lives. What happened to our friend's family, the unfortunate death of a young man robbed of his future by a distracted driver, must never be allowed to happen again.

How we built it

We built the app around Core ML. We used transfer learning to fine-tune a ResNet-50 network and then exported it to Core ML. Using Apple's Vision API together with AVFoundation and AVKit, we process the live camera feed and run each video frame through the deep learning model for a real-time prediction of whether or not the driver is distracted; a sketch of this loop appears below. Examples of distraction include texting, doing one's hair or makeup, and talking with another person. We trained the model on a distracted-driver dataset from State Farm; the involvement of a major insurance company like State Farm demonstrates how critical an issue driver safety is for insurers and the general public alike. We hosted our data on Microsoft Azure and packaged the software as an app so that it would be highly scalable.

After developing the app, we decided to push the challenge even further and build a suite of applications. We used the Leap Motion sensor to track hand motions: our hands are among our most versatile and expressive instruments, and harnessing that agility lets the driver communicate with the technology around them far more efficiently. A key product we developed with Leap Motion was automated call acceptance and rejection. Using the Twilio API to automate calls, the driver can place or decline a call with very intuitive grasping and release motions (also sketched below). This efficiency lets the driver focus on driving rather than on a phone.

Ideally, we wanted our project to use both positive and negative reinforcement to prevent driver distraction. The alarm and AI monitoring deter future transgressions via negative reinforcement, while the minimally invasive, hands-free driver interface removes both the need and the motive for being distracted in the first place.
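To make the recognition loop concrete, here is a minimal sketch of the per-frame Vision pipeline described above. It is a simplification, not our exact source: the class name DistractedDriverNet stands in for whatever Xcode generates from the exported .mlmodel, and the hardcoded orientation assumes a portrait-mounted phone.

```swift
import AVFoundation
import Vision

// Minimal per-frame classification loop. DistractedDriverNet is a
// placeholder for the class Xcode generates from our exported model.
final class DistractionDetector: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {

    private lazy var request: VNCoreMLRequest = {
        // Wrap the Core ML model for use with Vision.
        let model = try! VNCoreMLModel(for: DistractedDriverNet().model)
        let request = VNCoreMLRequest(model: model) { request, _ in
            guard let top = (request.results as? [VNClassificationObservation])?.first else { return }
            print("\(top.identifier) (\(top.confidence))")  // e.g. a "texting" class with its confidence
        }
        request.imageCropAndScaleOption = .centerCrop  // match the model's square input
        return request
    }()

    // Called by AVCaptureVideoDataOutput for every camera frame.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let frame = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        // .right assumes a portrait-mounted phone; see the orientation
        // discussion in the challenges section.
        let handler = VNImageRequestHandler(cvPixelBuffer: frame, orientation: .right, options: [:])
        try? handler.perform([request])
    }
}
```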
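Likewise, a hedged sketch of the gesture-to-Twilio path. The grab-strength value stands in for the reading the Leap Motion SDK reports, this version calls Twilio's REST API directly rather than through a helper library, and the credentials, phone numbers, and TwiML URL are all placeholders rather than our actual configuration.

```swift
import Foundation

// Hypothetical glue between a recognized gesture and Twilio. The grab
// strength stands in for the Leap Motion reading; SID, token, numbers,
// and the TwiML URL are placeholders, not our real configuration.
func handleGesture(grabStrength: Float) {
    guard grabStrength > 0.9 else { return }  // firm grab => place the call

    let sid = "YOUR_ACCOUNT_SID"
    let token = "YOUR_AUTH_TOKEN"
    var request = URLRequest(url: URL(string:
        "https://api.twilio.com/2010-04-01/Accounts/\(sid)/Calls.json")!)
    request.httpMethod = "POST"

    // Twilio's REST API uses HTTP basic auth and form-encoded bodies.
    let credentials = Data("\(sid):\(token)".utf8).base64EncodedString()
    request.setValue("Basic \(credentials)", forHTTPHeaderField: "Authorization")
    request.setValue("application/x-www-form-urlencoded", forHTTPHeaderField: "Content-Type")
    // The "+" in E.164 phone numbers must be form-encoded as %2B.
    request.httpBody = "To=%2B15005550100&From=%2B15005550199&Url=https://example.com/twiml"
        .data(using: .utf8)

    URLSession.shared.dataTask(with: request) { _, _, error in
        if let error = error { print("Twilio call failed: \(error)") }
    }.resume()
}
```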

Challenges we ran into

One of the key challenges we faced throughout this project was working with the Leap Motion device. Many of the development tools we used were faulty and outdated, and there was little documentation available online. Most of our time went into configuring the device and developing a custom gesture processor (a simplified version is sketched below), since the one provided by the API did not work properly. There were also several occasions where the Leap Motion would not recognize any input at all. In general, the device was very finicky, and it often took multiple computer restarts before we could use it again.
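The core idea of our replacement processor was to debounce the sensor's continuous grab-strength reading into discrete events, so a trembling hand near the threshold doesn't fire a gesture repeatedly. A minimal sketch of that idea, with hypothetical thresholds rather than our exact tuning:

```swift
// Simplified stand-in for our custom gesture processor: debounce the
// raw 0...1 grab-strength stream into discrete grab/release events.
enum GestureEvent { case grab, release }

struct GestureProcessor {
    private var isGrabbing = false

    // Hysteresis: require a firmly closed hand to register a grab and a
    // nearly open hand to register a release, so jitter around a single
    // threshold can't oscillate between the two events.
    mutating func process(grabStrength: Float) -> GestureEvent? {
        if !isGrabbing && grabStrength > 0.9 {
            isGrabbing = true
            return .grab
        }
        if isGrabbing && grabStrength < 0.2 {
            isGrabbing = false
            return .release
        }
        return nil
    }
}
```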

Another issue we faced was the vast amount of data needed to train the deep learning model. We had little prior experience handling that much data, so we had to learn a new platform, Microsoft Azure, which took a while. Additionally, we were using Core ML for the first time, and we ran into an issue where images on the iPhone would arrive rotated when they shouldn't be (the fix is sketched below). Solving this problem took up a large chunk of our time as well.
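The root cause, as best we understood it, is that the iPhone camera delivers pixel buffers in the sensor's landscape orientation, so Vision has to be told the device orientation explicitly or the model sees rotated frames. A sketch of the standard mapping for the back camera, based on common Vision sample code rather than our exact source:

```swift
import UIKit
import Vision

// Camera frames arrive in landscape sensor orientation; Vision needs the
// device orientation spelled out or classification runs on rotated input.
// This mapping is for the back camera.
func visionOrientation(for device: UIDeviceOrientation) -> CGImagePropertyOrientation {
    switch device {
    case .portrait:           return .right
    case .portraitUpsideDown: return .left
    case .landscapeLeft:      return .up
    case .landscapeRight:     return .down
    default:                  return .right
    }
}

// Then pass it through when handing a frame to Vision:
// VNImageRequestHandler(cvPixelBuffer: buffer,
//                       orientation: visionOrientation(for: UIDevice.current.orientation),
//                       options: [:])
```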

Accomplishments that we're proud of

We are proudest of our ability to come together as a team. We met at a high school coding enthusiasts club, and TeenHacksLI was our first hackathon together. We are also very proud of how we handled technology we had never used before. For instance, before TeenHacksLI we had never touched a Leap Motion or its developer SDK, so it was very interesting to learn this new technology. Another example is Twilio: we had heard of it before but never found an opportunity to use it, and building safer driving tools gave us the perfect chance to use Twilio to streamline a person's driving experience. We were amazed by how well it worked and how applicable it was. Additionally, we are very happy that we were able to port our deep learning model to iOS with Core ML. It was our first time using Core ML, and seeing how portable PyTorch models could be was amazing.

What we learned

Over the course of the last 24 hours, our team has evolved for the better. We come out of the challenge stronger and more adaptive. The many problems we faced with the finicky Leap Motion sensor required not only extraordinary patience but also a kind of mental flexibility that we gradually grew accustomed to. We also gained a great deal of technical knowledge in programming, especially concerning adaptive AI and motion tracking. Above all, however, we feel the competition brought about personal changes that have permanently altered us for the better. The wish to do good for the general populace made us more receptive and empathetic to the world around us, but the stress also began to take its toll; our team grew under that stress and learned to manage both it and our tasks effectively. Even on a tight deadline, we adhered to our two main goals, doing good and having fun, and came out better people.

What's next for DriveSuite

Over the last 24 hours, we pushed the idea as far as it would go and ultimately accomplished most of our goal: we created an adaptive AI that, trained on a national distracted-driver dataset, is able to detect and respond aggressively to distracted driving. To deter drivers from ever engaging in such behavior in the first place, we brought the idea of a minimally invasive, hands-free driver interface to reality, creating a proof of concept that we believe showcases the world of opportunities made available by our technology. Ideally, we would like to take this software from a proof of concept to something we can design and produce on a mass-market scale, as both an app and a related product, to improve the general quality of life of drivers. We believe that a happier and more focused driver ultimately means a happier society. No matter how small the effect, we hope we can leave some kind of positive impact on the world.

Built With

AVFoundation, AVKit, Core ML, Leap Motion, Microsoft Azure, PyTorch, Twilio, Vision