A project that monitors a driver's behavior to detect whether they are distracted
The goal of our project is to help reduce distracted driving by providing positive reinforcement for driving safely. Insurance companies could use this tool to predict a driver's risk factor and reward safer drivers. The end result would be safer transportation, especially considering that motor vehicle accidents are the leading cause of death for U.S. teens.
While pondering what our ride home would be like after being up for over 24 hours, we thought about the issue of distracted driving. We looked up statistics and saw that many major accidents occur as a result of distracted driving, whether due to cell phone use or dozing off. We took inspiration from the devices insurance companies already place in many people's cars, which monitor speed, braking sensitivity, and rapid acceleration. Our goal is to measure a different and more impactful metric.
What it does
We measure whether a driver is looking forward at the road or looking down at their phone, back at their children, or dozing off. If a driver completes a day without being distracted, they receive a certain number of points. Once they accumulate enough points, they can redeem them as cash back on their credit card. If they are distracted while driving, they lose points and hurt their score. For the purposes of this demo, a "day" is set to 30 seconds: a driver starts each day with 50 points and can end with a maximum of 100 and a minimum of 0, depending on their level of distraction.
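The scoring rules above can be sketched as follows. The class, method names, and the per-interval point value are our own for illustration; only the numbers from the demo (a 30-second day, a 50-point start, and a 0–100 range) come from the project itself:

```python
class DayScore:
    """Tracks one demo 'day' of driving (30 seconds in the demo).

    A driver starts the day at 50 points, earns points for attentive
    intervals, loses points for distracted ones, and is clamped to 0-100.
    """

    DAY_LENGTH_SECONDS = 30  # one demo "day"
    START = 50
    MIN_POINTS = 0
    MAX_POINTS = 100

    def __init__(self):
        self.points = self.START

    def record_interval(self, distracted: bool, delta: int = 5) -> int:
        """Add or subtract `delta` points for one observation interval.

        `delta` is a hypothetical per-interval step, not a value from
        the original project.
        """
        self.points += -delta if distracted else delta
        self.points = max(self.MIN_POINTS, min(self.MAX_POINTS, self.points))
        return self.points


day = DayScore()
day.record_interval(distracted=False)   # 55
day.record_interval(distracted=True)    # back to 50
```

Clamping to the 0–100 range means a long stretch of attentive (or distracted) driving saturates the score rather than growing without bound.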
How we built it
This project uses a camera from a computer or a Raspberry Pi fitted into a device. We took advantage of the OpenCV library and a convolutional neural network to detect what a person looks like when they are facing the road versus looking away from it. The app also integrates with the TSYS developer API to track user rewards, granting extra credit card rewards for safe driving. We connect our face detection code (written in Python) to a Google Cloud Function that processes the data and sends it to our database and to the API we are writing to.
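A minimal sketch of the per-frame detection loop, assuming the frontal-face Haar cascade that ships with OpenCV: if no frontal face is found in a frame, we treat the driver as looking away. The helper function, the camera index, and the `detectMultiScale` parameters are illustrative assumptions, not our exact pipeline:

```python
def is_distracted(frontal_face_count: int) -> bool:
    """A frame counts as attentive only if a frontal face was detected."""
    return frontal_face_count == 0


if __name__ == "__main__":
    # cv2 is imported here so the pure helper above stays importable
    # even on machines without OpenCV installed.
    import cv2

    # haarcascade_frontalface_default.xml ships with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)  # laptop webcam or Raspberry Pi camera
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        print("distracted" if is_distracted(len(faces)) else "attentive")
    cap.release()
```

In a real deployment the per-frame verdicts would be batched and posted to the Cloud Function rather than printed.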
Here is an image of a distracted driver. We looked at certain facial points to determine which direction a person is facing. Here is an image of a driver who is not distracted; as you can see, the facial points show the person's eyes facing forward.
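The idea behind the facial points can be illustrated with a simplified geometric heuristic. Our actual detector uses a convolutional neural network pose estimator; this pure-Python version, with hypothetical landmark inputs, only shows the geometry: when the head turns, the nose tip drifts away from the horizontal midpoint of the eyes.

```python
def facing_forward(left_eye, right_eye, nose_tip, tolerance=0.2):
    """Rough head-direction check from three 2-D facial landmarks.

    Each landmark is an (x, y) pixel coordinate. When the head faces
    the camera, the nose tip sits near the horizontal midpoint of the
    eyes; when the head turns, it shifts toward one eye. `tolerance`
    is the allowed offset as a fraction of the inter-eye distance
    (0.2 is an illustrative threshold, not a tuned value).
    """
    eye_mid_x = (left_eye[0] + right_eye[0]) / 2
    eye_span = abs(right_eye[0] - left_eye[0])
    if eye_span == 0:
        return False  # degenerate detection; treat as not forward
    offset = abs(nose_tip[0] - eye_mid_x) / eye_span
    return offset <= tolerance


# Forward-facing: nose centered between the eyes
facing_forward((100, 120), (160, 120), (130, 150))   # True
# Head turned: nose shifted well toward the right eye
facing_forward((100, 120), (160, 120), (155, 150))   # False
```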
Challenges we ran into
- Reading Google's documentation for newer features like Google Cloud Functions
- Testing these functions locally when they depend on a Google-hosted database
- Incompatibility between older OpenCV-based libraries and OpenCV 3
- Authentication errors when posting to the TSYS API with a development account
- Making sure the face detection metric worked for different people at different positions
Accomplishments that we're proud of
- Having a working project that connects with all of the tools we wanted to use
- Creating a demo video and site to feature what we built
- Finding an alternative solution to issues with distracted driving
- Learning a lot of new technologies that we have never used or heard of before yesterday
- Handling the image processing locally and in real time, so we do not have to rely on other organizations to handle these private images safely
- Building a backend that can be used by different insurance companies and banks
What we learned
- Google Cloud Functions
- Designing an aesthetic website
- Team organization and delegation of tasks among members of different skill levels
What's next for My Angel Sight
- Raspberry Pi device that can fit in a car
Detailed Tech Stack
- Google Cloud Functions
- Google MySQL
- TSYS API (for reward points)
- Face detection algorithm:
  - OpenCV image processing library
  - Haar cascade classifiers
  - Convolutional neural network pose estimator
- HTML/CSS/JS for presentation site
- Adobe Photoshop, Premiere Pro, Audition
- Affinity Publisher Beta