Driving long distances can be difficult, especially when the roads are predictable and traffic moves at the same pace for prolonged periods. According to the AAA Foundation for Traffic Safety, drowsy driving is responsible for upwards of 328,000 motor vehicle crashes per year. These incidents can happen to anyone: the driver could be a tired parent, a graveyard-shift worker, or just someone who skipped their afternoon coffee. The impact, however, can be devastating. We created this app to keep drivers awake, especially when they don't even realize they're falling asleep.
What it does
Driver Drowsiness Detection uses facial recognition to process input from the front-facing camera (with the phone ideally situated on a mount) to identify the driver's eyes and track their blink frequency and duration. The app emits a short ring when the driver exhibits slightly drowsy behavior (prolonged blink durations), warning the driver that they may be getting sleepy. If the driver keeps blinking drowsily for at least a minute despite the rings, the app concludes that the driver is falling asleep. In that case it emits a loud, ongoing alarm that stops only when the driver opens their eyes, displays an on-screen alert notifying the driver that they are falling asleep, and offers to provide navigation through Google Maps to the nearest rest stop. The same alert and navigation prompt also appear when the driver's eyes stay fully closed for an abnormally long time (indicating that they are asleep).
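The escalation logic above can be sketched roughly as follows. This is a simplified illustration, not the app's actual code (the app runs on Android), and the threshold values here are hypothetical placeholders, since the real cutoffs were tuned by hand:

```python
# Hypothetical thresholds -- the real app tunes these empirically.
DROWSY_BLINK_SEC = 0.4         # a blink longer than this counts as drowsy
SUSTAINED_DROWSY_SEC = 60.0    # a minute of repeated drowsy blinks escalates
ASLEEP_EYES_CLOSED_SEC = 2.5   # eyes closed this long suggests sleep

class DrowsinessMonitor:
    """Tracks blink events and decides which alert level to raise."""

    def __init__(self):
        self.first_drowsy_blink = None  # start of the current drowsy streak

    def on_blink(self, duration_sec, now_sec):
        """Called once per completed blink; returns an alert level."""
        if duration_sec >= ASLEEP_EYES_CLOSED_SEC:
            # Eyes closed abnormally long: loud alarm plus rest-stop navigation.
            return "ALARM"
        if duration_sec >= DROWSY_BLINK_SEC:
            if self.first_drowsy_blink is None:
                self.first_drowsy_blink = now_sec
            elif now_sec - self.first_drowsy_blink >= SUSTAINED_DROWSY_SEC:
                # Drowsy blinking persisted for a minute despite the rings.
                return "ALARM"
            return "RING"  # short warning ring
        self.first_drowsy_blink = None  # a normal blink resets the streak
        return "NONE"
```

A normal blink resets the drowsy streak, so only consistently drowsy blinking escalates to the alarm.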
How we built it
We used the Google Face API to process ongoing input from the front camera and wrote our own algorithm to analyze blinking. Based on how frequently the user blinks (and the duration of each blink), the app gives auditory and visual feedback to alert the driver and wake them up (the sounds are pulled from the default notification sounds library). In addition, we used the Google Directions API to offer automatic navigation to the nearest rest stop whenever the driver crosses our self-defined threshold of sleepiness.
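The core of the blink analysis is turning a per-frame "eye open" signal (the Google Face API reports an eye-open probability for each detected face) into discrete blink events with durations. A minimal sketch of that idea, in Python rather than the app's Android code, with a hypothetical probability cutoff:

```python
EYE_CLOSED_PROB = 0.4  # hypothetical: below this, treat the eyes as closed

def extract_blinks(frames):
    """frames: list of (timestamp_sec, eye_open_probability) samples.

    Returns one blink duration (in seconds) per closed-eye run,
    measured from when the probability drops below the cutoff to
    when it rises back above it.
    """
    blinks = []
    closed_since = None
    for t, prob in frames:
        if prob < EYE_CLOSED_PROB:
            if closed_since is None:
                closed_since = t                 # eyes just closed
        elif closed_since is not None:
            blinks.append(t - closed_since)      # eyes reopened: blink complete
            closed_since = None
    return blinks
```

Each duration this produces would then feed the ring/alarm decision described above.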
Challenges we ran into
We ran into challenges connecting facial recognition with other API functionality; we spent a lot of time trying to connect two APIs, but it proved extremely tedious to do by hand. We eventually decided to focus on just one API and really hone the facial-recognition functionality and intuition. While configuring the blink detection, it was difficult to define what constitutes "drowsiness." Since this usually differs from person to person, there was no specific statistic we could rely on, so it took a lot of trial and error to discover what drowsiness actually looks like to a camera.
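One way to picture that trial and error: sweep candidate blink-duration cutoffs over hand-labeled samples and keep whichever classifies the most of them correctly. This is purely illustrative of the tuning problem, not the process we actually automated:

```python
def best_blink_threshold(samples, candidates):
    """samples: list of (blink_duration_sec, was_drowsy) pairs labeled by hand.

    Returns the candidate cutoff that classifies the most samples
    correctly when blinks at or above it are called drowsy.
    """
    def accuracy(thresh):
        return sum((dur >= thresh) == drowsy for dur, drowsy in samples)
    return max(candidates, key=accuracy)
```

Because "drowsy" blinks vary from person to person, no single cutoff fits everyone, which is why we leaned on repeated testing rather than a published statistic.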
Accomplishments that we're proud of
We're proud of handling platforms and APIs that we've never worked with before. All of us are new to Android development and had never used any Google APIs for a project. We're very proud of how much we've learned, and of how much we've been able to contribute to a project of our own vision.
What we learned
We definitely gained more intuition for how APIs work and, more importantly, how their documentation works. We learned how to use APIs and how to piece their functionality together into one cohesive project. We also learned patience and stamina as we spent many hours debugging and searching Stack Overflow.