Ontario Provincial Police say distracted driving continues to be the No. 1 cause of accidents in the province. According to AAA, distracted driving was a factor in nearly 6 out of 10 moderate-to-severe teen crashes. These statistics hit close to home and contributed to our decision to pursue this project. Through DriveToArrive, we hope to reduce the number of crashes caused by distracted driving and encourage safe driving practices.

What it does

A video stream is taken in from a camera, and each frame is continually analyzed to detect the presence or absence of the driver's face and eyes. If a face or eyes cannot be detected, an audible message alerts the driver to pay attention to the road.

How we built it

DriveToArrive is built with Python and OpenCV. The detector relies on a model pre-trained on a large set of facial images containing both positive and negative examples of faces. Incoming frames from the video stream are analyzed with this pre-trained model, and the results feed a simple logic-based decision process.

When an individual frame is analyzed, a 1 is stored in a continually updated array if a face and eyes are detected, and a 0 otherwise. This array is used to calculate a rolling percentage of recent frames in which the driver appears distracted; we chose 75% as the threshold for deciding that the driver is distracted. The driver then hears one of two audible messages, depending on whether the program detected no face at all or a face whose eyes could not be found.
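
The rolling-window logic above could be sketched as follows. The window size and the message labels are assumptions for illustration; the writeup specifies only the 75% threshold and the two distinct alerts.

```python
from collections import deque

WINDOW = 30          # number of recent frames to consider (assumed value)
THRESHOLD = 0.75     # alert when 75% of recent frames look distracted

recent = deque(maxlen=WINDOW)  # 1 = attentive frame, 0 = distracted frame

def update(face_found, eyes_found):
    """Record one frame's result and report whether to alert the driver.

    Returns None (attentive), or one of two message keys (hypothetical
    labels) selecting which audible warning to play.
    """
    attentive = face_found and eyes_found
    recent.append(1 if attentive else 0)
    if len(recent) < WINDOW:
        return None                      # not enough history yet
    distracted_ratio = 1 - sum(recent) / len(recent)
    if distracted_ratio >= THRESHOLD:
        # choose the message based on the latest frame's failure mode
        return "no_eyes" if face_found else "no_face"
    return None
```

Averaging over a window rather than alerting on a single bad frame keeps momentary detection glitches from triggering false alarms.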

Challenges we ran into

Initially, we had planned to run our application on a Raspberry Pi, but a critical cable was missing from the hardware we borrowed, and progress stalled until a replacement could be purchased. Another challenge was getting access to a monitor: only a limited number were available at the facility, and all of them were in high demand. Resolving these issues cost us a great deal of time that we could not invest in the project itself.

Additionally, we intended to use IR sensors to detect more accurately whether a person's eyes were open or closed while driving, but the sensors did not behave as expected. Implementing and testing them was time-consuming and ultimately unsuccessful, costing us still more time. With the IR sensors ruled out, eye detection had to be handled entirely in OpenCV, where our team initially had difficulty distinguishing between a driver whose eyes were closed and one whose face was turned away.

Accomplishments that we're proud of

As a group, we are very proud of the accuracy and speed with which the script identifies the absence of a face or eyes in the video stream.

We are also proud of our resilience in the face of the Raspberry Pi challenges described above, including not having the proper equipment to take advantage of the versatility it provides.

What we learned

Most of the team had never programmed in Python before, so the project was a valuable opportunity to develop that skill.

None of us had used OpenCV before this hackathon, so that was a new skill for all of us as well.

What's next for DriveToArrive

In the future, we plan to continue developing DriveToArrive, extending the service into a mobile application so the project can reach many more drivers.
