Webcam uses computer vision to analyze real-time video
Notifications sent to police, friends, and family members
OmniSci dashboard updates real-time analysis based on the user's data
In 2018, over 3,500 people were killed in distraction-related crashes, and about 424,000 were injured in crashes involving a distracted driver. Of drivers ages 15 to 19 involved in fatal accidents, 10% were reported to be distracted at the time of the crash. Unfortunately, just over a month ago, someone at our very own school was killed this way. To help end this preventable source of injury and death, we built SafeDrive, an integrated toolkit that uses artificial intelligence, computer vision, cloud computing, and data-driven, real-time analysis to save lives.
What it does
SafeDrive analyzes the driver's behavior while they drive. It takes into account the driver's head location and orientation, the driver's eyes, and any items the driver is holding up to decide when the driver is distracted. Look away from the road for more than a few seconds? SafeDrive picks up on this and can nudge you back in the right direction. Hold up a phone while driving? SafeDrive sends you an angry text to remind you to be a responsible citizen. Eyes getting heavy behind the wheel? SafeDrive picks up on this too, and takes corrective action. It can be used by people who want a safety net to hold themselves accountable, by parents who want to make sure their kids don't drive unsafely, or even by the courts so that alcoholics don't pop bottles while driving (yes, SafeDrive picks up on this too). Together, we can make our communities safer and stronger.
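As a minimal sketch of one of the signals described above, here is how the "eyes getting heavy" check can be done with the eye aspect ratio (EAR): the ratio of vertical to horizontal eye-landmark distances collapses when the eye closes. The landmark layout, threshold, and frame count below are illustrative assumptions, not the exact SafeDrive values.

```python
# Sketch of a drowsiness signal: the eye aspect ratio (EAR), computed
# from six (x, y) eye landmarks as produced by a facial-landmark
# detector (e.g. dlib's 68-point model). Values here are assumptions.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: array of shape (6, 2) with landmarks ordered p1..p6."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical distance p2-p6
    v2 = np.linalg.norm(eye[2] - eye[4])  # vertical distance p3-p5
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal distance p1-p4
    return (v1 + v2) / (2.0 * h)

EAR_THRESHOLD = 0.2        # below this, the eye is considered closed
CLOSED_FRAMES_ALERT = 48   # roughly 2 seconds at 24 fps

def is_drowsy(ear_history: list) -> bool:
    """Flag drowsiness when the EAR stays below threshold long enough."""
    recent = ear_history[-CLOSED_FRAMES_ALERT:]
    return (len(recent) == CLOSED_FRAMES_ALERT
            and all(e < EAR_THRESHOLD for e in recent))
```

A brief closed eye (a blink) never accumulates enough consecutive low-EAR frames to trigger the alert, which is why the check runs over a window rather than a single frame.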
How we built it
SafeDrive was built with several modern technologies to effectively alert drivers and keep them safe. For computer vision, we used OpenCV, NumPy, TensorFlow, and various statistics modules. Once the system detects an accident or a dangerous condition, the Smartcar API is used to locate the driver's vehicle and provide useful details such as the vehicle's model and year. This data is passed on to police officers or firefighters so they can easily identify the vehicle involved in the accident. The Smartcar API also lets us automatically unlock the car when a crash is identified, for cases where the driver is unconscious or otherwise unable to open the car. With the vehicle location from the Smartcar API, the Google Maps API gives the authorities the most efficient route to the scene so they can arrive as quickly as possible and potentially save lives.
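The detection-to-response flow above can be sketched as a simple dispatch table. This is a hypothetical illustration, not the actual SafeDrive code: the event names and `Response` structure are our own, and in the real pipeline the locate/unlock actions would go through the Smartcar API and the routing through the Google Maps API.

```python
# Illustrative mapping from detected events to responses. All names
# here are assumptions made for the sketch, not SafeDrive internals.
from dataclasses import dataclass, field

@dataclass
class Response:
    notify: list = field(default_factory=list)   # who gets alerted
    actions: list = field(default_factory=list)  # vehicle/API actions

def respond(event: str) -> Response:
    if event == "phone_in_hand":
        return Response(notify=["driver"], actions=["send_reminder_text"])
    if event == "eyes_off_road":
        return Response(notify=["driver"], actions=["audio_nudge"])
    if event == "crash_detected":
        # Locate the car, unlock it for first responders, route help.
        return Response(
            notify=["police", "firefighters", "family"],
            actions=["locate_vehicle", "unlock_doors",
                     "compute_route", "call_911"],
        )
    return Response()  # unknown events trigger nothing
```

Keeping the decision logic separate from the API calls like this also makes the escalation rules easy to test without a car or network access.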
With the vehicle information and the efficient route from the Google Maps API, AWS Amplify is used to send emergency text messages to police and firefighters. Friends and family members of the driver are also notified of the incident as soon as the car crashes. The process of having someone call 911 is automated, which can save critical time in drastic situations.
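A hedged sketch of what composing such an emergency text might look like. The field names and message wording are assumptions for illustration; in the real project the resulting string is delivered through AWS Amplify rather than printed.

```python
# Hypothetical helper that composes the emergency SMS body from the
# vehicle details (Smartcar API) and location. Illustrative only.
def format_crash_alert(make: str, model: str, year: int,
                       lat: float, lng: float) -> str:
    maps_link = f"https://www.google.com/maps?q={lat},{lng}"
    return (f"SafeDrive ALERT: possible crash detected. "
            f"Vehicle: {year} {make} {model}. Location: {maps_link}")
```

Embedding a maps link in the message means responders and family members can jump straight to navigation from the text itself.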
All of this information, from the data collected by the webcam to the vehicle information and location, is stored in a highly scalable AWS RDS backend. The data spans daily driver behavior as well as vehicle records tied to collisions. It is incredibly valuable: it can show users which distractions they should focus on eliminating, and inform car manufacturers about collision rates for their vehicles. Companies such as Uber and Lyft can monitor their drivers' behavior, and parents can keep an eye on their children. Not only that, but this data is the core backend for the OmniSci dashboard!
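To make the storage concrete, here is a minimal sketch of the kind of event table such a backend might use, shown with the stdlib `sqlite3` module for portability. The real project uses AWS RDS, and the table and column names below are our assumptions, not the actual schema.

```python
# Hypothetical driver-event schema, illustrated with in-memory SQLite;
# the production backend is AWS RDS and the names here are assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE driver_events (
        id          INTEGER PRIMARY KEY,
        driver_id   TEXT NOT NULL,
        event_type  TEXT NOT NULL,   -- e.g. 'phone_in_hand', 'crash'
        vehicle_vin TEXT,
        lat REAL, lng REAL,
        occurred_at TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")
conn.execute(
    "INSERT INTO driver_events (driver_id, event_type, vehicle_vin, lat, lng) "
    "VALUES (?, ?, ?, ?, ?)",
    ("driver-1", "phone_in_hand", "VIN123", 37.42, -122.08),
)
# The kind of per-event-type aggregation a dashboard would plot:
rows = conn.execute(
    "SELECT event_type, COUNT(*) FROM driver_events GROUP BY event_type"
).fetchall()
```

The `GROUP BY` query at the end is the shape of aggregation a real-time dashboard would run to chart how often each distraction type occurs.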
The OmniSci dashboard visualizes real-time data fed from the backend SQL server hosted on AWS. It lets clients easily view their daily driving behaviors and find important correlations between their habits and their safety. The OmniSci visualization tool is a great way to promote safe driving, with charts that anyone can easily understand.
APIs used - AWS, Smartcar, OmniSci, Google Cloud
Challenges we ran into
We learned a lot about server queries and running programs in the cloud. We ran into plenty of trouble getting used to libraries and technologies we had never used before, as cloud computing and API usage were a big emphasis during this hackathon. The pressure pushed us to grow as programmers.
Accomplishments that we're proud of
We worked together to learn various cloud computing and server technologies. None of us had touched server programs before, but we are proud of how far we have come and, if nothing else, of showing ourselves that nothing is impossible with effort.
What we learned
We learned a lot about computer vision, image processing, asynchronously managing multiple databases, picking up the design patterns of unfamiliar APIs, and adapting to unfamiliar territory.
What's next for SafeDrive
We hope to see SafeDrive in the front seat of many cars. As self-driving cars become more and more popular, we cannot forget that the prime burden of protecting our community is on us, and that we should do everything possible to take a step toward a safer society.