Inspiration
A friend of one of our team members recently had a dangerous drive home due to drowsy driving. Everyone on our team could relate to this incident in some way, and we realized drowsy driving is a common problem. We agreed that smarter cities need a transportation system with smarter, safer drivers.
What it does
Before driving, the user takes a reaction time test in our phone app. The test determines how fit an individual is to drive by comparing their measured reaction times to the reaction times required to make decisions on the road. The test is then repeated every 40 minutes [an interval based on a 1984 Scripps Clinic study, chosen to combat the ceiling effect] during a recommended break period.
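A minimal sketch of how such a test and comparison might look. The trial structure and the safety threshold here are illustrative assumptions, not the exact values our app uses:

```python
import time
import random

# Illustrative threshold: mean reaction times slower than this are treated
# as unsafe. The real app compares against times needed for on-road decisions.
SAFE_REACTION_THRESHOLD_S = 0.5

def run_reaction_test(trials=5):
    """Run a simple stimulus-response test and return the mean reaction time."""
    times = []
    for _ in range(trials):
        time.sleep(random.uniform(1.0, 3.0))  # random delay before the stimulus
        start = time.monotonic()
        input("Press Enter as soon as you see this prompt! ")
        times.append(time.monotonic() - start)
    return sum(times) / len(times)

if __name__ == "__main__":
    mean_rt = run_reaction_test()
    fit = mean_rt <= SAFE_REACTION_THRESHOLD_S
    print(f"Mean reaction time: {mean_rt:.3f}s -> {'fit' if fit else 'unfit'} to drive")
```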
During the drive, a hardware component monitors whether the user's eyes are open. If their eyes have been closed for too long [indicating they're falling asleep], the device alerts them with a vibration.
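A rough sketch of the closed-eye timing logic, using OpenCV's stock Haar eye cascade as a stand-in for our modified pipeline. The frame threshold and the vibration call are illustrative placeholders:

```python
import cv2

# Stock Haar cascade shipped with OpenCV; our actual pipeline is modified.
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

CLOSED_FRAME_LIMIT = 15  # illustrative: roughly 1.5s at 10 fps before alerting

def trigger_vibration():
    # Placeholder: the real device drives a vibration motor via the Pi's GPIO.
    print("ALERT: eyes closed too long -- vibrating!")

cap = cv2.VideoCapture(0)
closed_frames = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Reset the counter whenever open eyes are visible; otherwise keep counting.
    closed_frames = 0 if len(eyes) > 0 else closed_frames + 1
    if closed_frames >= CLOSED_FRAME_LIMIT:
        trigger_vibration()
        closed_frames = 0
cap.release()
```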
Depending on the test results and the hardware readings, a driver who feels uncomfortable continuing can have the app send their location to one of their friends. Once they complete the drive, they can also let that friend know they arrived safely.
How we built it
We built our mobile app in JavaScript with the React Native framework. For facial recognition we used a modified OpenCV pipeline, running the Python code on a Raspberry Pi that powers the camera and uses a light sensor to detect darkness and illuminate the user's face if needed. We set up our server on Amazon Web Services (AWS), and our text features are powered by Twilio.
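For the text features, Twilio's Python helper library makes sending an SMS only a few lines. A sketch of how a contact alert could be sent; the credentials, phone numbers, and location string are placeholders:

```python
from twilio.rest import Client

# Credentials come from the Twilio console; the values here are placeholders.
client = Client("ACCOUNT_SID", "AUTH_TOKEN")

def notify_contact(contact_number, driver_location):
    """Text a trusted contact the driver's current location via Twilio."""
    client.messages.create(
        body=f"Your friend feels too drowsy to keep driving. Location: {driver_location}",
        from_="+15551234567",  # our Twilio number (placeholder)
        to=contact_number,
    )

notify_contact("+15557654321", "34.0522 N, 118.2437 W")
```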
Challenges we ran into
Getting the facial recognition API to work correctly with the web camera was difficult. The camera was low quality, which led to a slower frame rate; to cope, we periodically cleared data from the server. In addition, the Raspberry Pi 3 we used for most of the hackathon could not handle TensorFlow, which we fixed by switching to a Raspberry Pi 4.
We also faced several challenges deciding how sound should work in our app. We wanted to wake sleeping users without startling them awake. Striking the right balance between the sounds we created and substituting vibrations at certain points was a challenge.
Accomplishments that we're proud of
We combined many different technologies. Syncing data from the web camera and the contacts database to the app was a big accomplishment for us, as was getting the drowsiness recognition API to function.
What we learned
We learned a lot about React Native, Twilio, AWS, and connecting hardware and software components. We also learned how to use TensorFlow and OpenCV to detect drowsiness, gained experience with Django and Python while creating the RESTful API, and used Python further to collect data from our light sensor.
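On the Pi, reading the light sensor can be as simple as the sketch below. We use gpiozero's LightSensor as an illustration; the GPIO pin and the LED behavior are assumptions:

```python
from gpiozero import LightSensor
from time import sleep

sensor = LightSensor(4)  # LDR wired to GPIO4 (illustrative pin choice)

def ensure_face_is_lit():
    """Turn on the illumination when ambient light is too low for the camera."""
    if not sensor.light_detected:
        print("Too dark: turning on face light")  # real code toggles an LED here
    else:
        print("Enough ambient light")

while True:
    ensure_face_is_lit()
    sleep(1)
```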
What's next for ASTIR
There are many improvements that could make ASTIR better, including adding more data collection methods, creating more options for safely getting off the road, helping drowsy drivers find less populated routes, and alerting awake drivers to the presence of drowsy drivers around them. In the future, we would also like to expand ASTIR into a more general safe-driving application rather than one focused solely on drowsy driving.
Built With
- amazon-web-services
- camera
- css
- django
- javascript
- keras
- opencv
- python
- raspberry-pi4
- react-native
- tensorflow
- twilio