New consumer technologies and devices in modern cars have caused a sharp rise in driver distraction. With the growing automotive market and the demand for private transportation, this trend is expected to continue over the coming years. Our product aims to help counter this threat to road safety.
What it does
Our program is a virtual Heads-Up Display for automotive applications that projects important vehicle data and navigation UI onto the windscreen, behind the steering wheel. All driver input is vocal, eliminating the need for physical interaction with a touch display in the middle of the dashboard and helping reduce driver distraction.
How we built it
We used the Google Maps SDK for Android and the Google Maps Directions API. Android Studio served as our platform for building the software, where we developed custom language-processing logic along with speech-to-text and text-to-speech components for the virtual assistant.
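The writeup doesn't show how we called the Directions API, but the request is an HTTP GET against a well-known endpoint. As a minimal sketch (the class name, method, and placeholder key are illustrative, not taken from our codebase), building such a request in Java might look like:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class DirectionsRequest {
    private static final String BASE =
        "https://maps.googleapis.com/maps/api/directions/json";

    // Build a Directions API request URL for a driving route between
    // two free-text locations. The caller supplies their own API key.
    public static String buildUrl(String origin, String destination, String apiKey) {
        return BASE
            + "?origin=" + URLEncoder.encode(origin, StandardCharsets.UTF_8)
            + "&destination=" + URLEncoder.encode(destination, StandardCharsets.UTF_8)
            + "&mode=driving"
            + "&key=" + apiKey;
    }

    public static void main(String[] args) {
        System.out.println(buildUrl("Toronto", "Montreal", "YOUR_API_KEY"));
    }
}
```

The JSON response can then be parsed for route polylines and turn-by-turn steps to drive the navigation UI.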
Challenges we ran into
Integrating Google location services into our app was a big challenge, along with natural language processing (parsing). With help from experienced mentors and other people in the WordStream workspace, we were able to debug our issues.
Accomplishments that we're proud of
Getting Google Maps running in our app was a proud moment for us, as was successfully building a speech-to-text program. Natural language processing is a wide field with many possible inputs; by matching on keywords and idioms, we were able to run NLP within our app.
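Our exact keyword set isn't listed above, so the trigger phrases and intent names below are hypothetical. A minimal sketch of the keyword-and-idiom matching approach in Java might look like:

```java
import java.util.LinkedHashMap;
import java.util.Locale;
import java.util.Map;

public class KeywordParser {
    // Maps trigger phrases to assistant intents; insertion order is
    // preserved so earlier (more specific) phrases win on a tie.
    private static final Map<String, String> KEYWORDS = new LinkedHashMap<>();
    static {
        KEYWORDS.put("navigate to", "NAVIGATE");
        KEYWORDS.put("take me to", "NAVIGATE");
        KEYWORDS.put("how fast", "SPEED");
        KEYWORDS.put("fuel", "FUEL_LEVEL");
    }

    // Classify a spoken utterance by scanning for known phrases.
    public static String parse(String utterance) {
        String text = utterance.toLowerCase(Locale.ROOT);
        for (Map.Entry<String, String> entry : KEYWORDS.entrySet()) {
            if (text.contains(entry.getKey())) {
                return entry.getValue();
            }
        }
        return "UNKNOWN";
    }
}
```

The transcript produced by speech-to-text would be fed through `parse`, and the resulting intent would select which HUD action to trigger.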
What we learned
Teamwork and time management are key to an organized and well-thought-out STEM project.
What's next for Auto-VHUD
This device can serve as a platform and springboard for future implementations. Augmented reality and artificial intelligence could be incorporated into the virtual assistant in the future to better fit the needs of different drivers and driving styles. The same virtual assistant design could also be applied to home/kitchen utility robots and connected (IoT) devices.