Inspired by Google Lens and Google Glass, we wanted to create a device that removes your phone as the intermediary between you and the world around you.
What it does
FourthEye is a stylish wearable device that lets people interact with the world around them through hand gestures and voice recognition. Users can point at objects to look them up, shop for them, and more.
How we built it
We used a Raspberry Pi with a Logitech webcam attached, using the webcam's built-in microphone for voice input and its camera for gesture detection, which we implemented with OpenCV and Google Cloud APIs. A pair of headphones provides audio output.
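To give a feel for the pointing-gesture side, here is a minimal sketch of the core idea: given a binary hand mask, treat the topmost foreground pixel as the extended fingertip. This uses NumPy only and is a deliberate simplification of the real OpenCV pipeline (which ran contour analysis on live webcam frames); the function name and toy mask are our own illustration, not the device's actual code.

```python
import numpy as np

def find_fingertip(mask: np.ndarray):
    """Return (row, col) of the topmost foreground pixel in a binary
    hand mask -- a crude stand-in for fingertip detection. Returns
    None if the mask is empty."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None
    i = int(np.argmin(rows))  # topmost foreground pixel = extended fingertip
    return int(rows[i]), int(cols[i])

# Toy 5x5 mask: a single raised "finger" at column 2.
mask = np.array([
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
])
print(find_fingertip(mask))  # -> (0, 2)
```

The fingertip coordinate can then be mapped into the camera frame to decide which object the user is pointing at.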
Challenges we ran into
OpenCV took a long time to install and repeatedly froze the Raspberry Pi. We also experimented with sound sensor modules and mini spy cameras connected to the Pi.
Accomplishments that we're proud of
Figuring out how to use the webcam as a microphone, and implementing it in the last 30 minutes before submission. Getting voice recognition working.
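Once speech comes back as text, the device still has to decide what to do with it. Below is a hedged sketch of how a recognized utterance might be routed to an action with simple keyword matching; the command names and keywords are hypothetical, and the actual transcription came from a Google Cloud API rather than this stub.

```python
def dispatch(transcript: str) -> str:
    """Map a recognized utterance to a device action.
    The command set here is illustrative only."""
    text = transcript.lower()
    if "search" in text:
        return "web_search"       # e.g. "search for that building"
    if "shop" in text or "buy" in text:
        return "shopping_lookup"  # e.g. "shop for those shoes"
    return "unknown"              # fall through: ask the user to repeat

print(dispatch("Search for that building"))  # -> web_search
print(dispatch("Buy those shoes"))           # -> shopping_lookup
```

Keeping the dispatch logic separate from the recognition backend makes it easy to swap in a different speech API later.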
What we learned
We learned to use OpenCV, the Raspberry Pi, and Google Cloud APIs.
What's next for FourthEye
We hope to add more gestures, better training data for more accurate recognition, and more specific object detection. We also hope to implement wireless interaction between users of the device. Finally, we'd like to make the design simpler, sleeker, and more stylish.