Inspiration
Jarvis from Iron Man. Google Glass initial release video: https://www.youtube.com/watch?v=ErpNpR3XYUw.
What it does
Augments reality by providing easy access to information. You can tilt your head up to access the menu, receive incoming messages and notifications, and respond to them, all without taking out your phone.
How I built it
- Used getUserMedia from the HTML5 API to access the mobile camera's video feed
- Used Three.js, a WebGL library, to render that video feed as a stereoscopic scene (for Google Cardboard compatibility)
- Used HTML5 canvas to draw on top of the scene
- Used Facebook Graph API to pull display picture for the user
- Used Twilio API to send and receive text messages
- Used OpenCV for facial detection
- Used image pixel analysis to track the user's finger for input
- Used Face++ API to determine race and gender of faces seen in video feed
- Hosted on Linode Ubuntu instance with a .CO domain
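The camera step above can be sketched roughly as follows, using the modern navigator.mediaDevices form of getUserMedia rather than the prefixed 2014-era API; the element id and helper name are illustrative, not from the original project.

```javascript
// Constraint object for the rear camera, per the MediaDevices spec:
// 'environment' requests the back-facing camera on a phone.
function cameraConstraints() {
  return {
    audio: false,
    video: { facingMode: 'environment' }
  };
}

// Browser-only: attach the stream to a <video> element that Three.js
// can then sample as a texture (guarded so the file also loads
// outside a browser).
if (typeof navigator !== 'undefined' && navigator.mediaDevices) {
  navigator.mediaDevices.getUserMedia(cameraConstraints())
    .then(function (stream) {
      const video = document.getElementById('camera'); // hypothetical element id
      video.srcObject = stream;
      video.play();
    })
    .catch(function (err) {
      console.error('Camera access denied or unavailable:', err);
    });
}
```

From there, the video element can be wrapped in a texture and rendered twice, once per eye, to get the Cardboard-style side-by-side view.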
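The finger-tracking bullet can be illustrated with a toy version of the idea: scan a frame's RGBA pixel buffer for skin-tone-like pixels and take their centroid as the fingertip position. The color threshold here is an assumed heuristic for illustration, not the project's actual classifier.

```javascript
// Toy finger tracker: treats any pixel where red clearly dominates
// green and blue as "skin" and returns the centroid of those pixels.
// `pixels` is a flat RGBA array (e.g. from ctx.getImageData().data).
function trackFinger(pixels, width, height) {
  let sumX = 0, sumY = 0, count = 0;
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const i = (y * width + x) * 4;
      const r = pixels[i], g = pixels[i + 1], b = pixels[i + 2];
      // crude skin heuristic: red-dominant and not too dark
      if (r > 95 && r > g + 15 && r > b + 15) {
        sumX += x;
        sumY += y;
        count++;
      }
    }
  }
  if (count === 0) return null; // no finger-like region found
  return { x: Math.round(sumX / count), y: Math.round(sumY / count) };
}
```

In a real pipeline this would run per frame on a downsampled copy of the video; the background-interference challenge listed below is exactly why a plain color threshold like this is fragile.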
Challenges I ran into
- Background interference when tracking position of finger
- Originally hosted on Heroku, but it was experiencing technical difficulties at the time
- Making the video feed feel lifelike with only one camera (we compromised by zooming out, at the cost of a somewhat blurred image)
- Canvas element animation
Accomplishments that I'm proud of
- Was able to accurately track finger position through video stream
- No performance issues given the large amount of processing required
- Natural feel of head tilt gesture
What I learned
- Canvas elements become tainted when cross-origin images are drawn onto them
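A minimal sketch of that lesson: drawing a cross-origin image (such as the Facebook profile picture) onto a canvas taints it, after which pixel reads like getImageData throw a SecurityError. Setting crossOrigin = 'anonymous' before loading avoids the taint, provided the remote server sends CORS headers. The isCrossOrigin helper and the URL are ours for illustration.

```javascript
// Returns true when imageUrl would count as cross-origin for a page
// served from pageOrigin (the check the browser effectively performs).
function isCrossOrigin(imageUrl, pageOrigin) {
  return new URL(imageUrl, pageOrigin).origin !== new URL(pageOrigin).origin;
}

// Browser-only: load an image so it can be drawn to a canvas without
// tainting it. Requires the remote server to respond with
// Access-Control-Allow-Origin; otherwise the load fails outright.
if (typeof document !== 'undefined') {
  const img = new Image();
  img.crossOrigin = 'anonymous'; // request a CORS-enabled fetch
  img.onload = function () {
    const canvas = document.createElement('canvas');
    canvas.width = img.width;
    canvas.height = img.height;
    const ctx = canvas.getContext('2d');
    ctx.drawImage(img, 0, 0);
    // Safe now: the canvas is untainted, so pixel reads won't throw.
    ctx.getImageData(0, 0, 1, 1);
  };
  img.src = 'https://graph.facebook.com/me/picture'; // illustrative URL
}
```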