Ever thought about being Iron Man but don’t have a JARVIS to help you? Well, fret no more! Introducing project JARVIEEES, a program that does what JARVIS does in Iron Man! Becoming the next Tony Stark might not be such a far-fetched idea anymore!
Another inspiration was Nicholas James Vujicic: being less able should not stop you from achieving your dreams, so we incorporated an iris-tracking program into JARVIEEES.
What it does
We created a program using Python, OpenCV and C++ that utilises the webcam pre-installed on most laptops. Through the webcam, we track the user’s movements, which JARVIEEES translates into commands that the laptop then executes. The user is able to customise which commands respond to which motions.
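As a rough sketch of the motion-to-command translation described above (the function names, gestures and command strings here are our own illustrative assumptions, not JARVIEEES’s actual code):

```python
def classify_motion(dx, dy, threshold=20):
    """Classify a tracked displacement (in pixels) into a coarse gesture."""
    if abs(dx) < threshold and abs(dy) < threshold:
        return "idle"
    if abs(dx) >= abs(dy):
        return "swipe_right" if dx > 0 else "swipe_left"
    return "swipe_down" if dy > 0 else "swipe_up"

# A user-customisable mapping from gestures to laptop commands.
COMMANDS = {
    "swipe_left":  "previous_window",
    "swipe_right": "next_window",
    "swipe_up":    "volume_up",
    "swipe_down":  "volume_down",
}

def motion_to_command(prev, curr):
    """Translate two consecutive tracked positions into a command (or None)."""
    gesture = classify_motion(curr[0] - prev[0], curr[1] - prev[1])
    return COMMANDS.get(gesture)
```

Because the mapping is just a dictionary, swapping in a different command for a gesture is a one-line change for the user.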
With inspiration from Nick Vujicic, JARVIEEES now tracks your iris and translates eye movements into executable commands for the laptop.
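A minimal illustration of how an iris position inside the detected eye region could be turned into a gaze direction — the function name and the margin threshold below are our own assumptions, not the project’s actual implementation:

```python
def gaze_direction(iris_x, eye_left, eye_right, margin=0.35):
    """Classify horizontal gaze from where the iris centre sits in the eye box.

    iris_x is the iris centre's x-coordinate; eye_left/eye_right bound the
    detected eye region. Returns "left", "right" or "centre".
    """
    # Normalise the iris position to 0.0 (far left) .. 1.0 (far right).
    t = (iris_x - eye_left) / (eye_right - eye_left)
    if t < margin:
        return "left"
    if t > 1 - margin:
        return "right"
    return "centre"
```

The resulting direction string can then be fed into the same gesture-to-command mapping used for hand motions.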
How do I build it like Tony Stark?
Using OpenCV as our library, and after some experimentation and prototyping in Python, we coded the program in C++ to track the user’s fingertips and iris as vertices. These vertices then serve as reference points for tracking the motion of the user’s hand and iris.
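One simple way to pick out fingertip-like vertices from a hand contour is to take the contour points that are local maxima of distance from the hand’s centroid. This is a hypothetical sketch (in Python, for readability) of that idea, not the project’s C++ code:

```python
import math

def centroid(points):
    """Average position of a list of (x, y) points."""
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

def fingertip_vertices(contour):
    """Return contour points that stick out furthest from the centroid.

    A point is kept if its distance from the centroid is a strict local
    maximum along the (closed) contour -- a rough stand-in for fingertips.
    """
    cx, cy = centroid(contour)
    d = [math.hypot(x - cx, y - cy) for x, y in contour]
    n = len(contour)
    return [contour[i] for i in range(n)
            if d[i] > d[(i - 1) % n] and d[i] > d[(i + 1) % n]]
```

In a real build, the contour itself would come from OpenCV’s contour extraction on a segmented hand image; only the vertex-picking step is shown here.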
The major hurdle we faced was how to track the user without any hindrance from the background, so that the webcam could identify motions with higher precision. Without any sensors or a Kinect for depth perception (the z-axis), we had to work around the 3-dimensional problem and make do with a laptop webcam, which can only detect objects in 2 dimensions.
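One common trick for separating a moving user from a static background with a plain 2-D webcam is frame differencing. The sketch below (our own assumption about the approach, using nested lists as stand-ins for greyscale image arrays) marks pixels that changed significantly between two frames:

```python
def motion_mask(prev_frame, curr_frame, threshold=25):
    """Return a binary mask of pixels that changed between two frames.

    Frames are greyscale pixel grids (lists of rows of 0-255 values);
    a real build would use OpenCV/NumPy arrays instead.
    """
    return [[1 if abs(curr - prev) > threshold else 0
             for prev, curr in zip(prev_row, curr_row)]
            for prev_row, curr_row in zip(prev_frame, curr_frame)]
```

Since the static background barely changes between consecutive frames, only the moving hand (or eyes) survives the threshold, which sidesteps the need for depth information.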
Accomplishments that we're proud of
Stepping out of our comfort zone to explore new alternatives to tackle this issue. We had to make do with the resources we had and figure out how to cross each hurdle. And seeing the demo actually working. Yay~~!
What we learned
How to use OpenCV, Python and C++.
What's next for JARVIEEES
Voice recognition and interaction. Adding more gestures recognisable by JARVIEEES. Accessibility options for the mute: converting sign language into on-screen text, and converting that text into speech.