Inspiration
We wanted to build a computer vision / machine learning project that could help those in need by providing new, adaptive ways to communicate.
What it does
Tracks the pose and orientation of the user's hand and compares it against a database of known signs.
How we built it
Using the Leap Motion SDK in Python, we wrote our own logistic regression model for the machine-learning component. We also wrote our own heuristic functions for comparing hand orientations and poses.
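The write-up doesn't include the actual model code, but a from-scratch binary logistic regression classifier like the one described could be sketched roughly as follows. This is a minimal illustration using NumPy, assuming hand poses have already been flattened into numeric feature vectors; the function names are our own, not from the project.

```python
import numpy as np

def sigmoid(z):
    """Logistic function mapping scores to probabilities in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Fit binary logistic regression with batch gradient descent.

    X: (n_samples, n_features) feature matrix, y: 0/1 labels.
    Returns learned weights and bias.
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)            # predicted probabilities
        grad_w = X.T @ (p - y) / len(y)   # gradient of cross-entropy loss
        grad_b = float(np.mean(p - y))
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(X, w, b):
    """Classify each row of X as 0 or 1 at the 0.5 threshold."""
    return (sigmoid(X @ w + b) >= 0.5).astype(int)
```

One binary classifier per sign (one-vs-rest) is a common way to extend this to a multi-sign vocabulary.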
Challenges we ran into
Getting the Leap Motion set up was one of the toughest challenges. Once it was working, we were able to use the SDK to effectively track the pose and orientation of a hand. Another difficulty is hardware-related: the camera can only capture what is directly in front of it, so it could not see fingers occluded by the user's palm or by other fingers.
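A heuristic comparison of hand orientation, of the kind mentioned above, might look like the sketch below: cosine similarity between the palm-normal and pointing-direction vectors of two hands. The Leap Motion SDK does expose these vectors per hand, but the dictionary layout, weights, and function names here are our own illustrative assumptions.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two 3-vectors (1 = same direction)."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def pose_similarity(hand_a, hand_b, w_normal=0.5, w_dir=0.5):
    """Score how alike two hand orientations are in [-1, 1].

    Each hand is a dict with 'palm_normal' and 'direction' 3-vectors
    (hypothetical layout, mirroring what the tracker reports per hand).
    """
    score = w_normal * cosine(hand_a["palm_normal"], hand_b["palm_normal"])
    score += w_dir * cosine(hand_a["direction"], hand_b["direction"])
    return score
```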
Accomplishments that we're proud of
Recognition is accurate and responsive: tracking runs at about 60 fps while we sample at roughly one-tenth that rate. Recognition is robust and quantified.
What we learned
Logistic regression is our friend, and the more cameras we can hook up and use as input, the more accurate we can be. We also got a refresher on the skeletal anatomy of the hand.
What's next for Sign Learning
Integrating more Leap Motion cameras and a user-friendly interface! Also, building out the gesture / sign-language library!