Strokes are hard to recover from mentally, let alone physically. Stroke patients are often put through the same mundane tasks of moving their bodies before even simple motor control has been re-established. So we wondered: what could be cool and inspiring for stroke patients to move toward? Our answer: what if a robot hand moved along with you? When a person has a clear goal and can observe real progress, they succeed faster. Here the goal is to move the hand more quickly, and that progress is made visible through the robot hand.
A second application is sign language detection. As a way to spread awareness of an often-overlooked language, we decided to use OpenCV to detect hand signals and move an Arduino-powered robotic hand when the hand position is correct. Not only is it cool, but practicing the signs by habit makes the language easier to learn.
What it does
The brains of the operation is an image recognition algorithm that uses the computer's camera to detect certain hand symbols, which are then mirrored by an Arduino-powered hand. If you clench your fist, so does the hand; if you release it, so does the hand; and by extension other symbols can be copied in real time.
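One common way to tell a clenched fist from an open hand in a contour-based pipeline is "solidity": the ratio of the hand contour's area to its convex hull's area. A fist is nearly convex, while spread fingers leave gaps. This is a hedged sketch of that idea; the helper names and the 0.85 threshold are our own illustration, not values taken from the project.

```python
def solidity(contour_area: float, hull_area: float) -> float:
    """Ratio of the contour's area to its convex hull's area."""
    return contour_area / hull_area if hull_area > 0 else 0.0


def classify_hand(contour_area: float, hull_area: float,
                  threshold: float = 0.85) -> str:
    """A clenched fist is nearly convex (high solidity); spread
    fingers leave gaps between contour and hull (low solidity)."""
    return "fist" if solidity(contour_area, hull_area) >= threshold else "open"
```

With OpenCV the two areas would come from `cv2.contourArea(contour)` and `cv2.contourArea(cv2.convexHull(contour))` on the detected hand contour.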
How we built it
First we started with the image detection. Using computer-generated data sheets (in the form of .xml files) and computer vision algorithms based on adaptive thresholding and contours (such as Canny edge detection), alongside histogram pixel balancing, we were able to build an efficient detector for the human hand. After that we used the Eclipse MQTT library to send the data from the computer vision algorithm to the Arduino for processing (this was one of the hardest pieces of the project to put together).
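The MQTT link between the vision code and the Arduino boils down to agreeing on a topic and a tiny payload the microcontroller can parse. Here is a minimal sketch of one way that serialization could look; the topic name and the one-byte payload format are assumptions for illustration, not the project's actual wire format.

```python
# Hypothetical topic the Arduino side would subscribe to.
TOPIC = "hand/state"


def encode_state(is_fist: bool) -> bytes:
    """One-byte payload: b'1' means clench, b'0' means release."""
    return b"1" if is_fist else b"0"


def decode_state(payload: bytes) -> bool:
    """Inverse of encode_state, as the Arduino would parse it."""
    return payload == b"1"


# With the Eclipse Paho Python client, publishing would look roughly like:
#   import paho.mqtt.client as mqtt
#   client = mqtt.Client()
#   client.connect("broker.example.com", 1883)  # hypothetical broker
#   client.publish(TOPIC, encode_state(True))
```

Keeping the payload to a single byte makes the Arduino-side parsing trivial and avoids any string handling on the microcontroller.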
Challenges we ran into
Several core problems, chiefly insufficient hardware, stopped our team from fully building the arm: there were no H-bridges available, and all the 3D printers broke, so we used cardboard instead. Unfortunately these obstacles halted our mechanical progress entirely, but we have decided to keep developing the project with our own materials.
Accomplishments that we're proud of
Having the learning algorithms work 100% of the time, with near-instant recognition, is one of our greatest accomplishments.
What we learned
We learned how to create cloud servers, host servers with clients, and exchange data between them. We also picked up new OpenCV and image detection algorithms, and the Firebase Google Cloud library.
What's next for Sign Language Detection
We hope to expand to other uses of hand symbols, perhaps to assist with driving, or to support the automated detection and robotics communities.