We were interested in learning about computer vision and wanted to implement something related to sign language. Although the final program doesn't have much to do with sign language, it evolved into what it is now!
What it does
Allows users to draw on a screen without any extra hardware, using color detection, feature recognition, and contouring. The drawing is done by mapping the detected pixels to (x, y) coordinates in a 2-D plane.
How we built it
Using OpenCV's Python bindings, and with help from friends, we were able to use the library effectively.
Challenges we ran into
Since our algorithm depends on color recognition, the program must be calibrated with a range of RGB values to identify a particular color. Under different lighting (outdoors vs. indoors), those RGB values can change drastically, so a dynamic calibration solution is needed.
Accomplishments that we're proud of
Finishing the project! Built with blood, sweat, tears, and no sleep, our team was happy to achieve what we did. As second-year computer science students, this is the most interesting project we have worked on so far, and we are proud to have finished as much of it as we did.
What we learned
Computer vision is a huge field that we were almost completely unfamiliar with before this hackathon. Using OpenCV with Python really showed how much potential there is for programs like this, even for users who don't know anything about OpenCV or computer vision.
What's next for Hack-a-Hand
Once gesture recognition is added, sign language recognition (and therefore translation) becomes a possibility. Further out, expanding this to a third dimension for a hardware-free 3-D modeling (Iron Man-like?) experience could be a powerful contribution.