Inspiration

Language is the most powerful source of community. In today's globalized world, countless programs and tools like Rosetta Stone and Duolingo bring communities together by expanding a language's reach. But what about American Sign Language? Our team sought to bridge this gap.

What it does

Our program uses a Python-based OpenCV pipeline to capture video in real time. Through a series of image-processing steps, we isolate the hand and pinpoint the edges of the fingers. We then feed the processed frame through a neural network trained on the ASL alphabet. Currently, the program attempts to identify the letter the user is signing and prints it to the console.
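A minimal sketch of that capture-and-classify loop is below. The model file, the 64x64 grayscale input, and the use of Keras are assumptions for illustration; our actual training setup may differ. The letter set omits J and Z, which require motion.

```python
# Minimal sketch of the real-time capture-and-classify loop.
# Assumptions (not specified above): a Keras model saved as
# "asl_alphabet.h5" and a 64x64 grayscale input.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

LETTERS = "ABCDEFGHIKLMNOPQRSTUVWXY"  # static ASL letters; J and Z need motion

model = load_model("asl_alphabet.h5")  # hypothetical trained classifier
cap = cv2.VideoCapture(0)              # default webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Prepare the frame for the network (stand-in for our custom
    # processing, sketched in the next section).
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    batch = (cv2.resize(gray, (64, 64)) / 255.0).reshape(1, 64, 64, 1)

    probs = model.predict(batch, verbose=0)[0]
    print(LETTERS[int(np.argmax(probs))])  # predicted letter to the console

    cv2.imshow("Rosetta Sign", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # quit on 'q'
        break

cap.release()
cv2.destroyAllWindows()
```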

How I built it

To build this project, we relied heavily on OpenCV to perform much of the recognition and detection, layering our own preprocessing on top so the frames better suit what the library's functions expect.
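Roughly, that preprocessing looks like the sketch below: an HSV skin-color mask to isolate the hand, followed by Canny edge detection. The threshold values here are illustrative assumptions, not our exact numbers.

```python
# Illustrative hand-isolation and edge-detection pass with OpenCV.
# The HSV skin-color bounds and Canny thresholds are rough assumptions.
import cv2
import numpy as np

def process_frame(frame):
    # Keep only skin-like pixels to isolate the hand.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([0, 30, 60]), np.array([20, 150, 255]))

    # Smooth the mask to suppress speckle before finding edges.
    mask = cv2.GaussianBlur(mask, (5, 5), 0)

    # Canny traces the finger contours inside the masked region.
    masked = cv2.bitwise_and(frame, frame, mask=mask)
    edges = cv2.Canny(cv2.cvtColor(masked, cv2.COLOR_BGR2GRAY), 50, 150)

    cv2.imshow("processed", edges)  # the edge display shown alongside the feed
    return edges
```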

Challenges I ran into

In the limited time span of a hackathon, it's difficult to procure a well-trained, unbiased model. Our model isn't very accurate yet, but it's a step in the right direction.

Accomplishments that I'm proud of

Fantastic edge detection and display of the processed image.

What I learned

Machine learning requires extensive and dedicated training to actually be as effective as people think it is.

What's next for Rosetta Sign

Our original intent was to create a tutoring app that poses a letter or gesture to the user, recognizes when the user signs it correctly, and adjusts the difficulty of the next prompt accordingly. Eventually, the user should be able to gain a decent grasp of ASL.
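A speculative sketch of that adaptive loop follows; recognize_sign and the letter groupings are hypothetical placeholders, not implemented features.

```python
# Speculative sketch of the planned tutoring loop. recognize_sign() and
# the letter groupings are hypothetical placeholders.
import random

LEVELS = ["ABC", "DEFG", "HIKL", "MNOPQ"]  # illustrative easy-to-hard groups

def tutor_session(recognize_sign, rounds=10):
    level = 0
    for _ in range(rounds):
        target = random.choice(LEVELS[level])
        print(f"Sign the letter: {target}")
        if recognize_sign() == target:               # e.g. the classifier loop above
            level = min(level + 1, len(LEVELS) - 1)  # move to a harder group
        else:
            level = max(level - 1, 0)                # ease back down
```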

Built With

python, opencv
