Make sure you check out part two in the description!

Inspiration

I (Nidhi) have a few classmates who are hard of hearing and need interpreters during our online classes over Zoom; sometimes the internet connection goes out, or the interpreters are unavailable. Orsa was developed to help deaf and hard-of-hearing communities feel more included in real-time conversations over video calls, allowing them to work freely and independently and take initiative on their projects and assignments.

What it does

Orsa is a real-time hand gesture detector. It tracks the major landmarks on one's hands, shoulders, and elbows, then determines whether you're signing with your left hand, your right hand, or neither. By detecting and captioning sign language gestures, Orsa strives to enhance communication between deaf and hearing people.
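The left/right/no-hands decision described above can be sketched as a simple check on landmark visibility scores. This is a hypothetical illustration, not Orsa's actual code: the threshold value and function name are assumptions, and real MediaPipe landmarks would supply the visibility numbers.

```python
# Hypothetical sketch: decide which hand(s) are in use from landmark
# visibility scores (0.0-1.0). The 0.5 cutoff is an assumed value.
VISIBILITY_THRESHOLD = 0.5

def classify_hands(left_visibility: float, right_visibility: float) -> str:
    """Return which hand(s) appear to be in use, based on visibility."""
    left = left_visibility >= VISIBILITY_THRESHOLD
    right = right_visibility >= VISIBILITY_THRESHOLD
    if left and right:
        return "both"
    if left:
        return "left"
    if right:
        return "right"
    return "none"
```

In a live pipeline, the visibility inputs would come from each frame's detected hand landmarks rather than fixed numbers.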

How we built it

We based our code on the foundation of Nicholas Renotte's open-source code on GitHub. From there, we used different elements of MediaPipe and OpenCV (cv2) to incorporate features such as on-screen headers. We also used Figma and Zoom for our presentation and recording.
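One of the on-screen elements mentioned above is a header band drawn across the top of each video frame, where caption text can later be rendered (e.g. with cv2.putText). A minimal sketch using only NumPy array slicing; the band height and color are assumptions, not the project's actual values:

```python
import numpy as np

def add_header(frame: np.ndarray, height: int = 40,
               color=(245, 117, 16)) -> np.ndarray:
    """Paint a solid header band across the top rows of a BGR frame."""
    out = frame.copy()
    out[:height, :] = color  # fill the top `height` rows with the band color
    return out

# Example: apply the header to a blank 480x640 BGR frame.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
framed = add_header(frame)
```

In the live app, `frame` would be each image read from the webcam before it is displayed.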

Challenges we ran into

We had a really hard time filming our presentation, as the Orsa application wouldn't work with Zoom running in the background. However, we persevered and are proud of what we were able to accomplish in such a short amount of time :)

Accomplishments that we're proud of

We think our final project itself was a huge accomplishment. Our Figma presentation as well as our demo were incredibly hard to do. We had to really put our minds together to push through this competition.

What we learned

As developers, we learned a lot! Incorporating the different elements of each library was taxing. As a group, we also learned the importance of time management.

What's next for Yellow Team- Orsa (Nidhi, Alina, Victoria)

We definitely want to push this project forward. Our efforts today were minuscule in the grand scheme of things. Next, we aim to expand our sign-language recognition library. If possible, we also want to incorporate more neural networks for better live translations.

Check out our Figma link for the full presentation! Make sure to check the Video Description!
