Inspiration

The inspiration for SignBridge stems from a desire to promote inclusivity for the deaf and hard-of-hearing communities. Traditional ways of learning sign language are often limited and inaccessible. In light of this concern, we set out to build a platform that uses emerging technology to provide an interactive, personalized learning experience. By leveraging artificial intelligence, our website lets users learn sign language at their own pace with individualized hand recognition, streamlining the learning experience. We root ourselves in the right to effective communication; through this platform, we aspire to promote understanding, empower individuals, and create a more inclusive society where everyone can connect and communicate freely.

What it does

Our project lets anyone learn ASL through an interactive approach. After signing in, users have access to a learning mode, a partner mode, and friend requests. Learning mode lets users practice characters of their choosing, using their webcam to validate their form and collecting points for their account as they go. Partner mode allows two users to learn sign language together over real-time webcam displays. Lastly, users can connect with friends through friend requests.

How we built it

For the backend of our site, we used FastAPI to handle API calls and Firebase Firestore as our database. Users authenticate with Firebase Authentication, and our FastAPI endpoints write important user data, such as earned points and unique IDs, to the database. We also used FastAPI websockets for real-time web connections.

For the front end, we built a React app with TypeScript, CSS, and Tailwind CSS. The front end is page-heavy, but most of the processing for core functionality is sent to backend APIs that call the machine learning model.

For the machine learning model, we used MediaPipe's hand-landmark system; the landmarks were converted into coordinate vectors and fed into k-NN and Random Forest classifiers from scikit-learn. OpenCV accesses the webcam and locates the user's hand, and each frame is preprocessed and run through the model to classify the sign being held.
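The landmark-to-classifier step above can be sketched as follows. This is an illustrative assumption, not the project's code: MediaPipe's hand landmarker emits 21 points with (x, y, z) coordinates per hand, giving 63-dimensional vectors, which are stood in for here by synthetic clusters rather than real sign data.

```python
# Sketch: 63-dim "landmark" vectors (21 points * x,y,z) classified with
# k-NN and Random Forest, as in the pipeline described above.
# The data is synthetic; real vectors would come from MediaPipe.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_classes, per_class, dim = 3, 40, 63  # dim = 21 landmarks * (x, y, z)

# One tight synthetic cluster per sign class (placeholder for real signs).
X = np.vstack([rng.normal(loc=c, scale=0.1, size=(per_class, dim))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), per_class)

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Classify one new frame's landmark vector (drawn near class 1's cluster).
frame_vec = rng.normal(loc=1, scale=0.1, size=(1, dim))
knn_pred = knn.predict(frame_vec)[0]
forest_pred = forest.predict(frame_vec)[0]
```

In the real pipeline, `frame_vec` would be produced per webcam frame by OpenCV capture followed by MediaPipe landmark extraction.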

Challenges we ran into

Accomplishments that we're proud of

Deployed the frontend and backend using ngrok
Successfully applied an ML model to a real-time video connection
Successfully trained the model across ~16,000 images with 30+ classifications

What we learned

Implementing a database and its elements throughout an entire site
Working with websockets and video streams in real time
Website certificates are no fun

What's next for SignBridge

Acquiring more data to create a more robust model
Including more data fields, like hand gestures and arm motions, to learn phrases
Implementing quiz sections in the learning mode
Utilizing a confidence metric to correct the user's form

Built With
