Inspiration

Communication barriers can isolate millions of deaf and hard-of-hearing individuals. We were inspired to build SignForDeaf to foster inclusivity by bridging the gap between sign language users and the hearing world using AI.

What it does

SignForDeaf is a real-time AI-powered web application that detects hand gestures through a webcam and translates sign-language alphabet gestures (A–Z) into readable text and speech using a deep learning model.

How we built it

Frontend: Built with React and TailwindCSS, integrated with MediaPipe to detect hand landmarks.
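MediaPipe Hands returns 21 (x, y, z) landmarks per detected hand, and these are typically normalized before being fed to a model. A minimal sketch of such a preprocessing step (the exact normalization used by the project is an assumption):

```python
from typing import List, Tuple

Landmark = Tuple[float, float, float]

def normalize_landmarks(landmarks: List[Landmark]) -> List[Landmark]:
    """Translate landmarks so the wrist (index 0) sits at the origin,
    then scale so the farthest coordinate has magnitude 1.
    MediaPipe Hands yields 21 (x, y, z) landmarks per hand."""
    wx, wy, wz = landmarks[0]
    shifted = [(x - wx, y - wy, z - wz) for x, y, z in landmarks]
    scale = max(abs(c) for pt in shifted for c in pt) or 1.0
    return [(x / scale, y / scale, z / scale) for x, y, z in shifted]
```

Normalizing like this makes predictions insensitive to where the hand appears in the frame and how close it is to the camera.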

Backend: Created using FastAPI and PyTorch to serve the trained LSTM model for sign recognition.

Model: A custom LSTM neural network trained on 3D hand landmark datasets for each letter of the alphabet (A–Z).
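A sketch of what such a landmark-sequence classifier might look like in PyTorch; the layer count and hidden size here are assumptions, not the project's actual hyperparameters:

```python
import torch
import torch.nn as nn

class SignLSTM(nn.Module):
    """LSTM classifier over hand-landmark sequences.
    Input: (batch, frames, 63) -- 21 MediaPipe landmarks x (x, y, z).
    Output: logits over the 26 letters A-Z."""
    def __init__(self, n_features: int = 63, hidden: int = 128, n_classes: int = 26):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(x)          # (batch, frames, hidden)
        return self.fc(out[:, -1, :])  # classify from the final time step
```

Classifying from the final hidden state is a common choice for fixed-label gesture recognition, since the LSTM accumulates the whole motion into that state.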

Integration: Axios is used to connect the frontend with the backend /predict API for real-time inference.

Challenges we ran into

Collecting and cleaning consistent gesture data for all 26 letters.

Achieving acceptable model accuracy; similar-looking signs were frequently misclassified.

Integrating MediaPipe with React and syncing webcam input with backend predictions.

CORS and API integration issues during frontend-backend deployment.
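CORS errors like these are typically resolved in FastAPI by whitelisting the frontend origin with `CORSMiddleware`. A minimal config sketch (the dev-server URL is an assumption; Vite defaults to port 5173):

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Allow the React dev server origin; the URL below is an assumption.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:5173"],  # Vite's default dev port
    allow_methods=["POST"],
    allow_headers=["*"],
)
```

In production the deployed frontend URL would replace the localhost origin.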

Accomplishments that we're proud of

Successfully built an end-to-end system translating sign gestures into text and speech.

Integrated real-time hand tracking through the webcam.

Trained a working LSTM model with minimal data.

Deployed the backend API and React frontend in a functional prototype.

What we learned

Hands-on experience with MediaPipe, FastAPI, and PyTorch model training.

How to bridge frontend and backend in real-time AI applications.

Understanding the importance of clean, consistent training data.

Deployment strategies and teamwork in a time-bound hackathon environment.

What's next for SignForDeaf

Improve model accuracy with more diverse training data.

Support complete sentences and multi-sign recognition.

Add support for multiple sign languages (e.g., BSL, ISL).

Deploy on cloud and make it accessible via mobile devices.

Collaborate with organizations supporting the deaf community.

Built With

  • react
  • typescript
  • tailwindcss
  • mediapipe
  • fastapi
  • pytorch
  • axios
  • vite
  • github
  • render