Inspiration

Communication barriers affect millions of people worldwide. According to the World Health Organization, over 70 million people use sign language as their primary form of communication. However, most people don't understand sign language, creating a significant gap in accessibility and inclusion.
I was inspired to create this ASL recognition system after learning about the challenges deaf and hard-of-hearing individuals face in everyday interactions. What if technology could bridge this gap? What if anyone with a webcam could instantly translate ASL letters into text? This project aims to make communication more accessible and help people learn ASL in an interactive, engaging way.
What it does

ASL Webcam is a real-time American Sign Language (ASL) fingerspelling recognition system that uses computer vision and machine learning to translate hand gestures into letters. The system consists of three main components:
Data Collection Tool - Allows users to record their own ASL hand signs by showing different letters to the webcam. The system captures 21 hand landmarks (fingertips, knuckles, wrist, etc.) and normalizes them to be independent of hand size and position.
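The landmark normalization step can be sketched as follows. This is a minimal illustration, not the project's actual code: it assumes MediaPipe-style landmark ordering (index 0 is the wrist) and normalizes by translating the wrist to the origin and dividing by the largest coordinate magnitude, so the resulting feature vector is independent of where the hand is in the frame and how large it appears.

```python
import numpy as np

def normalize_landmarks(landmarks: np.ndarray) -> np.ndarray:
    """Turn 21 (x, y) hand landmarks into a position- and scale-invariant feature vector.

    `landmarks` is a (21, 2) array; index 0 is assumed to be the wrist,
    matching MediaPipe Hands' landmark ordering.
    """
    pts = landmarks.astype(float) - landmarks[0]  # translate: wrist at the origin
    scale = np.abs(pts).max()                     # largest coordinate magnitude
    if scale > 0:
        pts = pts / scale                         # scale coordinates into [-1, 1]
    return pts.flatten()                          # 42-dimensional feature vector
```

The same hand shape then produces (nearly) the same 42 numbers regardless of hand size or position, which is what lets a classifier trained on one person's recordings generalize.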
Model Training Pipeline - Uses the collected data to train a Support Vector Machine (SVM) classifier that learns to recognize different ASL letters based on hand shape and finger positions.
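A training pipeline like this is typically a few lines with scikit-learn. The sketch below uses synthetic data in place of the collected landmark vectors (the real pipeline would load the recorded 42-dimensional features and their letter labels); enabling `probability=True` on the SVC is what makes confidence scores available at prediction time.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for the collected data: 42-dim normalized landmark vectors + letter labels.
rng = np.random.default_rng(42)
X = rng.random((300, 42))
y = rng.choice(list("ABC"), size=300)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# probability=True enables predict_proba, used for confidence scores later.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X_train, y_train)
```

With real data, `clf.score(X_test, y_test)` gives a quick accuracy check before deploying the model.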
Real-time Recognition - Available in both desktop and web versions, the system detects hands in real-time, extracts landmarks, and predicts which ASL letter you're signing with a confidence score.
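The per-frame prediction step can be sketched as a small helper. This is an illustrative function, not the project's code: it assumes a probability-enabled classifier (as trained above) and a hypothetical confidence threshold below which the frame is treated as "no confident letter", which is a common way to suppress flickering predictions in real-time loops.

```python
import numpy as np
from sklearn.svm import SVC

def predict_letter(clf, features, threshold=0.6):
    """Predict an ASL letter from a 42-dim feature vector.

    Returns (letter, confidence); letter is None when the classifier's
    best probability falls below `threshold`.
    """
    probs = clf.predict_proba([features])[0]
    best = int(np.argmax(probs))
    confidence = float(probs[best])
    letter = clf.classes_[best] if confidence >= threshold else None
    return letter, confidence
```

In the desktop version, this would run once per webcam frame after hand detection and landmark normalization, with the predicted letter drawn onto the video feed.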
The web application makes it accessible to anyone with a browser and webcam - no installation required!