ASL Translator
American Sign Language (ASL) translator using computer vision and deep learning. Recognizes both individual letters (A-Z) and common gesture phrases (HELLO, GOODBYE, THANK_YOU).
Inspiration
We wanted to develop something involving video tracking and cognitive training.
What it does
- Letter Mode: Detects and translates individual ASL letters in real-time
- Gesture Mode: Recognizes full gesture phrases like "HELLO", "GOODBYE", and "THANK YOU"
- Text Building: Combines letters and gestures to form complete sentences (a simple sketch of this idea follows the list)
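One plausible way such a text builder could work (an assumption about the approach, not the project's actual logic) is to append spelled letters to the word in progress and insert recognized gesture phrases as whole words:

```python
class TextBuilder:
    """Hypothetical sentence buffer combining spelled letters and gesture phrases."""

    def __init__(self):
        self.parts = []

    def add_letter(self, letter):
        # Append to the word currently being finger-spelled, or start a new one
        if self.parts and not self.parts[-1].endswith(" "):
            self.parts[-1] += letter
        else:
            self.parts.append(letter)

    def add_gesture(self, phrase):
        # Gesture labels like "THANK_YOU" become whole words separated by spaces
        if self.parts and not self.parts[-1].endswith(" "):
            self.parts[-1] += " "
        self.parts.append(phrase.replace("_", " ") + " ")

    def text(self):
        return "".join(self.parts).strip()
```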
How it was built
- MediaPipe: Hand landmark detection (21 points per hand); see the extraction sketch after this list
- TensorFlow: Deep learning models
- OpenCV: Real-time video processing
- Python: Core implementation
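The general pipeline these pieces form is: OpenCV captures webcam frames, MediaPipe Hands extracts 21 landmarks per hand, and the normalized coordinates become the feature vector for the TensorFlow models. Below is a minimal sketch of that loop; the function name and parameters are illustrative assumptions, not the project's actual code.

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def landmark_frames(camera_index=0, max_hands=1):
    """Yield one (21 x 3) list of normalized hand landmarks per webcam frame."""
    cap = cv2.VideoCapture(camera_index)
    with mp_hands.Hands(max_num_hands=max_hands,
                        min_detection_confidence=0.5) as hands:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV captures frames in BGR
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks:
                hand = results.multi_hand_landmarks[0]
                # 21 landmarks, each with normalized x, y, z coordinates
                yield [(lm.x, lm.y, lm.z) for lm in hand.landmark]
    cap.release()
```

Each yielded frame can be flattened into a 63-value feature vector for the letter model, or buffered into a short sequence of frames for the gesture model.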
Models:
- Letter Recognition: Dense neural network trained on 80,000+ images
- Gesture Recognition: LSTM network for temporal sequence analysis (illustrative sketches of both models follow)
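For illustration, the two model types above can be expressed in Keras roughly as follows; the layer sizes, sequence length, and class counts are assumptions rather than the trained architectures.

```python
from tensorflow.keras import layers, models

NUM_FEATURES = 21 * 3    # 21 hand landmarks x (x, y, z)
NUM_LETTERS = 26         # A-Z
SEQUENCE_LENGTH = 30     # assumed number of frames per gesture clip
NUM_GESTURES = 3         # HELLO, GOODBYE, THANK_YOU

# Letter model: a dense network that classifies a single frame of landmarks
letter_model = models.Sequential([
    layers.Input(shape=(NUM_FEATURES,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_LETTERS, activation="softmax"),
])

# Gesture model: an LSTM that classifies a sequence of landmark frames
gesture_model = models.Sequential([
    layers.Input(shape=(SEQUENCE_LENGTH, NUM_FEATURES)),
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(32),
    layers.Dense(NUM_GESTURES, activation="softmax"),
])

letter_model.compile(optimizer="adam",
                     loss="sparse_categorical_crossentropy",
                     metrics=["accuracy"])
gesture_model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
```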
What's next
- Add more gesture phrases (YES, NO, SORRY, HELP, etc.)
- Sentence auto-completion and word suggestions
- Multi-hand support for two-handed signs
Try it out
git clone https://github.com/acd5849/ASL-Translator-HACKPSU ASL_Translator
cd ASL_Translator
Set up a virtual environment
python3 -m venv venv
source venv/bin/activate
Install dependencies
pip install -r requirements.txt
Run the detector
python3 python/combined_detector.py
Built at HackPSU 2025
Built With
- kaggle
- mediapipe
- opencv
- python
- tensorflow