What it does
HandSpeak AI is a real-time American Sign Language (ASL) translation tool that uses computer vision to recognize hand motions and translate them into text. Designed for individuals who are deaf, hard of hearing, nonverbal, or facing other communication barriers, it captures dynamic ASL gestures and instantly converts them into readable text for seamless communication. HandSpeak AI is hands-free, accessible, and easy to use, making everyday interactions simpler and more inclusive. This AI-powered tool lets ASL users communicate directly with non-signers in real time, bridging language barriers efficiently and accessibly.
How we built it
We built a neural-network pipeline: convolutional networks handle hand detection and hand landmark placement, and a fully connected neural network (FCNN) classifies the landmarks into gestures. Using libraries such as opencv-python, mediapipe, and numpy, we open a live video stream from the device webcam and capture frames of hand gestures. The predicted classification is then sent to an Arduino over a serial connection using pyserial, which enables communication between the model and the Arduino code, and the result is displayed on an LCD screen. An additional ultrasonic sensor detects when to open the camera to look for gestures, acting as the cue to show the next gesture.
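Before the landmarks reach the FCNN, they typically need to be turned into a fixed-length feature vector. Below is a minimal sketch of one common preprocessing step for MediaPipe's 21 hand landmarks; the exact normalization HandSpeak AI uses is an assumption, but centering on the wrist and rescaling makes the features invariant to where the hand sits in the frame.

```python
import numpy as np

def normalize_landmarks(landmarks):
    """Flatten 21 (x, y) hand landmarks into a 42-element feature vector.

    Centers the points on the wrist (MediaPipe landmark 0) and rescales
    to [-1, 1] so the classifier sees the same input regardless of the
    hand's position or distance from the camera. The specific scheme is
    a hypothetical example, not the project's confirmed preprocessing.
    """
    pts = np.asarray(landmarks, dtype=np.float32)
    pts = pts - pts[0]            # translate: wrist becomes the origin
    scale = np.abs(pts).max()
    if scale > 0:
        pts = pts / scale         # rescale coordinates into [-1, 1]
    return pts.flatten()          # shape (42,): ready for the FCNN
```

The resulting vector can be fed directly to a small fully connected classifier, one output unit per gesture class.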
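The model-to-Arduino link can be sketched as a simple text protocol over the serial port. The framing below (one ASCII label per line) is an assumption; with pyserial the `write` callable would come from something like `serial.Serial('/dev/ttyACM0', 9600).write`, and the Arduino sketch would read until the newline and show the label on the LCD.

```python
def send_classification(label, write):
    """Send a predicted gesture label to the Arduino.

    `write` is the serial port's write method (e.g. the write method of
    a pyserial `serial.Serial` object; the port name and baud rate are
    assumptions). Each message is a newline-terminated ASCII string so
    the Arduino side can parse one label at a time.
    """
    write((label + "\n").encode("ascii"))
```

Passing the write method in as an argument keeps the framing testable without hardware, since a fake writer can stand in for the serial port.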