This project grew out of a goal I'd had for a long time: building a system that translates American Sign Language (ASL) into English and converts it into speech using AI and computer vision. I had tried to build something like this on my own before, but I was never able to fully make it work. This hackathon gave me the opportunity to return to that idea, this time with a team, and finally bring it to life.

We built a real-time ASL translation system using a camera, a machine learning model I trained myself for gesture recognition, and a speech output system that converts translated words into spoken English. The project combines AI, computer vision, and speech synthesis into a tool that can help make communication more accessible.

Building the project was far from easy. We faced many challenges throughout development: model training issues, debugging errors that took hours to resolve, integrating multiple components into one working pipeline, and tight hackathon time constraints. At one point, one of our team members arrived 10 hours late because of work, so the rest of us had to adapt quickly and redistribute tasks to stay on track. Despite these obstacles, we kept pushing forward.
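For readers curious how a camera-to-speech loop like this can be wired together, here is a minimal sketch. It assumes a common stack that the writeup does not confirm: OpenCV for camera capture, MediaPipe Hands for hand landmarks, and pyttsx3 for offline text-to-speech. The `classify_gesture` function is a placeholder standing in for the trained gesture model, so this illustrates the shape of the pipeline rather than our exact implementation.

```python
# Illustrative sketch only: OpenCV + MediaPipe + pyttsx3 are assumptions,
# not necessarily the stack used in the actual project.
import cv2
import mediapipe as mp
import pyttsx3


def classify_gesture(hand_landmarks):
    """Placeholder for the trained gesture-recognition model.

    The real project used a custom-trained model; this stub returns a
    fixed label so the pipeline runs end to end for demonstration.
    """
    return "hello"


def main():
    engine = pyttsx3.init()                      # text-to-speech engine
    hands = mp.solutions.hands.Hands(max_num_hands=1)
    cap = cv2.VideoCapture(0)                    # default webcam
    last_word = None
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB; OpenCV captures frames as BGR.
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            results = hands.process(rgb)
            if results.multi_hand_landmarks:
                word = classify_gesture(results.multi_hand_landmarks[0])
                if word != last_word:            # speak each new word once
                    engine.say(word)
                    engine.runAndWait()
                    last_word = word
            cv2.imshow("ASL translator", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()


if __name__ == "__main__":
    main()
```

In a real system the classifier would consume a window of landmark frames rather than a single snapshot, since many ASL signs involve motion, and the speech step would typically run off the main capture loop to avoid stalling the video feed.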