Inspiration
There are around 11 million deaf or hearing-impaired individuals in the United States (about 3.6% of the population), many of them eager for a fully immersive experience at the 2028 LA Olympics. However, hearing loss can make live sports commentary hard to follow. That commentary often carries vital information, from athlete introductions to the excitement of key moments in the game. By making this information accessible, we can enrich their experience at this landmark event.
Furthermore, for those attending the event in person, communication can be difficult. A tool that translates spoken language into sign language would empower volunteers and fellow attendees to assist deaf individuals effectively, improving organizational efficiency and service quality during the LA Olympics.
What it does
The Talk to Sign tool translates spoken or written English into American Sign Language (ASL). It features:
· Sign Language Video Generation: Creates animated sign language videos based on input text or audio.
· Human-Like 3D Models: Utilizes lifelike 3D avatars to demonstrate sign language gestures.
· Mobile-Friendly Interface: Designed for accessibility on both mobile and desktop devices.
· Adjustable Model Size: Users can customize the window size of the signing model for better visibility.
How we built it
· Frontend (User Interface)
- HTML: Provides the structure of the web pages for users to interact with.
- TailwindCSS: Ensures a clean, modern, and responsive design with minimal code.
- JavaScript: Enables dynamic interactions, such as capturing audio input and triggering speech-to-sign translation.
· Audio Recording
- MediaRecorder API: Facilitates recording audio through the browser’s microphone, ensuring smooth input collection.
· Speech Recognition
- AssemblyAI API: A cloud-based API used to convert spoken audio input into text with high precision.
· Text Preprocessing
- Natural Language Toolkit (NLTK): Tokenizes, cleans, and processes the recorded text, ensuring it’s ready for sign language conversion by removing unnecessary words or punctuation.
· 3D Animation for Sign Display
- Blender 3D: Used to design and animate a 3D character that visually demonstrates the corresponding sign language based on processed text.
· Backend and Database Management
- Django Framework: Handles backend logic, including login, signup, and session management.
- SQLite Database: Stores user data such as usernames and password hashes, supporting secure authentication.
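The text-preprocessing step above can be sketched in a few lines. Our build uses NLTK's tokenizer and stopword lists; the snippet below is a minimal, self-contained stand-in for that step (the stopword set is an illustrative subset, not our full list):

```python
import re

# Illustrative subset of filler words to drop before sign lookup;
# the real build uses NLTK's tokenizer and stopword corpus instead.
STOPWORDS = {"a", "an", "the", "is", "are", "to", "of"}

def preprocess(text: str) -> list[str]:
    """Turn recognized speech text into a list of sign-ready tokens."""
    # Lowercase, keep alphabetic tokens, drop punctuation.
    tokens = re.findall(r"[a-z']+", text.lower())
    return [t for t in tokens if t not in STOPWORDS]

# Example: a line of commentary reduced to its content words.
print(preprocess("The athlete is ready to compete!"))
# -> ['athlete', 'ready', 'compete']
```

Each remaining token is then looked up against the sign dictionary and rendered by the Blender-animated avatar.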
Challenges we ran into
· Finding a free, publicly available dictionary database for American Sign Language took considerable time.
· Fast translation requires efficient speech-to-text conversion; we solved this by combining two existing APIs.
· American Sign Language uses a different syntax from English, so we reorder the recognized sentence rather than translating word by word.
· Not every English word has a corresponding sign, especially proper names. We handle such words by fingerspelling each letter.
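The fingerspelling fallback in the last point works as a simple dictionary lookup with a per-letter escape hatch. The sketch below illustrates the idea; `SIGN_DICTIONARY` and the `.anim` clip names are hypothetical stand-ins for our actual ASL dictionary and the Blender animation clips it maps to:

```python
# Hypothetical stand-in for the ASL dictionary; real entries point at
# Blender animation clips played by the 3D avatar.
SIGN_DICTIONARY = {"hello": "hello.anim", "welcome": "welcome.anim"}

def words_to_signs(words: list[str]) -> list[str]:
    """Map each word to its sign clip, fingerspelling words with no entry."""
    signs = []
    for word in words:
        key = word.lower()
        if key in SIGN_DICTIONARY:
            signs.append(SIGN_DICTIONARY[key])
        else:
            # No dictionary entry (e.g. an athlete's name):
            # fingerspell it one letter-sign at a time.
            signs.extend(f"letter_{ch}.anim" for ch in key if ch.isalpha())
    return signs

print(words_to_signs(["Hello", "Ana"]))
# -> ['hello.anim', 'letter_a.anim', 'letter_n.anim', 'letter_a.anim']
```

In the full pipeline this runs after the syntax-reordering step, so the clip sequence already follows ASL word order rather than English word order.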
Accomplishments that we're proud of
We are proud to have developed the Talk to Sign tool, which addresses a significant need for the deaf and hearing-impaired community during major events like the 2028 LA Olympics. Key accomplishments include:
· Innovative Translation: Successfully creating a tool that translates spoken English into American Sign Language using a 3D model, making the experience more accessible for users.
· User-Friendly Interface: Designing an intuitive interface that works seamlessly on mobile devices, allowing easy access for a wide range of users.
· Integration of Advanced Technologies: Leveraging the MediaRecorder API and AssemblyAI API for high-quality audio capture and speech recognition, ensuring accurate translation.
· Robust Backend Framework: Building a reliable backend using Django and SQLite, ensuring secure user management and data handling.
· Effective Problem-Solving: Overcoming challenges such as finding a suitable ASL dictionary and adapting translations for different syntactic structures, demonstrating our ability to innovate under constraints.
What we learned
Throughout the development of Talk to Sign, we gained valuable insights, including:
· The Importance of Accessibility: Understanding the critical role of accessibility tools in enhancing experiences for the deaf and hearing-impaired community, especially at large events.
· Technical Integration: Gaining hands-on experience in integrating various technologies, from audio recording to 3D animation, and recognizing the importance of choosing the right tools for specific tasks.
· Language Nuances: Learning about the complexities of American Sign Language, including its syntax and the need for contextual understanding when translating from English.
· User-Centric Design: Realizing the significance of user feedback in creating a functional and enjoyable interface, and the necessity of testing with real users to refine our product.
· Collaboration and Adaptability: Recognizing the value of teamwork and adaptability in overcoming challenges, such as finding suitable resources and adjusting our approach based on technical limitations.
What's next for Talk to Sign
Looking ahead, we aim to evolve this tool into a real-time translator, allowing users to continuously access sign language translation more conveniently. Additionally, we plan to develop a multilingual translator, expanding beyond English and American Sign Language, so that more individuals with hearing challenges worldwide can benefit from this technology.
