ASLgorithm
Bridging CS Education Through ASL
Inspiration
Computer science is already hard. Now imagine trying to follow a fast-paced lecture without clear signs for "linked list", "recursion", or "binary tree." Many CS terms have no standardized ASL signs, and interpreters often struggle to keep up. So we built ASLgorithm: a real-time tool that makes computer science more accessible to Deaf and hard-of-hearing students.
What It Does
ASLgorithm listens to CS lectures and instantly shows:
- The ASL sign for each key CS term, as approved by ASL experts from ASLcore
- A visual diagram explaining the concept
- A timestamp of when the term was mentioned
Students get a live feed of concept cards as the lecture unfolds, perfect for real-time learning and building ASL-CS fluency. Each concept card is scored by AI for its importance and frequency in the lecture, and students can sort the feed by most recent or by importance. They can also upload recorded lecture audio to play back on their own time.
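The two sort orders on the concept feed can be sketched in a few lines. The `ConceptCard` fields below are illustrative, not the project's actual schema:

```python
from dataclasses import dataclass

# Hypothetical concept-card record; field names are illustrative.
@dataclass
class ConceptCard:
    term: str
    timestamp: float   # seconds into the lecture when the term was mentioned
    importance: float  # AI-assigned score, higher = more important
    frequency: int = 1 # how many times the term has come up so far

def sort_cards(cards, by="recent"):
    """Order concept cards by most recent mention or by importance."""
    if by == "recent":
        return sorted(cards, key=lambda c: c.timestamp, reverse=True)
    if by == "importance":
        return sorted(cards, key=lambda c: (c.importance, c.frequency), reverse=True)
    raise ValueError(f"unknown sort key: {by}")
```

Ties on importance fall back to frequency, so a term mentioned often outranks a one-off mention with the same score.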
How We Built It
- Chirp API transcribes lectures with high accuracy
- Gemini 1.5 Flash analyzes transcripts and identifies key CS concepts in context
- A MongoDB Atlas database maps CS concepts to their ASLcore-approved signs and visual explanations
- React + Tailwind power a clean, real-time UI
- Flask + Google Cloud manage file uploads and processing
- MongoDB Atlas stores terms and metadata
- RealTimeSTT and Socket.IO power a server that listens to live audio from lecturers in real time
It's a pipeline: live or uploaded audio is captured → transcribed to text → analyzed for CS concepts → matched with ASL signs → displayed in the real-time feed.
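The pipeline above can be sketched as one function per stage plus the wiring between them. The stage callables stand in for the real services (Chirp, Gemini 1.5 Flash, MongoDB Atlas); only the data flow is shown, and the function names are assumptions for illustration:

```python
def process_chunk(audio_chunk, transcribe, extract_concepts, lookup_signs, emit):
    """Run one audio chunk through the full ASLgorithm-style pipeline.

    transcribe       -- speech-to-text (Chirp in the real app)
    extract_concepts -- pull CS terms from the transcript (Gemini 1.5 Flash)
    lookup_signs     -- map terms to ASLcore signs + visuals (MongoDB Atlas)
    emit             -- push a concept card to the live feed (Socket.IO)
    """
    transcript = transcribe(audio_chunk)
    concepts = extract_concepts(transcript)
    cards = lookup_signs(concepts)
    for card in cards:
        emit(card)
```

Keeping each stage behind a plain callable makes the pipeline easy to test with stubs before plugging in the real APIs.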
Challenges
Many CS concepts don't have standardized ASL signs. We had to use signs approved by ASL experts from ASLcore to ensure accurate representation of technical terminology.
Balancing processing speed with accuracy was challenging. We needed to ensure the ASL visualizations appeared quickly enough to be useful during lectures.
Many CS terms have different meanings in different contexts (e.g., "tree" in data structures vs. general usage). We spent significant time refining our AI prompts to identify the correct technical usage.
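One way to steer the model toward the technical sense is to hand it the surrounding sentence and ask it to classify the usage. The prompt wording below is illustrative, not the exact prompt ASLgorithm uses:

```python
def disambiguation_prompt(term, sentence):
    """Build a prompt asking the model whether a term is used in its CS sense.

    The wording is a sketch of the disambiguation idea, not the project's
    actual prompt.
    """
    return (
        f'In the lecture sentence below, is "{term}" used as a computer '
        f'science term (e.g. a data structure or algorithm) or in its '
        f'everyday sense? Answer with exactly one word: "CS" or "general".\n\n'
        f'Sentence: {sentence}'
    )
```

Constraining the answer to a single token keeps the classification fast and cheap, which matters when the feed has to update during a live lecture.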
Handling potentially large audio files required thoughtful implementation of storage and caching strategies.
Coordinating between different services (speech recognition, AI analysis, and frontend display) required careful architecture planning.
The RealtimeSTT live audio server is not yet consistent in its transcription; improving it is planned future work.
What's Next
- Add more advanced topics (ML, security, SWE)
- Live Zoom/Meet integration
- Microphones for teachers that connect to our live transcription tool
- Instructor tools for uploading custom terms
- Community-contributed signs from Deaf developers
Built With
React · Tailwind CSS · MongoDB Atlas · Google Cloud Platform · Chirp · Google Gemini · RealTimeSTT · Socket.IO