๐ŸŽ“ ASLgorithm

Bridging CS Education Through ASL

๐Ÿ’ก Inspiration

Computer science is already hard โ€” imagine trying to follow a fast-paced lecture without access to clear signs for "linked list", "recursion", or "binary tree." Many CS terms have no standardized ASL, and interpreters often struggle to keep up. So we built ASLgorithm: a real-time tool that makes computer science more accessible for Deaf and hard-of-hearing students.

๐Ÿš€ What It Does

ASLgorithm listens to CS lectures and instantly shows:

๐Ÿ–๏ธ The ASL sign for key CS terms as approved by real ASL experts from ASLcore

๐Ÿ“Š A visual/diagram to explain the concept

โฑ๏ธ Timestamp of when it was mentioned

Students get a live feed of concept cards as the lecture unfolds, perfect for real-time learning and building ASL-CS fluency. An AI pass scores each concept card by its importance and frequency in the lecture, so students can sort the feed by most recent or by most important. They can also upload recorded lecture audio and play it back in their own free time.
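The two sort orders can be sketched with a minimal model of a concept card. The field names here (`term`, `importance`, `timestamp`) are illustrative, not our exact schema:

```python
from dataclasses import dataclass

@dataclass
class ConceptCard:
    """One card in the live feed (field names are illustrative)."""
    term: str          # e.g. "recursion"
    importance: float  # AI-assigned relevance score, higher = more important
    timestamp: float   # seconds into the lecture when the term was mentioned

def sort_cards(cards, by="recent"):
    """Sort the feed by most recent mention or by importance."""
    if by == "importance":
        return sorted(cards, key=lambda c: c.importance, reverse=True)
    return sorted(cards, key=lambda c: c.timestamp, reverse=True)

cards = [
    ConceptCard("recursion", importance=0.9, timestamp=120.0),
    ConceptCard("binary tree", importance=0.7, timestamp=300.0),
]
# By importance: recursion first; by recency: binary tree first.
print([c.term for c in sort_cards(cards, by="importance")])
```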

๐Ÿ› ๏ธ How We Built It

๐Ÿง  Chirp API transcribes lectures with high accuracy

๐Ÿ” Gemini 1.5 Flash analyzes transcripts and identify key CS concepts in context

๐Ÿ”— Atlas Database maps CS concepts to their corresponding ASLcore approved signs and visual explanations

๐Ÿ–ฅ๏ธ React + Tailwind powers a clean, real-time UI

โ˜๏ธ Flask + Google Cloud manage file uploads and processing

๐Ÿงพ MongoDB Atlas stores terms and metadata

๐ŸŽต RealTimeSTT and SocketIO create a server that listens for realtime live audio from lecturers

It's a pipeline: live or uploaded audio is captured → transcribed to text → analyzed for CS concepts → matched with ASL signs → displayed in the real-time feed.
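The pipeline can be sketched as a chain of stages. This is a toy end-to-end pass where every stage is a stub; in the real system the three stages call Chirp, Gemini 1.5 Flash, and MongoDB Atlas respectively, and the sign entries are placeholders rather than real ASLcore assets:

```python
KNOWN_TERMS = ["linked list", "recursion", "binary tree"]

SIGN_DB = {  # stand-in for the MongoDB Atlas collection of approved signs
    "linked list": "<ASLcore sign video for 'linked list'>",
    "recursion": "<ASLcore sign video for 'recursion'>",
    "binary tree": "<ASLcore sign video for 'binary tree'>",
}

def transcribe(audio: bytes) -> str:
    """Stand-in for the Chirp speech-to-text call."""
    return "today we cover the linked list and then recursion"

def extract_concepts(transcript: str) -> list[str]:
    """Stand-in for the Gemini concept-extraction step."""
    return [t for t in KNOWN_TERMS if t in transcript]

def match_signs(concepts: list[str]) -> dict[str, str]:
    """Look up each detected concept's approved ASL sign."""
    return {c: SIGN_DB[c] for c in concepts if c in SIGN_DB}

def run_pipeline(audio: bytes) -> dict[str, str]:
    """audio -> transcript -> concepts -> signs, ready for the live feed."""
    return match_signs(extract_concepts(transcribe(audio)))

print(run_pipeline(b"...raw audio bytes..."))
```

Keeping each stage behind its own function is what let us swap the live RealTimeSTT source and the uploaded-file path into the same downstream steps.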

๐Ÿ˜… Challenges

Many CS concepts don't have standardized ASL signs. We had to use signs approved by ASL experts from ASLcore to ensure accurate representation of technical terminology.

Balancing processing speed with accuracy was challenging. We needed to ensure the ASL visualizations appeared quickly enough to be useful during lectures.

Many CS terms have different meanings in different contexts (e.g., "tree" in data structures vs. general usage). We spent significant time refining our AI prompts to identify the correct technical usage.
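Most of that refinement lived in how we assembled the prompt. A minimal sketch of a prompt builder, with illustrative wording rather than our exact production prompt:

```python
def build_concept_prompt(transcript_window: str) -> str:
    """Assemble a disambiguation-aware prompt for the concept extractor.

    The instructions below are illustrative; the key idea is telling the
    model to keep only the technical CS sense of ambiguous words.
    """
    return (
        "You are tagging computer-science terminology in a lecture transcript.\n"
        "Only report a term when it is used in its technical CS sense\n"
        "(e.g. 'tree' as a data structure, not a plant; 'stack' as a\n"
        "data structure, not a pile).\n"
        "Return one term per line.\n\n"
        f"Transcript:\n{transcript_window}"
    )

prompt = build_concept_prompt("next we traverse the tree in order")
print(prompt)
```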

Handling potentially large audio files required thoughtful implementation of storage and caching strategies.
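One simple strategy along these lines is content-addressed storage: key each upload by its hash so re-uploads of the same lecture are deduplicated and downstream transcripts can be cached under the same key. A stdlib-only sketch (the function name and layout are illustrative):

```python
import hashlib
from pathlib import Path

def cache_audio(data: bytes, cache_dir: Path) -> Path:
    """Store an upload under its SHA-256 digest.

    Identical uploads map to the same path, so duplicates are written
    once and transcription results can be cached by the same key.
    """
    cache_dir.mkdir(parents=True, exist_ok=True)
    key = hashlib.sha256(data).hexdigest()
    path = cache_dir / f"{key}.audio"
    if not path.exists():  # skip rewriting duplicate uploads
        path.write_bytes(data)
    return path
```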

Coordinating between different services (speech recognition, AI analysis, and frontend display) required careful architecture planning.

The RealTimeSTT live-audio server's transcription is not yet consistent; hardening it is planned as future work.

๐Ÿ”ฎ Whatโ€™s Next

๐Ÿ“š Add more advanced topics (ML, security, SWE)

๐ŸŽฅ Live Zoom/Meet integration

๐ŸŽต Teachers wear microphones that connect to our live transcription tool

๐Ÿง‘โ€๐Ÿซ Instructor tools for uploading custom terms

๐Ÿง‘โ€๐Ÿคโ€๐Ÿง‘ Community-contributed signs from Deaf developers

โš™๏ธ Built With

React ยท Tailwind CSS ยท MongoDB Atlas Google Cloud Platform ยท Chirp ยท Google Gemini ยท RealTimeSTT ยท SocketIO
