AUDIENT was born out of a moment of empathy. In a quiet corner of Rutherford Library, a man's hand trembled as he reached for the keyboard, every keystroke a struggle of mistypes and retries. What is effortless for most was, for him, a fight for access. The epiphany: technology that empowers can also exclude.

We created AUDIENT, an AI-enhanced accessibility system that helps individuals with physical or speech disabilities interact with their devices effortlessly. AUDIENT learns each user's unique habits, adapts to their style, and personalizes its responses over time. Through intelligent voice control, users can browse, create, and communicate freely, turning limitation into independence. Unlike existing systems bound by fixed command sets, AUDIENT adapts and grows with each user, which we see as the cornerstone of true accessibility.

Throughout development, our team surveyed the range of physical disabilities and the assistive technologies that serve them, studying how they work, where they fall short, and how AI could bridge those gaps. Using a mix of libraries, frameworks, and adaptive models, we built an early prototype of AUDIENT to demonstrate how intelligent, on-device learning can make assistive technology far more responsive and inclusive.

One major challenge we faced was navigating CAPTCHAs. Because most CAPTCHAs are designed to block automation, finding a way for AUDIENT to guide users through them without breaching security forced us to innovate within strict boundaries.

Building AUDIENT taught us that accessibility begins with empathy. Our next step is to expand its adaptive learning and bring it to more platforms, making independence universal, one voice at a time.
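To give a flavor of the adaptive command handling described above, here is a minimal sketch. It assumes transcribed text arrives from an upstream speech-to-text engine; the command names, the `AdaptiveCommandMapper` class, and the JSON profile format are all hypothetical illustrations, not AUDIENT's actual implementation.

```python
# Minimal, hypothetical sketch of AUDIENT-style adaptive command matching.
# Assumes an upstream speech-to-text engine supplies the transcribed text.
import difflib
import json

# Hypothetical base vocabulary: canonical phrases -> action identifiers.
BASE_COMMANDS = {
    "open browser": "browser.launch",
    "read page": "screen_reader.read_page",
    "dictate message": "dictation.start",
}

class AdaptiveCommandMapper:
    """Resolves spoken phrases to actions and learns per-user phrasings."""

    def __init__(self, profile_path="user_profile.json"):
        self.profile_path = profile_path
        self.aliases = {}  # learned personal phrase -> canonical phrase
        try:
            with open(self.profile_path) as f:
                self.aliases.update(json.load(f))
        except FileNotFoundError:
            pass  # first run: no personalization stored yet

    def resolve(self, utterance):
        """Return the action id for an utterance, or None if nothing matches."""
        phrase = utterance.strip().lower()
        if phrase in self.aliases:  # exact hit on a learned personal phrasing
            return BASE_COMMANDS[self.aliases[phrase]]
        # Fuzzy-match against canonical phrases for first-time variations.
        close = difflib.get_close_matches(phrase, BASE_COMMANDS, n=1, cutoff=0.6)
        return BASE_COMMANDS[close[0]] if close else None

    def teach(self, utterance, canonical_phrase):
        """Persist a user-confirmed phrasing so it resolves exactly next time."""
        self.aliases[utterance.strip().lower()] = canonical_phrase
        with open(self.profile_path, "w") as f:
            json.dump(self.aliases, f)

# Example: the first "fire up the web" misses, the user confirms it means
# "open browser", and from then on it resolves instantly from the profile.
mapper = AdaptiveCommandMapper()
if mapper.resolve("fire up the web") is None:
    mapper.teach("fire up the web", "open browser")
print(mapper.resolve("fire up the web"))  # -> "browser.launch"
```

The key design idea this sketch illustrates is that personalization lives in a small on-device profile rather than a fixed command grammar, so the vocabulary grows with the user instead of forcing the user to memorize it.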
