Inspiration: Imagine a tool that empowers users to control a website using only their voice, breaking down barriers to digital accessibility. Our project is a voice-controlled web assistant that translates spoken commands into real-time actions on a webpage. It's more than just accessibility; it's about giving everyone a voice to control their digital world.

What it does: Picture this: you say, "Modify this webpage to make it accessible to people with color blindness," and the site's colors change accordingly. With a few words, users can adjust colors, translate text, highlight important sections, or even magnify content, all hands-free. Our solution leverages JavaScript's versatility to let anyone, especially those with disabilities, navigate, adjust, and work with web content seamlessly.
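To make the idea concrete, here is a minimal sketch of how an LLM-generated snippet might be applied to the page. The function name `applyGeneratedScript`, the use of the `Function` constructor, and the sample generated code are our illustrative assumptions, not necessarily how the project implements it:

```javascript
// Hypothetical sketch: run an LLM-generated JavaScript snippet against the page.
// Wrapping the code in a Function that receives `document` keeps the snippet's
// DOM access explicit instead of relying on globals.
function applyGeneratedScript(code, doc) {
  const fn = new Function("document", code);
  fn(doc);
}

// Example: the kind of snippet the LLM might return for the
// color-blindness command, applied here to a stand-in document object.
const generatedCode =
  'document.body.style.filter = "saturate(0.6) contrast(1.2)";';

const mockDocument = { body: { style: {} } };
applyGeneratedScript(generatedCode, mockDocument);
console.log(mockDocument.body.style.filter); // "saturate(0.6) contrast(1.2)"
```

In a real deployment the snippet would run against the live `document`; executing model-generated code this way is powerful but should be sandboxed or reviewed, since `new Function` evaluates arbitrary JavaScript.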

How we built it: The application is powered by a Node.js backend that manages communication between the frontend, a simple HTML page, and the LLM that performs the text-to-JavaScript conversion.

Challenges we ran into: One challenge was getting the GPT module in the backend to respond reliably to our prompts. Working with a GPT model was a new approach for us, and it took significant troubleshooting to achieve smooth, responsive interactions.

Accomplishments that we're proud of: Although the project appears simple and intuitive to users, extensive work went on under the hood to make that possible. From integrating the GPT module to ensuring seamless real-time actions, we tackled many technical challenges to create an accessible, user-friendly experience.

What we learned: This challenge exercised our ability to think critically under a strict time limit, forcing us to prioritize our core functionality and delegate work efficiently.

What's next for HANDS: HANDS is a proof of concept that illustrates what's possible with the help of LLMs. The north star for HANDS is a browser extension that brings this functionality to any webpage.
