Inspiration
Navigating safely is one of the greatest challenges faced by people with visual impairments. Existing assistive tools, like canes or wristbands, offer limited feedback and often fail to detect sudden drops or trenches. We wanted to build a system that could expand this sense of spatial awareness using affordable hardware and intelligent feedback. VisionAssist was inspired by the idea that technology should amplify perception and make movement safer, easier, and more independent for everyone.
What it does
VisionAssist is a wearable assistive system that helps visually impaired users detect obstacles and trenches in real time. Two ultrasonic sensors mounted on an Arduino measure forward distance and downward depth, sending continuous readings to a web interface over Bluetooth. The interface translates these measurements into visual bars, live metrics, and instant voice alerts such as “obstacle ahead,” “trench ahead,” or “clear.” By combining spatial sensing with immediate auditory feedback, VisionAssist enables users to navigate their surroundings with greater confidence and safety.
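To give a flavor of how the two readings become voice cues, here is a simplified sketch of the classification step. The threshold values, types, and function names are illustrative placeholders, not our actual calibration:

```typescript
// Hypothetical alert logic: thresholds and names are illustrative
// assumptions, not VisionAssist's real calibration values.
type Alert = "obstacle ahead" | "trench ahead" | "clear";

interface SensorReading {
  forwardCm: number;  // forward ultrasonic distance
  downwardCm: number; // downward ultrasonic distance
}

const OBSTACLE_THRESHOLD_CM = 80; // assumed: anything closer counts as an obstacle
const TRENCH_THRESHOLD_CM = 40;   // assumed: a downward reading much longer than
                                  // the normal floor distance implies a drop

function classify(reading: SensorReading): Alert {
  if (reading.downwardCm > TRENCH_THRESHOLD_CM) return "trench ahead";
  if (reading.forwardCm < OBSTACLE_THRESHOLD_CM) return "obstacle ahead";
  return "clear";
}
```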
How we built it
We built VisionAssist around an Arduino connected to dual ultrasonic sensors and a Bluetooth module. The device streams raw sensor data directly to a Next.js web application using the Web Bluetooth API, eliminating the need for a backend server. On the frontend, we designed a reactive dashboard with TypeScript and TailwindCSS to display live distances and generate smooth, color-coded bar animations. A custom state management system synchronizes incoming Bluetooth data with the interface, while the browser’s Speech Synthesis API provides real-time audio cues. Together, these components form a compact, self-contained assistive ecosystem.
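Here is a rough sketch of that browser-side pipeline under stated assumptions: the UUIDs below are the common defaults for HM-10-style BLE UART modules (the real values depend on the module used), and Web Bluetooth type declarations such as `@types/web-bluetooth` are assumed to be installed:

```typescript
// Sketch: connect over Web Bluetooth, decode notifications, speak alerts.
// Assumes @types/web-bluetooth for the navigator.bluetooth typings.
const UART_SERVICE = 0xffe0;        // assumed HM-10-style UART service UUID
const UART_CHARACTERISTIC = 0xffe1; // assumed notify characteristic UUID

async function connectAndListen(onChunk: (text: string) => void) {
  // Prompt the user to pick the device; filters keep the chooser relevant.
  const device = await navigator.bluetooth.requestDevice({
    filters: [{ services: [UART_SERVICE] }],
  });
  const server = await device.gatt!.connect();
  const service = await server.getPrimaryService(UART_SERVICE);
  const characteristic = await service.getCharacteristic(UART_CHARACTERISTIC);

  const decoder = new TextDecoder();
  characteristic.addEventListener("characteristicvaluechanged", (event) => {
    const value = (event.target as BluetoothRemoteGATTCharacteristic).value!;
    onChunk(decoder.decode(value)); // raw text chunk from the Arduino
  });
  await characteristic.startNotifications();
}

// Voice alerts via the browser's Speech Synthesis API.
function speak(text: string) {
  speechSynthesis.cancel(); // drop queued utterances so alerts stay current
  speechSynthesis.speak(new SpeechSynthesisUtterance(text));
}
```

Because both the Bluetooth connection and the speech output live in the browser, the whole pipeline runs client-side, which is what lets us skip a backend entirely.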
Challenges we ran into
Our biggest challenges came from achieving reliable Bluetooth communication and consistent real-time updates. Establishing stable connections between the Arduino and the browser required repeated testing and careful JSON parsing to ensure valid sensor data. We also faced calibration issues while tuning the ultrasonic sensors to detect obstacles and trenches at different angles and ranges. Managing asynchronous Bluetooth events while keeping the UI smooth and responsive demanded careful control of timing and data flow.
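The fix for the parsing problem, in spirit, was to buffer incoming chunks until a full delimited frame arrives and to fail soft on anything malformed. A simplified sketch follows; the `{ f, d }` payload shape (forward and downward centimeters) is an assumed wire format, not necessarily our exact one:

```typescript
// BLE notifications can split a JSON payload across ~20-byte chunks, so we
// buffer until a newline delimiter and discard frames that fail to parse.
let buffer = "";

function handleChunk(chunk: string, onReading: (f: number, d: number) => void) {
  buffer += chunk;
  let newline: number;
  while ((newline = buffer.indexOf("\n")) >= 0) {
    const line = buffer.slice(0, newline).trim();
    buffer = buffer.slice(newline + 1);
    if (!line) continue;
    try {
      const msg = JSON.parse(line);
      if (typeof msg.f === "number" && typeof msg.d === "number") {
        onReading(msg.f, msg.d);
      }
    } catch {
      // Partial or corrupted frame: skip it rather than crash the UI loop.
    }
  }
}
```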
Accomplishments that we're proud of
We are proud to have created a fully functional prototype that delivers instant, accurate feedback without any external server. VisionAssist successfully merges hardware sensing, live visualization, and audio response in a single interface. We built an intuitive experience that demonstrates how accessible design can be elegant, minimal, and technically robust. Achieving stable real-time Bluetooth streaming inside a browser was a major milestone for the team.
What we learned
Through this project, we learned how to harness the Web Bluetooth API for real-time data visualization and how to manage asynchronous streams effectively. We explored techniques for smoothing noisy sensor readings and built a better understanding of accessibility-centered interface design. The process taught us the value of rapid iteration, clear teamwork, and empathy-driven engineering, skills that will carry over to future projects in hardware, software, and human-computer interaction.
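As one example of the kind of smoothing we mean, an exponential moving average damps one-off spikes while still tracking real changes; the alpha value below is illustrative rather than our exact tuning:

```typescript
// Exponential moving average: each new reading is blended with the running
// value, so single noisy spikes are damped. Alpha is an assumed constant.
function makeEma(alpha = 0.3) {
  let smoothed: number | null = null;
  return (raw: number): number => {
    smoothed = smoothed === null ? raw : alpha * raw + (1 - alpha) * smoothed;
    return smoothed;
  };
}

const smoothForward = makeEma();
console.log(smoothForward(120)); // 120 (first reading passes through)
console.log(smoothForward(60));  // 102 (sudden jump is damped, not followed)
```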
What's next for VisionAssist
Next, we aim to expand VisionAssist beyond audio feedback by incorporating haptic vibration modules for users who prefer tactile alerts. We plan to explore GPS and object detection integration for outdoor navigation and extend the system into a mobile Progressive Web App for greater accessibility. In the long term, we envision adding a lightweight machine learning model to classify terrain types and refine navigation responses. VisionAssist is just the first step toward a broader platform that merges sensing, intelligence, and inclusivity in assistive technology.