Inspiration

We wanted to help blind and low-vision people learn about the places around them so that they feel more included wherever they go.

What it does

Videre is a modular smart cane system: an Arduino-powered cane picks up obstacles through ultrasonic sensing and responds with vibration patterns and beeps that get stronger and faster the closer you get. The companion iOS app uses Apple LiDAR to scan the full scene at 60 fps, giving a much wider picture than any single sensor can. We also use Gemini AI to look through the camera and describe what's ahead in plain-spoken language. The interaction is simple by design: one button press asks where you are, another asks what is ahead. Everything is spoken aloud, and the screen never needs to be touched. It works with the phone locked, plays nicely with VoiceOver, and the cane and app can be used together or independently, depending on what the user needs.
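To make the "stronger and faster the closer you get" behaviour concrete, here is a minimal sketch of a distance-to-feedback mapping. The zone thresholds and pulse timings are illustrative assumptions, not the values shipped on the cane:

```swift
// Hypothetical mapping from ultrasonic distance to alert intensity.
// Zone thresholds (cm) and pulse intervals (ms) are assumptions for illustration.
enum AlertZone {
    case safe, caution, warning, danger
}

func zone(forDistanceCM d: Double) -> AlertZone {
    switch d {
    case ..<30:  return .danger
    case ..<80:  return .warning
    case ..<150: return .caution
    default:     return .safe
    }
}

// Closer obstacle -> shorter interval -> faster vibration pulses and beeps.
func pulseIntervalMS(for zone: AlertZone) -> Int? {
    switch zone {
    case .danger:  return 100
    case .warning: return 250
    case .caution: return 500
    case .safe:    return nil   // no alert in the clear zone
    }
}
```

The same mapping can drive both the vibration motors and the buzzer, so the two channels always agree about urgency.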

How we built it

The hardware is an Arduino UNO with an HC-SR04 ultrasonic sensor, two vibration motors, a buzzer, and an HM-10 Bluetooth Low Energy module, all mounted on a physical cane with the iPhone clipped to the shaft facing forward. The Arduino runs a smart alert system with zone-based vibration scaling, a direction filter that only alerts when objects are approaching, and crowd mode that automatically reduces noise in busy environments. The iOS app is built in Swift with CoreBluetooth receiving the cane data, ARKit and Apple LiDAR capturing depth and spatial position, Gemini 2.0 Flash analysing camera frames, and Supabase storing scan payloads. The backend team built Supabase Edge Functions to ingest LiDAR scan data and process it into accessible maps. The entire stack communicates through a clean JSON contract.
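As a sketch of what that JSON contract might carry from cane to app, here is a hypothetical telemetry message decoded with Codable. The field names are assumptions for illustration, not the project's actual schema:

```swift
import Foundation

// Hypothetical cane-to-app telemetry message; real field names may differ.
struct CaneMessage: Codable {
    let distanceCM: Double   // latest ultrasonic reading
    let crowdMode: Bool      // true when the cane has muted non-critical alerts
}

let raw = #"{"distanceCM": 42.5, "crowdMode": false}"#
let msg = try? JSONDecoder().decode(CaneMessage.self, from: Data(raw.utf8))
```

Keeping the contract this small matters on the hardware side too: the HM-10 sends tiny BLE packets, so a compact payload crosses the link in fewer fragments.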

Challenges we ran into

Getting BLE reliable was our first wall. The HM-10 splits JSON across multiple packets and iOS CoreBluetooth doesn't buffer them, so we had to build a custom string buffer that waits for a closing brace before parsing. Balancing the alert priority queue was harder than expected: distance alerts, LiDAR depth warnings, and Gemini descriptions all want to speak at the same time, and a blind user needs them in the right order without overlap. The scan payload was also complex: collecting trajectory points, keyframe images, depth binaries, and camera intrinsics simultaneously while keeping the app responsive required careful async architecture. Finally, designing for BLV users meant rethinking every interaction: no tap targets, no visual feedback, everything communicated through sound and vibration.
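The chunk-reassembly fix can be sketched like this. It is a simplified version that balances braces as fragments arrive; the buffer in the actual app may differ in detail:

```swift
// Accumulates BLE fragments until a complete JSON object has arrived.
// HM-10 notifications are around 20 bytes, so one message spans several packets.
final class JSONReassembler {
    private var buffer = ""
    private var depth = 0

    /// Feed one BLE fragment; returns a complete JSON string once the
    /// braces balance, otherwise nil while we wait for more packets.
    func feed(_ fragment: String) -> String? {
        var completed: String? = nil
        for ch in fragment {
            buffer.append(ch)
            if ch == "{" {
                depth += 1
            } else if ch == "}" {
                depth -= 1
                if depth == 0 {
                    completed = buffer
                    buffer = ""
                }
            }
        }
        return completed
    }
}
```

In practice this sits in the CoreBluetooth `didUpdateValueFor` callback: each notification is fed in, and parsing only runs on the complete strings it returns.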

Accomplishments that we're proud of

We are proud that the entire system works end to end, from the Arduino cane vibrating to a spoken Gemini description. The crowd-mode auto-detection that silences non-critical alerts in busy environments feels genuinely useful and was entirely hardware-driven. The LiDAR scan pipeline, which collects trajectory, depth, and keyframes simultaneously and uploads them to Supabase in a structured payload, is something we think has real potential. Most of all, we are proud that we built something a blind person could actually use, with VoiceOver labels on every element, voice input for room names, and a button interface that works with the phone mounted on the cane out of reach.

What we learned

We learned that designing for accessibility from the start is completely different from adding it at the end. Every decision, from button placement to JSON payload structure to voice alert wording, had to be made with a blind user in mind. We learned that ARKit and LiDAR are extraordinarily powerful tools that are still largely untapped for accessibility applications. We learned how to split a complex real-time system across four people without blocking each other. And we learned that hardware adds a dimension to a hackathon project that pure software cannot match.

What's next for Videre

The community map is the most exciting frontier. We want to process the LiDAR scan payloads into navigable indoor graphs so blind users get turn-by-turn directions inside buildings, not just outdoors. We want to add persistent user accounts so scan history, hazard reports, and preferred sensitivity settings follow the user across devices. The Gemini vision pipeline has room to grow: we want to add traffic-light state detection, crosswalk identification, and bus-number recognition. On the hardware side we want to miniaturise the electronics into a purpose-built handle and add a second ultrasonic sensor angled upward to detect head-height obstacles like open cabinet doors. Longer term, Videre should become a platform: an open map of accessible spaces built collectively by the people who need it most.
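As a toy sketch of what a navigable indoor graph could look like once the scan payloads are processed, here is breadth-first search over rooms-as-nodes. The room names and connections are invented for illustration, not derived from real scan data:

```swift
// Toy indoor graph: rooms as nodes, walkable connections as edges.
// Layout is illustrative only.
let graph: [String: [String]] = [
    "Lobby": ["Hallway"],
    "Hallway": ["Lobby", "Elevator", "Room 101"],
    "Elevator": ["Hallway"],
    "Room 101": ["Hallway"],
]

// Breadth-first search returns the fewest-hops route between two rooms,
// which can then be spoken as turn-by-turn directions.
func route(from start: String, to goal: String,
           in graph: [String: [String]]) -> [String]? {
    var queue = [[start]]
    var visited: Set<String> = [start]
    while !queue.isEmpty {
        let path = queue.removeFirst()
        guard let current = path.last else { continue }
        if current == goal { return path }
        for next in graph[current, default: []] where !visited.contains(next) {
            visited.insert(next)
            queue.append(path + [next])
        }
    }
    return nil
}
```

Each edge could later carry distances and hazard notes from the scan data, turning the hop count into a real cost function.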
