Inspiration

Two of our teammates have close family members who are visually impaired, and their stories inspired us to imagine a world where accessibility isn’t an afterthought but something built into the fabric of everyday life. We wanted to design something that could restore independence, instill confidence, and make daily navigation safer and more intuitive. That vision became the foundation for our project: a hands-free, AI-powered assistive device for the visually impaired.

What it does

Our device is a real-time, hands-free visual impairment aid that translates the user’s surroundings into haptic and auditory feedback. Using a stereo vision camera and spatial audio, it detects nearby obstacles, identifies their distance, and communicates directional information through a strap embedded with an array of PWM haptic motors.

The system also integrates OCR to read text aloud, whether it’s a sign, label, or document, and provides audio-based navigation cues using GPS. The result is a sensory “language” that allows users to feel and hear their surroundings rather than see them.

The human skin is incredibly sensitive to vibration frequency and spatial patterns, a property we leverage through spatiotemporal haptic encoding. By varying both the intensity and location of vibration across the strap, users can perceive not just how far an obstacle is, but where it is in space. For example, stronger, faster pulses indicate closer objects. Sequential pulses across the strap simulate motion or direction. Steady vibrations in one area correspond to stationary obstacles or walls.
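The encoding above can be sketched as a small mapping function. This is an illustrative sketch only: the motor grid size matches the 2×6 strap described later, but the distance threshold, pulse rates, and function names are our assumptions, not Theia's exact values.

```python
# Hypothetical sketch of the distance-and-direction-to-vibration encoding.
# Thresholds and pulse rates are illustrative assumptions.

GRID_COLS = 6          # 6 columns across the strap (left to right)
MAX_RANGE_M = 3.0      # assumed sensing range; obstacles beyond are ignored
MAX_DUTY = 0xFFFF      # 16-bit duty-cycle scale

def encode_obstacle(distance_m, bearing_deg):
    """Map an obstacle to (strap column, vibration duty, pulse rate).

    bearing_deg runs from -90 (far left) to +90 (far right).
    Closer obstacles produce stronger duty and faster pulsing.
    """
    if distance_m >= MAX_RANGE_M:
        return None  # out of range: no vibration
    # Horizontal bearing -> which column of motors fires
    frac = (bearing_deg + 90.0) / 180.0
    col = min(GRID_COLS - 1, int(frac * GRID_COLS))
    # Proximity in [0, 1]: 1 = touching, 0 = at max range
    proximity = 1.0 - distance_m / MAX_RANGE_M
    duty = int(proximity * MAX_DUTY)
    pulse_hz = 1.0 + 7.0 * proximity   # slow pulses far away, fast up close
    return col, duty, pulse_hz
```

Sweeping the active column over successive frames would produce the sequential "motion" pulses described above, while a fixed column with steady duty corresponds to a stationary wall.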

How we built it

We began by repurposing a back strap purchased online, then stitched and reinforced custom pockets to mount our haptic actuators. We 3D-printed components such as our speaker housing and battery holder. A Jetson Orin Nano developer kit handled all AI computation and control, interfacing through a PCA9685 PWM driver board to drive an array of 12 haptic motors arranged in a 2×6 grid.

Each motor was soldered to our custom harness and connected via jumper cables. The Jetson managed multiple peripherals (a stereo vision camera, microphone, speaker, and GPS module) to sense the environment, interpret spatial data, and output corresponding haptic and audio signals in real time.

We used OpenCV for depth mapping and object detection, while custom Python scripts converted these depth values into PWM signals. This effectively created a low-latency perception-feedback loop, allowing users to feel spatial depth as vibration patterns.
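The depth-to-PWM conversion can be sketched roughly as follows. This is a simplified stand-in for our scripts, not the production code: the grid width matches the strap, but the band-minimum approach, range constant, and helper name are illustrative assumptions, and the nested-list input stands in for the depth map OpenCV would produce.

```python
# Illustrative sketch: the nearest obstacle in each vertical band of the
# depth map sets the duty cycle of that column of strap motors.
# Constants and helper names are assumptions, not Theia's exact values.

GRID_COLS = 6
MAX_RANGE_M = 3.0
MAX_DUTY = 0xFFFF

def depth_to_duties(depth_rows):
    """depth_rows: H x W nested list of metric depths (e.g. derived from
    an OpenCV stereo disparity map). Returns one duty per strap column."""
    w = len(depth_rows[0])
    duties = []
    for c in range(GRID_COLS):
        lo, hi = c * w // GRID_COLS, (c + 1) * w // GRID_COLS
        # Closest point anywhere in this vertical band of the image
        nearest = min(row[x] for row in depth_rows for x in range(lo, hi))
        proximity = max(0.0, 1.0 - nearest / MAX_RANGE_M)
        duties.append(int(proximity * MAX_DUTY))
    return duties
```

Running this once per camera frame and writing the duties straight to the PWM driver is what keeps the perception-feedback loop low-latency.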

Challenges we ran into

As first-time hackers, we faced more challenges than we could count. From learning to sew at Michael’s (and stabbing ourselves a few times in the process) to debugging late-night solder joints, every step tested our resourcefulness (and patience).

At one point, our PCA board refused to communicate with the Jetson, forcing us to rewire the entire circuit at 4:30 a.m. Fixing the issue involved scouring datasheets online and running around trying to find a replacement multimeter. We also discovered mid-build that we had purchased the wrong connector type for our sensors, which led to frantic scavenging for spare jumper wires and manual soldering. All in all, the hackathon culminated in a hectic, sleepless night that ended only after the sun rose.

Accomplishments that we're proud of

Looking back, this project was challenging because we packed so many features into one product in such a short period of time. We’re proud that we built something that feels this complete in its functionality (and that we finished the hackathon at all).

What we learned

On the software side, we learned how to use new APIs such as fish.audio and how best to integrate them into our project. This also included learning to use Claude and OpenRouter to access foundation models for image detection and Q&A. Finally, we learned how to take information from a camera and convert it into motor signals.

On the hardware side, getting our PCA9685 to communicate properly with our array of motors was a big learning experience, from sending bytes to a motor in Python to determining the right amount of current to send through a circuit. And last but not least, during the creation of the final version of Theia, one of our team members learned to sew!

What's next for Theia

Our goal is to take Theia beyond the strap and create a version integrated with smart glasses. We envision smart glasses that can take calls for blind users, use beamforming to speak directly into the wearer’s ear, and include a built-in onboard PCB. We also want to expand the haptic feedback strap into a full vest, with more haptic motors and longer battery life.
