Inspiration
We wanted to build something minimal that could genuinely change someone's daily life without drawing attention to itself. Hearing loss affects over 1.5 billion people worldwide, and existing solutions are either expensive, phone-dependent, or bulky in ways that signal difference rather than enable independence. We asked what it would look like to give deaf users real-time spatial sound awareness through something they could wear like a necklace.
What it does
HearLink is a wearable necklace that captures environmental audio through four directional microphones, classifies sounds in real time using AI, and delivers that information to the user through a glasses-mounted OLED display and haptic buzzers embedded in the collar. The display shows a compass-style map of active sounds rotating with the user's head, and the buzzers fire in the direction of the sound source so the user physically feels where it is coming from. The buzzers are driven by a square wave PWM signal at the resonant frequency of the buzzer, which is what produces the physical vibration felt against the skin rather than just audible noise. Users configure which sounds count as dangerous through a companion web app, which also logs detected sounds on a map for later reference.
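The buzzer drive described above can be sketched in MicroPython roughly as follows. The pin number and the 4 kHz resonant frequency are illustrative assumptions, not the team's actual values; the real resonance comes from the buzzer's datasheet.

```python
# Sketch of driving a passive piezo with a square-wave PWM signal.
# Pin number and 4 kHz resonance are assumptions for illustration.
try:
    from machine import Pin, PWM  # MicroPython on the ESP32-S3
    import time
    ON_DEVICE = True
except ImportError:
    ON_DEVICE = False  # desktop Python: hardware calls unavailable

RESONANT_HZ = 4000  # assumed piezo resonant frequency
DUTY_U16 = 32768    # 50% duty cycle out of 65535 -> square wave

def half_period_us(freq_hz: int) -> int:
    """Half-period of the square wave in microseconds."""
    return 1_000_000 // (2 * freq_hz)

def buzz(pin_num: int, duration_ms: int) -> None:
    """Vibrate the buzzer by toggling it at its resonant frequency.

    A constant DC level only clicks the piezo once; a square wave at
    (or near) resonance produces the sustained, skin-felt vibration.
    """
    pwm = PWM(Pin(pin_num), freq=RESONANT_HZ)
    pwm.duty_u16(DUTY_U16)   # 50% duty square wave at resonance
    time.sleep_ms(duration_ms)
    pwm.duty_u16(0)          # 0% duty: no edges, no vibration
```

Driving each buzzer at its own resonant frequency is what turns a small piezo disc into something felt through the collar rather than merely heard.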
How we built it
We 3D printed a four-part necklace with a velcro system that houses nearly all components internally, with wire channels routed through the structure. The ESP32-S3 serves as the embedded hub, reading four INMP441 microphones over two I2S buses and streaming audio over WiFi to a laptop running our Python pipeline, which uses delay-and-sum beamforming and Google's YAMNet classifier to identify sounds and their directions. Haptic feedback is delivered through passive piezo buzzers driven by transistor driver circuits we designed, milled, and hand-soldered on custom PCBs, with a square wave PWM signal tuned to each buzzer's resonant frequency to produce skin-felt vibration. We built incrementally, breadboarding and validating each subsystem before integrating, moving through milestones from single mic capture to full four-mic four-buzzer operation.
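The delay-and-sum step in the Python pipeline can be sketched like this. The array geometry, 16 kHz sample rate, and angle grid are assumptions for illustration, not the team's actual configuration.

```python
import numpy as np

C = 343.0     # speed of sound in air, m/s
FS = 16_000   # sample rate in Hz (assumed; also what YAMNet expects)

def delay_and_sum(channels, positions, angle, fs=FS):
    """Steer a linear mic array toward `angle` (radians) by delaying
    each channel so a plane wave from that direction adds in phase."""
    out = np.zeros_like(channels[0], dtype=np.float64)
    for sig, x in zip(channels, positions):
        shift = int(round(x * np.cos(angle) / C * fs))  # delay in samples
        out += np.roll(sig, -shift)
    return out / len(channels)

def estimate_direction(channels, positions, fs=FS, n_angles=37):
    """Scan steering angles and return the one with maximum output power."""
    angles = np.linspace(0.0, np.pi, n_angles)
    powers = [np.mean(delay_and_sum(channels, positions, a, fs) ** 2)
              for a in angles]
    return angles[int(np.argmax(powers))]
```

Scanning a grid of steering angles and picking the one with the most output power is the classic brute-force way to localize a source with a small array; the real system also has to cope with the rounding and noise issues described in the challenges below.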
Challenges we ran into
We originally planned to build around the Raspberry Pi Pico W and had already mapped out our pin assignments and power architecture around it, so switching to the ESP32-S3 mid-hackathon meant redoing much of that work from scratch. On the PCB side, we milled our own buzzer driver boards on site and ran into lifted copper pads while soldering the SMD resistors, which took us from 8 functional boards down to 4 and cut our buzzer count from 8 to 4. Even after testing those PCBs our buzzers failed to work, which ultimately led us to drop the PCBs from the final design. Getting clean audio classification over a live WiFi stream took a lot of back and forth: we had to figure out that the 32-bit I2S samples needed specific bit-shifting before YAMNet could make sense of them, and tuning the beamformer to give consistent directional output in a real, noisy room took longer than expected. Finally, we could not fully fit all the wiring inside the housing, leaving a housing that would not close completely.
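The bit-shifting fix looks roughly like this. The exact shift depends on the I2S configuration; this sketch assumes the INMP441's 24-bit samples arrive left-aligned in 32-bit slots.

```python
import numpy as np

def i2s_to_float32(raw: bytes) -> np.ndarray:
    """Convert raw 32-bit I2S frames from an INMP441 into float32 audio.

    The mic delivers 24-bit samples left-aligned in 32-bit slots, so the
    bottom 8 bits are padding. An arithmetic right shift by 8 recovers the
    signed 24-bit value, which is then scaled to [-1.0, 1.0], the range a
    classifier like YAMNet expects.
    """
    samples = np.frombuffer(raw, dtype=np.int32)
    samples = samples >> 8                     # arithmetic shift keeps the sign
    return samples.astype(np.float32) / float(1 << 23)
```

Skipping the shift leaves the padding bits in the low byte and the sign bits misplaced, which is why the classifier saw only noise until the conversion was corrected.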
Accomplishments that we're proud of
We got a complete end-to-end system working in 36 hours, from microphone capture to haptic buzzer feedback, with direction detection that holds up in real-world conditions, which we are genuinely proud of as a physical engineering achievement. The companion web app, sound logging map, and user-configurable alert preferences make HearLink feel like a product and not just a prototype.
What we learned
We learned quickly that passive buzzers connected to DC just click once and go silent, and that getting actual vibration requires a square wave at the right frequency, which was not obvious going in. We also learned that milled PCBs are less forgiving than manufactured ones, and that SMD soldering requires a lot more temperature control than through-hole work. On the software side, figuring out why YAMNet was returning garbage classifications led us to discover the bit-shifting issue in our I2S audio conversion, which was the kind of bug that only shows up when you actually stream live audio end to end.
What's next for HearLink
On the hardware side we want to eliminate external wiring, reduce the collar profile, and move toward a more discreet form factor that does not look like a prototype. The glasses display is currently an OLED mounted on the lens and the longer term goal is a transparent augmented reality overlay anchored to real-world sound positions. On the software side we want to improve classification accuracy, add sound icons to the display, complete the companion app integration with live sync, and explore personalized danger threshold learning so the system adapts to each user's environment over time.
Built With
- easyeda
- micropython
- numpy
- nx
- thonny
- yamnet