Inspiration

Project BrailleMary was inspired by a simple but important problem: many deafblind people still face major barriers in real-time communication, especially when the people around them do not know braille-based tactile communication systems and there is no shared, directly accessible channel. We wanted to build something that could act as a live bridge between spoken or signed communication and a tactile output the user can directly feel.

The idea was to go beyond a basic translator. Communication in the real world is messy: speech can be unclear, sign input can be ambiguous, and different situations require different pacing, replay behavior, and confirmation. That pushed us to imagine a system that does not just convert inputs, but actively helps communication happen in a way that is reliable, transparent, and accessible.

What it does

Project BrailleMary is a multimodal assistive communication system for deafblind accessibility. It takes in both spoken language and sign language through open-source voice and sign recognition systems, then converts the chosen message, preserved exactly, into tactile braille-style output on a 6-dot vibrotactile hardware patch.

Claude acts as the communication intelligence layer of the system. Its role is not to rewrite or paraphrase the message, but to orchestrate communication across modalities. It compares speech and sign inputs, resolves conflicts, decides when confirmation or replay is needed, and adapts playback speed and delivery mode to the user. Once the final text is selected, it is preserved exactly, encoded into braille dot patterns, and sent through a Maker UNO-based controller to the tactile hardware. The result is a real-time communication bridge for situations where a deafblind user and another person do not already share a direct accessible method of communication.

How we built it

We built BrailleMary as a multimodal pipeline with three main layers: perception, communication intelligence, and tactile output.

The perception layer uses open-source implementations for voice recognition and sign recognition. These systems capture spoken and signed input and convert them into text candidates. That information is then passed to Claude, which serves as the communication intelligence layer. Claude does not modify the meaning or rewrite the message. Instead, it compares the available inputs, checks for consistency, handles ambiguity, decides whether the system should confirm or replay information, and selects the final exact text that should be delivered.
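
To make that role concrete, here is a minimal sketch of the kind of decision the intelligence layer has to make. In BrailleMary this judgment is made by Claude rather than by fixed rules, and the candidate structure, thresholds, and action names below are our illustrative assumptions, not the actual prompt or API:

```python
from dataclasses import dataclass

# Illustrative only: in BrailleMary this comparison is made by Claude,
# not by fixed thresholds. The sketch just shows the decision space:
# deliver one candidate exactly, or ask for confirmation/replay.

@dataclass
class Candidate:
    text: str          # exact transcript from the recognizer
    confidence: float  # recognizer confidence in [0, 1]
    source: str        # "speech" or "sign"

AGREE_MIN = 0.6  # hypothetical threshold: trust an agreeing pair
SOLO_MIN = 0.85  # hypothetical threshold: trust a single strong input

def choose_action(speech: Candidate | None, sign: Candidate | None):
    """Return ("deliver", text) or ("confirm", reason). Text is never rewritten."""
    present = [c for c in (speech, sign) if c is not None]
    if not present:
        return ("confirm", "no input detected; ask for replay")
    if speech and sign:
        if speech.text.strip().lower() == sign.text.strip().lower():
            best = max(speech, sign, key=lambda c: c.confidence)
            if best.confidence >= AGREE_MIN:
                return ("deliver", best.text)  # exact text, unmodified
        return ("confirm", "speech and sign disagree; ask user to confirm")
    solo = present[0]
    if solo.confidence >= SOLO_MIN:
        return ("deliver", solo.text)
    return ("confirm", f"low-confidence {solo.source} input; request repeat")
```

The key invariant is visible in the return values: the system either delivers one recognizer's text exactly as produced, or asks for confirmation; it never merges or rewrites the two candidates.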

After that, the chosen text is mapped into braille-style dot patterns and transmitted to a Maker UNO-based controller. The controller drives a 6-dot vibrotactile hardware patch that presents the message through tactile stimulation, allowing the user to feel the braille-style output in real time.
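
As a rough sketch of that last hop: the dot patterns below follow standard 6-dot braille for the letters shown, but the one-byte-per-cell wire format, baud rate, and port name are our assumptions about the serial link to the Maker UNO, not a documented protocol:

```python
import time

import serial  # pyserial; the Maker UNO enumerates as a standard serial device

# Standard 6-dot braille patterns (dot numbers 1-6) for a small subset
# of letters; a full table would cover the whole alphabet the same way.
BRAILLE_DOTS = {
    "h": (1, 2, 5),
    "i": (2, 4),
    " ": (),  # blank cell between words
}

def cell_byte(char: str) -> int:
    """Pack a braille cell into one byte: bit (n - 1) set for raised dot n."""
    byte = 0
    for dot in BRAILLE_DOTS[char.lower()]:
        byte |= 1 << (dot - 1)
    return byte

def send_message(text: str, port: str = "/dev/ttyUSB0") -> None:
    """Stream the exact chosen text to the tactile patch, one cell at a time."""
    with serial.Serial(port, 9600, timeout=1) as link:
        for char in text:
            link.write(bytes([cell_byte(char)]))
            time.sleep(0.4)  # pacing gap so vibration patterns stay distinct

# e.g. send_message("hi") with the patch connected
```

With a format like this, the firmware on the controller only has to unpack six bits into six motor pins, so the exact-text guarantee survives all the way to the user's skin.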

Challenges we ran into

One of the biggest challenges was making the system a communication layer rather than just a translation pipeline. A simple input-to-output system is not enough when real-world speech and sign input can disagree, be noisy, or arrive with different confidence levels. We had to think carefully about how the system should compare modalities, decide which signal to trust, and determine when a confirmation or replay is necessary.

Another challenge was preserving the exact text. Since accessibility and trust are critical, we did not want the intelligence layer to paraphrase or “helpfully” rewrite what was said. That meant designing the system so it could assist decision-making without altering the final content. On the hardware side, translating chosen text into reliable 6-dot tactile patterns and delivering them in a way that is clear, timed well, and comfortable for the user was also a major design challenge.
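
One way we found to reason about the timing side is to treat pacing as a small per-user profile that the intelligence layer can scale up or down. The parameter names and defaults in this sketch are purely illustrative, not values from our firmware:

```python
from dataclasses import dataclass

# Hypothetical pacing profile: the names and defaults just make the
# tuning dimensions (pulse length, gaps, word boundaries) concrete.

@dataclass
class TactileProfile:
    pulse_ms: int = 250     # how long each cell's dots vibrate
    cell_gap_ms: int = 150  # silence between adjacent cells
    word_gap_ms: int = 400  # longer pause at word boundaries

    def scaled(self, speed: float) -> "TactileProfile":
        """Return a profile slowed down (speed < 1) or sped up (speed > 1)."""
        f = 1.0 / max(speed, 0.1)
        return TactileProfile(
            pulse_ms=round(self.pulse_ms * f),
            cell_gap_ms=round(self.cell_gap_ms * f),
            word_gap_ms=round(self.word_gap_ms * f),
        )

# e.g. a user who prefers slower playback: TactileProfile().scaled(0.75)
```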

Accomplishments that we're proud of

We are proud that BrailleMary is not just a concept for translation, but a true multimodal communication bridge. It combines voice input, sign input, intelligent conflict resolution, and tactile output into one end-to-end assistive system.

We are especially proud of the role of Claude in the system. Instead of using AI to rewrite content, we used it in a more careful and transparent way: to orchestrate communication, preserve exact wording, and support confidence-aware decisions. We are also proud that the system ends in a real hardware output path through a Maker UNO-based tactile patch, making it more than a software demo.

What we learned

We learned that accessibility systems need more than accurate recognition models. Real usability comes from handling uncertainty, timing, replay, and trust. Translation alone is not enough; communication support requires orchestration between multiple signals and careful decisions about when to confirm, slow down, or repeat information.

We also learned that preserving exact text matters a lot in assistive communication. Users need confidence that what they receive is what was actually intended. Finally, we learned how powerful multimodal design can be when AI is used not to replace the message, but to help deliver it more reliably and accessibly.

What's next for Project BrailleMary

The next step for Project BrailleMary is to build a more transparent helper interface with confidence-aware decisions, so the system can clearly show when it is confident, when it detects conflict between modalities, and when confirmation is needed. We also want to improve personalization by adapting playback speed, tactile pacing, and delivery style to different users.

Looking further ahead, we want to make the system more robust in real-time conversations, improve the tactile hardware experience, and expand support for more natural multimodal interaction. Our long-term vision is for BrailleMary to become a dependable real-world communication bridge for deafblind users in everyday conversations, public spaces, and assistive care settings.
