Inspiration: From Prototype to Platform
TactileText started as a scrappy V1 prototype built to prove a simple idea: a digital braille display should not cost 5,000 to 15,000 dollars when most visually impaired learners in countries like India live on less than 5 dollars a day. V1 successfully combined a basic Arduino-driven braille cell with a simple app to convert text into braille and audio, but it was very much a quick "maker project" built under hackathon-style timelines.
TactileText V2 takes that original proof of concept and turns it into a fully thought-out, documented, and structured platform designed for real deployment in schools and organizations. The mission is unchanged: affordable braille for the 253 million visually impaired people globally. The execution, however, has matured into something replicable, maintainable, and ready to scale.
What It Does Now vs. Before
V1 (First Prototype)
- Core workflow: input text → convert to braille → actuate a 6-dot braille cell, with basic audio support.
- Main focus: answering "can this even be done for under 20 dollars?" rather than reliability, architecture, or documentation.
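The text-to-braille step in the workflow above boils down to a lookup from characters to 6-dot patterns. As a minimal sketch, assuming standard grade-1 braille dot numbering (dots 1-3 down the left column, 4-6 down the right) and a tiny illustrative table rather than the app's actual conversion logic:

```python
# A few grade-1 braille letters, each mapped to its set of raised dots (1-6).
# Illustrative subset only; a real converter covers full alphabets,
# numbers, punctuation, and contractions.
BRAILLE_DOTS = {
    "a": {1},
    "b": {1, 2},
    "c": {1, 4},
    "d": {1, 4, 5},
    "l": {1, 2, 3},
}

def char_to_pattern(ch: str) -> int:
    """Pack one character's raised dots into a 6-bit integer
    (bit 0 = dot 1, ..., bit 5 = dot 6), ready to send to the cell."""
    dots = BRAILLE_DOTS.get(ch.lower(), set())
    pattern = 0
    for dot in dots:
        pattern |= 1 << (dot - 1)
    return pattern

# "b" raises dots 1 and 2, i.e. bits 0 and 1:
assert char_to_pattern("b") == 0b000011
```

Packing each cell into a single byte like this keeps the payload per character tiny, which matters on a low-bandwidth Bluetooth serial link.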
V2 (Upgraded System)
- Still under 20 dollars total hardware cost, but now a clearly designed, two-part integrated system: mobile app plus hardware braille cell.
- Mobile app evolved into a multi-modal accessibility hub:
- Typed text, camera-based OCR, and speech-to-text all feeding the braille display.
- Cloud-based cognitive services powering OCR, text-to-speech, and speech-to-text across many languages.
- Hardware braille cell refined into a performance-tuned device:
- 6 electromagnetic actuators, microcontroller board, driver array, and 12V power optimized for fast response time and very high character accuracy.
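On the hardware side, driving the cell reduces to fanning one 6-bit pattern out across the six driver channels. The firmware itself presumably runs on the microcontroller in C; the sketch below mirrors that logic in Python for illustration, and the channel ordering is an assumption:

```python
def pattern_to_channels(pattern: int) -> list[int]:
    """Expand a 6-bit braille pattern into per-actuator drive states.

    Each bit feeds one channel of the driver array; a 1 energizes that
    channel's electromagnet from the 12V rail, raising the pin.
    """
    return [(pattern >> dot) & 1 for dot in range(6)]

# Dots 1 and 2 raised (the letter "b"):
assert pattern_to_channels(0b000011) == [1, 1, 0, 0, 0, 0]
```

Keeping the microcontroller's job this simple (unpack bits, set outputs) is what allows the fast response time: all conversion work happens on the phone.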
V1 proved the concept; V2 turns it into a coherent product that can handle real-world edge cases and day-to-day learning scenarios.
How It's Built: V1 Hack → V2 Architecture
V1 Mindset
- "Just make the hardware move and the app talk": heavy focus on microcontroller tinkering, light focus on clean app architecture and long-term maintainability.
- Minimal structure for the app; logic, networking, and UI were more entangled and harder to scale.
V2 Implementation
- Clear architecture layers: presentation, business logic, service layer, and hardware interface, instead of everything living in one place.
- App reorganized with:
- Dedicated state management and secure local storage for user data and preferences.
- Clean separation between screens, braille conversion logic, Bluetooth communication, and API calls.
- Custom serial protocol introduced:
- Byte-framed structure (with a header, command, data, and checksum) for reliable Bluetooth communication between the phone and the braille hardware instead of ad-hoc serial writes in V1.
- Security and privacy layer defined:
- Privacy-first behavior, no cloud storage by default, encrypted local storage, and explicit security considerations in the design.
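The byte-framed protocol described above (header, command, data, checksum) can be sketched as follows. The specific header byte, command id, and XOR checksum are assumptions for illustration; the project's actual framing may differ:

```python
HEADER = 0xAA  # frame start marker (assumed value)

def encode_frame(command: int, data: bytes) -> bytes:
    """Build one frame: header, command, length, payload, checksum.

    The checksum is the XOR of every preceding byte, letting the
    receiver detect corruption on the Bluetooth serial link.
    """
    frame = bytes([HEADER, command, len(data)]) + data
    checksum = 0
    for b in frame:
        checksum ^= b
    return frame + bytes([checksum])

def decode_frame(frame: bytes) -> tuple[int, bytes]:
    """Validate and unpack a frame; raise on a bad header or checksum."""
    if len(frame) < 4 or frame[0] != HEADER:
        raise ValueError("bad header")
    checksum = 0
    for b in frame[:-1]:
        checksum ^= b
    if checksum != frame[-1]:
        raise ValueError("checksum mismatch")
    command, length = frame[1], frame[2]
    return command, frame[3 : 3 + length]

# Round-trip a hypothetical "display character" command (id assumed):
frame = encode_frame(0x01, bytes([0b000011]))
assert decode_frame(frame) == (0x01, bytes([0b000011]))
```

The payoff over V1's ad-hoc serial writes is that a dropped or corrupted byte is detected and the frame can be re-requested, instead of silently raising the wrong dots.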
V1 was a functional demo; V2 is a properly engineered system that other developers and organizations can understand, extend, and deploy.
Reliability & UX: V1 Prototype → V2 Accessible Product
V1 Limitations
- Very happy-path focused: if Bluetooth dropped, if OCR failed, or if latency spiked, the user experience could easily break.
- Exception handling, retries, and clear user feedback flows were basic or missing.
V2 Improvements
- A dedicated phase focused entirely on robustness: stronger exception handling, performance optimizations, extended user testing, and fixes driven by real feedback.
- Accessibility-first UX:
- Interfaces inspired by accessibility guidelines, clearer flows, and structured screens for Bluetooth, OCR, voice input, and file handling.
- Offline-first approach:
- Core text-to-braille pipeline designed to work without constant internet, with cloud services used when available for OCR and speech features.
- Troubleshooting and support:
- Detailed installation and troubleshooting instructions for issues like Bluetooth disconnects, OCR failures, and app crashes; support that simply did not exist in V1.
The shift in V2 is from "showing off the tech" to behaving predictably and respectfully for visually impaired users in day-to-day use.
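The offline-first behavior in the list above amounts to a fallback rule: try the cloud service, and degrade gracefully to the on-device path when the network is unavailable. A minimal sketch, with both function names purely hypothetical:

```python
def recognize_text(image_bytes: bytes, cloud_ocr, local_ocr) -> str:
    """Prefer the cloud OCR service, but fall back to the on-device
    path so the core text-to-braille pipeline keeps working offline."""
    try:
        return cloud_ocr(image_bytes)
    except OSError:  # network down, timeout, service unreachable
        return local_ocr(image_bytes)

# With no connectivity the cloud call fails and the local result is used:
def cloud(img: bytes) -> str:
    raise OSError("no network")

def local(img: bytes) -> str:
    return "hello"

assert recognize_text(b"...", cloud, local) == "hello"
```

The same pattern applies to text-to-speech and speech-to-text: cloud services improve quality and language coverage when reachable, but never become a hard dependency.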
Open-Source & Impact: V1 Solo → V2 Community-Ready
V1 Reality
- Mostly a one-off project: the code existed but onboarding a new contributor, school, or NGO would be difficult without direct guidance.
- Limited documentation and no clearly structured roadmap for how the project would evolve.
V2 Status
- Comprehensive documentation:
- Clear overview, installation instructions, architecture description, roadmap, contribution guide, and community guidelines make the project understandable and contributor-friendly.
- Long-term roadmap:
- Initial foundation completed; stability and robustness in progress; later phases targeting global expansion, advanced features, and large-scale impact across many countries and users.
- Open-source maturity:
- Permissive licensing, clear contribution paths, and specific areas where developers, educators, and hardware makers can help (localization, hardware variants, UI/UX, testing).
V1 showed that one person could hack together an under-20-dollar braille system; V2 shows that this can grow into a community-driven platform capable of meaningful global impact.