We wanted to leverage the awesome customizability of 3D printing for an idea that could improve the lives of people living with blindness or impaired vision. Braille seems like a natural fit for the technology: the endless variation of text, combined with the ability to print durable objects to read. With little to no experience working with 3D printers, we wanted to explore the possibilities afforded by the full customization the medium allows and learn about its limitations. Similarly, we wanted to explore the space of text recognition from images and the possibilities of combining the two technologies.

What it does

To showcase this technology, we've created labels for food products that contain basic info like product name, price, and nutrition macros. We process text to generate 3D-printable models automatically. Braillr is intended as a proof of concept demonstrating one of the many possibilities for printing custom, legible braille.

How we built it

We used Python to interact with the Text2Braille website to automatically convert text to .stl files, and to manipulate those files to make them ready for 3D printing. Our goal was to implement character recognition through Unity and Vuforia, and feed the parsed characters into the Python engine. We worked with 3D printers over the course of the hackathon to test and print various prototypes of our labels.
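To illustrate the conversion step that the Text2Braille site handled for us, here is a minimal Grade 1 (uncontracted) braille sketch in plain Python. The dot numbering follows the Unicode braille block (dots 1-6 map to bits 0-5 above U+2800); the `cell_pitch` and `dot_pitch` spacing values are assumed illustrative defaults for laying out dot centers on a printable label, not taken from our pipeline or a specific standard.

```python
# Grade 1 braille: each letter is one 6-dot cell.
# Dots are numbered 1-3 down the left column, 4-6 down the right.
DOTS = {
    "a": (1,), "b": (1, 2), "c": (1, 4), "d": (1, 4, 5), "e": (1, 5),
    "f": (1, 2, 4), "g": (1, 2, 4, 5), "h": (1, 2, 5), "i": (2, 4),
    "j": (2, 4, 5), "k": (1, 3), "l": (1, 2, 3), "m": (1, 3, 4),
    "n": (1, 3, 4, 5), "o": (1, 3, 5), "p": (1, 2, 3, 4),
    "q": (1, 2, 3, 4, 5), "r": (1, 2, 3, 5), "s": (2, 3, 4),
    "t": (2, 3, 4, 5), "u": (1, 3, 6), "v": (1, 2, 3, 6),
    "w": (2, 4, 5, 6), "x": (1, 3, 4, 6), "y": (1, 3, 4, 5, 6),
    "z": (1, 3, 5, 6), " ": (),
}

def to_braille(text: str) -> str:
    """Map lowercase ASCII text to Unicode braille cells
    (dot n sets bit n-1 above the U+2800 base)."""
    return "".join(
        chr(0x2800 + sum(1 << (d - 1) for d in DOTS[c]))
        for c in text.lower()
    )

def dot_centers(text: str, cell_pitch=6.0, dot_pitch=2.5):
    """(x, y) centers in mm of the raised dots for each cell.
    Spacing values are assumed defaults for illustration only."""
    pts = []
    for i, c in enumerate(text.lower()):
        for d in DOTS[c]:
            col, row = divmod(d - 1, 3)  # dots 1-3 left column, 4-6 right
            pts.append((i * cell_pitch + col * dot_pitch, -row * dot_pitch))
    return pts
```

From coordinates like these, a mesh-manipulation library can extrude a small dome at each point onto a label base plate, which is roughly the shape of the .stl post-processing our Python engine performed.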

Challenges we ran into

We're used to working with a bigger team, with more brainpower available. We spent more than half of the hackathon fluctuating between different ideas, and we were starting to run out of steam by the time we settled on Braillr. We used the Python library Selenium to input data into the Text2Braille site, but struggled to find a way to automatically save the downloaded files; the pipeline currently requires a manual step where the user clicks the save button. We spent time trying to use matplotlib to render images of the generated .stl files for the demo, but eventually abandoned that dead end. The major time crunch and our lack of experience with AR meant that that element of the project never fully came together.

Accomplishments that we're proud of

We developed what we believe to be an innovative and useful concept for 3D printing, with the capability to help those with blindness or visual impairment, as well as to help retail stores better suit the needs of their customers.

What we learned

We primarily learned the functionalities, workflow, and limitations of 3D printing. We gained experience interacting with websites through Python and writing AR apps through Unity and C#.

What's next for Braillr

We would like to fully implement our concept. With a more relaxed timeframe, we'd implement the Text2Braille utility locally, and finish and polish the text-scanning functionality, giving us the ability to turn pre-existing labels into 3D models of braille text with the push of a button.
