One of our group mates saw her friend use a screen-reading program to read the text on the page aloud, and she realized that people who are visually or hearing impaired don't "see" computer screens like we can. Designing products that can be used by people with a wide range of abilities and disabilities is called "universal design." In contrast, many software producers focus on the characteristics of the "average" user. For example, in a survey of twenty-five award-winning companies that produce pre-college instructional software, only two of the nineteen that responded indicated they were aware of accessibility issues. Sixty-five percent of the remaining seventeen companies were not aware of accessibility as an issue, 100% were not currently addressing accessibility in their product development, and 88% had no plans to address accessibility in the future (Golden, 2001). By building this app, we hope to raise more awareness of this issue.
We decided to create a program that analyzes pictures on computer screens. Similar programs only read the information aloud to the visually impaired person, so someone who is both visually and hearing impaired has no way of actually "reading" the text. We therefore wanted to extend this functionality by having the program output Braille that deafblind users can read directly. Since we lacked haptic components or responsive pins, we used LEDs to simulate a Braille cell instead.
What it does
The Digital Braille program takes a picture as input. Using Microsoft's Computer Vision API, the program analyzes the image and extracts the text in the picture. From there, our program uses the Fun Translations API to translate the text to Braille. Once the Braille, returned as a JSON string, is received by the Intel Edison board, the program parses the data and sends signals to the LEDs to recreate the Braille display.
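The parsing step can be sketched as follows. This is a minimal illustration, assuming the translated Braille arrives as a list of raised dot numbers per character (standard Braille numbering, dots 1-3 down the left column and 4-6 down the right); the function name and the response shape are our assumptions, not the project's actual code:

```python
# Map standard Braille dot numbers (1-6) to (row, column) positions
# in a 3x2 cell: dots 1-3 run down the left column, 4-6 down the right.
DOT_POSITIONS = {
    1: (0, 0), 2: (1, 0), 3: (2, 0),
    4: (0, 1), 5: (1, 1), 6: (2, 1),
}

def dots_to_matrix(dots):
    """Convert a list of raised dot numbers into a 3x2 on/off matrix."""
    matrix = [[0, 0], [0, 0], [0, 0]]
    for d in dots:
        row, col = DOT_POSITIONS[d]
        matrix[row][col] = 1
    return matrix
```

For example, Braille 'a' is dot 1 alone, so `dots_to_matrix([1])` lights only the top-left position; 'b' is dots 1 and 2, lighting the top and middle of the left column.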
How we built it
On the hardware side, we used an Intel Edison paired with an Arduino-compatible base shield. LEDs recreate the array of dots in a Braille letter: the Intel Edison is connected to a 3x2 matrix of LED pins. On the software side, we used the Ionic framework for the frontend; the backend calls the APIs to analyze the image and produces the Braille pattern to be shown on the LEDs.
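A rough sketch of the Edison side, assuming the mraa GPIO library that ships on the Intel Edison; the pin numbers are placeholders for whichever base-shield sockets the six LEDs are actually wired to, and the helper names are ours:

```python
# Hedged sketch: drive a 3x2 LED matrix representing one Braille cell.
# LED_PINS holds placeholder GPIO pin numbers, rows top-to-bottom,
# columns left-to-right, matching the 3x2 Braille cell layout.
LED_PINS = [[2, 3], [4, 5], [6, 7]]

def flatten(matrix, pins=LED_PINS):
    """Pair each LED pin with its on/off state for one 3x2 cell matrix."""
    return [(pins[r][c], matrix[r][c])
            for r in range(3) for c in range(2)]

def show_letter(matrix):
    """Write one Braille cell to the LEDs (requires mraa on-device)."""
    import mraa  # imported lazily so the pure logic above runs anywhere
    for pin, state in flatten(matrix):
        gpio = mraa.Gpio(pin)
        gpio.dir(mraa.DIR_OUT)
        gpio.write(state)
```

Splitting the pin/state pairing out of `show_letter` keeps the hardware-dependent call isolated, so the mapping logic can be exercised off-device.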
Challenges we ran into
We ran into a lot of challenges due to the time crunch and the limited amount of hardware, and had to improvise with what we had. Setup also took a large amount of time due to our unfamiliarity with everything we were using. We actually had to start over from scratch about a third of the way into the hackathon because the Arduino 101 was not powerful enough to make HTTP requests to the web.
Accomplishments that we're proud of
This was the first hackathon at which every one of us submitted a project we all contributed to. Building something meaningful that would help people made this project very satisfying.
What we learned
We learned about new hardware and APIs; none of us had ever worked with any of these APIs or the Intel Edison before.
What's next for DigitalBraille
We're hoping to integrate with and expand to existing apps like text messaging or other messaging apps to extend the capabilities of people with disabilities, allowing them to communicate more efficiently and helping them stay independent.