We are working to implement a speech-to-Braille translator to empower individuals who are both deaf and blind. The core of our voice-to-text technology is IBM Watson, and our device produces a mechanical output in which pins raise and lower to form the corresponding Braille characters. Currently, we can display a single letter at a time using our grid of six pins.
The first component in our system is a Matrix Voice, a board with an array of 8 microphones that relays audio to a Raspberry Pi 3. The Raspberry Pi sends the audio to the IBM Cloud as a .wav file for analysis, and the Watson service returns a transcript of the audio. A Python script analyzes this text and converts each character into an array of six binary values indicating whether each pin should be up or down. This data is sent over GPIO to an Arduino, which actuates the corresponding servos. On each servo are two small links made of laser-cut acrylic; these links connect to 3D-printed pins that form the tactile output. The pins sit inside a laser-cut acrylic top, and holes allow them to rise above the top when the servos move.
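The character-to-pin conversion step above can be sketched as follows. This is a minimal illustration, not our actual script: it assumes the standard Braille dot numbering (dots 1–3 down the left column, 4–6 down the right) and covers only the letters a–j, so the real mapping table and pin ordering may differ.

```python
# Standard Braille cell numbering: dots 1-3 in the left column
# (top to bottom), dots 4-6 in the right column.
# Mapping shown only for a-j; the full script would cover the whole alphabet.
BRAILLE_DOTS = {
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
    "f": {1, 2, 4}, "g": {1, 2, 4, 5}, "h": {1, 2, 5},
    "i": {2, 4}, "j": {2, 4, 5},
}

def char_to_pins(ch):
    """Return six binary values (1 = pin raised), one per Braille dot.

    Unknown characters (including space) lower all pins.
    """
    dots = BRAILLE_DOTS.get(ch.lower(), set())
    return [1 if dot in dots else 0 for dot in range(1, 7)]
```

Each six-element array produced here corresponds to one frame of pin states sent to the Arduino, which maps each value onto a servo position.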
Currently, each individual component of our project works, but we have not yet successfully integrated them. Our first plan of action is to debug everything to get a reliable connection between each component of the project. Next, we will remake our hardware, making the pins easier to insert and downsizing the product with smaller servos; this will achieve a scale closer to that of a standard Braille display. Finally, we plan to expand our display so that entire words or sentences can be displayed at once.