Inspiration

I have known several brilliant people with autism or other conditions that affect speech who have great ideas and would make fantastic entrepreneurs, but who, due to their disabilities, couldn't share those ideas with their peers. vrbl seeks to solve that problem by creating a software interface that gives those people a voice.

What it does

vrbl uses pre-made phrases and words, sorted by type, to let users construct sentences and speak them aloud with a synthesized voice.

How we built it

We built the app in Xcode using Apple's AVFoundation framework: tapping phrase buttons assembles a sentence, which is then spoken using Apple's built-in library of voices.
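The core of this pipeline can be sketched in a few lines of Swift. This is an illustrative sketch, not vrbl's actual source; the `PhraseSpeaker` type and `speak(fragments:)` method are our own names, but the AVFoundation calls (`AVSpeechUtterance`, `AVSpeechSynthesizer`, `AVSpeechSynthesisVoice`) are the real framework API:

```swift
import AVFoundation

// Hypothetical sketch of the phrase-to-speech pipeline.
final class PhraseSpeaker {
    // One synthesizer instance is kept alive so speech isn't cut off
    // when the object would otherwise be deallocated mid-utterance.
    private let synthesizer = AVSpeechSynthesizer()

    /// Joins the phrase fragments the user tapped into one sentence
    /// and hands it to the system speech synthesizer.
    func speak(fragments: [String]) {
        let sentence = fragments.joined(separator: " ")
        let utterance = AVSpeechUtterance(string: sentence)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        utterance.rate = AVSpeechUtteranceDefaultSpeechRate
        synthesizer.speak(utterance)
    }
}

// Example: called when the user taps "Speak" after selecting buttons.
// PhraseSpeaker().speak(fragments: ["I", "would like", "some water"])
```

Queuing each sentence as a single `AVSpeechUtterance` keeps the prosody natural, since the synthesizer handles pacing within the utterance itself.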

Challenges we ran into

The biggest problem in voice synthesis is realism. The most realistic solutions are very expensive, so we used the best free library we could find. Another challenge was UI design: the interface needs to be easy to use, with large fonts and buttons, while still displaying a lot of information. We opted for a simple, friendly design.

Accomplishments that we're proud of

Parsing the user's input and feeding it into AVFoundation's AVSpeechSynthesizer class.

What we learned

Realism is IMPORTANT. It's very hard to give a synthesized voice an identity, to attach it to a face. The technology needs serious advances before it becomes seamless.

What's next for vrbl

• Expand further with iPad and Android apps.
• Bring the product into the mainstream and establish it as a serious competitor to the expensive solutions currently on the market.
• Raise awareness about autism, Down syndrome, and other developmental disabilities.
