We were inspired to build an application that could assist people with dyslexia and reading disabilities.

What it does

Alice, your personal reader, converts text in an image taken on a smartphone into audio.

How we built it

We used the Google Cloud Vision and Text-to-Speech APIs on Android, Firebase, ML Kit, and AWS, with HTML for the web front end.
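At its core, the app chains two of those services: text recognition on the captured image, then speech synthesis on the recognized text. The sketch below shows that flow in plain Java; the interfaces and class names are hypothetical stand-ins, where `OcrEngine` would be backed by ML Kit / Cloud Vision text recognition and `SpeechEngine` by the Text-to-Speech API in the real Android app.

```java
import java.util.List;

// Hypothetical abstraction over ML Kit / Cloud Vision text recognition.
interface OcrEngine {
    // Returns the lines of text recognized in the captured image.
    List<String> recognize(byte[] imageBytes);
}

// Hypothetical abstraction over the Text-to-Speech API.
interface SpeechEngine {
    // Queues the given text for audio playback.
    void speak(String text);
}

class PersonalReader {
    private final OcrEngine ocr;
    private final SpeechEngine tts;

    PersonalReader(OcrEngine ocr, SpeechEngine tts) {
        this.ocr = ocr;
        this.tts = tts;
    }

    // Core flow: photo -> recognized text -> spoken audio.
    String readAloud(byte[] imageBytes) {
        String text = String.join(" ", ocr.recognize(imageBytes));
        tts.speak(text);
        return text;
    }
}

public class AliceSketch {
    public static void main(String[] args) {
        // Stub engines so the sketch runs without Android or cloud credentials.
        OcrEngine fakeOcr = image -> List.of("Hello", "world");
        StringBuilder spoken = new StringBuilder();
        SpeechEngine fakeTts = spoken::append;

        PersonalReader alice = new PersonalReader(fakeOcr, fakeTts);
        String text = alice.readAloud(new byte[0]);
        System.out.println("Spoke: " + text);  // Spoke: Hello world
    }
}
```

Keeping the recognition and speech engines behind interfaces is what lets the same reading flow sit on different backends per platform (ML Kit on Android, Vision on iOS) without changing the core logic.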

Challenges we ran into

We had trouble integrating the APIs and debugging and styling the front ends of all three apps: iOS, Android, and web. Setting up our domain with AWS S3 web hosting also had its difficulties.

Accomplishments that we're proud of

We are proud to present the application on both iOS and Android, even at a beta stage, because of its pertinent real-world applications, and we were able to launch our first website.

What we learned

We learned much more about cloud computing services and APIs and how to integrate them into Swift and Java. We also learned how to launch the camera and how to create and host a website.

What's next for Alice: Personal Reader

In the future, we foresee adding the ability to translate foreign-language text and read it aloud, as well as to read what is displayed live on the phone screen. Alice will also become a web browser extension able to read what is on a computer screen.
