We like machine learning and computer vision and wanted to help the visually impaired with day-to-day activities.

What it does

Takes an uploaded image, extracts the text from it, converts that text into speech, and plays the speech back to the user.
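The image-to-text-to-speech flow above can be sketched as a simple pipeline. The `ocr` and `tts` functions below are hypothetical stand-ins for whatever OCR and text-to-speech services the app actually calls; only the wiring between the stages is being illustrated.

```javascript
// Minimal sketch of the pipeline: image -> text -> speech.
// The real services are injected so each stage stays swappable.
function runPipeline(image, ocr, tts) {
  const text = ocr(image);   // stage 1: image in, text out
  const speech = tts(text);  // stage 2: text in, audio out
  return speech;             // stage 3: played back to the user
}

// Stub services, purely for illustration:
const stubOcr = (img) => `words read from ${img}`;
const stubTts = (text) => ({ play: () => console.log(`speaking: ${text}`) });

runPipeline('photo.png', stubOcr, stubTts).play();
```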

How we built it

Many APIs and libraries, along with many hours of debugging.

Challenges we ran into

Browsers have built-in file-system protections that replace an uploaded file's real path with a fake one, which is unusable in an Ajax file stream. To work around this, I read the image as a base64 data URL, parsed out the base64 payload, and reassembled it into a Blob, the binary file-like object JavaScript needs in order to send the data through an Ajax request.
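The workaround described above boils down to a data-URL-to-Blob conversion. A minimal sketch (assuming an environment where `atob` and `Blob` are available, i.e. any browser or Node 18+):

```javascript
// Convert a base64 data URL (e.g. from FileReader.readAsDataURL)
// into a Blob that can go into FormData for an Ajax upload.
function dataURLToBlob(dataURL) {
  // A data URL looks like: data:<mime-type>;base64,<payload>
  const [header, payload] = dataURL.split(',');
  const mime = header.match(/data:(.*?);base64/)[1];
  const binary = atob(payload);              // decode base64 to a byte string
  const bytes = new Uint8Array(binary.length);
  for (let i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i);         // copy each byte into the array
  }
  return new Blob([bytes], { type: mime });
}

// Example: "aGVsbG8=" is base64 for "hello" (5 bytes)
const blob = dataURLToBlob('data:text/plain;base64,aGVsbG8=');
console.log(blob.size, blob.type); // logs: 5 text/plain
```

The resulting Blob can then be appended to a `FormData` object and sent with `XMLHttpRequest` or `fetch`, which is what the fake-path protection blocks you from doing with the file input directly.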

Accomplishments that we're proud of

Figuring out how to parse and splice Blobs out of base64 file streams, after almost giving up following 4-5 hours of debugging the same problem. Also, I like my website's color scheme: very modern.

What we learned

I learned way too much Ajax, that Materialize CSS isn't as responsive as advertised, and that I'm pretty adept at JavaScript at this point.

What's next for 1000Words

Angle, depth, and facial-position recognition, so the user can turn their head in the right direction to face the person talking to them and make eye contact back.
