We like machine learning and computer vision, and we wanted to help the visually impaired with day-to-day activities.
What it does
Takes an uploaded image, extracts the text from it, converts that text to speech, and plays the speech back to the user.
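That image-to-text-to-speech flow can be sketched as a small pipeline. The OCR and TTS steps are passed in as callables because the submission doesn't name the exact libraries; `pytesseract` and `gTTS` in the comments are only plausible stand-ins, and the stubs below exist just so the sketch runs on its own:

```python
from typing import Callable

def describe_image(image_bytes: bytes,
                   ocr: Callable[[bytes], str],
                   tts: Callable[[str], bytes]) -> bytes:
    """Core 1000Words flow: image -> text -> speech audio."""
    text = ocr(image_bytes)   # in practice, an OCR API (e.g. pytesseract)
    audio = tts(text)         # in practice, a TTS API (e.g. gTTS)
    return audio              # audio bytes are then played back to the user

# Stub OCR/TTS so the sketch runs without any external service.
fake_ocr = lambda img: "hello world"
fake_tts = lambda txt: txt.upper().encode()

print(describe_image(b"\x89PNG...", fake_ocr, fake_tts))  # b'HELLO WORLD'
```

Keeping the two stages behind simple function interfaces like this makes it easy to swap out whichever OCR or TTS provider a team ends up using.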
How we built it
Many APIs and libraries, along with many hours of debugging.
Challenges we ran into
Accomplishments that we're proud of
Figuring out how to parse and splice "blobs" in base64 file streams, after nearly giving up following 4-5 hours of debugging the same problem. I also like my website's color scheme; it feels very modern.
What we learned
What's next for 1000Words
Angle, depth, and facial-position recognition, so the user can turn their head in the right direction to face the person talking to them and make eye contact.