Our team member who came up with the idea, Marin Ito, lived with a blind host family. While living with and interacting with them, she noticed that many everyday items were in Braille (e.g. recipe books). She would use existing Braille translators, but they were extremely hard to use and built on outdated technology. That is why she proposed that we create an app that lets users take pictures of Braille on their phones and uses image recognition to translate it into English (or any other language). We all want the world to be inclusive of all people, and we hope this app can help bridge the communication gap between blind and non-blind people.
Challenges I ran into
- Main issue: using our own dataset and feeding it into the model, which introduced a number of tensor and dimension errors.
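Many of those dimension errors came down to the shape of the input batch. As a minimal sketch (using NumPy to illustrate shapes; the helper name `prepare_batch` and the 28x28 image size are assumptions, not our actual code), a CNN typically expects a channel axis that a plain stack of grayscale images doesn't have:

```python
import numpy as np

def prepare_batch(images):
    """Add the channel axis a CNN expects: (N, H, W) -> (N, 1, H, W),
    and scale pixel values from [0, 255] to [0, 1]."""
    if images.ndim != 3:
        raise ValueError(f"expected shape (N, H, W), got {images.shape}")
    return images[:, np.newaxis, :, :].astype(np.float32) / 255.0

# Hypothetical batch of four 28x28 grayscale Braille-cell images.
imgs = np.random.randint(0, 256, size=(4, 28, 28), dtype=np.uint8)
batch = prepare_batch(imgs)
print(batch.shape)  # (4, 1, 28, 28)
```

Forgetting that extra channel dimension (or mixing up channels-first and channels-last layouts) is exactly the kind of mismatch that produces cryptic tensor errors when the batch hits the first convolution layer.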
What I learned
- How to load an existing dataset and prepare it as input for a CNN model.
- How to train and evaluate the model on the dataset and report its accuracy.
- Created a website using HTML and CSS and established our brand.
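The evaluation step above boils down to comparing the model's highest-scoring class against the true label for each sample. A small sketch of that accuracy computation (the `accuracy` helper and the toy scores are illustrative, not our actual training code):

```python
import numpy as np

def accuracy(logits, labels):
    """Fraction of samples whose highest-scoring class matches the label."""
    preds = np.argmax(logits, axis=1)  # predicted class per sample
    return float(np.mean(preds == labels))

# Hypothetical model outputs for four Braille-letter samples, three classes.
logits = np.array([[0.1, 2.0, 0.3],
                   [1.5, 0.2, 0.1],
                   [0.0, 0.1, 3.0],
                   [0.9, 0.8, 0.1]])
labels = np.array([1, 0, 2, 1])
print(accuracy(logits, labels))  # 0.75
```

In a real training loop this would be computed over a held-out evaluation set after each epoch, not over the training batches themselves.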
What's next for AEyeAlliance
- Find/prepare a dataset for Braille.
- Recognition of letter -> word -> line -> paragraph.
- Auto-detection of the language the Braille is in.
- Build it into an app.
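One way the letter -> word -> line step might work is to have the classifier emit one symbol per Braille cell and then group those symbols. A sketch under assumed conventions (the `<sp>` and `<nl>` separator tokens and the `assemble` helper are hypothetical, not something we have built yet):

```python
def assemble(cell_predictions):
    """Group per-cell letter predictions into words and lines.

    '<sp>' marks a blank cell between words and '<nl>' marks a row
    break; both are hypothetical tokens the classifier might emit.
    Returns a list of lines, each a list of words."""
    text = "".join(" " if c == "<sp>" else "\n" if c == "<nl>" else c
                   for c in cell_predictions)
    return [line.split() for line in text.split("\n")]

cells = ["h", "i", "<sp>", "m", "o", "m", "<nl>", "b", "y", "e"]
print(assemble(cells))  # [['hi', 'mom'], ['bye']]
```

A real pipeline would also need to locate and order the cells in the photo before any of this grouping can happen.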