Inspiration
We wanted to improve accessibility with items already at our disposal.
What it does
Readefine is a smart glove, built around a Raspberry Pi and camera, that lets users point at printed text and hear the word they are pointing at spoken aloud.
How we built it
We first take a picture with the Raspberry Pi camera module and crop it to remove extraneous text clearly outside the centre of the frame, which reduces processing time. The cropped image is sent to the Google Cloud Vision OCR API, which returns a JSON response containing every detected word and its location. We process that JSON to find the word being pointed at by comparing each word's position to the centre of the image, where the finger sits. Finally, the Unix text-to-speech system speaks the word aloud.
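The sketch below shows one way this pipeline could look in Python. The file path, helper names, and the use of `espeak` for the Unix text-to-speech step are assumptions for illustration, not the glove's actual code.

```python
# Minimal sketch of the capture -> OCR -> nearest-word -> speech pipeline.
# Assumptions: a fixed temp file path, google-cloud-vision >= 2.x, and
# `espeak` installed as the command-line TTS engine.
import subprocess

from PIL import Image
from google.cloud import vision   # pip install google-cloud-vision
from picamera import PiCamera     # available on the Raspberry Pi

IMAGE_PATH = "/tmp/frame.jpg"     # hypothetical capture location


def capture_frame(path=IMAGE_PATH):
    """Grab a single still from the Pi camera module."""
    camera = PiCamera()
    try:
        camera.capture(path)
    finally:
        camera.close()


def word_at_centre(path=IMAGE_PATH):
    """OCR the frame and return the word whose bounding box centre lies
    closest to the image centre, where the fingertip sits."""
    with open(path, "rb") as f:
        content = f.read()

    client = vision.ImageAnnotatorClient()
    response = client.text_detection(image=vision.Image(content=content))
    annotations = response.text_annotations
    if len(annotations) < 2:        # annotations[0] is the full text block
        return None

    with Image.open(path) as im:
        cx, cy = im.width / 2, im.height / 2

    def distance_to_centre(word):
        vs = word.bounding_poly.vertices
        wx = sum(v.x for v in vs) / 4
        wy = sum(v.y for v in vs) / 4
        return (wx - cx) ** 2 + (wy - cy) ** 2

    # Skip the first annotation (the whole text block), keep individual words.
    return min(annotations[1:], key=distance_to_centre).description


def speak(word):
    """Hand the word to the command-line TTS engine (espeak assumed here)."""
    subprocess.run(["espeak", word], check=True)


if __name__ == "__main__":
    capture_frame()
    word = word_at_centre()
    if word:
        speak(word)
```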
Challenges we ran into
Hardware
- Designing the glove structure to position the finger in the centre of the image
- Placing the Raspberry Pi and breadboard on the glove
Software
- Reducing the latency between when the finger moves to a new word and when that word is read aloud
- Speeding up the OCR calls by preprocessing the image to reduce the number of words on the page (a rough sketch of this cropping step follows this list)
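A rough sketch of that preprocessing step, assuming Pillow is available on the Pi; the `keep_fraction` value is an illustrative guess, not a measured setting.

```python
# Keep only the central region of the frame so the Vision API has fewer
# words to detect, shortening the OCR round trip.
from PIL import Image


def crop_to_centre(path, out_path, keep_fraction=0.5):
    """Crop the frame to a centred box covering `keep_fraction` of each side."""
    with Image.open(path) as im:
        w, h = im.size
        dx, dy = int(w * keep_fraction / 2), int(h * keep_fraction / 2)
        box = (w // 2 - dx, h // 2 - dy, w // 2 + dx, h // 2 + dy)
        im.crop(box).save(out_path)
```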
Accomplishments that we're proud of
We're really proud to have completed a hardware-intensive project with successful integration of all the processes. In particular, we are proud to have made a product that both furthers our personal development and is a meaningful application of our skills. Friendship.
What we learned
We learned to use the Raspberry Pi and its camera module. We also learned all the lyrics to Mr. Brightside.
What's next for Readefine
- Further reduce latency
- Collect the other Infinity Stones
Built With
- google-cloud-vision-api
- python
- raspberry-pi