Inspiration

Google Glass and other wearable tech largely lack features that make the world more accessible to the visually impaired. Recognizing the expanding world of wearable tech and intelligent AI, we set out to use them to build a prototype that is both affordable and powerful.

What it does

At the press of a button, a camera captures an image of the user's environment and the software detects text and objects in it. When it detects something recognizable, it converts the image data to text and reads the description of the environment/text aloud to the user.
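The button-press cycle above can be sketched in Python. This is a minimal, hypothetical outline of the flow, not our actual source: on the real device, `capture`, `recognize`, and `speak` would wrap a camera library, an OCR library such as pytesseract, and a text-to-speech engine; here they are passed in as plain callables so the pipeline itself is easy to follow.

```python
def describe_scene(capture, recognize, speak):
    """Run one button-press cycle: grab a frame, extract any text or
    object labels from it, and read the result aloud.

    capture   -- returns one image frame (e.g. a Pi camera snapshot)
    recognize -- turns a frame into a text description (e.g. OCR output)
    speak     -- reads a string aloud through the speaker

    Returns the description that was spoken.
    """
    frame = capture()                       # take a picture of the environment
    text = recognize(frame)                 # image data -> text
    description = text.strip() or "Nothing recognizable in view."
    speak(description)                      # audio feedback to the user
    return description


# Example with stand-in callables (no hardware needed):
spoken = []
result = describe_scene(
    capture=lambda: "fake-frame",
    recognize=lambda frame: "  EXIT  ",
    speak=spoken.append,
)
# result is "EXIT" and spoken now holds the same string
```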

How we built it

We used an IDE and several libraries to develop the program, and AutoCAD with a CNC machine to manufacture the enclosure for the processor and power-supply unit.

Challenges we ran into

- Initially getting our development environment set up (Python 3, pip issues, library path issues)
- Design trade-offs: online or offline? Powerful and expensive, or weak and portable?
- Power demands: should we use an Arduino or a Raspberry Pi?
- Physical dimensions: how could we make the device convenient to use and transport?

Accomplishments that we're proud of

- Successfully creating image-processing software that can interpret its environment
- Bringing these programs into the real world as working hardware
- Modeling the enclosure from scratch

What we learned

- Better programming techniques and use of libraries
- First-time experience with AutoCAD and Fusion 360, and with operating a CNC machine
- Ways to integrate hardware with software
- First-hand experience with an AI API

What's next for Sight for the Blind

Utilizing AI to intelligently interpret the camera's environment, and possibly adding more features such as filters that tell the camera what to look for. For instance: What kind of room is the user in? What do the signs say? Can it recognize familiar faces?
