We live in a world that largely assumes that people can see, meaning that those with visual impairments or blindness are almost always at a disadvantage. There are resources – like guide dogs – that can help, but no perfect solution exists. For example, there's no good way for blind people to identify others from across a room, or get a quick survey of the objects on a desk.

Enter Blindsight, a computing platform that leverages deep learning, computer vision, and the voice recognition and synthesis capabilities of the Amazon Cloud to help visually impaired people gain access to aspects of the world that so many of us take for granted.

Blindsight is composed of three parts: an attachment to a standard probing cane, containing a Bluetooth Low Energy (BLE) beacon; a high-resolution camera smaller than the human iris; and an internet-connected System on a Chip (SoC).
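How these pieces talk to each other isn't spelled out here, but a common pattern is for the SoC to listen for the cane beacon's BLE advertisements and wake the camera when it hears one. A minimal sketch in Python, assuming a Linux SoC running the bleak BLE library (the beacon's MAC address and the capture hand-off are hypothetical):

```python
import asyncio
from bleak import BleakScanner

# Hypothetical MAC address of the beacon inside the cane attachment.
CANE_BEACON_ADDR = "AA:BB:CC:DD:EE:FF"

def on_advertisement(device, adv_data):
    """Called for every BLE advertisement the SoC overhears."""
    if device.address == CANE_BEACON_ADDR:
        print("Beacon press heard; waking the camera pipeline...")
        # trigger_capture() -- hypothetical hand-off to the vision code

async def main():
    scanner = BleakScanner(on_advertisement)
    await scanner.start()
    await asyncio.sleep(60.0)  # listen for one minute in this sketch
    await scanner.stop()

if __name__ == "__main__":
    asyncio.run(main())
```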

When a user activates the beacon, it triggers the camera, which scans the vicinity for objects and people. Depending on the user's choice, it can search for people they know, describe the people in the room, identify objects, and even read signs and labels.
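The write-up doesn't name the vision backend, but on AWS these four modes map naturally onto Amazon Rekognition. A sketch of the dispatch using real boto3 Rekognition calls (the mode names and the pre-enrolled contacts collection are assumptions, not the project's confirmed design):

```python
import boto3

rekognition = boto3.client("rekognition")

def analyze(image_bytes, mode):
    """Send one captured frame to the vision service, by user mode."""
    if mode == "find_known_person":
        # Match the frame against a collection of the user's contacts.
        return rekognition.search_faces_by_image(
            CollectionId="blindsight-contacts",  # hypothetical collection
            Image={"Bytes": image_bytes},
            FaceMatchThreshold=90,
        )
    if mode == "describe_people":
        return rekognition.detect_faces(
            Image={"Bytes": image_bytes}, Attributes=["ALL"]
        )
    if mode == "identify_objects":
        return rekognition.detect_labels(
            Image={"Bytes": image_bytes}, MaxLabels=10, MinConfidence=80
        )
    if mode == "read_text":
        return rekognition.detect_text(Image={"Bytes": image_bytes})
    raise ValueError(f"unknown mode: {mode}")
```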

Blindsight means that a visually impaired user can wait for a friend on a park bench and be notified when that friend arrives. Blindsight is also capable of understanding facial expressions, recognizing celebrities and landmarks, and identifying a variety of sight-based social cues.
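Expressions and celebrity recognition line up with Rekognition's face-attribute and celebrity APIs. A sketch of how those cues might be gathered from a frame (the summary phrasing is purely illustrative):

```python
import boto3

rekognition = boto3.client("rekognition")

def describe_social_cues(image_bytes):
    """Collect expression and public-figure cues from one frame."""
    cues = []
    faces = rekognition.detect_faces(
        Image={"Bytes": image_bytes}, Attributes=["ALL"]
    )
    for face in faces["FaceDetails"]:
        # Report each face's highest-confidence emotion.
        top = max(face["Emotions"], key=lambda e: e["Confidence"])
        cues.append(f"a person who appears {top['Type'].lower()}")
    celebs = rekognition.recognize_celebrities(Image={"Bytes": image_bytes})
    for celeb in celebs["CelebrityFaces"]:
        cues.append(f"{celeb['Name']}, a public figure")
    return cues
```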

It's operated with a single button and the user's voice, and can run on battery power for nearly a week without recharging. In short, Blindsight is virtual vision: it increases independence and helps level the playing field for the visually impaired in a sight-driven society.
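For the spoken half of that loop, Amazon Polly is the AWS synthesis service the description most plausibly points to; a minimal sketch (the voice choice, output path, and example sentence are assumptions):

```python
import boto3

polly = boto3.client("polly")

def speak(text):
    """Turn a scene description into audio for the user."""
    response = polly.synthesize_speech(
        Text=text,
        OutputFormat="mp3",
        VoiceId="Joanna",  # any Polly voice would do; this one is an assumption
    )
    with open("/tmp/blindsight_reply.mp3", "wb") as f:
        f.write(response["AudioStream"].read())
    # The SoC would then play the file through a speaker or earpiece.

speak("Your friend has just arrived at the bench.")
```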
