Inspiration

We were inspired by the idea of creating something easily accessible for the many people who need it. We thought about making the most of what we already have (the "underconsumption core" trend) and about using current technologies in more ways than they are used today.

One of our friends broke her leg, and it was difficult to see her struggle to maintain her previous independence. We recognized that staying independent is much easier when you have something to “anchor” yourself—whether that’s a crutch, a walking cane, or a friend by your side. We wanted to build something that would help “anchor” those who need it, using something they likely already have: a modern smartphone.

What it does

Anchor is an Android app meant to aid the visually impaired by utilizing the dual-lens/wide-angle camera found on most modern smartphones. The app uses the slight angle difference between the two lenses to generate a disparity map, then applies a deep-learning object-recognition process to detect when objects cross a proximity threshold to the phone’s camera. The user can thus be warned early of approaching obstacles such as walls and street lamps. The phone can be placed somewhere convenient, like a front pocket, so the camera regularly captures photos of the space ahead. It serves as a discreet, easily accessible virtual cane.
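The core idea is standard stereo triangulation: an object that is closer to the camera shifts more between the two lens views, so a larger disparity means a smaller distance. A minimal sketch of that relationship, using hypothetical camera numbers rather than Anchor’s actual calibration:

```python
# Sketch of the stereo-depth idea behind Anchor. The focal length and
# lens baseline below are illustrative, not the app's real calibration.
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Triangulate distance: depth = f * B / d (closer objects shift more)."""
    if disparity_px <= 0:
        return float("inf")  # no measurable shift -> effectively far away
    return focal_length_px * baseline_m / disparity_px

# A hypothetical dual-lens phone: ~1000 px focal length, 1 cm lens spacing.
print(depth_from_disparity(20.0, 1000.0, 0.01))  # -> 0.5 (metres)
print(depth_from_disparity(50.0, 1000.0, 0.01))  # -> 0.2: larger shift, nearer
```

Because depth and disparity are inversely related, the app never needs metric depth at all; comparing raw disparity against a threshold is enough to flag "too close."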

How we built it

We used Android Studio with the OpenCV library, as well as the Camera2 API and the Gradle API.

We used the Camera2 API and OpenCV for the simultaneous double image capture. We process the image pairs by first converting them to grayscale to make the disparity clearer, then applying the StereoBM algorithm to compute the disparity in terms of pixel distance. We use deep learning for object detection and identify the disparity values that surpass the “closeness” threshold, which triggers the app’s vibration feature.
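The last step of that pipeline—deciding when the disparity map is “close enough” to vibrate—can be sketched in a few lines. This is a simplified stand-in for the app’s logic; the threshold and pixel fraction below are hypothetical, not Anchor’s tuned values:

```python
# Minimal sketch of the "closeness" check: count how much of the disparity
# map exceeds a threshold and decide whether to fire the vibration alert.
# close_threshold and min_fraction are hypothetical example values.
def should_alert(disparity_map, close_threshold=60, min_fraction=0.05):
    """Return True when enough pixels show a large disparity (i.e. are near)."""
    values = [d for row in disparity_map for d in row]
    close = sum(1 for d in values if d >= close_threshold)
    return close / len(values) >= min_fraction

# Toy 3x3 disparity map: a near patch (large values) among far pixels.
toy_map = [[10, 10, 80],
           [10, 90, 85],
           [10, 10, 10]]
print(should_alert(toy_map))  # True: 3/9 of pixels exceed the threshold
```

Requiring a minimum fraction of close pixels, rather than a single pixel, is one simple way to keep sensor noise in the disparity map from triggering spurious vibrations.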

Challenges we ran into

We ran into issues with the IDE installation and with OpenCV dependencies.

Accomplishments that we're proud of

We are proud of being able to find a use for something that so many people already have in their pocket!

What's next for Anchor

In the future, we want to implement ML algorithms that can discern between different objects, so that harmless ones like a falling leaf or a passing bird don’t trigger redundant alerts.
