Inspiration

We wanted to learn more about computer vision and use our skills to build a useful application that could help others. We were inspired by the way our eyes perceive depth and thought we could mimic a similar process in technology to help the blind.

What it does

eyePercept takes in live video footage from a camera and sends haptic feedback to the user indicating whether an object is close to them. For example: if they are about to step on a Lego!
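The core behavior — turning an estimated object distance into a haptic signal — could be sketched roughly like this. The function name, the warning threshold, and the linear intensity ramp are all illustrative assumptions, not values from the project:

```python
def haptic_intensity(distance_m, warn_distance_m=1.5):
    """Map an estimated object distance to a vibration intensity in [0, 1].

    Objects farther than warn_distance_m produce no feedback; closer
    objects vibrate more strongly the nearer they get. The threshold
    and the linear ramp are made-up placeholders for illustration.
    """
    if distance_m >= warn_distance_m:
        return 0.0
    return 1.0 - (distance_m / warn_distance_m)

# A distant chair stays silent; a Lego right underfoot buzzes hard.
print(haptic_intensity(2.0))   # far away: no feedback
print(haptic_intensity(0.15))  # very close: strong vibration
```

In a real device, the returned intensity would drive a vibration motor rather than a print statement.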

How we built it

We built eyePercept in Python, using OpenCV as our main computer vision library.

Challenges we ran into

We had initially wanted to measure distance and depth with only one camera, but we learned that this is not possible without a depth sensor or a second camera. We had to rework our plan many times to fit the resources available to us. It was also quite difficult to take on a challenging project while learning the underlying topics at the same time.
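The reason two cameras work is the standard pinhole-stereo triangulation relation: depth is proportional to the focal length and the baseline between the cameras, and inversely proportional to the disparity between matched pixels. A minimal sketch of that formula (the focal length and baseline below are hypothetical placeholder numbers, not our calibration):

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Estimate depth in meters from stereo disparity.

    Uses depth = focal_length * baseline / disparity, the standard
    pinhole-stereo triangulation formula. Returns None when disparity
    is zero or negative (no match / object effectively at infinity).
    """
    if disparity_px <= 0:
        return None
    return focal_length_px * baseline_m / disparity_px

# Hypothetical setup: 700 px focal length, 6 cm baseline.
# A 40 px disparity then corresponds to roughly 1 m of depth.
print(depth_from_disparity(40, 700, 0.06))
```

This is also why a single camera is not enough: with one viewpoint there is no disparity to plug into the formula.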

Accomplishments that we're proud of

We are proud of the skills and knowledge we gained throughout this experience, and of our ability to build something while learning new topics in a short amount of time.

What we learned

We learned about OpenCV and computer vision, while improving our skills in Python.

What's next for eyePercept

eyePercept still needs a lot of improvement in areas such as accurate object recognition, image calibration, depth maps, implementing the haptic feedback, integrating an API, and more. We also want to make the program more accessible by turning it into a mobile application. Lastly, we believe the technology behind eyePercept will only become more relevant, since more phones are shipping with depth sensors.

Built With

opencv, python
