Inspiration

Someone once asked: if a Tesla can drive itself at 70 mph without human supervision, why can't a blind person get navigation assistance at 1 or 2 mph? This question was the main source of inspiration that pushed us to stretch the technological boundaries we know and develop a potentially life-changing application for visually impaired people. It is gut-wrenching to see innovation leave disabled people behind, and we decided we needed to change that and level the playing field.

What it does

EyeLead scans the user's surroundings and describes the world around them in detail. It can recognize objects, people, and text, and it can measure the distance between the user and objects around them.
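
A minimal sketch of the recognition piece, assuming a pre-trained detector from TF Hub (the model URL is a real TF Hub model, but not necessarily the exact one EyeLead uses, and "frame.jpg" stands in for a camera frame):

```python
import tensorflow as tf
import tensorflow_hub as hub

# Pre-trained SSD MobileNet trained on COCO, loaded from TF Hub.
detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")

# One frame from the user's camera ("frame.jpg" is a placeholder).
img = tf.io.decode_jpeg(tf.io.read_file("frame.jpg"), channels=3)
result = detector(tf.expand_dims(img, axis=0))  # batch of one uint8 image

scores = result["detection_scores"][0].numpy()
classes = result["detection_classes"][0].numpy().astype(int)
boxes = result["detection_boxes"][0].numpy()  # normalized [ymin, xmin, ymax, xmax]
for score, cls, box in zip(scores, classes, boxes):
    if score > 0.5:  # keep only confident detections
        print(f"class {cls} at {box} (confidence {score:.2f})")
```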

How we built it

We used Python with the GCP Speech-to-Text and Text-to-Speech APIs for voice-to-text and text-to-voice conversion, and integrated a simple React app with pre-trained TensorFlow models.
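
For a sense of what the voice pipeline looks like, here is a minimal sketch of the two GCP calls using the standard google-cloud-speech and google-cloud-texttospeech client libraries; the surrounding wiring (audio capture, the React front end) is omitted:

```python
from google.cloud import speech, texttospeech

def transcribe(audio_bytes: bytes) -> str:
    # Voice-to-text: encoding and sample rate are inferred for WAV/FLAC
    # audio; set them explicitly in RecognitionConfig for raw streams.
    client = speech.SpeechClient()
    response = client.recognize(
        config=speech.RecognitionConfig(language_code="en-US"),
        audio=speech.RecognitionAudio(content=audio_bytes),
    )
    return " ".join(r.alternatives[0].transcript for r in response.results)

def speak(text: str) -> bytes:
    # Text-to-voice: synthesize an MP3 description for the user.
    client = texttospeech.TextToSpeechClient()
    response = client.synthesize_speech(
        input=texttospeech.SynthesisInput(text=text),
        voice=texttospeech.VoiceSelectionParams(language_code="en-US"),
        audio_config=texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.MP3
        ),
    )
    return response.audio_content
```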

Challenges we ran into

+ adding microphone access so that we could interface with Google Colab

+ figuring out how to carry the Speech-to-Text and Text-to-Speech API capabilities over into React.js

+ selecting the fastest API (see the timing sketch below)
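
One way to settle the last point is to measure round-trip latency directly. Below is an illustrative micro-benchmark, where call_api is a hypothetical stand-in for any candidate client:

```python
import time

def mean_latency(call_api, payload, trials=5):
    # Average wall-clock time of `trials` round trips to one API.
    start = time.perf_counter()
    for _ in range(trials):
        call_api(payload)
    return (time.perf_counter() - start) / trials

# e.g. pick whichever candidate answers fastest on the same sample:
# fastest = min(candidates, key=lambda fn: mean_latency(fn, sample_audio))
```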

Accomplishments that we're proud of

We are proud of taking such an inspiring idea from a vision to a high-fidelity prototype. The collaboration, the coming together of ideas, and all of the challenges we overcame are things to be proud of. From strangers to a unified team of dreamers.

What we learned

How to access the microphone through the Google Colab virtual environment

One cannot access the microphone directly from Google Colab because no microphone is attached to the virtual machine that executes Python code in Colab. Instead, you access the microphone of the computer running the web browser, capture the audio there, and pass it back to the virtual machine for processing in Python.
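
A minimal sketch of that pattern, assuming a MediaRecorder-capable browser: the JavaScript runs in the page (where the microphone lives), and google.colab.output.eval_js pulls the recording back into the notebook as base64. The recordAudio helper is our own illustrative name, not a Colab built-in:

```python
import base64
from google.colab import output
from IPython.display import Javascript, display

RECORD_JS = """
async function recordAudio(ms) {
  // Runs in the browser, where the microphone actually is.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream);
  const chunks = [];
  recorder.ondataavailable = e => chunks.push(e.data);
  const stopped = new Promise(resolve => recorder.onstop = resolve);
  recorder.start();
  await new Promise(resolve => setTimeout(resolve, ms));
  recorder.stop();
  await stopped;
  stream.getTracks().forEach(t => t.stop());  // release the mic
  const dataUrl = await new Promise(resolve => {
    const reader = new FileReader();
    reader.onloadend = () => resolve(reader.result);
    reader.readAsDataURL(new Blob(chunks));
  });
  return dataUrl.split(',')[1];  // base64 payload only
}
"""

def record_audio(seconds=5):
    # Define the recorder in the page, run it, and decode the result
    # back into raw bytes on the Colab VM.
    display(Javascript(RECORD_JS))
    b64 = output.eval_js(f"recordAudio({seconds * 1000})")
    return base64.b64decode(b64)

audio_bytes = record_audio(5)  # e.g. feed these bytes to Speech-to-Text
```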

Our technical research shows huge potential in integrating pre-existing technologies with relatively newer ones to eventually help our target customers. It also shows that most existing applications lack a human touch, and that is definitely something we want to work on with new and upcoming tech.

What's next for EyeLead - The Most Futuristic Visual Aid Assistant

The next steps for EyeLead would be to give EyeLead its own programmed voice and to add indoor and outdoor navigation, expanding the voice features that users can ask EyeLead for.

Built With

Python, React, TensorFlow, Google Cloud Platform (Speech-to-Text & Text-to-Speech APIs)