Inspiration
A non-invasive approach to enabling the visually impaired to see the world in real time.
What it does
It enables a visually impaired user to take in the beauty of the world without a surgical procedure to implant a new pair of eyes.
How we built it
We built it with Python, image processing, a Raspberry Pi, and the Google Cloud Vision API. The system captures a real-time image and converts it to digital signal data; that signal is then converted into low-frequency brainwave signal samples, which are transmitted to the optic nerve using transcranial magnetic stimulation.
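Below is a minimal sketch of the capture-and-recognition stage only, assuming the picamera and google-cloud-vision Python packages and a configured service-account credential; the downstream brainwave conversion step is project-specific and not shown here.

```python
# Minimal sketch: capture one frame on the Raspberry Pi and label it with the
# Google Cloud Vision API. Assumes picamera and google-cloud-vision are
# installed and GOOGLE_APPLICATION_CREDENTIALS points at a valid key file.
import io

from picamera import PiCamera
from google.cloud import vision


def capture_frame() -> bytes:
    """Grab a single still frame from the Pi camera as JPEG bytes."""
    stream = io.BytesIO()
    with PiCamera() as camera:
        camera.resolution = (640, 480)
        camera.capture(stream, format="jpeg")
    return stream.getvalue()


def label_frame(content: bytes) -> list:
    """Send the frame to Cloud Vision and return the detected label names."""
    client = vision.ImageAnnotatorClient()
    image = vision.Image(content=content)
    response = client.label_detection(image=image)
    return [label.description for label in response.label_annotations]


if __name__ == "__main__":
    frame = capture_frame()
    print(label_frame(frame))
```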
Challenges we ran into
Converting digital signals into analog brainwave signal samples, since those samples are low-frequency and hard to decipher manually.
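As a purely hypothetical illustration of the kind of conversion involved, the sketch below resamples a row of 8-bit pixel intensities into a smooth, low-frequency, normalised waveform; the actual mapping used in Vision4U is more involved than this.

```python
# Hypothetical illustration only: map a row of 0-255 pixel intensities onto a
# band-limited, low-frequency waveform by Fourier resampling and rescaling.
import numpy as np
from scipy.signal import resample


def intensities_to_low_freq(pixels: np.ndarray, n_samples: int = 256) -> np.ndarray:
    """Resample pixel intensities to fewer points (removing high-frequency
    content) and normalise the result to the range [-1, 1]."""
    smooth = resample(pixels.astype(float), n_samples)
    return (smooth - smooth.min()) / (smooth.max() - smooth.min()) * 2.0 - 1.0


if __name__ == "__main__":
    row = np.random.randint(0, 256, size=640)  # one image row, for demonstration
    signal = intensities_to_low_freq(row)
    print(signal.shape, float(signal.min()), float(signal.max()))
```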
Accomplishments that we're proud of
Tackling a real-world problem and, through a non-invasive technique, making it easier for the user to experience everything the world's beauty has to offer.
What we learned
We learned to apply our theoretical knowledge about brainwaves and image processing models.
What's next for Vision4U
The current prototype is based on the Google Glass form factor; the next step is to shrink it into a compact contact-lens form.
Built With
- google-cloudvision-api
- google-glass
- python
- raspberry-pi