VISION ASSISTANCE
INSPIRATION:
We observed a real demand for assisting blind people and helping them become independently competent in this fast-paced society. The current products on the market are not favorable: they lack accuracy, suffer from poor design, and some are not cost-efficient. So we took the initiative to tackle this problem and came up with our idea: a portable device.
HOW IT WORKS:
The system is designed to capture an image in the direction the camera is pointed and notify the person about the surroundings via a narrator. The captured image is first preprocessed to fit our model. The preprocessed image is then streamed through the layers of our CNN (Convolutional Neural Network) model, where normalization and feature-extraction steps improve the quality of the output. The result is produced as text, which we convert to speech using gTTS (Google Text-to-Speech), a Python library and CLI tool that announces the detected objects through the narrator.
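The pipeline above can be sketched roughly as follows. This is a minimal illustration, not our exact implementation: the preprocessing details (target size, normalization range) and label names are assumptions, while gTTS is the actual library we use for narration.

```python
# Sketch of the capture -> preprocess -> detect -> narrate pipeline.
import numpy as np

def preprocess(frame: np.ndarray, size=(224, 224)) -> np.ndarray:
    """Resize a camera frame (nearest-neighbour here, for brevity)
    and normalize pixel values to [0, 1] for the CNN."""
    h, w = frame.shape[:2]
    rows = np.linspace(0, h - 1, size[0]).astype(int)
    cols = np.linspace(0, w - 1, size[1]).astype(int)
    resized = frame[rows][:, cols]
    return resized.astype(np.float32) / 255.0

def announce(labels):
    """Turn detected object labels into speech with gTTS.
    Requires network access and an audio player on the device."""
    from gtts import gTTS  # third-party: pip install gTTS
    text = "I can see " + ", ".join(labels) if labels else "Nothing detected"
    gTTS(text=text, lang="en").save("announcement.mp3")
```

In the actual device, `preprocess` would feed the CNN running on the Jetson Nano, and the model's predicted labels would be passed to `announce` to play through the speaker.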
CHALLENGES:
We definitely faced many challenges. The first was the size of the device: to keep it portable, we used NVIDIA's Jetson Nano Developer Kit to run our model. Since the main objective revolves around object detection, we need a good-quality camera to capture the objects. Finally, for the power supply, we are temporarily using a power bank as the source.
Built With
- camera
- cnn
- deeplearning
- gtts
- jetson-nano
- python
- speaker