Globally, an estimated 285 million people of all ages are visually impaired, of whom 39 million are blind; people 50 years and older make up 82% of all blind people. The biggest challenge for a blind person, especially one with complete loss of vision, is navigating around places. Blind people move around their own house easily without any help because they know where everything is. But what about outside areas? Our idea gives an easy solution to this problem. I built a hardware product using a Raspberry Pi and deep learning technologies to assist blind people in getting an idea of their surroundings by continuously detecting objects and giving them voice commands to navigate.

What it does

A deep learning model detects nearby objects for blind people. A wireless camera connected to a Raspberry Pi supplies image input, which the model uses to predict which objects are in close proximity. A voice assistant lets the user give commands to the device; in addition, the voice assistant gives the user feedback about the surroundings and reports on any task the user has assigned.
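The capture-detect-speak flow described above can be sketched as a simple main loop. The names `capture_frame`, `detect_objects`, and `speak` are hypothetical placeholders for the camera, model, and text-to-speech components, and the proximity threshold is an assumed parameter:

```python
# Hypothetical assist loop: capture_frame, detect_objects and speak are
# placeholders for the camera, the object-detection model, and the
# text-to-speech voice assistant, injected as callables.
def assist_loop(capture_frame, detect_objects, speak, proximity_threshold=0.5):
    """Continuously announce objects the model judges to be close by."""
    while True:
        frame = capture_frame()
        if frame is None:  # camera stopped / shutdown requested
            break
        # detect_objects is assumed to return (label, score) pairs,
        # where a higher score means the object is closer / more confident.
        for label, score in detect_objects(frame):
            if score >= proximity_threshold:
                speak(f"{label} ahead")
```

Injecting the three components as functions keeps the loop testable without real camera or audio hardware.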

How I built it

I used the deep learning libraries TensorFlow and Keras for object detection and person identification. For object detection, I trained the model on the COCO dataset; after training, I converted it to TensorFlow Lite format using the TensorFlow Lite converter. To improve the accuracy of the model, I used transfer learning and data augmentation, with MobileNetV2 (pretrained on the ImageNet dataset) as the base model for transfer learning. I deployed the deep learning model in an Android app built in Android Studio, using TensorFlow Lite, to showcase a prototype of my idea.
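A minimal sketch of this pipeline in Keras, assuming a simplified classification head on top of MobileNetV2 rather than the full detection setup (the class count, layer choices, and output filename are illustrative assumptions):

```python
import tensorflow as tf
from tensorflow import keras

NUM_CLASSES = 80  # COCO defines 80 object categories

# Data augmentation, applied to training batches to reduce overfitting
augment = keras.Sequential([
    keras.layers.RandomFlip("horizontal"),
    keras.layers.RandomRotation(0.1),
    keras.layers.RandomZoom(0.1),
])
# train_ds = train_ds.map(lambda x, y: (augment(x, training=True), y))

# Transfer learning: MobileNetV2 pretrained on ImageNet, frozen as a feature extractor
base = keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

inputs = keras.Input(shape=(224, 224, 3))
x = keras.layers.Rescaling(1.0 / 127.5, offset=-1)(inputs)  # scale to [-1, 1] as MobileNetV2 expects
x = base(x, training=False)
x = keras.layers.GlobalAveragePooling2D()(x)
outputs = keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, epochs=10)  # train on labelled COCO image crops

# Convert the trained model to TensorFlow Lite for on-device deployment
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
with open("third_eye.tflite", "wb") as f:
    f.write(tflite_model)
```

Keeping the augmentation in the data pipeline (rather than inside the model graph) keeps the converted `.tflite` model free of training-only layers.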

I built prototypes of two parts of the idea:

  1. Object detection model
  2. Person identification model

Because I am using TensorFlow and TensorFlow Lite, the deep learning models run on-device, without connecting to servers and without internet connectivity.
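On-device inference with the TensorFlow Lite interpreter works roughly as follows. This self-contained sketch converts a tiny stand-in model in memory just to demonstrate the API; on the device, the real detector would be loaded from a `.tflite` file via `model_path` instead:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Tiny stand-in model, converted to TFLite in memory for demonstration.
# On the device the real model would be loaded with
# tf.lite.Interpreter(model_path="third_eye.tflite") instead.
model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(3, activation="softmax"),
])
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# The interpreter runs entirely locally: no server round trip, no internet.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# One inference: feed an input tensor, invoke, read the class probabilities.
interpreter.set_tensor(inp["index"], np.random.rand(1, 4).astype(np.float32))
interpreter.invoke()
probs = interpreter.get_tensor(out["index"])[0]
```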

Challenges I ran into

Because the hardware needed to run deep learning models is costly, I deployed the models on an Android device for now; for commercial purposes, I will use dedicated hardware to run the models.

What I learned

I learned transfer learning techniques and data augmentation, which really help in reducing overfitting and increasing the accuracy of our models. I also learned to deploy machine learning models to production using TensorFlow Lite.

What's next for Third Eye

Currently, our models run on Android devices; the APKs are available in the GitHub repo. In the future, I will run the models on hardware devices such as a Raspberry Pi, Jetson Nano, or OAK-D module.
