Our inspiration has always been to help as many people as possible with a single project. In this case we focused on blind people, because we felt that a cane often isn't enough. Some questions that convinced us this was what we wanted to do: How can blind people avoid obstacles at a certain height that a cane cannot detect? If they're at a restaurant and want food, how can they read the menu? Or, even worse, if they need to sign a contract, how can they do it without reading it themselves first?

What it does

Blindr tackles all of these problems with a single pair of glasses. Its algorithms detect objects and announce them, along with their distance, by audio so the wearer can avoid them. It also detects text and reads it aloud by converting it to a voice message, and it can detect faces.

How we built it

To build our project we first had to design it from the ground up: a sketch and a few ideas, focused on the aids we could offer a visually impaired person. We started with object detection, using artificial intelligence and OpenCV together with YOLOv4, an algorithm specialized in object detection. First we had to process some test images and train the model iteratively to make it more accurate. Once that was done, we focused on reading text in images, again with computer vision and the Pytesseract OCR Python library. This works much like object detection: we open a camera window and try to recognize content in each frame. We then use the Pyttsx3 library for the text-to-speech conversion. Finally, face detection was done with the well-known Haar cascade technique, using the pretrained cascade data shipped with OpenCV plus a grayscale color conversion as the image-processing step.
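The object-detection step above can be sketched roughly as follows. This is a minimal sketch, not our exact code: the 416×416 input size and 0.5 confidence threshold are common YOLOv4 defaults, and the model files (`yolov4.cfg`, `yolov4.weights`) are assumed to have been downloaded separately.

```python
def best_class(scores, threshold=0.5):
    """Return (class_id, confidence) for the strongest class, or None if weak."""
    class_id = max(range(len(scores)), key=lambda i: scores[i])
    confidence = float(scores[class_id])
    return (class_id, confidence) if confidence > threshold else None

def detect_objects(frame, net, output_layers, threshold=0.5):
    """One YOLOv4 forward pass -> list of (class_id, confidence, (x, y, w, h))."""
    import cv2  # imported here so best_class() stays dependency-free

    h, w = frame.shape[:2]
    # YOLOv4 expects a 416x416 RGB blob with pixel values scaled to [0, 1]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    detections = []
    for output in net.forward(output_layers):
        for det in output:
            hit = best_class(det[5:], threshold)  # class scores follow box + objectness
            if hit:
                # box coords come back normalized to the frame size
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                box = (int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh))
                detections.append((*hit, box))
    return detections

# Usage (model files assumed to be present next to the script):
# net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
# detections = detect_objects(frame, net, net.getUnconnectedOutLayersNames())
```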
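The OCR-to-speech path can be sketched like this, assuming pytesseract (which needs the Tesseract binary installed) and pyttsx3 are available; both heavy imports are kept inside the function so the text-cleanup helper runs on its own.

```python
def normalize_ocr(text):
    """Collapse OCR line breaks and extra spaces into one string for speech."""
    return " ".join(text.split())

def read_frame_aloud(frame):
    """OCR a BGR camera frame and speak the recognized text, if any."""
    import cv2
    import pytesseract  # wrapper around the Tesseract binary
    import pyttsx3      # offline text-to-speech engine

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # grayscale helps Tesseract
    text = normalize_ocr(pytesseract.image_to_string(gray))
    if text:
        engine = pyttsx3.init()
        engine.say(text)
        engine.runAndWait()  # blocks until the sentence has been spoken
    return text
```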
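And the face-detection step, using the frontal-face Haar cascade that ships with OpenCV. The `scaleFactor`/`minNeighbors` values are common defaults rather than tuned parameters, and the announcement helper is an illustrative addition.

```python
def detect_faces(frame):
    """Return face bounding boxes from OpenCV's pretrained Haar cascade."""
    import cv2
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # the color-conversion step
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

def faces_to_message(boxes):
    """Turn detected face boxes into a short spoken announcement."""
    n = len(boxes)
    if n == 0:
        return "no faces detected"
    return f"{n} face{'s' if n > 1 else ''} ahead"
```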

Challenges we ran into

The first problem we tackled was hardware, since we needed a webcam: the ESP32 camera was not working, so in the end we streamed video from a mobile phone over its IP. Text detection was also unreliable at first, but it improved over time as we trained and tuned the pipeline.
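The phone fallback works because OpenCV can open a network stream the same way it opens a local webcam. A sketch, assuming the Android IP Webcam app's usual MJPEG endpoint at `/video` (the IP address below is made up; the app shows the real one on screen):

```python
def stream_url(ip, port=8080):
    """Build the IP Webcam MJPEG endpoint (the app serves it at /video)."""
    return f"http://{ip}:{port}/video"

def open_phone_camera(url):
    """Open the phone's stream as a regular OpenCV capture."""
    import cv2  # lazy import so stream_url() works without OpenCV installed
    cap = cv2.VideoCapture(url)
    if not cap.isOpened():
        raise RuntimeError(f"could not open camera stream at {url}")
    return cap

# Usage with a hypothetical address:
# cap = open_phone_camera(stream_url("192.168.1.42"))
# ok, frame = cap.read()
```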

Accomplishments that we're proud of

We are very proud that, with only two people and in 36 hours, we were able to accomplish such a big project, and just as proud of the experience we gained and the people we met along the way. Despite some setbacks, such as the ESP32 camera not working and us being unable to get its IP, we are happy to say we kept the project moving with another idea: combining a Merge VR headset with the phone and the IP Webcam app to perform the same function, scanning our surroundings to detect objects, text, and people.

What we learned

We learned from our mistakes how to approach problems in a different, more effective way. We strengthened our teamwork: after all, we were working toward the same goal, and each person has different skills and something to contribute. With limited time, working together produced the best results. We also learned perseverance, even when there are many mistakes that perhaps cannot be fixed, picked up more about the technology, and learned to follow through to the end.

What's next for Blindr

Blindr won't stop here: we want to take this project further and keep helping people. We want to optimize it and share it with the world. We want to make it real, for it to become our new reality, getting ever closer to the main purpose: letting blind people see again.

Built With

  • gtts
  • haarcascade-frontalface-default
  • ipwebcam
  • maskrcnn
  • matplotlib
  • mergevr
  • numpy
  • opencv
  • pyaudio
  • pytesseract
  • python
  • pyttsx3
  • scipy
  • tensorflow
  • torchvision
  • visual-studio-code
  • yolov4