Inspiration
There is a lot of technology to help people in need, whether it is fighting natural disasters or easing the difficulties of physically challenged people. This project is a health-related open innovation that tackles a difficulty faced by visually impaired and blind people: with the app installed on a smartphone hung around the neck, the phone's camera can detect the surroundings and inform the user, making it much easier for blind people to walk and learn about their environment.
What it does
1. An Android/iOS app that helps people with visual impairments navigate through crowds.
2. The phone's camera detects whether people are approaching on a footpath and gives voice instructions telling the user to change direction.
3. We can also extend the field of view with 360-degree cameras, which will enable better navigation as the hardware gets cheaper.
4. Manually written text is also converted into speech, which helps speech-impaired people as well.
5. We integrated a text-to-speech API that converts detected objects' labels into speech. It also estimates whether the object is far or near and adjusts the voice prompt accordingly (a sketch of this flow follows the list).
6. We trained the model on the COCO dataset with SSD MobileNet v2 and YOLO. Besides detection, the app uses the proximity sensor to sense the surroundings and vibrates when an object is detected.
7. We also enabled the phone's proximity sensor so that whenever something comes close to the phone, the user gets an alert command along with vibration (see the sensor sketch after the list). The same logic could be built into a walking stick so the user is always alerted when obstacles are near.
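A minimal sketch of the detection-to-speech flow in points 2, 4, and 5, using Android's built-in TextToSpeech engine. The `Detection` class, the 0.25 box-area threshold for "near", and the confidence cutoff are our illustrative assumptions, not the app's exact code:

```kotlin
import android.content.Context
import android.speech.tts.TextToSpeech
import java.util.Locale

// Illustrative detection result: a TFLite SSD model typically yields a label,
// a confidence score, and a normalized bounding box per detection.
data class Detection(val label: String, val score: Float, val boxArea: Float)

class SpeechAnnouncer(context: Context) : TextToSpeech.OnInitListener {
    private val tts = TextToSpeech(context, this)

    override fun onInit(status: Int) {
        if (status == TextToSpeech.SUCCESS) tts.setLanguage(Locale.US)
    }

    // Rough near/far heuristic: the larger the bounding box relative to the
    // frame, the closer the object. The 0.25 threshold is an assumption.
    fun announce(d: Detection) {
        if (d.score < 0.5f) return // skip low-confidence detections
        val distance = if (d.boxArea > 0.25f) "near" else "far"
        tts.speak("${d.label} is $distance", TextToSpeech.QUEUE_FLUSH, null, "detection")
    }

    // The same engine reads out manually typed text (point 4 above).
    fun readText(text: String) {
        tts.speak(text, TextToSpeech.QUEUE_ADD, null, "typed-text")
    }
}
```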
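And a minimal sketch of the proximity-plus-vibration alert from points 6 and 7; the 300 ms one-shot vibration is an illustrative choice (it needs API 26+):

```kotlin
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager
import android.os.VibrationEffect
import android.os.Vibrator

// Vibrates when the proximity sensor reports an object close to the phone.
class ProximityAlert(context: Context) : SensorEventListener {
    private val sensorManager =
        context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
    private val proximity: Sensor? =
        sensorManager.getDefaultSensor(Sensor.TYPE_PROXIMITY)
    private val vibrator =
        context.getSystemService(Context.VIBRATOR_SERVICE) as Vibrator

    fun start() {
        proximity?.let {
            sensorManager.registerListener(this, it, SensorManager.SENSOR_DELAY_NORMAL)
        }
    }

    fun stop() = sensorManager.unregisterListener(this)

    override fun onSensorChanged(event: SensorEvent) {
        val sensor = proximity ?: return
        // Most phone proximity sensors report only "near" (~0 cm) or "far"
        // (maximumRange), which is also why the range cannot simply be extended.
        if (event.values[0] < sensor.maximumRange) {
            vibrator.vibrate(
                VibrationEffect.createOneShot(300, VibrationEffect.DEFAULT_AMPLITUDE)
            )
        }
    }

    override fun onAccuracyChanged(sensor: Sensor?, accuracy: Int) = Unit
}
```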
How we built it
We built it using TensorFlow, Android Studio, a voice assistant, and AI/ML.
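For context, here is a minimal sketch of how a trained model can be wired into the Android app with the TensorFlow Lite Interpreter; the asset name `detect.tflite` is an assumption for the converted SSD MobileNet v2 model:

```kotlin
import android.content.Context
import org.tensorflow.lite.Interpreter
import java.io.FileInputStream
import java.nio.channels.FileChannel

// Memory-maps a TFLite model bundled in the APK's assets.
fun loadDetector(context: Context): Interpreter {
    val fd = context.assets.openFd("detect.tflite")
    val buffer = FileInputStream(fd.fileDescriptor).channel.map(
        FileChannel.MapMode.READ_ONLY, fd.startOffset, fd.declaredLength
    )
    return Interpreter(buffer)
}
```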
Challenges we ran into
Integrating the TensorFlow dependency into Android for the first time was quite a tedious task.
It took us a little time to understand, but we got it done; a sketch of the Gradle setup is below.
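For anyone attempting the same integration, this is the kind of Gradle wiring we mean, with illustrative (not exact) versions:

```kotlin
// build.gradle.kts (app module); versions shown are assumptions
android {
    aaptOptions {
        noCompress("tflite") // ship the model uncompressed so it can be memory-mapped
    }
}

dependencies {
    implementation("org.tensorflow:tensorflow-lite:2.9.0")
    implementation("org.tensorflow:tensorflow-lite-support:0.4.2")
}
```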
Object detection did not work right away; the camera API also behaved inconsistently, and the app kept crashing.
It was difficult for us to train the ML model for object detection.
Also, the proximity sensor has a very short range, and we could not extend that.
These were the challenges we ran into while building the project, but they never demotivated us. We found each bug and fixed it one way or another; the only question was how much time a given bug would take to fix.
Accomplishments that we're proud of
Implementing the basics the right way and completing a prototype that works at a 95% accuracy rate.
We're proud of contributing a hack to society for social good: it helps visually impaired people by detecting objects and converting them to speech, and helps speech-impaired people with its text-to-speech facility.
Our team is proud that we completed everything on time. As our team was international, we had to contend with significant time differences, but we were still able to connect with each other and complete our project.
What we learned
We learned which ML models to use for our hack and completed a basic knowledge check with TensorFlow. Overall, the hack prototype is an application of our newly learned skills.
What's next for HELPBRIGHT
- Make it more interactive by adding voice chat.
- Add a Google Maps voiceover.
Built With
- ai
- android-studio
- ml
- tensorflow


