Inspiration

We had a lot of inspiration while creating this app. Blindness affects around 39 million people around the world, and many more live with some form of visual impairment. Our team knows that these people have trouble identifying items, and that is why we built Describer. With Describer, blind and visually impaired people no longer need to struggle to identify an item.

What it does

Describer is very simple. The user opens the app and taps a button while their phone is pointed at something. The camera captures an image, the app labels the objects it finds, and the labels are read aloud as audio. If any text is detected in the image, the app asks the user whether they would like to hear it; if they say yes, the text is also read aloud. The app also has a simple, easy-to-use UI, which suits the users it is aimed at and makes it much easier for someone who is visually impaired to identify anything.
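
At a high level, the flow looks something like the sketch below. The helper names (captureBitmap, labelImage, detectText, speak, askYesNo) are placeholders for illustration, not our exact code.

```kotlin
// Rough sketch of the Describer flow. All helper functions here are
// placeholders used for illustration only.
fun onDescribeButtonClicked() {
    val photo = captureBitmap()                 // grab a frame from the camera preview

    labelImage(photo) { labels ->               // cloud image labelling
        speak("I can see: " + labels.joinToString(", "))

        detectText(photo) { text ->             // cloud text recognition
            if (text.isNotBlank()) {
                askYesNo("There is text in the image. Would you like to hear it?") { yes ->
                    if (yes) speak(text)
                }
            }
        }
    }
}
```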

How we built it

We built it in Android Studio, with Kotlin as the main language and the UI written in XML, and we used the Google Vision API through Firebase ML Kit. Using the cloud API gives better image labelling than the on-device ML Kit model.
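
As a rough idea of how the cloud labeller is called, here is a minimal Kotlin sketch using the Firebase ML Kit (firebase-ml-vision) API; exact class and method names may differ depending on the SDK version, and the callback shape is simplified from our actual code.

```kotlin
import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

// Sends a captured bitmap to the Cloud Vision labeller via Firebase ML Kit
// and hands back the label descriptions (e.g. "Dog", "Chair").
fun labelWithCloudVision(bitmap: Bitmap, onLabels: (List<String>) -> Unit) {
    val image = FirebaseVisionImage.fromBitmap(bitmap)
    val labeler = FirebaseVision.getInstance().cloudImageLabeler

    labeler.processImage(image)
        .addOnSuccessListener { labels ->
            // Each label carries a description and a confidence score
            onLabels(labels.map { it.text })
        }
        .addOnFailureListener {
            onLabels(listOf("Sorry, I could not label the image"))
        }
}
```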

Accomplishments that we're proud of

We are very proud that we were able to use a cloud API, and one from Google at that. We are also happy that we were able to make something that could help so many people around the world who live with visual impairment.

What winning this hackathon means to us

Winning this hackathon means a lot to our team. Firstly, it gives us motivation to keep working hard and to keep creating applications that people can benefit from. It can also encourage younger audiences, because the developers of Describer are only teens; seeing that can motivate them to learn to program at a young age and to create applications that help shape the future. This is what winning means to us.

What we learned

Through this hackathon we learned a great deal. It was our first time using a cloud API. While building the project we read the docs for the Google Vision API and watched videos about it. We also used the WonderKiln CameraView library to display the camera preview on screen and to capture an image from the camera. On top of that, we worked with text-to-speech, to take the labels from the image and say them aloud, and speech-to-text, to ask the user whether they would like the detected text read out.
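
For the audio side, here is a simplified sketch of the text-to-speech and speech-recognition pieces, using Android's built-in TextToSpeech engine and RecognizerIntent. The SpeechHelper class and the request-code handling are illustrative only, not our exact code.

```kotlin
import android.app.Activity
import android.content.Intent
import android.speech.RecognizerIntent
import android.speech.tts.TextToSpeech
import java.util.Locale

class SpeechHelper(private val activity: Activity) {

    // Android's built-in text-to-speech engine, used to read labels and text aloud
    private lateinit var tts: TextToSpeech

    init {
        tts = TextToSpeech(activity) { status ->
            if (status == TextToSpeech.SUCCESS) tts.language = Locale.US
        }
    }

    fun speak(text: String) {
        tts.speak(text, TextToSpeech.QUEUE_FLUSH, null, "describer-utterance")
    }

    // Launches the system speech recognizer to capture a spoken "yes"/"no" answer.
    // The result arrives in the Activity's onActivityResult with this request code.
    fun askForSpeech(requestCode: Int) {
        val intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
            putExtra(
                RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM
            )
        }
        activity.startActivityForResult(intent, requestCode)
    }
}
```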

What's next for Describer

Our team is hoping to get Describer on the app store in the near future so that people from around the world can benefit from it. We are also planning to add more features, such as customizing the voice, offline access using Google's on-device ML Kit, and face recognition.

Built With

  • google-vision
  • kotlin
  • xml