Inspiration

We were thinking about our encounters with blind people on the street and wanted to come up with a better way for them to interact with their environment in real time.

What it does

The mobile app, which is currently under development for Android, uses the camera to detect objects. It uses image recognition to identify each object and responds with a voice that speaks the object's name.
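
As a rough illustration of this flow (not our exact code), the sketch below shows how a Kotlin class might run a camera frame through a bundled TensorFlow Lite image classifier and speak the top label with Android's TextToSpeech. The file name model.tflite, the class name, and the confidence threshold are assumptions for the example.

```kotlin
import android.content.Context
import android.graphics.Bitmap
import android.speech.tts.TextToSpeech
import org.tensorflow.lite.support.image.TensorImage
import org.tensorflow.lite.task.vision.classifier.ImageClassifier

// Illustrative helper: classify a single camera frame and speak the top label aloud.
class ObjectAnnouncer(context: Context) {

    // Assumes the trained model is bundled in the app as assets/model.tflite.
    private val classifier: ImageClassifier =
        ImageClassifier.createFromFile(context, "model.tflite")

    // Android's built-in text-to-speech engine; the empty lambda is the init callback.
    private val tts = TextToSpeech(context) { }

    // Call this with each frame captured from the camera.
    fun announce(frame: Bitmap) {
        val results = classifier.classify(TensorImage.fromBitmap(frame))
        val top = results.firstOrNull()?.categories?.maxByOrNull { it.score } ?: return

        // Only speak when the model is reasonably confident about the object.
        if (top.score > 0.6f) {
            tts.speak(top.label, TextToSpeech.QUEUE_FLUSH, null, "object-label")
        }
    }
}
```

In practice, a camera callback (for example, a CameraX ImageAnalysis analyzer) would convert each frame to a Bitmap and hand it to announce().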

How we built it

We used Android Studio for its ease of use, and Git and GitHub for real-time collaboration. We also used Google's Teachable Machine platform to train our models.
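
Teachable Machine can export a trained model as a TensorFlow Lite file that an Android app loads on-device. A rough sketch of the Gradle dependency such a setup typically needs is below; the library and version shown are assumptions, not necessarily our exact configuration.

```kotlin
// Module-level build.gradle.kts - illustrative dependency block; the version number is an example.
dependencies {
    // TensorFlow Lite Task Library, used to run the exported image classification model on-device.
    implementation("org.tensorflow:tensorflow-lite-task-vision:0.4.4")
}
```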

Challenges we ran into

Android Studio's compatibility with our systems was not the best. In addition, most of our teammates were living in different countries across different time zones.

Accomplishments that we're proud of

We were able to develop a product that could greatly benefit people who are blind.

What we learned

We learned how to integrate machine learning models into a mobile app, how to use Android Studio, and how to collaborate efficiently despite our different time zones.

What's next for Descriptive Voice for Blind

Improvements! Improvements! Improvements! We are interested in improving the app to deliver a better, smoother user experience. We are also working on implementing the voice feature in the app.

Built With

android-studio, git, github, teachable-machine