Inspiration
Visually impaired people have a very hard time recognizing their friends, relatives, and everyday objects.
What it does
Our solution, 'Dristi', describes the objects in front of the user and announces the names of their friends and relatives.
How we built it
We used Flutter and Microsoft Azure Cognitive Services to recognize and describe objects and people. We also used Google TTS to convert the resulting text into speech.
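As a rough illustration of the image-description step, the sketch below assembles (but does not send) a request to Azure Computer Vision's "describe" endpoint. The endpoint URL and key here are placeholders, not our real configuration, and the helper name is our own invention for this sketch.

```python
def build_describe_request(endpoint: str, key: str, image_bytes: bytes):
    """Assemble URL, headers, and body for an Azure image-description call.

    Placeholder sketch: endpoint and key must come from your own Azure
    resource; v3.2 is one published version of the Computer Vision API.
    """
    url = f"{endpoint}/vision/v3.2/describe"
    headers = {
        "Ocp-Apim-Subscription-Key": key,            # Azure auth header
        "Content-Type": "application/octet-stream",  # raw image bytes in body
    }
    return url, headers, image_bytes

# Example assembly with dummy values:
url, headers, body = build_describe_request(
    "https://example.cognitiveservices.azure.com", "PLACEHOLDER_KEY", b"<image>"
)
```

Sending this with an HTTP client (e.g. `requests.post(url, headers=headers, data=body)`) returns JSON whose `description.captions` entries carry a caption string and a confidence score, which the app can hand to TTS.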
Challenges we ran into
Naming objects that look similar was very hard to troubleshoot, and adding new people to the pre-trained model was also a challenge.
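To show what adding a new person involves, here is a hedged sketch of building the request that creates a person inside an Azure Face API person group. The group id, person name, and helper function are illustrative placeholders, not our production values.

```python
import json

def build_add_person_request(endpoint: str, key: str, group_id: str, name: str):
    """Assemble URL, headers, and JSON body for creating a person in a
    Face API person group (v1.0 endpoint; values here are placeholders)."""
    url = f"{endpoint}/face/v1.0/persongroups/{group_id}/persons"
    headers = {
        "Ocp-Apim-Subscription-Key": key,
        "Content-Type": "application/json",
    }
    body = json.dumps({"name": name})
    return url, headers, body

# Example assembly with dummy values:
url, headers, body = build_add_person_request(
    "https://example.cognitiveservices.azure.com", "PLACEHOLDER_KEY",
    "friends", "Alice"
)
```

After the person is created, face images are attached via the persisted-faces endpoint and the group is retrained before identification returns the new name, which is why adding people at runtime took extra work.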
Accomplishments that we're proud of
We successfully tested our app by adding 3 people and recognizing laptops, smiling faces, tables, and outdoor scenes.
What we learned
We learned how to use Microsoft Azure Cognitive Services and got a lot of hands-on experience with native Android and Flutter.
What's next for Dristi
We look forward to letting users add people to our model themselves (crowdsourcing). We are also exploring better approaches to object recognition that work in real time and offline.