We were inspired by FIU's mission of international and multicultural connection, so we decided to build an app that helps _anybody_ describe their surroundings, no matter what language they speak. mylingual lets users who are learning or trying to communicate in another language upload a picture of something and see each object in the picture labelled with its translation, so they can _instantly_ talk about their surroundings with speakers of any language.
What it does
mylingual takes an image from the user, analyzes it with Google Cloud Vision to detect the objects in it, and labels each one in both the user's native language and the language they are trying to communicate in.
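The flow above can be sketched as a small pipeline. This is our own illustrative sketch, not the app's actual code: the detector and translator are injected as plain functions (stand-ins here, so the example runs without Google Cloud credentials), where the real app would call Google Cloud Vision for detection and a translation service for the second label.

```python
from dataclasses import dataclass

@dataclass
class Label:
    native: str       # object name in the user's native language
    target: str       # the same object in the language they're learning
    confidence: float # detector's confidence score

def label_objects(image_bytes, detect, translate, target_lang):
    """Label every detected object in both languages.

    detect(image_bytes)    -> list of (name, confidence) pairs
    translate(text, lang)  -> text translated into lang
    """
    return [
        Label(native=name,
              target=translate(name, target_lang),
              confidence=confidence)
        for name, confidence in detect(image_bytes)
    ]

# Hypothetical stand-ins for the Cloud Vision and translation calls:
fake_detect = lambda img: [("dog", 0.97), ("bicycle", 0.91)]
fake_translate = lambda text, lang: {"dog": "perro", "bicycle": "bicicleta"}[text]

labels = label_objects(b"", fake_detect, fake_translate, "es")
for label in labels:
    print(f"{label.native} -> {label.target} ({label.confidence:.0%})")
```

Keeping detection and translation behind plain function parameters makes the labeling step easy to test and lets either service be swapped out without touching the pipeline.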
How we built it
We built the site on Wix and connected it to the Google Cloud Vision API, which detects the objects in each uploaded image so the app can label them in both the user's native language and the one they're learning.
Challenges we ran into
Accomplishments that we're proud of
We're super proud of the vision we've created for unifying people of different languages. It's our belief that diversity fuels innovation, especially in a hackathon setting. We're also proud that our app can be used on almost any device, by speakers of any language.
What we learned
Going in, nobody on our team had used Wix or the Google Cloud Vision API. After 36 hours, we've learned a ton about both, from the ins and outs of Wix's infrastructure to the power of Google's machine learning APIs.
What's next for mylingual
Looking forward, we want to make mylingual more accessible and more helpful for more users. We plan to vastly expand the number of supported languages and add a translation history that users can refer back to. These changes will drive us toward our goal of being accessible and useful to every human being.