Inspiration

When learning a new language, one of the first and most difficult steps is building up vocabulary. Our app aims to help with that by giving users direct translations of the names of the objects they can see.

What it does

A mobile app that helps people learn new languages without feeling too alienated! The user can point at any object, and the app tells them what that object is. The user can also choose which language the app outputs.

How we built it

We used the Google Cloud Vision API for labelling and the IBM Watson API for translations. The app was built with Unity and C#.
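The pipeline is essentially two REST calls chained together: Cloud Vision labels the image, then Watson Language Translator translates the top label. A minimal sketch of the parsing side, shown in Python for brevity (the app itself does this in C# inside Unity); the JSON shapes follow the public REST docs for both services, and the canned responses stand in for live API calls:

```python
def top_label(vision_response: dict) -> str:
    """Pull the highest-confidence label out of a Cloud Vision
    images:annotate response."""
    annotations = vision_response["responses"][0]["labelAnnotations"]
    return annotations[0]["description"]

def translated_text(watson_response: dict) -> str:
    """Pull the translated string out of a Watson Language Translator
    v3 /translate response."""
    return watson_response["translations"][0]["translation"]

# Canned responses in the documented shapes, standing in for live calls:
vision = {"responses": [{"labelAnnotations": [
    {"description": "dog", "score": 0.97}]}]}
watson = {"translations": [{"translation": "chien"}],
          "word_count": 1, "character_count": 3}

print(top_label(vision))        # dog
print(translated_text(watson))  # chien
```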

Challenges we ran into

Working with JSON return values gets a little tedious after a while. It was also challenging to figure out how to switch smoothly between languages: the structure of the method calls, combined with yields in return statements, made it hard to find a good place to hook in. In the end, we kept a few parts separate, which resulted in a very simple and elegant solution.
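The "keep parts separate" fix amounts to caching the detected label once and re-running only the translation step when the user switches languages. A sketch of that idea, in Python rather than our actual Unity C# coroutines; the class and method names here are illustrative, not the ones in our scripts:

```python
class LabelTranslator:
    """Decouples image labelling from translation so that a language
    switch never re-triggers the (slow) labelling call."""

    def __init__(self, translate_fn):
        self.translate_fn = translate_fn   # e.g. a Watson-backed callable
        self.cached_label = None

    def on_new_label(self, label: str):
        # Called when the vision step produces a fresh label.
        self.cached_label = label

    def on_language_change(self, target_lang: str) -> str:
        # Only the translation step re-runs; the cached label is reused.
        return self.translate_fn(self.cached_label, target_lang)

# Toy translator standing in for the Watson call:
fake = {("dog", "fr"): "chien", ("dog", "es"): "perro"}
lt = LabelTranslator(lambda text, lang: fake[(text, lang)])
lt.on_new_label("dog")
print(lt.on_language_change("fr"))  # chien
print(lt.on_language_change("es"))  # perro
```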

Accomplishments that we're proud of

Simply building an app that can recognize objects and translate them (i.e., work as we planned) was very exciting. Some teammates had no prior development or Unity experience, so having something built in such a short time, while also integrating new tools like the IBM and Google Cloud APIs, felt like a real accomplishment. Above all, we're proud that our app runs and could help people learn languages in a new way.

What we learned

Over the course of this project, we learned a lot about both coding and design. On the coding side, we learned how to use Unity to build a mobile application: writing scripts, working with the UI, and integrating two different APIs. On the design side, our initial concept overlooked some important aspects, including user-centric design and technical limitations, but through consistent iteration and testing we identified and fixed these problems and achieved our goals.

What's next for what's it called

We plan to use IBM Watson Assistant v2 or ChatGPT to provide meaningful, context-driven meanings and usage examples: think of it as a better kind of dictionary. We were already in the process of integrating a chat assistant into the app, so that when a person learns a new word from their surroundings, the assistant can hold a conversation with them to give context-based examples, provide further clarification, discuss the word's origin, and so on. Learning how to communicate about one's surroundings is exactly in line with our goals. We might also try to make a VR/MR port in the future.

Built With

Unity, C#, Google Cloud Vision API, IBM Watson API
