Inspiration

Canada accepts thousands of new refugees every year, and countless people already living in Canada struggle with basic English. We have all seen this first-hand: all of our team members' parents are immigrants, and we see the difficulty that people like them face every day when it comes to using English, even after living in Canada for over a decade. Coming to Canada can be intimidating for newcomers. Not only are most of them arriving from far away, but they also face a huge language barrier in their day-to-day lives. We wanted to find a way to make the lives of these Canadians easier and help them feel a little more at home in Canada.

What it does

This project is a prototype of what would be a fully functioning real-time app that lets the user constantly learn about the objects around them. When newcomers arrive in Canada, they often learn by interacting with their environment. Our prototype makes this even easier by letting the user identify anything they don't recognize in their field of view using their phone camera! It can even translate each label into their preferred language so that they can compare the two and understand what the English word means.
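To give a concrete sense of how the translation step could work, here is a minimal sketch using ML Kit's on-device Translation API. The target language (French), the function name, and the callback are placeholders for illustration, not necessarily what the prototype does.

```kotlin
import android.util.Log
import com.google.mlkit.nl.translate.TranslateLanguage
import com.google.mlkit.nl.translate.Translation
import com.google.mlkit.nl.translate.TranslatorOptions

// Sketch: translate an English label into the user's preferred language.
// French is only an example; a real app would read this from user settings.
fun translateLabel(englishLabel: String, onResult: (String) -> Unit) {
    val options = TranslatorOptions.Builder()
        .setSourceLanguage(TranslateLanguage.ENGLISH)
        .setTargetLanguage(TranslateLanguage.FRENCH)
        .build()
    val translator = Translation.getClient(options)

    // Download the language model if it isn't already on the device,
    // then translate the label and hand the result back to the caller.
    translator.downloadModelIfNeeded()
        .addOnSuccessListener {
            translator.translate(englishLabel)
                .addOnSuccessListener { translated -> onResult(translated) }
                .addOnFailureListener { e -> Log.e("EasyLang", "Translation failed", e) }
        }
        .addOnFailureListener { e -> Log.e("EasyLang", "Model download failed", e) }
}
```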

How we built it

Our team combined principles of machine learning, app development, and augmented reality to build the most cohesive prototype we could given our skill level and time constraints. We used Android Studio for the primary app development, along with tools such as ML Kit and the Firebase Cloud API. The app has three main parts: image capture, image recognition and labeling, and finally, augmented reality.
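As a rough illustration of the recognition-and-labeling part, the sketch below runs ML Kit's on-device image labeler on a captured camera frame. The function name, callback, and confidence threshold are assumptions made for the example, not necessarily what our prototype uses.

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.defaults.ImageLabelerOptions

// Sketch: label the objects in a captured camera frame.
// The caller receives (labelText, confidence) pairs to show in the AR view.
fun labelFrame(bitmap: Bitmap, rotationDegrees: Int, onLabels: (List<Pair<String, Float>>) -> Unit) {
    val image = InputImage.fromBitmap(bitmap, rotationDegrees)

    // 0.7 is an arbitrary confidence cut-off chosen for this sketch.
    val options = ImageLabelerOptions.Builder()
        .setConfidenceThreshold(0.7f)
        .build()
    val labeler = ImageLabeling.getClient(options)

    labeler.process(image)
        .addOnSuccessListener { labels ->
            onLabels(labels.map { it.text to it.confidence })
        }
        .addOnFailureListener { e -> Log.e("EasyLang", "Labeling failed", e) }
}
```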

Challenges we ran into

When we first came into Hack the North, we had the ambitious idea of using augmented reality, virtual reality, and machine learning to make the lives of ESL citizens in Canada easier. However, we were brought back to reality a mere 12 hours after starting work on our hack and had to adapt the idea into something feasible for our group. Another challenge was that implementing our idea is harder on Android than on iOS, so we switched back and forth a lot, which ate up a lot of our time. Adding augmented reality to the app posed a whole new level of challenge: there were no mentors dedicated to augmented reality, which meant our team had to do most of the work and research ourselves. We started the hackathon with four members, but one left halfway through, which was a minor setback. Of the three remaining members, only one was proficient with Android Studio, so there was a lot of learning to be done. In the end, we overcame all the challenges we faced and finished the prototype of our project.

Accomplishments that we're proud of

Given that our group members weren't very experienced with things like machine learning or some of the APIs we used, we are extremely proud that we were able to finish the project the way we did. As listed in the challenges section, we faced a lot of setbacks, and we had to think on our feet to find solutions we could implement ourselves, given our skill levels.

What we learned

We learned a lot as a group. Time management was one lesson: we didn't settle on our final idea until well into the hackathon, which forced us to plan our remaining time carefully. We also learned how to work as a team. It was the first time all of us were working with many of the technologies involved, so we had to constantly help each other and learn as we went. That was the most enriching part for us as a team. Beyond the programming skills we picked up, we learned how to identify problems, which helps us understand which problems matter to us and the different ways we can solve them.

What's next for EasyLang

EasyLang is currently in its prototype phase. We want to bring the project to virtual reality to make the app even easier to use. Right now there are constraints around language support, labeling accuracy, and translation. If we were to develop the project further, we would improve accuracy, support a larger variety of objects, and increase the app's efficiency and, perhaps, its feature set. We would also like to build a better user interface so that the overall aesthetic of the app is much nicer.

Built With

Android Studio, ML Kit, Firebase