Canada, without a doubt, is one of the greatest countries to live in. Maple syrup. Hockey. Tim Hortons. Exceptional niceness. But perhaps what makes us most Canadian is our true appreciation of multiculturalism.
Many of us come from families that have traveled far and wide to be a part of this great country. Lately, this has become even more common, with many refugees choosing the safety of our borders. But as receptive as Canada has been, difficulties still exist for these individuals, many of whom have had little prior exposure to English.
Introducing bridgED. Fast, practical, and educational translations let our new friends understand the environment around them, right from one of the most universally used tools: our phones.
Using IBM's Watson Visual Recognition and Language Translator, we're able to identify and then translate objects. Using photos not only makes this faster than typing, but it also allows the identification of items that have no obvious translation in the user's native tongue. Convenient descriptions and wiki links then let users quickly understand the object on a deeper level, making bridgED a speedy learning alternative, especially for those who find it difficult to study the language formally, like the many workers who simply lack the time.
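The identify-then-translate flow can be sketched roughly as follows. This is a minimal illustration rather than bridgED's actual code: the simplified response shape and the `topLabel` helper are our own assumptions, and the commented-out translation call is hypothetical wiring based on the V3 `ibm-watson` Node SDK.

```typescript
// Simplified shape of one candidate from a Watson Visual Recognition
// V3 classify response (assumed; the real response nests these under
// images[0].classifiers[0].classes).
interface ClassResult {
  class: string; // label, e.g. "stop sign"
  score: number; // confidence in [0, 1]
}

// Pick the most confident label above a minimum confidence,
// so we don't try to translate wild guesses.
function topLabel(classes: ClassResult[], minScore = 0.5): string | null {
  const best = classes
    .filter((c) => c.score >= minScore)
    .sort((a, b) => b.score - a.score)[0];
  return best ? best.class : null;
}

// In the app, the chosen label would then be sent to Watson Language
// Translator, e.g. (hypothetical wiring, requires the `ibm-watson` SDK):
//
//   const { result } = await translator.translate({
//     text: [label],
//     modelId: 'en-ar', // English -> Arabic, chosen by the user
//   });
//   const translated = result.translations[0].translation;

// Example: the photo was classified into these candidates.
const example: ClassResult[] = [
  { class: 'signage', score: 0.43 },
  { class: 'stop sign', score: 0.91 },
];
console.log(topLabel(example)); // -> "stop sign"
```

Filtering by confidence before translating is what keeps the results feeling "fast and practical": a low-confidence guess translated into another language is worse than no answer at all.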
But of course, this may still not be a totally practical alternative, given the time spent raising and lowering the phone. That's why we also have a soon-to-be-integrated AR feature that provides near-instant translations, optimal for travel. High-importance signs such as "Dead End", tourist areas dense with unique cultural goods, and the variety of local cuisine are all things bridgED can help with.
bridgED was designed with the goal of keeping us all together. Because what's better than enjoying Tim's, poutine, and hockey? Doing it through the collective understanding of a nation united among differences. Eh?
How we built it
For our project, we drew on the specialized skills of the entire team and split our workforce in two. One group used Node.js as the base platform and React Native to build the core educational functionality of our app on top of IBM Watson. The other group built a second app in Unity, implementing an AR framework to provide a more immersive alternative experience focused on speed and quick practicality.
Challenges we ran into
We initially ran into some difficulty dividing work evenly, as some of us were much more experienced with certain frameworks than others. Both apps presented unique challenges, but we stuck through the difficulties and ultimately decided to go with BOTH applications. Through the use of libraries, integrating our React app directly into our Unity app should be possible, later allowing us to ship a more complete single package.
Accomplishments that we're proud of
We ran into a lot of trouble getting started, especially with Unity, as our members with Unity experience were rather rusty and had done little initial project setup before. We also hit plenty of small issues with versioning, our Android deployments, and the usefulness of our APIs; but ultimately, we believe we overcame those challenges and came away with a good product and a strong proof of concept.
What we learned
While learning and trying out new frameworks is great, by distributing work according to each member's experience we were able to get much more done than we probably would have otherwise.
We also learned that image recognition still has a little way to go before it becomes truly reliable.
What's next for bridgED?
- Smarter general image interpretation would greatly improve the app's usefulness. We could attempt to integrate Google image search for more consistent results.
- More features to emphasize and support learning. We could use user voting to gauge the accuracy of the algorithm's guesses, improving it over time.
- Finalizing and streamlining everything into a single application package.