Augmented Reality learning
quickly build a vocabulary
Only shows words you already "saw" through AR :)
Practice on the words you previously saw in other images.
Every time the blue bar fills, a new word is added to your vocabulary and to the pool of words you can be challenged on. Everything here works offline.
An example of the dataset captions, pretty intense! Image source: Visual Genome Project
To counter the surge in demand for language education spurred by the current refugee crisis, affected countries are training a surplus of instructors and teachers whose employment will only last 5-10 years due to the transient nature of the crisis - not a sustainable workforce, unfortunately. Education is one of the major costs of integrating and including refugees, and once integration succeeds, the thousands of teachers and skill-hours developed to address the temporary crisis will no longer be needed. Countries like Germany and Sweden are turning to automation: Germany, for example, has developed an official app for refugee language education and is using $5,000 NAO robots to supplement teacher-supervised education.
What it does
AI Sensei is the quickest way to start learning a language: simply wave your phone in front of an item and see its translation appear in front of you. Using computer vision and AI, this hack creates a truly personalised experience that can scale to the hundreds of thousands of refugees who have access to a smartphone in 2016.
When you "see" an item in augmented reality, the app stores it in your short-term memory: it knows you've seen the item recently and can recall its English name, but haven't fully learned it yet. You can then do challenges that sharpen your short-term memory and turn it into long-term memory by finding those items in pictures depicting real-life situations.
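The short-term to long-term promotion described above could be modelled roughly like this. This is an illustrative sketch, not the app's actual code: the names (`Stage`, `WordMemory`) and the three-correct-answers threshold are assumptions for the example.

```swift
// Sketch of the short-term → long-term vocabulary model described above.
enum Stage {
    case shortTerm   // word was seen through AR but not yet practised enough
    case longTerm    // word was reinforced through enough challenges
}

struct WordMemory {
    let word: String
    var correctAnswers: Int = 0
    var stage: Stage = .shortTerm

    // Assumed rule: a few correct challenge answers promote a word
    // from short-term to long-term memory.
    mutating func recordCorrectAnswer(promoteAfter threshold: Int = 3) {
        correctAnswers += 1
        if correctAnswers >= threshold {
            stage = .longTerm
        }
    }
}

// A word first "seen" in AR starts in short-term memory,
// then moves to long-term memory after repeated correct answers.
var apple = WordMemory(word: "apple")
apple.recordCorrectAnswer()
apple.recordCorrectAnswer()
apple.recordCorrectAnswer()
```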
What's special is that users give back to a good cause: using the app helps train an artificial intelligence from which we can all benefit. Not only does it automate education, reducing the need to train more teachers, but it also becomes self-sustaining, as users train the AI for each other simply by using the app.
How we built it
We used an open-source 1000-class convolutional neural network with some custom libraries to make it run on mobile phones, plus a few filters to make it work especially well with objects. The neural net uses Torch as a framework and is driven by an iOS (Swift) app. The image data used in challenges is stored on an AWS server.
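One simple kind of filter on top of a 1000-class classifier is a confidence threshold: only show a label in the AR overlay when the network is reasonably sure. The sketch below is hypothetical; the `Prediction` type, `bestLabel` function, and 0.6 threshold are assumptions, since the actual filters aren't described in detail.

```swift
// Illustrative confidence filter over a classifier's output.
struct Prediction {
    let label: String
    let confidence: Double   // softmax score in [0, 1]
}

// Return the top label only if its confidence clears the threshold,
// so shaky detections never reach the AR overlay.
func bestLabel(from predictions: [Prediction],
               minConfidence: Double = 0.6) -> String? {
    guard let top = predictions.max(by: { $0.confidence < $1.confidence }),
          top.confidence >= minConfidence else {
        return nil
    }
    return top.label
}
```

Returning `nil` rather than a low-confidence guess keeps the overlay quiet when the classifier is unsure, which matters for a learning app: showing a wrong word teaches the wrong word.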
Challenges we ran into
- Making the best use of visual memory.
- Getting the community to train the AI as they use the app (the challenges do this).
- Torch on iOS is not easy.
- Visual Genome images are not perfectly captioned.
Accomplishments that we're proud of
This may be the first augmented-reality, AI-enabled education app? We've also made the computer-vision side of it work completely offline, so it's not dependent on an internet connection.
What we learned
- Refugees get 510 hours of English language education in Australia, but many consider this not to be enough, depending on their mother tongue and country of origin.
- Building software to help refugees learn a foreign language doesn't mean building something radically different from other language education software, as long as it's accessible and inexpensive.
What's next for AI Sensei
We'll integrate more languages and fix a few bugs; we also plan to add optical character recognition, improve the clickability of bounding polygons in the challenges, add a few animations and gamified experiences, and build social features - challenge your friends to find items!