Inspiration
We are inspired by the history of narrative-based language-immersion education and by pioneering immersive ESL programs like MondlyVR and immerse.me.
What it does
We create AR experiences that apply evidence-based principles of learning design to improve outcomes in second-language acquisition. Our technology has many potential use cases, but for this project we chose a familiar situation: participants traveling to an international hackathon.
Before arriving at the hackathon, our user can use our technology to practice asking for directions in the target language. Upon arriving, they can scan the MIT Reality Virtually logo with Vuforia to see a volumetric avatar that tells them where the hackathon is located and how to find it.
Our AR solution can be layered over any conference logo. Entire FAQs could be integrated into our structure to streamline the onboarding experience for international attendees, as well as help users practice foreign-language skills in preparation for international events.
How we built it
We built the AR component using the Vuforia augmented reality SDK and Unity. We recorded the volumetric video using Depthkit and a Kinect. Special thanks to the Bok Center at Harvard for giving us professional studio access, and to the small nook on the 5th floor of the Media Lab for giving us impromptu studio access.
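To illustrate the core trigger logic, here is a minimal sketch assuming the classic Vuforia `ITrackableEventHandler` API from this era of the SDK. The component and field names (`AvatarTrigger`, `avatarRoot`, `avatarVideo`) are ours, not part of Vuforia, and Depthkit playback is represented generically by a Unity `VideoPlayer` driving the avatar renderer:

```csharp
using UnityEngine;
using UnityEngine.Video;
using Vuforia;

// Sketch: show the volumetric avatar and play its clip when the
// conference-logo image target is detected, hide it when tracking is lost.
public class AvatarTrigger : MonoBehaviour, ITrackableEventHandler
{
    public GameObject avatarRoot;    // hypothetical: parent of the Depthkit avatar mesh
    public VideoPlayer avatarVideo;  // hypothetical: stands in for Depthkit clip playback

    private TrackableBehaviour trackable;

    void Start()
    {
        // The ImageTarget built from the logo carries the TrackableBehaviour.
        trackable = GetComponent<TrackableBehaviour>();
        if (trackable != null)
            trackable.RegisterTrackableEventHandler(this);
        avatarRoot.SetActive(false);
    }

    // Called by Vuforia whenever tracking of the logo starts or stops.
    public void OnTrackableStateChanged(
        TrackableBehaviour.Status previousStatus,
        TrackableBehaviour.Status newStatus)
    {
        bool found = newStatus == TrackableBehaviour.Status.DETECTED
                  || newStatus == TrackableBehaviour.Status.TRACKED
                  || newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED;

        avatarRoot.SetActive(found);
        if (found) avatarVideo.Play();
        else avatarVideo.Pause();
    }
}
```

In practice, a script like this would sit on the ImageTarget GameObject generated from the logo in the Vuforia target database.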
Challenges we ran into
Originally we envisioned an entire mixed-reality curriculum targeting immersive language learning, with use cases taken from everyday life. Based on the Gradual Release of Responsibility (GRR) model for second-language learning, our curriculum would have three phases. First, a period of modeled instruction, in which users would watch first-person 360° videos in VR of common travel situations, such as asking for directions. Next would come guided instruction, where users would be placed in the scene again and it would only progress after they accurately accomplished the learning objective in response to a prompt. Lessons would then be reinforced through independent learning, where users could project avatars from the situations they had watched into AR using Vuforia; even without a headset, users could practice the situations with realistic avatars on any flat plane. Stretch goals included peer learning via VR video-conferencing, where multiple non-native speakers could replay each situation with a partner, as well as a full-immersion chat room.
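As a purely hypothetical sketch of how these three phases could be sequenced in Unity (none of this was built, and every name and threshold here, including `GrrLesson`, `OnLearnerResponse`, and the 0.8 accuracy cutoff, is invented for the example):

```csharp
using UnityEngine;

// Illustrative GRR lesson flow: Modeled -> Guided -> Independent,
// where the Guided phase only advances once the learner completes
// the objective.
public enum GrrPhase { Modeled, Guided, Independent }

public class GrrLesson : MonoBehaviour
{
    public GrrPhase Phase { get; private set; } = GrrPhase.Modeled;

    // Phase 1 -> 2: the first-person 360° demonstration video has ended.
    public void OnModeledVideoComplete()
    {
        if (Phase == GrrPhase.Modeled) Phase = GrrPhase.Guided;
    }

    // Phase 2 -> 3: the (planned) speech-recognition layer reports how
    // closely the learner's utterance matched the learning objective.
    public void OnLearnerResponse(float accuracy)
    {
        if (Phase == GrrPhase.Guided && accuracy >= 0.8f)
            Phase = GrrPhase.Independent; // unlock AR avatar practice
    }
}
```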
Unfortunately, we ran into technical challenges from the beginning. No one on our team is a professional coder, and our only trained engineer left the team early on, came back, and then fell sick on the final day. Because of this, we were not able to integrate 360° video into Unity or incorporate the voice recognition and branching conversational paths we had planned.
We pivoted to focus on the AR component using Vuforia, but were unable to get it to work on Android phones. All hope seemed lost; nothing was working the night before submission.
What we learned
Without any prior experience with volumetric video or AR, we recorded over fifty interactions with six different avatars in six out of the 35 languages represented here today. We learned about volumetric capture, AR development, and deployment on mobile phones.
What's next for Cadence
We're going to build our original vision!