Inspiration

People were obsessed with caring for their virtual pets on the little devices they carried around in the late '90s. Caring for a pet helps us forget about our own problems. Virtual and augmented reality haven't been able to create the immersion of having a pet in your own home, interacting with your stuff, but the Magic Leap One changes the game. Now a virtual pet can have virtual objects injected into a real space, move behind and under your furniture and between rooms, and listen to your voice. I really enjoy creating emotive interactions, and a pet game is a great way to cultivate a longer-term interactive relationship with the user.

What it does

The idea is for the user to begin by picking one of the available pets, each with a different personality profile. Some dogs will respond to a person's anger and aggression with aggression of their own, while others will trend toward submissive behaviors. By combining personality profiles with tonal analysis of the user's dialogue, a wide variety of interactions can be supported. For the hackathon, I focused on just one pet, a Corgi dog with a fixed personality profile that reacts positively to happy tones and more timidly to angry or negative ones.
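As a rough sketch of how a personality profile might modulate the tone-driven reactions, here is some illustrative C#; the profile fields, tone labels, and reaction names are placeholders, not the exact ones used in the project.

```csharp
// Illustrative sketch: a personality profile scales how the pet reacts to the
// dominant tone detected in the user's speech. All names/values are placeholders.
public enum Reaction { Wag, Cower, Growl, Idle }

public class PersonalityProfile
{
    // 0..1 weights; an aggressive breed has high Aggression, a timid one high Timidity.
    public float Aggression = 0.2f;
    public float Timidity   = 0.7f;   // the hackathon Corgi trends timid
}

public static class ReactionPicker
{
    // tone: dominant tone label from the tone analysis step (e.g. "joy", "anger", "sadness").
    // score: its confidence in 0..1.
    public static Reaction Pick(PersonalityProfile p, string tone, float score)
    {
        switch (tone)
        {
            case "joy":
                return Reaction.Wag;
            case "anger":
                // An aggressive profile growls back at a strong angry tone; a timid one cowers.
                return (p.Aggression >= p.Timidity && score > 0.5f) ? Reaction.Growl : Reaction.Cower;
            case "sadness":
                return Reaction.Cower;
            default:
                return Reaction.Idle;
        }
    }
}
```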

An introductory process has the user meet the Corgi dog for the first time by spawning it into an environment that is being characterizing by the Magic Leap spatial mapper. Eventually, a small tutorial on how to do the basics to care for the dog will be offered. The dog has a set of behaviors that are triggered by the user's voice such as sit down, lay down, go to the bathroom, jump, explore the room, come to me, aggressive barking when too close with hands and a few more. The user does not have to say specific commands but instead the intent is determined from the spoken words using the IBM Watson SpeechToText and AI Assistant services. Also, the IBM Watson Tone Analysis service is run to determine the emotional trend of the user's dialogue. I built an IBM Watson AI Assistant workspace that let me put in example sentences for each intent and populated a number of intents for the dog behaviors.
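For illustration, assuming the intent name has already come back from Watson Assistant, the dispatch onto the Animator could look something like the sketch below; the intent and trigger names are made up for this example rather than taken from the actual workspace.

```csharp
using UnityEngine;

// Sketch of the dispatch step only: once Watson Assistant has classified the
// transcribed speech into an intent name, map it onto an Animator trigger.
public class DogIntentDispatcher : MonoBehaviour
{
    [SerializeField] private Animator animator;

    public void OnIntent(string intent)
    {
        switch (intent)
        {
            case "sit":          animator.SetTrigger("Sit");        break;
            case "lie_down":     animator.SetTrigger("LieDown");    break;
            case "jump":         animator.SetTrigger("Jump");       break;
            case "explore_room": animator.SetTrigger("Explore");    break;
            case "come_here":    animator.SetTrigger("ComeToUser"); break;
            default:             animator.SetTrigger("Idle");       break;
        }
    }
}
```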

How I built it

I built a Unity project starting from the Dodge sample project and the Magic Leap Lumin SDK package for Unity to learn about spawning the dog into the world with a pinch gesture. The various example projects helped me learn a lot about how the Magic Leap platform is expected to work with Unity using the available packages and helpers. The documentation and examples are a step above most platforms', and some are just amazing.
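A minimal sketch of the spawn flow, with the Lumin hand-tracking check stubbed out behind a hypothetical PinchDetected() method and an assumed layer for the generated world mesh:

```csharp
using UnityEngine;

// Sketch: spawn the dog onto the meshed floor when a pinch is detected.
// PinchDetected() is a stand-in for the Lumin SDK hand-tracking check used in
// the actual project; the layer for the generated mesh is also assumed.
public class DogSpawner : MonoBehaviour
{
    [SerializeField] private GameObject corgiPrefab;
    [SerializeField] private Transform headpose;        // main camera transform
    [SerializeField] private LayerMask worldMeshLayer;  // Magic Leap generated mesh

    private bool spawned;

    private void Update()
    {
        if (spawned || !PinchDetected()) return;

        // Cast from the headpose forward onto the world mesh and drop the dog
        // slightly above the hit point so the rigid body settles onto the floor.
        if (Physics.Raycast(headpose.position, headpose.forward, out RaycastHit hit, 5f, worldMeshLayer))
        {
            Instantiate(corgiPrefab, hit.point + Vector3.up * 0.05f, Quaternion.identity);
            spawned = true;
        }
    }

    private bool PinchDetected()
    {
        // Placeholder: in the real project this queries Magic Leap hand tracking.
        return false;
    }
}
```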

For the Corgi dog 3D model, I used a package from the Unity Asset Store (https://assetstore.unity.com/packages/3d/characters/animals/dog-corgi-70082), since I do not yet have 3D modeling in my tool belt. The Corgi asset comes with a number of animations that I was able to mix and match with the user's voice intents. An outer state machine tracks the basics, such as whether the dog becomes aggressive when someone gets too close, while a more fine-grained event-driven loop triggers based on voice dialogue intents. A motion controller script moves the dog's rigid body around, adjusting the active animation via the Animator state machine and triggers while trying to keep the dog on the Magic Leap-generated mesh. A custom sound behavior script plays dog sounds based on manual or animation-triggered events. The motion script has various modes such as explore the room, come to the user, walk straight, run, etc. Overall, the basics were demonstrated, with more work needed on navigation over the mesh, animation transitions, additional sounds, etc.
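The rough shape of that motion controller looks something like the sketch below; the mode names, speeds, and Animator parameter are illustrative rather than the exact ones from the project.

```csharp
using UnityEngine;

// Sketch of the motion controller: a simple mode enum, a Rigidbody for movement,
// and an Animator parameter to keep the active animation in sync.
[RequireComponent(typeof(Rigidbody))]
public class DogMotionController : MonoBehaviour
{
    public enum Mode { Idle, Explore, ComeToUser, WalkStraight, Run }

    [SerializeField] private Animator animator;
    [SerializeField] private Transform user;        // headpose transform
    [SerializeField] private float walkSpeed = 0.6f;
    [SerializeField] private float runSpeed  = 1.5f;

    public Mode CurrentMode = Mode.Idle;
    private Rigidbody body;

    private void Awake() => body = GetComponent<Rigidbody>();

    private void FixedUpdate()
    {
        Vector3 direction = Vector3.zero;
        float speed = walkSpeed;

        switch (CurrentMode)
        {
            case Mode.ComeToUser:
                direction = Flatten(user.position - transform.position);
                break;
            case Mode.WalkStraight:
            case Mode.Explore:     // the real script wanders the meshed room; here it just drifts forward
                direction = Flatten(transform.forward);
                break;
            case Mode.Run:
                direction = Flatten(transform.forward);
                speed = runSpeed;
                break;
        }

        if (direction != Vector3.zero)
        {
            transform.rotation = Quaternion.LookRotation(direction);
            body.MovePosition(body.position + direction * speed * Time.fixedDeltaTime);
        }

        animator.SetFloat("Speed", direction == Vector3.zero ? 0f : speed);
    }

    // Project movement onto the horizontal plane so the dog stays upright.
    private static Vector3 Flatten(Vector3 v) => new Vector3(v.x, 0f, v.z).normalized;
}
```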

I've worked previously with the IBM Watson services in Unity as well as with Python and Node.js. The Watson Unity SDK asset makes it fairly straightforward to take advantage of these services, and the open-source example apps from IBM make it easy to jump back in and get things working. Getting access to the network and microphone on the Magic Leap was not terribly painful; with some AR/VR devices this can really be a pain. Here I had to add a few privilege requests and use a privilege script offered by Magic Leap and documented on the developer forums. I also asked the local Magic Leap developers who were supporting the hackathon to make sure I had all the network rights covered, and they were super helpful the whole time. The network did slow down during the live demos, which is a bit annoying when relying on external APIs. Maybe in the next iteration I should pair the cloud AI APIs with an embedded ML model that backs them up.
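A sketch of that backup idea, using a trivial on-device keyword match as a stand-in for a real embedded model; the keyword table and intent names are illustrative only.

```csharp
using System.Collections.Generic;

// Fallback sketch: if the cloud intent call is slow or fails, fall back to a
// trivial keyword match on the transcript. A real embedded model would replace
// the keyword table; everything here is illustrative.
public static class FallbackIntentClassifier
{
    private static readonly Dictionary<string, string> Keywords = new Dictionary<string, string>
    {
        { "sit",     "sit" },
        { "lie",     "lie_down" },
        { "down",    "lie_down" },
        { "jump",    "jump" },
        { "come",    "come_here" },
        { "explore", "explore_room" },
    };

    public static string Classify(string transcript)
    {
        var lower = transcript.ToLowerInvariant();
        foreach (var pair in Keywords)
        {
            if (lower.Contains(pair.Key)) return pair.Value;
        }
        return "unknown";
    }
}
```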

The plan for later is to turn the project into a full game with scoring, levels, and money to purchase things like brushes, dog beds, leashes, etc. A web server and database layer would track data such as pet location and health, water/food levels, environment objects (virtual/real), etc. A light service layer on top would allow fetching and updating each user's state, as well as tracking gaps in play, sending reminder notifications to other devices, etc.
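A hypothetical shape for the per-pet record such a service might store; the field names and units are placeholders for illustration only.

```csharp
using System;

// Hypothetical per-pet record the web service would persist and the client would sync.
[Serializable]
public class PetState
{
    public string UserId;
    public string PetId;
    public float Health;        // 0..1
    public float FoodLevel;     // 0..1
    public float WaterLevel;    // 0..1
    public Vector3Data LastKnownPosition;   // in the user's room coordinates
    public DateTime LastPlayedUtc;          // used to detect gaps in play and send reminders
}

[Serializable]
public class Vector3Data
{
    public float X, Y, Z;
}
```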

Challenges I ran into

I found working with the generated Magic Leap mesh both difficult to comprehend quickly and, for the basics, easier than expected. Briefly, the Magic Leap constructs the world mesh as it goes, and making my Corgi dog a rigid body with gravity, surrounded by a collider, was enough to get the expected behavior. That said, I wasn't sure how to convert the mesh into a NavMesh (previously I've only baked static meshes for AI-controlled agents in Unity), and I had big problems with the Corgi falling over or falling through the floor when using a mesh collider or when spawned too close to the ground. The weakest part of my demo is definitely the navigation, which was frustrating and a time sink. I should have just generated a large plane for the dog to stand on that gets updated by mesh features detected above the floor. The Corgi's exploration of the room is super cool, because the occlusion really shows off what the Magic Leap can do. I highly recommend checking out the Drive sample, where you can fly a prop plane or helicopter or drive an RC car; that's where I spent my time learning more about meshing and mesh visibility.
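A sketch of that fallback plane approach, with an assumed mesh layer and the default Unity plane size:

```csharp
using UnityEngine;

// Sketch of the fallback I wish I had used: once the floor height is known
// (here via a downward raycast against the generated mesh), lay down a large
// invisible collider plane so the dog can never fall through gaps in the mesh.
public class FloorSafetyPlane : MonoBehaviour
{
    [SerializeField] private Transform headpose;        // main camera transform
    [SerializeField] private LayerMask worldMeshLayer;  // Magic Leap generated mesh (assumed layer)

    private void Start()
    {
        if (Physics.Raycast(headpose.position, Vector3.down, out RaycastHit hit, 3f, worldMeshLayer))
        {
            var plane = GameObject.CreatePrimitive(PrimitiveType.Plane);   // 10x10 m by default
            plane.transform.position = hit.point;
            Destroy(plane.GetComponent<MeshRenderer>());                   // keep the collider, hide the visual
        }
    }
}
```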

Accomplishments that I'm proud of

I'm proud that I got something functional and fun to use built during the hackathon. Sometimes you get stuck on loose ends or sabotaged by some random problem that takes hours and ruins the demo. While it's not the polished greatness I had envisioned, I think the demos and experience show off what is possible and set the stage for some really killer experiences in the future.

What I learned

This was my first time with the actual Magic Leap One hardware in my hands and my first time developing for it beyond running the example projects on the Magic Leap Remote (simulator). I learned that Magic Leap is much further along toward the holy grail of MR than I previously thought, and I might just have to get my hands on one of these devices. By working through the Magic Leap examples, I learned a lot more about Unity features I don't use much and got a taste of a different style of using Unity, furthering my push to be a better Unity programmer.

What's next for Mixed Reality Pet

The concept of varied personality profiles (character archetypes / personas) combined with intent- and tone-sensitive responses from NPCs opens up a rich world of interaction for the AR, MR, and VR space. Using machine learning, we could automate the process of learning how people and animals respond to certain stimuli, given their personality profiles. Just as there are many breeds of dog and cat, the many personalities of humans should vary the interactive storytelling narrative and experience. A friend and I have been discussing these ideas, and more of his notes on Character Cartridges and Embodied Identity are on his blog: https://dreamtolearn.com/ryan/cognitivewingman/18/en.

Participant Info

My name is Jacob Madden. I am a Georgia Tech grad, full-stack software developer, and artificial intelligence & machine learning researcher exploring VR/AR/MR, robotics, Unity, and iOS. https://github.com/jagatfx

Past experience with Mixed Reality and/or Spatial Computing development or design

I have developed several apps and games for virtual and augmented reality. I won a hackathon using Unity and an Oculus Rift with a mounted Leap Motion controller, where the user played the game Simon with their hands. For fun I also added interactions with spawned spheres and zombies. After that experience, I bought a powerful machine, pre-ordered the first batch of the HTC Vive, and ported the game to the Vive. I later created a VR experience for customer archetype analytics in which spoken commands, processed with speech-to-text and the IBM Watson Conversation service (now called Watson Assistant), summoned certain customer types (humanoid avatars) to an interview location where you could ask basic questions. The avatar responded with answers based on customer analytics data, delivered via text-to-speech. I ported that experience to AR using ARKit in Unity. I created Magnet ViewAR, an app for launching augmented reality experiences (image spreads, image boards, videos, 360 videos, 3D models, animated GIFs) when target images are recognized; it was built in Swift in Xcode using ARKit, the Vision framework, and Core ML. I also created a prototype for Magnet that performs indoor mapping using Bluetooth beacons overlaid in AR, as well as face recognition with a Core ML model running during the augmented reality session to recognize people arriving at an event. Most recently, I created a publicly available AR portal app called GorgeKeep Portal for the Ithaca Wizarding Weekend, developed in Unity for both Android (using ARCore) and iOS (using ARKit). I integrated spatial audio experiences inside the portal and an agent (orc creature) that follows you around and is triggered to say things. The guardian locations are triggers for effects and audio from voice actors.

Here are some store links for GorgeKeep Portal:

Built With

Unity, Magic Leap Lumin SDK, IBM Watson (Speech to Text, Watson Assistant, Tone Analyzer)
