Inspiration:

The project is important because it lowers the barrier to entry for artists to tell a story. It reduces the need for deep Unity experience in order to visualize what the storyteller imagines. Currently we are able to populate a scene based on the storyteller inputting sentences; we would like to incorporate AI and natural language processing to add interactions and animations between characters and the viewer. This includes if/then statements, so stories can have an interactive progression.

What it does:

In its current state, the user types in a series of scenes they'd like to see in VR, separated by periods. Then they put on the VR headset and each scene plays out, pulling up 3D objects of the elements named in that scene.

How we built it:

We created scripts in Unity that search for keywords within user-inputted phrases. We populated the build with 3D assets that were modelled during the hack and made them toggleable by groups of related keywords.
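As a rough illustration of that approach (our actual scripts aren't shown here, so the class and field names below are assumptions), a Unity component could split the typed story on periods and toggle each keyword group's asset on or off per scene:

```csharp
using System;
using UnityEngine;

// Hypothetical sketch: maps groups of related keywords to 3D assets and
// shows only the assets whose keywords appear in the current scene's sentence.
public class SceneSpawner : MonoBehaviour
{
    [Serializable]
    public class KeywordGroup
    {
        public string[] keywords;   // e.g. "frog", "toad"
        public GameObject target;   // the modelled asset to toggle
    }

    public KeywordGroup[] groups;

    // Split the typed story into scenes on periods.
    public string[] ParseScenes(string input)
    {
        return input.Split(new[] { '.' }, StringSplitOptions.RemoveEmptyEntries);
    }

    // Toggle each group's asset based on whether one of its keywords appears.
    public void ShowScene(string sentence)
    {
        string lower = sentence.ToLowerInvariant();
        foreach (var group in groups)
        {
            bool match = false;
            foreach (var keyword in group.keywords)
            {
                if (lower.Contains(keyword)) { match = true; break; }
            }
            group.target.SetActive(match);
        }
    }
}
```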

Challenges we ran into:

Our team was very ambitious and wanted to incorporate animations attributed to characters and AI natural language recognition, but ended up having to limit the scope of the project.

Accomplishments that we're proud of:

We all brainstormed well together and were able to come up with a concept that we all believed in and were excited about. We are proud to have created the ability to "spawn" characters and objects within a 3D space by inputting phrases. Our team members gained a great appreciation for each other's unique contributions. We also had some AI magic that wasn't integrated into the final build that we were super proud of!

What we learned:

We learned a lot about Blender, Unity, and Python!

What's next for The Fantastic Wizards Apprentice:

An intro screen running on a computer (outside of, but connected to, the VR setup) will play an intro sequence of an animated young wizard, who takes text input from the player, who plays their professor quizzing them. At the end of the animated skit, the young wizard will prompt the player to put on their VR headset.

Once you’ve written your scene, you can control the various objects you’ve generated by bringing them to life and interacting with them directly. Given a written description in natural language (the sentence formula starting with nouns, followed by verbs applied to them, and finally if/then statements: "there is a pond and a princess and a frog. The frog is jumping around. If the frog touches the princess, it turns into ice cream"), the AI would play out the entire scene, figuring out which animations to apply to which models and how they move throughout the scene.
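A minimal sketch of how one of those if/then statements might be represented and checked in Unity; this is an assumption about the future design, with a distance check standing in for "touches" and all class and field names hypothetical:

```csharp
using UnityEngine;

// Hypothetical representation of a single if/then rule parsed from the story,
// e.g. "If the frog touches the princess, it turns into ice cream."
public class StoryRule
{
    public Transform subject;        // e.g. the frog
    public Transform target;         // e.g. the princess
    public float triggerDistance = 0.5f;
    public GameObject resultPrefab;  // e.g. the ice cream model

    // If the subject "touches" the target, swap it for the result.
    public bool TryApply()
    {
        if (Vector3.Distance(subject.position, target.position) < triggerDistance)
        {
            Object.Instantiate(resultPrefab, subject.position, subject.rotation);
            subject.gameObject.SetActive(false);
            return true;
        }
        return false;
    }
}
```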

An alternate path would be to treat the elements like VR toys, which could be moved around the scene on tracks recorded by the player in VR while assigning animations to them (e.g., grab a dragon and move it around the sky with a "fly" animation).
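A sketch of what recording and replaying such a toy track could look like in Unity; this component and its fields are hypothetical, assuming the player's grab already moves the object's transform:

```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch: record the toy's pose each frame while the player moves it,
// then replay the track with an assigned animation (e.g. a "Fly" loop).
public class ToyTrackRecorder : MonoBehaviour
{
    public Animator animator;
    public string animationState = "Fly";
    public bool recording;

    private readonly List<Vector3> positions = new List<Vector3>();
    private readonly List<Quaternion> rotations = new List<Quaternion>();

    void Update()
    {
        if (recording)
        {
            // Sample the toy's pose every frame while it is being dragged around.
            positions.Add(transform.position);
            rotations.Add(transform.rotation);
        }
    }

    public IEnumerator Playback()
    {
        if (animator != null) animator.Play(animationState);
        for (int i = 0; i < positions.Count; i++)
        {
            transform.SetPositionAndRotation(positions[i], rotations[i]);
            yield return null;   // one recorded sample per frame
        }
    }
}
```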

We would like to add more assets to allow us to tell even more enriching stories and incorporate animations and characters that will interact dynamically within the story.

Credits (in alphabetical order):

  • Tahnee Gehm: 2D animation, narrative, coordination, video
  • Doug Hamilton: narrative, coordination, Unity
  • Anthony Lowhur: AI development
  • Koi Ren: 3D models, design
  • Simon Swartout: Unity, VR development

Tech used:

  • Meta Quest 2
  • Unity
  • Foundry
  • AI (Python)
  • TVPaint
  • After Effects
