Project Overview

Our project uses motion-capture data produced by the Dost AI model to render gestures in Blender. These gestures reflect the emotion and content of different speech samples.

The potential to use these animations for realistic experiences in VR, entertainment, and social healthcare is expansive. Because the movements are generated by an AI model from speech input, you can create digitally expressive storytelling experiences and more!

What it does

Our project bridges the gap between Dost AI's output and a final video product: it takes the motion-capture files (.bvh) the model produces and converts them into rendered video. This step is vital to Dost AI because it turns the raw .bvh output into a usable product that is easy to visualize.

How we built it

Our project was built with the latest version of Blender and the BVH Retargeter add-on. We started by making our own 3D model and skeletal structure onto which to map the motion-capture data. We then used the add-on to retarget the motion-capture data onto our custom model and rendered the result into its final video form.
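One practical detail when rendering a .bvh clip to video is matching the render frame rate to the capture's frame time. Every .bvh file ends with a MOTION section whose header states the frame count and per-frame duration, so they can be read with a few lines of plain Python (a sketch, not part of our actual pipeline, and no Blender required):

```python
def read_bvh_motion_header(lines):
    """Return (frame_count, frame_time_seconds) from BVH MOTION header lines."""
    frames = frame_time = None
    for line in lines:
        line = line.strip()
        if line.startswith("Frames:"):
            frames = int(line.split(":")[1])
        elif line.startswith("Frame Time:"):
            frame_time = float(line.split(":")[1])
    return frames, frame_time

# Example: a 120-frame clip captured with a frame time of 1/30 s
frames, frame_time = read_bvh_motion_header(
    ["MOTION", "Frames: 120", "Frame Time: 0.033333"]
)
fps = round(1.0 / frame_time)   # -> 30, the frame rate to render at
duration = frames * frame_time  # -> ~4 seconds of video
```

Inside Blender, the computed rate would go into the scene's render settings (`bpy.context.scene.render.fps`) so the rendered video plays the motion at its captured speed.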

Challenges we ran into

Our biggest challenge was mapping the .bvh files onto our model. We tried several approaches before finding one that worked: first pre-made models, then separate skeletons, and finally a custom skeleton built specifically for the mapping process. The custom skeleton let us map each individual bone to the corresponding .bvh joint, giving us maximum control over every motion and producing the smoothest, most accurate result.
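That bone-by-bone mapping can be sanity-checked outside Blender. A .bvh file declares its joints on ROOT and JOINT lines in the HIERARCHY section, so a small script can list them and flag any joint that a skeleton's bone-name mapping misses. A minimal sketch (the bone names in `bone_map` are hypothetical, not our actual rig):

```python
def bvh_joint_names(bvh_text):
    """Collect joint names from the HIERARCHY section of a BVH file."""
    names = []
    for line in bvh_text.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[0] in ("ROOT", "JOINT"):
            names.append(parts[1])
    return names

# Hypothetical mapping from BVH joints to a custom skeleton's bones
bone_map = {"Hips": "pelvis", "Spine": "spine_01", "Head": "head"}

bvh = """HIERARCHY
ROOT Hips
{
  JOINT Spine
  {
    JOINT Head
    {
    }
  }
}"""
unmapped = [j for j in bvh_joint_names(bvh) if j not in bone_map]
print(unmapped)  # -> [] : every BVH joint has a target bone
```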

Accomplishments that we're proud of

We're very proud that we managed to map our .bvh files onto our models; it was a difficult task with many errors standing in our way. We're also proud that we learned Blender practically overnight: none of us had used the software before, yet we built almost our entire project with it.

What we learned

We learned a lot about Blender. None of us had used it before, so much of our time went into learning the software and its add-ons. We also learned how to code in Python: although we did not use Python in our final project, we learned a lot about using it in conjunction with Blender along the way.

What's next for Digital Humans

Automating our current process is the next immediate goal; it was something we initially tried to tackle as part of this project. By scripting Blender commands in Python, we could streamline making custom models and mapping gesture files onto them.
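Blender can run headless (`blender --background --python script.py`), so one automation route is a batch driver that builds one render job per .bvh clip. A sketch of that driver in plain Python — the script name and file paths are hypothetical, and the actual launch is left commented out:

```python
import subprocess  # would launch Blender when uncommented below
from pathlib import Path

BLENDER = "blender"                         # assumes blender is on PATH
RETARGET_SCRIPT = "retarget_and_render.py"  # hypothetical bpy script

def render_command(bvh_path):
    """Build a headless Blender invocation for one motion-capture clip.

    Arguments after '--' are ignored by Blender and passed through
    to the bpy script via sys.argv.
    """
    out = Path(bvh_path).with_suffix(".mp4")
    return [BLENDER, "--background", "--python", RETARGET_SCRIPT,
            "--", str(bvh_path), str(out)]

cmd = render_command("captures/hello.bvh")
print(cmd)
# To actually run the job:
# subprocess.run(cmd, check=True)
```

Looping `render_command` over a folder of .bvh files would turn the whole retarget-and-render procedure into a single command.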

Built With

Blender, BVH Retargeter add-on
