The COVID-19 pandemic has forced virtually all college classes online. These online classes sorely lack human interaction, which negatively affects students and professors alike. Professors often complain that the perceived lack of a real audience makes lecturing significantly more difficult. For students, the absence of lifelike interaction can make it hard to stay focused and engaged during lectures. One of the underlying problems with online classes is not being able to see your peers: many students don't have webcams, or don't want the social pressure of being seen by the hundreds of people attending lecture. This inspired us to create VirtualMii, a way for students to attend online classes in a more engaging and interactive manner.

What it does

VirtualMii is a native desktop application that acts as a "virtual camera" through which users can display an animated avatar during online meetings and classes. Users can choose their avatar and trigger different animations to interact with the speaker during a meeting.

VirtualMii also has a web component that uses machine learning to create a realistic custom avatar. Uploaded user images are analyzed and used to generate a custom avatar, which users can further customize based on properties such as age and gender. Users can create an account with VirtualMii and save their avatar for use with the desktop application.
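Saving an avatar to a signed-in user's record can be done through the Firebase Realtime Database REST API. The sketch below is purely illustrative: the database URL, the `/users/<uid>/avatar` path, and the avatar fields are assumptions, not VirtualMii's actual schema.

```python
import json

# Placeholder database URL; a real project would use its own Firebase URL.
DATABASE_URL = "https://example-project.firebaseio.com"

def avatar_write_request(uid: str, avatar: dict, id_token: str):
    """Build the URL and JSON body for a PUT that saves a user's avatar.

    Appending .json to a database path addresses that node via the REST
    API; auth=<ID token> authenticates the request as the signed-in user
    (the token comes from Firebase Authentication).
    """
    url = f"{DATABASE_URL}/users/{uid}/avatar.json?auth={id_token}"
    body = json.dumps(avatar)
    return url, body

# Example request for a hypothetical avatar record.
url, body = avatar_write_request(
    "uid123", {"model": "metahuman_01", "age": 25, "gender": "female"}, "TOKEN"
)
```

Sending `body` in an HTTP PUT to `url` would overwrite that user's stored avatar; the desktop application could then fetch the same path to load it.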

How we built it

The desktop application was built with the Unity game engine, and the avatars were generated with Unreal Engine's MetaHuman Creator. The website was built with React and hosted on Firebase, using Firebase Realtime Database, Firebase Cloud Storage, and Firebase Authentication. Facial detection and analysis were performed using Microsoft Azure Cognitive Services.
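A face-analysis call to Azure Cognitive Services is assembled roughly as below. This is a hedged sketch, not the project's actual code: the resource endpoint is a placeholder, and the attribute list (`age,gender`) is an assumption based on the customization properties mentioned above.

```python
# Placeholder Azure resource endpoint; each Cognitive Services resource
# has its own regional URL.
ENDPOINT = "https://example-resource.cognitiveservices.azure.com"

def detect_request(subscription_key: str):
    """Build the URL, headers, and query params for a Face detect call."""
    url = f"{ENDPOINT}/face/v1.0/detect"
    headers = {
        # Azure Cognitive Services authenticates requests with this header.
        "Ocp-Apim-Subscription-Key": subscription_key,
        # Raw image bytes are sent as the request body.
        "Content-Type": "application/octet-stream",
    }
    # Assumed attribute list; the service returns the requested
    # per-face attributes in its JSON response.
    params = {"returnFaceAttributes": "age,gender"}
    return url, headers, params

url, headers, params = detect_request("YOUR_KEY")
```

POSTing an image to `url` with these headers and params would return detected faces with the requested attributes, which can then seed the avatar generator.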

Challenges we ran into

Getting the Unity project to work proved to be the largest challenge: coordinating multiple people through Unity's source control and learning how to rig animations for the models was difficult. Setting up the facial recognition service was also an interesting challenge. The free tier of the Azure service we used for facial recognition had a rate limit on API requests, which made it difficult to automate the analysis of our list of avatars.
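One simple way to stay under a free-tier rate limit when batch-processing images is to space the calls out evenly. The sketch below is a minimal illustration, assuming a calls-per-minute quota; `analyze` stands in for the real Azure call and the limit value is made up.

```python
import time

def analyze_all(images, analyze, calls_per_minute=20, sleep=time.sleep):
    """Call `analyze` on each image, pausing between calls so the
    request rate stays at or below `calls_per_minute`.

    `sleep` is injectable so the pacing can be tested without waiting.
    """
    interval = 60.0 / calls_per_minute  # seconds between consecutive calls
    results = []
    for i, image in enumerate(images):
        if i:  # no pause needed before the very first call
            sleep(interval)
        results.append(analyze(image))
    return results
```

With the real service, `analyze` would upload the image and return the face attributes; here the fixed spacing keeps an automated batch run from tripping the quota, at the cost of a longer total runtime.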

Accomplishments that we're proud of

We are really proud to have brought together so many technologies that were new to us. Working with Unity for the first time was challenging, but we are happy that we were able to create a finished product that aligns with our original vision. We are also proud to have built a project that combines a native application with a web application.

What we learned

One of the areas where we improved our skills most was creating character models and rigging animations in Unity. We also learned a lot about deploying applications with a hybrid-cloud strategy, as we used both Google Cloud and Microsoft Azure.

What's next for VirtualMii

In the future we would like to expand our roster of character avatars and improve the character customization features. We would also like to improve VirtualMii's integration with meeting applications such as Zoom or Microsoft Teams.
