We are two sisters who were first inspired to create this product by our mom, a high school teacher for the last 30 years. Despite her best efforts, she has been struggling to get through to students during the COVID-19 pandemic. Remote learning creates a dull, impersonal environment, and even as our mom is screaming, singing, and dancing in front of her camera, her students are unengaged and unable to learn.

Watching our mom struggle made us realize that there is a major problem with existing remote learning products. Our mom is one of the top teachers at her school; she was named Teacher of the Year by her school in 2019, she is constantly attending conferences to learn new teaching techniques, and she comes to class motivated and energetic every day. If our mom is unable to inspire students to learn during this time, then millions of teachers across America must be facing the same problem. This realization motivated us to build a better solution, so that the 1.5 billion students whose education has been disrupted by COVID-19 can continue to learn through these unprecedented times.

What it does

Maia is a web application that reinvents remote learning. Instead of hosting online classes in the traditional layout, where participants appear in separate boxes, Maia brings the classroom to life by creating live animated versions of teachers and students. Teachers can write on a virtual whiteboard; add sound effects, stickers, and videos to their lessons; and easily start polls, assignments, and games. Students can click on their character to raise their hand at any time, and they can write on the interactive board when called on to do so.

Traditional video communication methods create a dull, impersonal environment by removing the ability to engage through hand gestures. Our design completely reinvents this traditional solution by using virtualization to let people who are thousands of miles apart feel like they are in the same room. This ultimately enhances focus, empathy, and motivation in any human interaction, which is a necessary development in our increasingly cyber-physical world.

How we built it

The first step in building our alpha prototype was to research and write the code for the facial tracking and motion capture tool. We implemented this with computer-vision-based, sensorless motion capture, meaning an ordinary camera is all a user needs and no wearable sensors are required. After the face is detected, prominent facial landmarks are identified using the Histogram of Oriented Gradients (HOG) method, which extracts features from local pixel gradients. Once the features are identified on the user's face, the corresponding features on the user's avatar are linked and move synchronously.
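To illustrate the HOG step described above, here is a deliberately simplified sketch in pure NumPy. It is not Maia's production code: the cell size, bin count, and the omission of block normalization and a sliding-window detector are all our assumptions for illustration.

```python
import numpy as np

def hog_features(image, cell_size=8, n_bins=9):
    """Simplified HOG: per-cell histograms of gradient orientations,
    weighted by gradient magnitude (unsigned gradients, 0-180 degrees)."""
    gx = np.zeros_like(image, dtype=float)
    gy = np.zeros_like(image, dtype=float)
    # Central differences; image borders are left as zero gradient.
    gx[:, 1:-1] = image[:, 2:] - image[:, :-2]
    gy[1:-1, :] = image[2:, :] - image[:-2, :]
    magnitude = np.hypot(gx, gy)
    orientation = np.rad2deg(np.arctan2(gy, gx)) % 180  # fold into [0, 180)

    h_cells = image.shape[0] // cell_size
    w_cells = image.shape[1] // cell_size
    histograms = np.zeros((h_cells, w_cells, n_bins))
    bin_width = 180 / n_bins
    for i in range(h_cells):
        for j in range(w_cells):
            rows = slice(i * cell_size, (i + 1) * cell_size)
            cols = slice(j * cell_size, (j + 1) * cell_size)
            bins = (orientation[rows, cols] // bin_width).astype(int) % n_bins
            for b in range(n_bins):
                histograms[i, j, b] = magnitude[rows, cols][bins == b].sum()
    return histograms

# A sharp vertical edge produces horizontal gradients, so all the
# histogram energy lands in the 0-degree orientation bin.
edge = np.zeros((64, 64))
edge[:, 32:] = 255.0
h = hog_features(edge)
```

Libraries such as dlib and scikit-image ship tuned HOG implementations (with block normalization and trained detectors) that would be the practical choice here.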

Once this tool was working, we choreographed the facial movements of our animated character by pairing the motion capture tool with Adobe Animate. Next, the character's legs, arms, and torso were animated by meticulously moving each body part through every second of the clip. Although choreographing the motion of the character's body parts was a labor-intensive process for the alpha prototype, we plan to automate it in the final version of the software by using natural language processing (NLP) and sentiment analysis to infer the content and tone of the character's speech.
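The planned automation could work along these lines: score the sentiment of a spoken line and map the score to a body-animation cue. The sketch below is a minimal lexicon-based stand-in; the word lists and gesture names are hypothetical placeholders, and a real system would use a trained sentiment model rather than this toy lexicon.

```python
# Toy lexicon-based sentiment scorer driving a gesture choice.
# POSITIVE/NEGATIVE word lists and gesture names are illustrative only.
POSITIVE = {"great", "excellent", "fun", "love", "correct", "wonderful"}
NEGATIVE = {"wrong", "bad", "sadly", "unfortunately", "stop", "quiet"}

def sentiment_score(text: str) -> int:
    """Positive-word count minus negative-word count."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def gesture_for(text: str) -> str:
    """Pick a body-animation cue from the tone of the spoken line."""
    score = sentiment_score(text)
    if score > 0:
        return "arms_open"      # enthusiastic delivery
    if score < 0:
        return "arms_crossed"   # stern delivery
    return "neutral_idle"

print(gesture_for("Great work, everyone!"))  # arms_open
```

The same lookup could be run per sentence of a live transcript, so the avatar's posture tracks the teacher's tone without frame-by-frame choreography.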

After animating the character, we designed the user interface of the web application. The detailed designs of the main toolbar, the drawing toolbar, and the effects toolbar are described in the Technical Product Implementation document. The last step in building the user interface was the interactive whiteboard tool. We researched how to maintain consistency in a drawing tool when multiple people edit it simultaneously, as shown in the Technical Product Implementation document.
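One standard approach to that consistency problem, sketched below, treats each drawn object as a last-writer-wins register ordered by a (logical clock, client id) pair, so every replica picks the same winner for concurrent edits. This is our illustrative assumption, not necessarily the scheme in the Technical Product Implementation document; the class and field names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass(order=True)
class Stamp:
    clock: int    # Lamport-style logical clock
    client: str   # tie-breaker so all replicas order concurrent edits alike

@dataclass
class Whiteboard:
    objects: dict = field(default_factory=dict)  # object id -> (Stamp, state)

    def apply(self, obj_id: str, stamp: Stamp, state: dict) -> None:
        current = self.objects.get(obj_id)
        # Last writer wins: keep the edit with the larger stamp. Because
        # stamps are totally ordered, replicas converge in any delivery order.
        if current is None or stamp > current[0]:
            self.objects[obj_id] = (stamp, state)

# Two replicas receive the same concurrent edits in opposite orders...
a, b = Whiteboard(), Whiteboard()
e1 = ("line7", Stamp(3, "teacher"), {"color": "red"})
e2 = ("line7", Stamp(3, "student1"), {"color": "blue"})
for board, edits in ((a, [e1, e2]), (b, [e2, e1])):
    for obj_id, stamp, state in edits:
        board.apply(obj_id, stamp, state)
assert a.objects == b.objects  # ...and still agree on the final state
```

Operation-based CRDTs or operational transformation would give finer-grained merging (e.g. of freehand strokes), at the cost of more implementation complexity.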

Finally, we began considering how we would market our product, who we would market it to, how much we would sell it for, and how we could expand to new markets in the future. This analysis is described in detail in the Business Model and Market Potential document attached to this submission.

Challenges we ran into

The biggest challenge we faced was our tight constraint on time and labor. With a team of only two people and a limit of only seven days, we quickly realized that we had to narrow our scope and focus on the most important aspects of our design.

Additionally, we ran into several challenges while creating the facial tracking and motion capture tool. Although we had some experience writing facial recognition code in the past, capturing motion and translating it to an animated figure proved difficult. We overcame this challenge by thoroughly researching the technical details of various computer vision algorithms. Beyond online resources, we contacted experts in the field to learn how they had built similar projects. Whereas last week we had nearly no knowledge of motion tracking beyond facial recognition, today we are close to a working implementation of facial tracking and motion capture in Maia.

Accomplishments that we're proud of

  1. Developing a complex, transformative, marketable product in seven days with only two people.
  2. Getting a working facial tracking and motion capture algorithm running.
  3. Learning a complex animation software from scratch and effectively implementing it.
  4. Validating the market need for our product by receiving 97 survey responses from teachers ranging from kindergarten to college level. This confirmed that online education has been a major struggle for teachers across many states and grade levels. Additionally, many teachers left suggestions for features they wish their e-learning platform included, which helped guide our product design.

What we learned

Through the development of Maia, we learned how to implement and combine new technologies, including Adobe Animate, facial tracking and motion capture, and a collaborative object-based graphical editing system. Beyond that, we learned that the quality of education suffers when it is online. Through our own experiences, a survey, and open communication with current teachers and students, we learned that it is necessary to reimagine traditional online learning platforms, and Maia has done just that.

What's next for Maia

We are confident that Maia has the potential to hugely impact the e-learning market. To grow our business, we will first build out the live broadcasting and collaborative capabilities of our software. We will build a team of four to six members to optimize the animations and the UI design. We have already contacted a technical director at Walt Disney Animation Studios about our idea, and we are in the process of setting up a call to gauge his further interest in the project.

Once our product design is finalized, we will file for a utility patent on the process of using live animations in a virtual classroom and a design patent on the aesthetic of the UI and animations. If we win prize money in this competition, we will invest it in better animation tools and cloud storage. We will then market our product to public, private, and online elementary schools. From there, we hope to expand Maia's capabilities to serve middle schools, high schools, universities, and eventually corporations. We really enjoyed working on Maia this past week, but it is only the beginning. We are eager to venture into the market and watch as Maia transforms the future.
