Inspiration

Expressing yourself is part of being human. Being able to show the world our stories and interests liberates the soul. Yet our profile information is stuck in a 2D world: a flat profile picture and a short bio is the standard layout on most social media. What happens to profile pictures when we move on from mobile phones to AR glasses? Crouton is how we imagine profile information will look when the real world and our digital world are combined. With Crouton, you can show every crumb of your persona as an AR avatar.

What it does

Crouton lets you upload any 3D model: an expressive art piece you created in TiltBrush, a 3D badge, a cute bunny, or a special effect, and overlay it around your face. Out of the box, Crouton comes with face tracking, face detection, and face alignment for an immersive mixed-reality experience that maps precisely to the faces around you. Anyone with an AR headset can see your Croutons, in real time, whenever your face is in their view.

How we built it

  1. We implemented real-time local face tracking on the HoloLens. This was quite challenging: it is not easy to capture a frame, run neural-net inference to localize a face, and overlay a 3D mesh within the constraints of a head-mounted device. Luckily, we found a research paper that does a very good job exploring this topic [1], so we learned from their findings and implemented them to meet our needs.

  2. We used Microsoft's Azure Face recognition API to do a 1-1 mapping between Croutons and faces. Microsoft's API is very easy to use, and we love it. However, integrating API-based individual face recognition with the real-time face tracker is pretty challenging.

  3. We spent a lot of time making personal 3D assets, because we really care about expressing our personal beliefs and the things we care about, and we think everyone should, too.
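Step 2 above can be sketched roughly as follows. This is an illustrative Python sketch rather than our actual Unity/C# integration; the person IDs, registry contents, and confidence threshold are made-up placeholders, and the candidate list only mirrors the general shape of the Face API's identify response (candidates with confidences per detected face).

```python
# Sketch: turning face-identification candidates into a 1-1
# face-to-Crouton mapping. Names and values are illustrative.
from typing import Dict, List, Optional

# Which 3D assets belong to which person; populated when a user
# uploads their models. (Hypothetical example data.)
CROUTON_REGISTRY: Dict[str, List[str]] = {
    "person-id-alice": ["tiltbrush_swirl.glb", "bunny.glb"],
}

def best_person(candidates: List[dict], threshold: float = 0.6) -> Optional[str]:
    """Pick the most confident candidate, or None if nobody clears the bar."""
    if not candidates:
        return None
    top = max(candidates, key=lambda c: c["confidence"])
    return top["personId"] if top["confidence"] >= threshold else None

def croutons_for_face(candidates: List[dict]) -> List[str]:
    """Map one detected face to the 3D models to overlay on it.

    Unknown or low-confidence faces simply get no overlay.
    """
    person = best_person(candidates)
    return CROUTON_REGISTRY.get(person, []) if person else []

# Example identify-style result for one detected face:
face_candidates = [
    {"personId": "person-id-alice", "confidence": 0.92},
    {"personId": "person-id-bob", "confidence": 0.41},
]
print(croutons_for_face(face_candidates))  # ['tiltbrush_swirl.glb', 'bunny.glb']
print(croutons_for_face([]))               # []
```

Keeping the mapping keyed by a stable person ID means the (slow) cloud call only has to succeed once per person; after that, the local tracker can keep the right Croutons glued to the right face frame-to-frame.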

Challenges we ran into

  1. We spent a fair bit of time setting up the developer environment for the HoloLens, as Unity + Visual Studio + HoloLens Dev Kit + Universal Windows Platform takes a bajillion bytes to download.

  2. We also borrowed two PCs, because two of our three developers brought only Macs.

  3. Again, implementing real-time local face tracking is very challenging because of the computational and power constraints of AR glasses. We need to simultaneously identify a face, track it, and overlay meshes within 5 inches of the center of the face to achieve good UI/UX.

  4. Combining API-based face recognition with local face localization is also tricky.
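The tricky part of point 4 is deciding which cloud-recognized face corresponds to which locally tracked face, since the API result arrives late relative to the fast local tracker. One way to associate the two (a sketch of the idea, not our HoloLens implementation; the box format, threshold, and example coordinates are all illustrative) is to match the API's face rectangle to the best-overlapping local track by intersection-over-union:

```python
# Sketch: associating a stale cloud-recognition result with a live
# local track via bounding-box IoU. Values are illustrative.
from typing import List, Optional, Tuple

Box = Tuple[float, float, float, float]  # (x, y, width, height)

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def match_recognition_to_track(api_box: Box, tracked_boxes: List[Box],
                               min_iou: float = 0.3) -> Optional[int]:
    """Return the index of the local track that best overlaps the
    API's face rectangle, or None if nothing overlaps enough."""
    best_idx, best_score = None, min_iou
    for i, tb in enumerate(tracked_boxes):
        score = iou(api_box, tb)
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx

# The cloud result is slightly stale, so its box only partially
# overlaps the first track; it should still win the association.
tracks = [(100, 100, 80, 80), (300, 120, 80, 80)]
print(match_recognition_to_track((110, 105, 80, 80), tracks))  # 0
```

Once the association is made, the identity can be cached on the local track, so the mesh overlay keeps following the right face between (much less frequent) API calls.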

Accomplishments that we're proud of

  1. Got local face tracking to work
  2. Got face recognition to work
  3. Got face rendering to work
  4. Got all this to work real-time (it's a pretty big engineering feat)

What we learned

  1. We learned that HoloLens development is PC-only
  2. We learned that 3D spatial UI/UX is completely different from screen-based UI/UX

What's next for Crouton

[1] Kowalski et al., "HoloFace: Augmenting Human-to-Human Interactions on HoloLens."
