Inspiration

DARPA's Perceptually-enabled Task Guidance (PTG) research program and Anduril's EagleEye AR headset

What it does

Teaches a user how to fold a paper plane, step by step, through an AR interface on the Meta Quest 3
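
The core guidance loop can be sketched as a simple state machine: hold an ordered list of fold steps and advance only when the vision model confidently reports the current step complete. This is an illustrative stand-in, not the app's actual code; the step names, threshold, and class are all made up for the example.

```python
# Hypothetical sketch of the step-guidance loop. The app's real step list,
# labels, and confidence handling may differ.

FOLD_STEPS = [
    "fold_in_half",
    "fold_corners_to_center",
    "fold_wings_down",
]

class StepGuide:
    def __init__(self, steps):
        self.steps = steps
        self.index = 0

    @property
    def current_step(self):
        # The instruction currently shown in the HUD (None when finished).
        return self.steps[self.index] if self.index < len(self.steps) else None

    def on_detection(self, detected_label, confidence, threshold=0.8):
        """Advance only when the model confidently sees the current step done."""
        if detected_label == self.current_step and confidence >= threshold:
            self.index += 1
        return self.current_step

guide = StepGuide(FOLD_STEPS)
guide.on_detection("fold_in_half", 0.93)     # matches current step: advance
guide.on_detection("fold_wings_down", 0.95)  # out of order: no advance
```

Gating on the expected step (rather than trusting any detection) keeps a noisy model from skipping the user ahead.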

How we built it

Trained an ML vision model to recognize fold states and ran it on-device in Unity via Sentis
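
Sentis runs ONNX models inside Unity, so the integration boils down to feeding camera frames to the model and post-processing its output into a step label. The post-processing half can be sketched in pure Python as a stand-in; the labels and logit values below are invented for illustration.

```python
import math

# Illustrative post-processing of the vision model's raw output (logits)
# into a fold-step label plus confidence. In the app this runs in C# on
# tensors produced by Unity Sentis; the labels here are hypothetical.

LABELS = ["flat_sheet", "fold_in_half", "fold_corners_to_center", "fold_wings_down"]

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits, labels=LABELS):
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]

label, conf = classify([0.1, 2.3, 0.4, -1.0])  # mock logits from one frame
```

The label/confidence pair is what the guidance loop consumes to decide whether to advance the HUD to the next fold.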

Challenges we ran into

Getting good training data, working around Quest platform support gaps, and iterating on the AR interface

Accomplishments that we're proud of

It works!

What we learned

Data labelling workflows, designing HUD interactions, and UX testing in-headset

What's next for Paper Plane Folder

Broader task-guidance use cases, and potentially a "skill store" of custom task models

Built With

Unity Sentis, Meta Quest 3
