Inspiration

Origami can be a tedious process: translating what you see on a screen onto the physical paper in front of you isn't always easy.

What it does

We streamline the process with machine learning: the app identifies which folding step the user is currently on and guides them through the next step needed to successfully complete the selected origami.

How we built it

We began by creating what is likely the largest dataset of folding steps for an origami heart. With it, we trained a model to classify the specific folding step captured through the headset's camera passthrough. We then modeled and animated each corresponding next step. Finally, we built a UI that lets users select the origami they want to create.
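
For illustration, here is a minimal sketch of the resulting inference loop, assuming an Ultralytics YOLO classification model fine-tuned on our step images. The weights file, class names, and camera source are placeholders; on the headset, frames would come from the Quest passthrough rather than a local webcam.

```python
import cv2
from ultralytics import YOLO

# Placeholder path to a classification model fine-tuned on fold-step images
model = YOLO("origami_heart_steps.pt")

# Illustrative step labels; the real dataset defines the actual classes
STEP_NAMES = ["step_1_diagonal_fold", "step_2_corner_fold", "step_3_final_heart"]

cap = cv2.VideoCapture(0)  # stand-in for the headset's passthrough feed
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Classify the current frame; probs holds per-class confidence scores
    result = model.predict(frame, verbose=False)[0]
    step_idx = int(result.probs.top1)
    confidence = float(result.probs.top1conf)
    print(f"Detected {STEP_NAMES[step_idx]} ({confidence:.2f})")
cap.release()
```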

Challenges we ran into

  • Detecting the current step and integrating the detection results into the Unity app on the Meta Quest
  • Correctly predicting the current step, since some folds look very similar to each other (see the smoothing sketch below)
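
Because consecutive folds can look nearly identical frame to frame, one way to mitigate this is to smooth the raw per-frame predictions over a short window. A minimal sketch of that idea, where the window size and confidence threshold are illustrative values:

```python
from collections import Counter, deque

class StepSmoother:
    """Majority-vote over recent predictions to suppress flicker
    between visually similar folding steps."""

    def __init__(self, window: int = 15, min_conf: float = 0.6):
        self.history = deque(maxlen=window)
        self.min_conf = min_conf

    def update(self, step_idx: int, confidence: float) -> int | None:
        # Ignore low-confidence frames entirely
        if confidence >= self.min_conf:
            self.history.append(step_idx)
        if not self.history:
            return None
        # Report the step seen most often in the recent window
        step, _count = Counter(self.history).most_common(1)[0]
        return step

# Usage: feed each frame's prediction through the smoother
smoother = StepSmoother()
stable_step = smoother.update(step_idx=2, confidence=0.81)
```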

Accomplishments that we're proud of

  • Creating the most extensive dataset of folding steps for an origami heart
  • Fine-tuning a YOLO model to detect the user's current folding step (sketch below)
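
As a rough sketch of what that fine-tuning looks like with the Ultralytics API; the dataset path, epoch count, and image size are placeholders, not our exact training recipe:

```python
from ultralytics import YOLO

# Start from a pretrained classification checkpoint and fine-tune it
# on a folder-per-class dataset of fold-step photos (path is illustrative)
model = YOLO("yolov8n-cls.pt")
model.train(
    data="datasets/origami_heart_steps",  # train/ and val/ subfolders, one folder per step
    epochs=50,
    imgsz=224,
)

# Validate the fine-tuned model on the held-out split
metrics = model.val()
print(metrics.top1)  # top-1 accuracy on the validation set
```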

What we learned

  • How to rig a plane in Blender (see the sketch after this list)
  • How to fine-tune and improve existing machine learning models
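
For reference, a minimal Blender Python (bpy) sketch of that rigging setup; the subdivision count and single default bone are illustrative, and a real fold rig would place one bone per crease:

```python
import bpy

# Create the paper: a plane subdivided so it can bend along crease lines
bpy.ops.mesh.primitive_plane_add(size=2)
paper = bpy.context.active_object
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.subdivide(number_cuts=10)  # cut count is illustrative
bpy.ops.object.mode_set(mode='OBJECT')

# Add an armature; bones on the crease lines will drive the folds
bpy.ops.object.armature_add(location=(0, 0, 0))
armature = bpy.context.active_object

# Parent the paper to the armature with automatic weights so the
# bones deform the mesh when animated
bpy.ops.object.select_all(action='DESELECT')
paper.select_set(True)
armature.select_set(True)
bpy.context.view_layer.objects.active = armature
bpy.ops.object.parent_set(type='ARMATURE_AUTO')
```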

What's next for Paper Therapy

  • Expanding the model to support more types of origami
  • Improving the overall user flow

Built With

blender, meta-quest, unity, yolo