We were inspired by the work of Vladislav Petrovskiy, Geoffrey Mann, and Shinichi Maruyama, who use various mediums to capture bodies in motion. Struck by the lack of tools for generating such sculptures procedurally, we set out to build one ourselves. We wanted to use a familiar form, the human body, to make unfamiliar forms through technology that still feel organic, human, and active, elevating typically mundane activities to artistic value and physically altering a space to evoke intrigue. Our response to "Why Human" explores the collaboration of technology and the human form to speed up artistic creation, using the human body as our paintbrush and our tangible world as our medium.
What it does
We created a simple, modular tool for artists to convert motion-capture data from the Perception Neuron into models that can become sculptures, furniture, and installations. Our scripts can generate point clouds, wireframes, and solid-body models, filtering the raw data and removing redundant points to keep the process fast.
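The redundancy filtering could look something like this minimal pure-Python sketch. It assumes joint positions arrive as (x, y, z) tuples sampled across motion-capture frames; the function name and distance threshold are hypothetical, and the real scripts run inside Rhino:

```python
import math

def filter_redundant_points(points, min_dist=1.0):
    """Drop points that lie within min_dist of an already-kept point.

    points: iterable of (x, y, z) tuples sampled from mocap frames.
    Returns the thinned list, preserving input order.
    """
    kept = []
    for p in points:
        # Keep p only if it is far enough from every point kept so far.
        if all(math.dist(p, q) >= min_dist for q in kept):
            kept.append(p)
    return kept

# Example: three nearly identical samples collapse to a single point.
samples = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.2, 0.1, 0.0), (5.0, 0.0, 0.0)]
print(filter_redundant_points(samples))  # [(0.0, 0.0, 0.0), (5.0, 0.0, 0.0)]
```

A slow body part that barely moves between frames produces many near-duplicate samples, so a spatial threshold like this sharply cuts the point count before any geometry is built.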
How we built it
Our scripts were written in Python using Rhino's Python scripting API, processing .csv files of Perception Neuron motion-capture data exported from Axis Neuron.
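Parsing the export might be sketched as follows, assuming each CSV row holds one frame as flattened x, y, z triples, one triple per joint (the actual Axis Neuron column layout may differ, and `frames_from_csv` is a hypothetical helper):

```python
import csv
import io

def frames_from_csv(fileobj, joints_per_frame):
    """Parse a mocap CSV where each row is one frame of flattened
    x, y, z coordinates, one triple per joint.
    Returns a list of frames; each frame is a list of (x, y, z) tuples.
    """
    frames = []
    for row in csv.reader(fileobj):
        values = [float(v) for v in row]
        # Slice the flat row into consecutive (x, y, z) triples.
        frame = [tuple(values[i:i + 3])
                 for i in range(0, 3 * joints_per_frame, 3)]
        frames.append(frame)
    return frames

# Two frames of a hypothetical two-joint skeleton.
sample = io.StringIO("0,0,0,1,1,1\n0.5,0,0,1.5,1,1\n")
frames = frames_from_csv(sample, joints_per_frame=2)
print(frames[0])  # [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]
```

Once the frames are in this shape, each joint traces a path through space, which is what the point-cloud, wireframe, and solid-body generators consume.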
Challenges we ran into
Our biggest challenge was that nearly our entire team was completely unfamiliar with both Rhino and Python! We all picked them up quickly and are extremely proud of the work we did.
Accomplishments that we're proud of
We're proud of building a tool that produces 3D renderings of potential applications, transforming cartwheels into benches and dance into sculpture. As far as we have found, no publicly available tool produces models this way; previous similar work has revolved around long-exposure photography, 2D media, or expensive optical motion-capture setups.
What's next for Motion Sculpting
Our next steps include adding more ways for users to customize their outputs, speeding up the processes involved, and potentially integrating this concept into VR environments for live-action sculpting! We hope to add functionality that creates more recognizably human forms, even less recognizably human forms, and other tools to automate operations on generated sculptures.