Inspiration

We'd been interested in gesture recognition for a while, but proprietary hardware was always the main barrier to entry. Mozilla's WebVR work also opened up new possibilities for browser-based interactions.

What it does

Phazer pulls degrees-of-freedom (DoF) motion data from the smartphone's browser and maps those inputs to recordable gestures with the help of some ML (basically, control things with your phone in your pocket).
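
On the browser side, that boils down to listening for devicemotion events and streaming the readings somewhere useful. Here's a minimal sketch; the WebSocket URL and message shape are our illustration, not Phazer's actual wire format:

```javascript
// Browser side: stream accelerometer + gyroscope readings to a server.
// Endpoint URL and message shape are illustrative, not Phazer's actual API.
const socket = new WebSocket('wss://example.com/motion'); // hypothetical endpoint

window.addEventListener('devicemotion', (event) => {
  if (socket.readyState !== WebSocket.OPEN) return;
  const accel = event.accelerationIncludingGravity; // m/s^2, gravity included
  const rot = event.rotationRate;                   // deg/s from the gyroscope
  if (!accel || !rot) return;                       // some devices omit one or both
  socket.send(JSON.stringify({
    t: Date.now(),
    accel: { x: accel.x, y: accel.y, z: accel.z },
    rot: { alpha: rot.alpha, beta: rot.beta, gamma: rot.gamma },
  }));
});
```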

How we built it

Google Cloud is the home of our application. We're running a Node app on a Compute Engine instance to pull accelerometer and gyroscope data from a user's phone. Then, to guess what the person is doing in the real world, we feed that data to a little ML leprechaun (actually it's just an ordinary library, but it's been a long night).
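
The server half, sketched under the same assumptions (the ws package for WebSockets, and a placeholder classify function standing in wherever the real ML library plugs in):

```javascript
// Node on Compute Engine: buffer incoming motion samples into short windows
// and hand each window to a classifier. `classify` is a stand-in for the
// actual ML library call, not Phazer's real code.
const WebSocket = require('ws');

const WINDOW_SIZE = 50; // ~1 second of samples at 50 Hz; tune as needed
const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (socket) => {
  const window = [];
  socket.on('message', (raw) => {
    window.push(JSON.parse(raw));
    if (window.length >= WINDOW_SIZE) {
      const gesture = classify(window.splice(0, WINDOW_SIZE));
      socket.send(JSON.stringify({ gesture }));
    }
  });
});

// Placeholder: a real classifier would match the window against recorded
// gestures. This one just thresholds total acceleration energy as a demo.
function classify(samples) {
  const energy = samples.reduce(
    (sum, s) => sum + Math.abs(s.accel.x) + Math.abs(s.accel.y) + Math.abs(s.accel.z),
    0
  );
  return energy > 500 ? 'shake' : 'idle'; // made-up labels and threshold
}
```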

Challenges we ran into

We spent most of Saturday building a live data pipeline from Node to Python (and back to Node). We had a rough time getting Python to play nice with Node's asynchronous queries, so we overhauled everything and switched to a Node-native library for our ML needs.
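
For the curious, the abandoned bridge looked roughly like this: a long-lived Python child process fed JSON lines over stdin, with predictions read back from stdout. The script name and message framing below are a reconstruction, not our actual code, but the ordering assumption is exactly the kind of thing that gets fragile once async requests start interleaving:

```javascript
// Roughly the Node <-> Python bridge we abandoned: ship a window of samples
// to a Python child process as one JSON line, read one prediction line back.
const { spawn } = require('child_process');
const readline = require('readline');

const py = spawn('python', ['classifier.py']); // hypothetical script name
const rl = readline.createInterface({ input: py.stdout });

const pending = []; // callbacks waiting on Python, in send order

function classifyWindow(samples, callback) {
  pending.push(callback);
  py.stdin.write(JSON.stringify(samples) + '\n');
}

// Replies must come back in the same order requests went out -- the
// assumption that breaks down under concurrent async queries.
rl.on('line', (line) => {
  const next = pending.shift();
  if (next) next(JSON.parse(line));
});
```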

Accomplishments that we're proud of

We somehow ended up producing a recording track for collecting training data using, of all things, Ableton Live (worldwide tour starts next spring). Oh, and we didn't smash our phones out of anger during testing, so that's a plus.

What we learned

Node is to be feared and respected, in that order.* HTML5 can power some seriously awesome applications; plenty of its features are still going underutilized.

*Experiences may vary

What's next for Phazer

Better gesture recognition, and connecting real-world movements to the cloud to control your speakers, lights, car, home, and dog. Probably a new phase.
