What it does
How I built it
Synaptics capacitive pad -> OpenCV + scikit-learn + NumPy (machine learning) -> WebSockets -> mobile and web interface
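The write-up doesn't show the actual code, but the pipeline above can be sketched roughly: turn a raw capacitance frame into touch coordinates, then serialize them for a WebSocket broadcast. Everything here (the 8x8 frame shape, the threshold, the `frame_to_touches` name, the JSON schema) is an illustrative assumption, not the project's real interface.

```python
import json
import numpy as np

def frame_to_touches(frame: np.ndarray, threshold: float = 0.5):
    """Turn a capacitance frame into a list of touch points.

    `frame` is a hypothetical 2-D array of normalized readings;
    a real Synaptics pad exposes its data differently.
    """
    ys, xs = np.nonzero(frame > threshold)
    if len(xs) == 0:
        return []
    # Single-touch sketch: centroid of all active cells.
    return [{"x": float(xs.mean()), "y": float(ys.mean())}]

def to_message(touches):
    # JSON payload a WebSocket server could broadcast to clients.
    return json.dumps({"touches": touches})

frame = np.zeros((8, 8))
frame[2:4, 5:7] = 1.0  # simulated finger press
msg = to_message(frame_to_touches(frame))
```

A multi-touch version would cluster the active cells first (e.g. with scikit-learn) instead of taking a single centroid.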
Challenges I ran into
Asynchronous sockets. Enough said.
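For readers who haven't hit this pain point: the tricky part is coordinating a server and clients that all read and write concurrently. A minimal sketch with Python's `asyncio` streams is below; the real project used WebSockets (e.g. a library like `websockets`), so raw TCP streams and the newline-delimited JSON here are stand-ins for illustration only.

```python
import asyncio

async def handle(reader, writer):
    # Echo each newline-terminated position update back to the sender.
    data = await reader.readline()
    writer.write(data)
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    # Port 0 lets the OS pick a free port for this self-contained demo.
    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b'{"x": 5.5, "y": 2.5}\n')
    await writer.drain()
    reply = await reader.readline()
    writer.close()
    await writer.wait_closed()
    server.close()
    await server.wait_closed()
    return reply

reply = asyncio.run(main())
```

The "enough said" part: every `await` is a point where another connection can interleave, which is exactly where the debugging fun begins.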
Accomplishments that I'm proud of
The data coming from the capacitive pad is very noisy, and required a combination of machine learning and computer vision algorithms to coax into something useful.
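The core cleanup idea can be sketched in a few lines: subtract a resting-state baseline, estimate the noise floor, and keep only cells that rise well above it. The actual build chained OpenCV filtering and scikit-learn models; this NumPy-only version (including the `denoise` name and the `k` sigma cutoff) is an assumed simplification of that idea.

```python
import numpy as np

def denoise(frame: np.ndarray, baseline: np.ndarray, k: float = 4.0):
    """Keep only cells that rise at least k noise-sigmas above a
    resting-state baseline. A stand-in for the project's real
    OpenCV + scikit-learn cleanup chain."""
    delta = frame - baseline
    # Estimate the noise floor from the cells below the mean,
    # which a strong touch would otherwise inflate.
    noise = delta[delta < delta.mean()].std() + 1e-9
    return np.where(delta > k * noise, delta, 0.0)

# Deterministic demo: alternating +/-0.01 drift plus one real touch.
frame = 0.01 * (-1.0) ** np.arange(64).reshape(8, 8)
frame[3, 3] = 1.0
clean = denoise(frame, np.zeros((8, 8)))
```

On real hardware the baseline itself drifts, so it would need to be re-estimated whenever the pad is known to be untouched.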
What I learned
What's next for synapti-vision
Joining these two technologies offers a cheap way to create a pseudo-holodeck, letting the physical position of mobile hardware define interaction between devices. Multiplayer AR is a distinct possibility.
Images soon to come :D