Inspiration
> In a world of virtual reality and games built around real-time interaction with others, traditional controls often feel out of place: players have to carry controllers and other peripherals just to enter this new world. With the Kinect, we were able to overcome this issue and create a tool suited to both professional and gaming use, with your body itself as the controller.
What it does
> The controller lets a user send commands and interactions to a device via motion capture. Its intuitive design allows users to express themselves in real time, using common human gestures to drive significant in-application changes.
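At its core, this kind of controller is a mapping from recognized gestures to device commands. A minimal sketch in Python (the gesture names and command strings here are illustrative assumptions, not the project's actual set):

```python
# Hypothetical gesture-to-command table; names are placeholders for
# whatever gestures the recognizer actually emits.
GESTURE_COMMANDS = {
    "raise_right_hand": "JUMP",
    "swipe_left": "MENU_BACK",
    "lean_forward": "MOVE_FORWARD",
}

def dispatch(gesture):
    """Translate a recognized gesture into a command string, or None
    if the gesture has no binding."""
    return GESTURE_COMMANDS.get(gesture)
```

In a real implementation, the command strings would be replaced by injected key presses or application API calls.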
How we built it
> We built the controller with the Kinect API, which streams raw depth and color data. We ran that data through a clustering algorithm that groups points from the data set into a basic skeleton. Using this skeleton, we defined movements that map to interactions within a program.
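The clustering step described above can be illustrated with a toy k-means over 3D depth points, where each cluster centroid stands in for an approximate joint position. This is a simplified Python sketch of the general technique, not the project's actual C# implementation:

```python
def kmeans(points, k, iters=20):
    """Toy k-means: group 3D depth points into k clusters whose
    centroids approximate body-joint positions. Uses deterministic
    initialization (evenly spaced seed points) for reproducibility."""
    centroids = [points[i * len(points) // k] for i in range(k)]
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((p[d] - centroids[c][d]) ** 2
                                      for d in range(3)))
            clusters[i].append(p)
        # Recompute each centroid as the mean of its assigned points.
        for c, members in enumerate(clusters):
            if members:
                centroids[c] = tuple(sum(m[d] for m in members) / len(members)
                                     for d in range(3))
    return centroids
```

The real pipeline would then connect these joint estimates into a skeleton and track them frame to frame.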
Challenges we ran into
> Originally we planned to analyze basketball shots, spotting any possible mistakes in the shooter's form. However, we faced a slew of major issues at the start: our hard drive broke, we had no internet, and the only thing we had accomplished by dinner was rendering a skeleton. After setting up the basics and testing in various situations with our Xbox 360 Kinect, we discovered that the Kinect 360 is not compatible with Blender, the 3D animation software we intended to use for rigging motion-capture characters. This forced us to change course and focus on what was achievable, as we were now pressed for both time and attention. We ran through a list of ideas we could still pull off that would benefit a wide portion of the population, and eventually landed on system controls, which led to our final project.
Accomplishments that we're proud of
> We're proud of our ability to smoothly and efficiently track user movements and translate them into game controls.
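Raw per-frame joint positions from a depth sensor are noisy, so smooth tracking typically involves a filter. One common choice is an exponential moving average; a minimal sketch (an assumed approach for illustration, since the writeup doesn't specify the filter used):

```python
class JointSmoother:
    """Exponential moving average over per-frame joint positions.
    alpha near 1 trusts new frames more; alpha near 0 smooths harder."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.state = None  # last smoothed (x, y, z) position

    def update(self, pos):
        if self.state is None:
            self.state = pos  # first frame: no history to blend with
        else:
            self.state = tuple(self.alpha * new + (1 - self.alpha) * old
                               for new, old in zip(pos, self.state))
        return self.state
```

One smoother per tracked joint keeps jitter out of the gesture recognizer at the cost of a small amount of latency.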
What we learned
> Using motion to play a more complex game was more fun than we imagined.
> How to integrate APIs in C#
> The creative process of working with hardware and troubleshooting it
What's next for Automaton
> Creating an interface that lets the program work with a variety of games and applications, along with a built-in calibrator to make the motion capture even more accurate. Working with real-time simulations such as surgeries and delicate assemblies. Controlling or interfacing with robots through this technology.