Final Unity Build. The motion detection regions are highlighted blue.
Image target with tracking features highlighted
Inspiration
As busy university students, it's become harder and harder for us to find time for music. KeyboARd stems from a desire to be able to drop everything, relax, and jam out like we once could. While it certainly won't let you play the intricate rhythms you would on a classic piano, KeyboARd still lets you de-stress and at least enjoy some classic Twinkle Twinkle Little Star.
What it does
Wearing Google Cardboard, the user "plays" music on a basic printed diagram of piano keys. KeyboARd tracks the paper through natural feature detection, reads the position of each virtual key, and plays a note when the user's finger moves over it. KeyboARd tracks the virtual piano from any orientation, and helpfully displays a set of silver notes when the piano is active.
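The core mapping can be sketched in plain C#: once the paper keyboard is tracked, a fingertip's horizontal position in the target's local coordinates picks out a key. This is an illustrative sketch, not the project's actual code; `NoteAt` and the seven-note layout are assumptions.

```csharp
using System;

static class KeyMapper
{
    static readonly string[] Notes = { "C", "D", "E", "F", "G", "A", "B" };

    // x: fingertip position along the tracked target, in 0..width.
    // Divides the keyboard's width into equal regions, one per key.
    public static string NoteAt(double x, double width)
    {
        int key = (int)(x / width * Notes.Length);
        key = Math.Min(Math.Max(key, 0), Notes.Length - 1); // clamp to a valid key
        return Notes[key];
    }
}
// Example: NoteAt(0.0, 7.0) -> "C"; NoteAt(6.5, 7.0) -> "B"
```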
How we built it
KeyboARd stems from three technologies: Vuforia, Unity3D with Android Development Tools, and Google Cardboard.
Vuforia is an augmented reality SDK that allows developers to position virtual objects in relation to real-world images captured through a device camera. We used Vuforia to manage a database of image targets (real-world images that will be used to position virtual objects). In this case, we created a database for the piano diagram, which we then tracked in real time.
Vuforia works inside Unity, a 3D game engine. We used Unity's environment and component system to build a project with the tools Vuforia provided. In Unity, we were able to connect the AR camera and the target keyboard with audio and user interaction. Unity's C# scripting tools allowed us to write connected scripts that managed the interactions of buttons, audio, and positioning. The asset store supplied 3D models which allowed us to give our "piano" the final decorative touches. Unity's Android tools allowed us to build and run the entire project on our smartphones.
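A minimal sketch of the kind of script involved, assuming Vuforia's classic virtual button event API: a MonoBehaviour registers as an event handler and plays an AudioSource when a finger occludes a key's region. `PianoKey` and `note` are illustrative names, this is not the project's actual source, and the exact Vuforia types vary by SDK version.

```csharp
using UnityEngine;
using Vuforia;

public class PianoKey : MonoBehaviour, IVirtualButtonEventHandler
{
    public AudioSource note;  // the clip for this key, assigned in the Inspector

    void Start()
    {
        // Register this handler with every virtual button under the image target.
        foreach (var vb in GetComponentsInChildren<VirtualButtonBehaviour>())
            vb.RegisterEventHandler(this);
    }

    public void OnButtonPressed(VirtualButtonBehaviour vb)
    {
        note.Play();  // finger covers the printed key region -> sound the note
    }

    public void OnButtonReleased(VirtualButtonBehaviour vb) { }
}
```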
We modified a Google Cardboard to allow for Augmented Reality.
Challenges we ran into
The first real-world image we tried to map our virtual objects onto was a generic blank piano keyboard. The lack of any distinguishing features led to inaccurate image detection and tracking, and unreliable virtual buttons for the piano keys. We improved accuracy by adding more distinguishing features to our keyboard diagram, allowing for precise positioning of the virtual buttons.
What we're proud of
Prior to today, none of us had ever used Google Cardboard, Unity, or Vuforia for development. Even minor problems often meant hours of research due to our unfamiliarity. We're very proud to have developed not only a finished product, but one that works well.
What's next for KeyboARd
We would like to incorporate OpenCV, a powerful open-source computer vision library that Winston has prior experience with. This would allow us to tackle challenges like refining the detection of black keys, something that was difficult for us to implement. More powerful image processing would also allow us to begin looking at details like hand positions and orientations to refine the detection process.