Here at Duke, we have a whole host of world-class amenities, from sports facilities to research laboratories to spaces dedicated solely to the arts. Many people across the United States and the world, however, don't have the time or the money to do things we take for granted, like sitting down at a piano and playing some music. Paper Play aims to address this problem by providing people with a low-cost piano alternative that requires only a smartphone, while still offering the tactile experience of a real piano.
What it does
Paper Play lets the user mount their smartphone on a simple stand (or a couple of books, or a shoebox), point the camera at a pre-printed sheet of paper covered in funny cartoon faces, and then make music by covering those faces with their fingers. Users can choose between playing single notes and chords, and there's even a recording feature so they can listen back to what they just played!
How we built it
We built our mobile app with React Native and Expo, using the Expo camera API to process the live image stream.
Challenges we ran into
We originally wanted a simpler keyboard-style printout for our paper keyboard, but it turned out to be really hard, using the camera APIs available in React Native, to reliably detect whether a user's finger is touching a piece of paper at a certain location. Instead, we used the Expo camera API's face-detection feature to detect cartoon faces on the piece of paper, and hacked together a way to determine which note was being pressed by figuring out which faces were visible and which weren't.
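The core of that hack can be sketched as pure logic: each printed face sits in a known horizontal region of the camera frame, and a key counts as "pressed" when no detected face lands in its region. This is an illustrative sketch, not our actual source; the names (`FaceKey`, `pressedNotes`) and the eight-key layout are made up for the example.

```typescript
// One printed cartoon face per key, identified by the horizontal slice of the
// camera frame where that face is expected to appear (x normalized to 0..1).
interface FaceKey {
  note: string; // e.g. "C4"
  minX: number; // left edge of the face's region
  maxX: number; // right edge of the face's region
}

// Hypothetical one-octave layout: eight evenly spaced faces across the page.
const KEYS: FaceKey[] = [
  { note: "C4", minX: 0.0, maxX: 0.125 },
  { note: "D4", minX: 0.125, maxX: 0.25 },
  { note: "E4", minX: 0.25, maxX: 0.375 },
  { note: "F4", minX: 0.375, maxX: 0.5 },
  { note: "G4", minX: 0.5, maxX: 0.625 },
  { note: "A4", minX: 0.625, maxX: 0.75 },
  { note: "B4", minX: 0.75, maxX: 0.875 },
  { note: "C5", minX: 0.875, maxX: 1.0 },
];

// Given the normalized x-centers of the faces the detector currently sees,
// a key is "pressed" exactly when its face is missing (covered by a finger).
function pressedNotes(visibleFaceXs: number[]): string[] {
  return KEYS
    .filter(k => !visibleFaceXs.some(x => x >= k.minX && x < k.maxX))
    .map(k => k.note);
}
```

In the real app the x-centers would come from each frame's face-detection callback; the point of the sketch is that "which note is pressed" reduces to a set difference between expected and detected faces.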
Accomplishments that we're proud of
- Hacking the Expo camera face detection feature to make our keyboard
- Designing an algorithm to associate notes with keys and ensure only one sound is made per key press
- Ability to change playing modes (e.g. single notes or chords)
- Ability to record segments of playing
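The "one sound per key press" idea above boils down to edge detection between frames: a note should fire only on the frame where its key goes from released to pressed, so a finger held over a face doesn't retrigger the sound every frame. A minimal sketch of that idea (hypothetical names, not our exact code):

```typescript
// Compare the pressed-key sets from consecutive camera frames and return
// only the notes that just became pressed. Holding a key produces no new
// notes; the sound fires once, on the press transition.
function newPresses(
  prevPressed: Set<string>,
  currPressed: Set<string>,
): string[] {
  return [...currPressed].filter(note => !prevPressed.has(note));
}
```

Each returned note would then be handed to the audio layer to play exactly once.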
What we learned
- How to create a React Native application from scratch
- React Native camera APIs
What's next for Paper Play
- A camera API that works for any key design, not just faces
- Sound-mixer-like capability so users can play over sounds they just recorded, possibly in a different key
- More keys and faster response to key presses
- Cuter UI
- More possible sounds