Check out our website, and make sure to turn on your volume!
Inspiration
We wanted to make something silly and interactive. Brainstorming ways that a user can interact with a product, we settled on visual input. Getting a chance to work with some new libraries and stepping out of our comfort zone sounded like fun, so we spitballed until we landed on the concept for Lip Synth.
What it does
Lip Synth performs facial tracking to determine how wide open your mouth is, and plays a matching note on the selected instrument! Lip Synth comes with several options to change your scale, instrument, and sensitivity! You can also log into Spotify and search for a song, and Lip Synth will show you how to play it with your mouth (it's not the easiest to master, though).
How we built it
We started by using a package called face-api.js that allowed us to use facial tracking with the user's webcam. Once we had access to the face measurements, we hooked up Tone.js with some samples and created several systems to load/change instrument samples, change musical keys, and determine notes based on the shape of the user's mouth.
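The core of that last step — turning the shape of the user's mouth into a note — can be sketched roughly like this. This is a minimal illustration, not the actual Lip Synth source: the landmark shape, helper names, and the `sensitivity` constant are our assumptions, and the real app would feed the chosen note into a Tone.js sampler rather than log it.

```javascript
// Illustrative sketch of mapping mouth openness to a note (not the
// project's exact code).

// One octave of C major — standing in for the user's selected scale.
const C_MAJOR = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5"];

// Mouth openness as the lip gap divided by face height, so the value
// is roughly independent of how close the user sits to the webcam.
function mouthOpenness(topLip, bottomLip, faceHeight) {
  return Math.abs(bottomLip.y - topLip.y) / faceHeight;
}

// Quantize an openness ratio onto a degree of the selected scale.
// `sensitivity` is the openness that maps to the top note.
function opennessToNote(openness, scale, sensitivity = 0.25) {
  const t = Math.min(openness / sensitivity, 1); // clamp to [0, 1]
  const index = Math.round(t * (scale.length - 1)); // nearest degree
  return scale[index];
}

// A closed mouth plays the lowest note, a wide-open one the highest.
console.log(opennessToNote(0.0, C_MAJOR)); // "C4"
console.log(opennessToNote(0.25, C_MAJOR)); // "C5"
```

In the real app, the returned note name would be passed to something like a Tone.js sampler's trigger call each time the quantized degree changes.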
Next, we wrote a simple algorithm to anticipate how wide open a user's mouth should be to play a given note, and drew a corresponding ellipse in the webcam display for them to fit their mouth to. With this implemented, we made our final addition and connected to the Spotify API to pull songs and song data so we could display the notes being played in a song. After one or two final edits, you can now learn to play your favorite songs on Lip Synth.
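That anticipation step is just the note-to-mouth mapping run in reverse: given the note the song wants next, compute the openness the user should hit and size the guide ellipse from it. The sketch below uses illustrative names and made-up constants, not the project's exact code:

```javascript
// Inverse mapping sketch: what openness does a given note require,
// and how big should the guide ellipse be? (Illustrative only.)

const C_MAJOR = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5"];

// Openness ratio a user must reach to play `note` in `scale`.
// `sensitivity` is the openness that maps to the top note.
function noteToOpenness(note, scale, sensitivity = 0.25) {
  const index = scale.indexOf(note);
  if (index === -1) throw new Error(`${note} is not in the scale`);
  return (index / (scale.length - 1)) * sensitivity;
}

// Ellipse to overlay on the webcam feed: width follows the detected
// mouth width, height follows the required openness.
function guideEllipse(note, scale, mouthWidthPx, faceHeightPx) {
  const openness = noteToOpenness(note, scale);
  return {
    radiusX: mouthWidthPx / 2,
    radiusY: (openness * faceHeightPx) / 2,
  };
}

// The top note of the scale asks for the tallest ellipse.
console.log(guideEllipse("C5", C_MAJOR, 80, 200).radiusY); // 25
```

The user then "plays" the song by matching their mouth to each ellipse as the song's notes scroll by.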
Challenges we ran into
We ran into a series of roadblocks along our journey. The first and foremost was the long hours. After coding for twelve hours in a row, it became easy to make silly mistakes in our code (JavaScript was in no mood to help us find them either). Thankfully, we had each other to perform what I like to call "the stupid idiot check," where one person looks over the other's shoulder and says, "you never actually called your function." On a more technical front, packages can be frustrating, especially when they aren't all compatible with package managers. Trying to collect packages and place them in the right spots with outdated documentation led us down several rabbit holes, but we managed to find our way out. Asynchronous behavior gave us some issues as well, but nothing a few short-term hacks couldn't fix. Also, Spotify OAuth was not happy this weekend.
Accomplishments that we're proud of
Everything. No regrets.
Especially making people look like clowns to unsuspecting passersby :).
What we learned
Technology-wise, some of us were new to Node.js, and none of us were familiar with any of the libraries — we always learn a little from using new ones. We also learned about how synthesizers and samplers work for music, which was quite different from our usual topics.
@[Sponsor] CoStar: Thanks for sponsoring and the panel, but...
What's next for Lip Synth
We would love to further refine the way in which Lip Synth processes your oral input. Currently it can be tricky to sustain a single note with your mouth naturally moving; in the future, we would improve the analysis of the tracked movement to better separate intentional input from natural motion. We would also like to finish setting up Spotify auto-play so that you can hear the song you are learning as you Lip Synth along! Lastly, we would love to implement multi-user functionality; the framework is there, it just needs to be put in order.
And most importantly, we would like to thank the one and only @[Organizer] Zach (he/him) for his great honor and valor during this event.
Built With
- face-api.js
- heroku
- html5
- javascript
- node.js
- spotify-web-api-js
- tone.js
