Music Mask

Let us pick your next song.

Version 1.0

Music Mask is a web application that uses facial expression recognition to determine your current mood and picks a song to match it.

(Screenshot: Happy Face)

Tech

Music Mask uses a number of APIs and platforms to work properly:

  • Node.js - Evented I/O for the backend
  • Express.js - Framework used to build the REST-based backend
  • Meteor.js - Full-stack framework for building apps that run JavaScript on both client and server
  • jQuery - Obvious things are obvious
  • C++ - Gabor Bank, Support Vector Machine, Carnegie Mellon Classifiers
  • SoundCloud - Kickass music player

How the Backend Works

The software has three main parts: the web frontend, the Node.js server, and a native C++ analysis program. The user first takes a picture, and the base64 representation of that image is sent to the Node.js server. The server crops the image to half its size to speed up processing and saves it as a jpg file. That file is then passed to a C++ OpenCV program, which performs the emotion analysis and sends back a list of detected emotions with their probabilities.
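
The exact contract between the server and the native program isn't shown here, so the following is only a rough sketch of one plausible interface: the C++ program takes the jpg path as a command-line argument and prints one emotion/score pair per line for the Node.js server to parse. The analyzeEmotions placeholder and the output format are assumptions for illustration, not code from the project.

#include <cstdio>
#include <map>
#include <string>
#include <opencv2/opencv.hpp>

// Stand-in for the real Gabor/SVM pipeline described in the next section;
// it should return one score per detected emotion.
std::map<std::string, double> analyzeEmotions(const cv::Mat& face) {
    (void)face;                      // placeholder: real analysis goes here
    return {{"happiness", 0.0}};
}

int main(int argc, char** argv) {
    if (argc < 2) {
        std::fprintf(stderr, "usage: %s <image.jpg>\n", argv[0]);
        return 1;
    }

    // Load the cropped jpg that the Node.js server saved to disk.
    cv::Mat face = cv::imread(argv[1]);
    if (face.empty()) {
        std::fprintf(stderr, "could not read %s\n", argv[1]);
        return 1;
    }

    // Print one "<emotion> <score>" pair per line; the Node.js server reads
    // stdout and turns it into the list it sends back to the browser.
    for (const auto& entry : analyzeEmotions(face)) {
        std::printf("%s %.4f\n", entry.first.c_str(), entry.second);
    }
    return 0;
}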

The emotion analysis has two stages. First, the image is pre-processed: it is converted to grayscale and rescaled to match the size of the training dataset. Then a Gabor filter bank is used to build feature vectors from the face, which is located using the classifiers from Carnegie Mellon. These feature vectors are passed to a Support Vector Machine, which determines the probability of each of 8 emotional states from the image matrix.
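
As a rough illustration of that pipeline, here is a minimal OpenCV sketch. The training-image size, the Gabor filter parameters, the emotion label set, and the per-emotion SVM model files are all assumptions made for the example, and the face-localization step with the Carnegie Mellon classifiers is omitted.

#include <map>
#include <string>
#include <vector>
#include <opencv2/opencv.hpp>
#include <opencv2/ml.hpp>

// Pre-processing: convert to grayscale and rescale to the training size.
cv::Mat preprocess(const cv::Mat& face) {
    cv::Mat gray, scaled;
    cv::cvtColor(face, gray, cv::COLOR_BGR2GRAY);
    cv::resize(gray, scaled, cv::Size(48, 48));   // assumed training size
    return scaled;
}

// Feature extraction: filter the face with a bank of Gabor kernels at
// several orientations and flatten the responses into one feature vector.
cv::Mat gaborFeatures(const cv::Mat& gray) {
    std::vector<float> features;
    for (int i = 0; i < 8; ++i) {
        double theta = i * CV_PI / 8.0;           // assumed 8 orientations
        cv::Mat kernel = cv::getGaborKernel(cv::Size(21, 21), 4.0, theta,
                                            10.0, 0.5, 0.0, CV_32F);
        cv::Mat response;
        cv::filter2D(gray, response, CV_32F, kernel);
        features.insert(features.end(), response.begin<float>(), response.end<float>());
    }
    return cv::Mat(features, true).reshape(1, 1); // one row = one sample
}

// Classification: score the feature vector against the 8 emotional states,
// modelled here as one binary SVM per emotion queried for its raw decision value.
std::map<std::string, float> classify(const cv::Mat& face) {
    static const std::vector<std::string> emotions = {
        "anger", "contempt", "disgust", "fear",
        "happiness", "neutral", "sadness", "surprise"};  // assumed label set
    cv::Mat sample = gaborFeatures(preprocess(face));
    std::map<std::string, float> scores;
    for (const auto& emotion : emotions) {
        // One model file per emotion is an assumption for this sketch.
        cv::Ptr<cv::ml::SVM> svm = cv::ml::SVM::load(emotion + "_svm.xml");
        cv::Mat result;
        svm->predict(sample, result, cv::ml::StatModel::RAW_OUTPUT);
        scores[emotion] = result.at<float>(0, 0);
    }
    return scores;
}

These per-emotion scores are what the Node.js server parses and forwards to the frontend.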

All of this is held together by Node.js and Meteor.js, which talk to the native C++ program.

Prerequisites

  • Node.js and npm
  • Meteor.js
  • OpenCV and a C++ compiler for the native emotion analysis program

Installation

git clone https://github.com/rkrishnan2012/HackHolyoke hackholyoke
cd hackholyoke
npm install

Starting the server

cd hackholyoke
meteor --port 3000

Point your favorite browser to http://localhost:3000 and enjoy Music Mask!

The Frontend

The frontend is a Meteor.js app served from this repository; the C++ emotion recognition behind it is based on emotime: https://github.com/luca-m/emotime

License

MIT
