We wanted to build something cool, yet challenging, to stretch our skills and test what we are capable of. Each member of our team brings a different specialty: hardware, database design, algorithms, or machine learning. This project combines all of these skills in a way that let us work together and create something we are proud of.

What it does

moodDisco first provides an API through which a user can submit a word and receive that word's mood. The mood is described by four numbers, each on a scale from -0.5 to +0.5; the lower the number, the more negative the mood. The four axes are happiness/sadness, interest/boredom, love/hate, and excitement/calmness. On the client side, these four scores can be combined into a more specific mood such as romance, anticipation, or melancholy. These moods can then be used however the client desires.
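To make the client-side step concrete, here is a minimal sketch of how the four axis scores might be combined into a named composite mood. The field names, thresholds, and composite labels are illustrative assumptions, not the project's actual mapping.

```python
# Hypothetical sketch: derive a composite mood label from the four
# axis scores returned by the moodDisco API. Field names, thresholds,
# and composite names are illustrative assumptions.

def composite_mood(scores):
    """scores: dict with the four axes, each a value in [-0.5, +0.5]."""
    love = scores["love_hate"]
    excite = scores["excitement_calmness"]
    happy = scores["happiness_sadness"]
    interest = scores["interest_boredom"]

    if love > 0.25 and excite < 0.0:
        return "romance"        # loving but calm
    if interest > 0.25 and excite > 0.0:
        return "anticipation"   # interested and excited
    if happy < -0.25 and excite < 0.0:
        return "melancholy"     # sad and calm
    return "neutral"

# Example: a word scored as calm and loving maps to romance.
print(composite_mood({"happiness_sadness": 0.1, "interest_boredom": 0.0,
                      "love_hate": 0.4, "excitement_calmness": -0.2}))
# → romance
```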

In our case, we chose to represent moods with light and sound. We imagined a group of friends sitting around and talking; moodDisco provides a background ambiance that does not just describe the mood of the room but builds on it. The lights turn red when it detects romance, and soft, calm music plays when it detects boredom. This combination of our API and our hardware implementation makes moodDisco an interesting, fun, and useful tool for shaping a background environment.

How we built it

moodDisco required a diverse set of skills. To detect the mood, we first had to detect which words were being said. This was implemented on a Raspberry Pi using a Google API: the voice is picked up by the webcam's microphone and converted to text. This was difficult at first because of our limited knowledge of network interfaces, but after much work and many different interfaces we got the communication working well. We also attempted to use the Watson API from an iPhone app; despite our best efforts, the data transfer was unsuccessful.
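Once the recognizer returns a transcript, each word has to be extracted before it can be scored individually. A minimal sketch of that step (the actual tokenization the project used is not described, so this normalization is an assumption):

```python
# Sketch: normalize a speech-to-text transcript into individual
# lowercase words, each of which can then be sent to the mood API.
# The exact tokenization used on the Pi is not documented; this is
# one simple approach.
import re

def words_for_mood_lookup(transcript):
    """Split a transcript into lowercase words (keeping apostrophes)."""
    return re.findall(r"[a-z']+", transcript.lower())

print(words_for_mood_lookup("I love this, it's so exciting!"))
# → ['i', 'love', 'this', "it's", 'so', 'exciting']
```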

The second part of the problem was converting the text to a mood. After searching the internet for a pre-trained set of per-word sentiment descriptors, we concluded that it would be best to implement one ourselves. We found plenty of research on detecting whether blocks of text are positively or negatively connoted, but nothing on whether a single word is happy, sad, interesting, and so on. We started from a set of pre-trained weights from GloVe: Global Vectors for Word Representation, from Stanford University. This provided a 300-dimensional vector description of each word, trained on data sets such as Twitter and Wikipedia. We then hand-labeled 300 data points, indicating each word's sentiment in each category, and used them to train a feed-forward neural network with layer sizes of 300, 1024, 1024, and 4 to detect the mood.
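The network's forward pass can be sketched as follows. Only the layer sizes (300, 1024, 1024, 4) come from the writeup; the activation functions, the output scaling to [-0.5, +0.5], and the random stand-in weights are assumptions for illustration.

```python
# Minimal sketch of the mood network's forward pass: a feed-forward
# net with layer sizes 300 -> 1024 -> 1024 -> 4, mapping a GloVe word
# vector to the four mood scores. ReLU hidden layers and the tanh
# output scaling are assumptions; randomly initialized weights stand
# in for the trained ones.
import numpy as np

rng = np.random.default_rng(0)

W1 = rng.normal(0, 0.02, (300, 1024)); b1 = np.zeros(1024)
W2 = rng.normal(0, 0.02, (1024, 1024)); b2 = np.zeros(1024)
W3 = rng.normal(0, 0.02, (1024, 4));    b3 = np.zeros(4)

def mood_scores(glove_vec):
    """glove_vec: shape-(300,) embedding -> four scores in [-0.5, +0.5]."""
    h1 = np.maximum(0, glove_vec @ W1 + b1)   # ReLU hidden layer 1
    h2 = np.maximum(0, h1 @ W2 + b2)          # ReLU hidden layer 2
    return 0.5 * np.tanh(h2 @ W3 + b3)        # squash to [-0.5, +0.5]

scores = mood_scores(rng.normal(size=300))
print(scores.shape)  # (4,)
```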

Once the sentiment values for each word were computed, they were loaded into a database for the server to look up on each query. By doing all the calculations ahead of time, we get much higher performance out of the server's API. Additionally, since the values are stored in a traditional RDBMS with a sorted index, a binary search over all ~60,000 words further reduces the time needed to fetch a word's mood. The backend for the API was written in PHP, talking to a MySQL database.

Along with the API, we pointed our domain name at a website showcasing the project's most salient features. The website lets users query our mood/word database directly and view a chart of their historical mood, word by word.

Both the API and the website are hosted on a single Amazon Web Services EC2 instance.

From there, the sentiment data is sent to our Raspberry Pi, which collapses the four-field mood into a single mood. We mapped a large number of moods to colors as well as sounds; the light colors and sounds cross-fade as the mood detected by the Pi changes.
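One simple way to collapse the four-field mood into a single label is to pick the dominant axis and its sign, then look up a color and sound. The actual mapping table on the Pi is not given, so every entry below is illustrative.

```python
# Hypothetical sketch of the Pi-side step: collapse the four mood
# scores into one labeled mood with an associated RGB color and sound
# file. The table entries are illustrative, not the project's mapping.
AXES = ["happiness", "interest", "love", "excitement"]
MOOD_TABLE = {
    ("happiness",  +1): ("joy",        (255, 220,   0), "upbeat.wav"),
    ("happiness",  -1): ("melancholy", ( 40,  40, 120), "slow.wav"),
    ("interest",   +1): ("curiosity",  (  0, 200, 120), "bright.wav"),
    ("interest",   -1): ("boredom",    (120, 120, 120), "calm.wav"),
    ("love",       +1): ("romance",    (255,   0,  60), "soft.wav"),
    ("love",       -1): ("anger",      (180,   0,   0), "tense.wav"),
    ("excitement", +1): ("excitement", (255, 120,   0), "fast.wav"),
    ("excitement", -1): ("serenity",   (  0, 160, 200), "ambient.wav"),
}

def collapse(scores):
    """scores: four values in [-0.5, +0.5] -> (mood, rgb, sound file)."""
    i = max(range(4), key=lambda k: abs(scores[k]))  # dominant axis
    sign = 1 if scores[i] >= 0 else -1
    return MOOD_TABLE[(AXES[i], sign)]

print(collapse([0.1, -0.05, 0.45, -0.2]))  # romance: red light, soft music
```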

Just as a fun little side project, we 3D printed a case for our circuitry, power supply, and Pi.

Accomplishments that we are proud of

We are proud of many aspects of our project. First, our speech-to-text pipeline on the Pi: although we were unable to port it to the iPhone, we learned a lot about server communication in the process. Second, our text-to-mood translation: it took considerable effort to get the neural network and its gradients working, but testing the learned weights made it clear they capture a lot of information about mood. Third, our database, website, and API are elegantly designed to give users a friendly interface. Lastly, our representation of mood as a combination of light and sound: compiling a list of suitable songs took real effort, and the implementation itself was quite difficult as well.


moodDisco is a background synthesizer applicable in many different environments, from homes to restaurants and beyond. The API gives users the ability to connect many different devices, allowing for extensive customization.

What's next for Mood

In the future, moodDisco hopes to bring more connectivity between devices such as iPhone and Android, allowing for a much larger user base.
