MoodSing

Inspiration

With the growing availability of real-time biological data from wearables comes a new way to interact with entertainment. The idea for MoodSing stems from an attempt to explore this technological frontier and push the limits of our conceptions of modern entertainment.

What It Does

With the Emotiv headset, MoodSing tracks EEG signals to let the user create music with their mind. Using the abstracted metrics provided by the Emotiv SDK, the MoodSing engine combines measurements of the user's frustration, excitement, and valence, then generates arpeggios whose properties (major/minor mode, tempo, volume) all depend on those readings.
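As a concrete illustration, here is a minimal TypeScript sketch of what such a metric-to-arpeggio mapping might look like. The thresholds, scaling factors, and function names below are assumptions for illustration, not MoodSing's actual values.

```typescript
// Illustrative sketch: map abstracted EEG metrics to arpeggio properties.
// All thresholds and scaling factors here are assumed, not MoodSing's real ones.

interface EEGMetrics {
  frustration: number; // 0..1, abstracted metric from the Emotiv SDK
  excitement: number;  // 0..1
  valence: number;     // 0..1
}

interface Arpeggio {
  mode: "major" | "minor";
  bpm: number;     // tempo
  gain: number;    // volume, 0..1
  notes: number[]; // MIDI note numbers
}

function arpeggioFromMetrics(m: EEGMetrics, rootNote = 60): Arpeggio {
  // Low valence -> minor triad, high valence -> major triad.
  const mode = m.valence >= 0.5 ? "major" : "minor";
  const third = mode === "major" ? 4 : 3;

  return {
    mode,
    // Higher excitement speeds the arpeggio up.
    bpm: 60 + Math.round(m.excitement * 120),
    // Frustration pushes the volume up.
    gain: 0.3 + 0.7 * m.frustration,
    // Root, third, fifth, and octave of the chosen triad.
    notes: [rootNote, rootNote + third, rootNote + 7, rootNote + 12],
  };
}
```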

The future of this product is mood-based playlists. Given a dataset of musical sequences that correspond to particular moods, we could auto-generate electronic music tailored to improve "focus" or increase "relaxation" based on one's actual EEG readings.

Technology

Emotiv: the source of the EEG values used to generate tone sequences
Oculus Rift: renders a rough 3D model of the Stanford main quad whose environment changes with the EEG values
Parse: EEG values are obtained through a C++ backend and pushed to Parse, where the JS frontend can access the data (see the sketch after this list)
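A minimal sketch of the frontend half of that pipeline, using the Parse JS SDK. The "EEGReading" class name and its fields are assumptions for illustration; the actual schema the C++ backend writes is not specified here.

```typescript
// Illustrative sketch: the JS frontend polling Parse for the newest EEG
// reading pushed by the C++ backend. Class and field names are assumed.
import Parse from "parse";

Parse.initialize("APP_ID", "JS_KEY"); // placeholder credentials

const EEGReading = Parse.Object.extend("EEGReading");

async function latestMetrics() {
  const query = new Parse.Query(EEGReading);
  query.descending("createdAt"); // newest reading first
  const reading = await query.first();
  if (!reading) return null;
  return {
    frustration: reading.get("frustration"),
    excitement: reading.get("excitement"),
    valence: reading.get("valence"),
  };
}

// Poll once a second and hand the metrics to the music and visual layers.
setInterval(async () => {
  const m = await latestMetrics();
  if (m) console.log("latest EEG metrics:", m);
}, 1000);
```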
