Inspiration

For some people, going to music concerts and performances is a wonderful experience. For others, sensitivity to certain qualities of the sound at these events can ruin the experience. The goal of this project is to optimize each performance goer's sound experience in real time by monitoring key biometrics on a person-by-person basis.

What it does

The system covers the experience from the moment a person enters a music or performance venue until the end of the show. As a prerequisite, the end-users, identified as event goers, ideally wear a Muse brain-sensing headband and/or a Myo gesture armband. Each end-user first checks in at a kiosk using an NFC reader for their phone and a camera that estimates their likely age and gender. Once they are inside the venue, data from the Muse and Myo bands is combined with the check-in data to tailor the sound effects in a localized region around that specific end-user.
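As a minimal sketch of the check-in step, the flow below pairs an NFC tag UID with an estimated demographic profile; read_nfc_uid and estimate_age_gender are hypothetical stand-ins for the real reader and camera pipeline, not our production code.

```python
# Minimal sketch of the kiosk check-in flow. read_nfc_uid() and
# estimate_age_gender() are hypothetical stand-ins for the real NFC
# reader and camera/Watson pipeline.
profiles = {}  # NFC tag UID -> demographic profile

def read_nfc_uid():
    # Stand-in: a real kiosk would poll the NFC reader here.
    return "04:A2:2B:19:6F:80:01"

def estimate_age_gender(photo_path):
    # Stand-in: the real kiosk sends the photo to Watson Visual Recognition.
    return {"age_min": 20, "age_max": 29, "gender": "female"}

def check_in(photo_path):
    uid = read_nfc_uid()
    profiles[uid] = estimate_age_gender(photo_path)
    return uid

if __name__ == "__main__":
    uid = check_in("kiosk_photo.jpg")
    print(uid, profiles[uid])
```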

How we built it

We started by experimenting with sensing systems that were new to us, the Myo and Muse bands, and learning to access their data over wireless networks. Then, drawing on previous experience with DSPs and embedded systems, we retrieved useful data from the bands and integrated all of our hardware into a miniature-scale version of the full workflow.
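For the wireless side, here is a minimal sketch of receiving Muse readings, assuming the headband is streaming OSC messages to the laptop (for example through the muse-io bridge) on port 5000; the exact OSC address paths vary by bridge and firmware version.

```python
# Minimal sketch: receive Muse EEG band-power readings over OSC.
# Assumes the headband is streaming OSC to this machine on port 5000
# (e.g., via the muse-io bridge); exact address paths vary by version.
from pythonosc import dispatcher, osc_server

def on_alpha(address, *values):
    # One absolute alpha-power value per EEG channel.
    print(address, values)

disp = dispatcher.Dispatcher()
disp.map("/muse/elements/alpha_absolute", on_alpha)

server = osc_server.ThreadingOSCUDPServer(("0.0.0.0", 5000), disp)
print("Listening for Muse OSC packets on udp/5000 ...")
server.serve_forever()
```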

Challenges we ran into

Of the two biometric sensing bands, the Muse had poor macOS support for development, but we were able to overcome this within a short period of time. We had also planned to use a DragonBoard 410c for the kiosk, but our team could find little support for the serial communication we needed (such as serial ports for certain protocols), so we swapped it for a Raspberry Pi, which had all the functionality we required.
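On the Raspberry Pi, serial access was straightforward; here is a sketch using pyserial, where the device path and baud rate are assumptions that depend on the specific NFC reader.

```python
# Minimal sketch: read tag IDs from a serial-attached NFC reader on a
# Raspberry Pi. The device path and baud rate are assumptions and will
# depend on the specific reader.
import serial

with serial.Serial("/dev/serial0", 9600, timeout=1) as port:
    while True:
        line = port.readline()  # one reader message, if any
        if line:
            uid = line.strip().decode("ascii", errors="replace")
            print("Scanned tag:", uid)
```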

Accomplishments that we're proud of

It was really awesome to read out, and make sense of, the brain waves measured by the Muse band, and then to integrate that with an automated system that decides which audio effects should be used for a given group of people.
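As an illustrative heuristic (not our exact decision logic), one way to make such a choice is to compare averaged alpha (relaxation) and beta (alertness) band powers; the thresholds below are invented for illustration.

```python
# Illustrative heuristic (not our exact logic): choose an effect preset
# from averaged Muse alpha/beta band powers. Thresholds are invented
# for illustration.
def choose_effect(alpha_channels, beta_channels):
    alpha = sum(alpha_channels) / len(alpha_channels)
    beta = sum(beta_channels) / len(beta_channels)
    if beta > alpha * 1.5:
        return "soften"   # high alertness/stress: tame harsh frequencies
    if alpha > beta * 1.5:
        return "enhance"  # relaxed: allow richer, louder effects
    return "neutral"

print(choose_effect([0.9, 1.0, 1.1, 1.0], [0.4, 0.5, 0.5, 0.4]))  # enhance
```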

What we learned

We learned a great deal about biometric sensors, such as those built into the two bands at the core of our system. In addition, we learned how to use Watson's Visual Recognition features to determine what exactly is in an image.
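Below is a sketch of the kind of Watson Visual Recognition call behind the kiosk's age and gender estimate, using the watson_developer_cloud Python SDK of that era; credentials are placeholders, and method and authentication details changed across SDK versions, so treat this as an approximation rather than our exact code.

```python
# Sketch of the Watson Visual Recognition call behind the kiosk's
# age/gender estimate, using the watson_developer_cloud SDK of that era.
# Credentials are placeholders; method and auth details changed across
# SDK versions, so treat this as an approximation.
from watson_developer_cloud import VisualRecognitionV3

visual_recognition = VisualRecognitionV3(
    version="2018-03-19",
    iam_apikey="YOUR_API_KEY")  # placeholder credential

with open("kiosk_photo.jpg", "rb") as image:
    response = visual_recognition.detect_faces(images_file=image).get_result()

for face in response["images"][0]["faces"]:
    print(face["age"], face["gender"])  # e.g. {'min': 18, 'max': 24, ...}
```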

What's next for Vamped

We hope to develop the system further so it can scale to thousands of users in a single area. We would also like to develop more customized sound effects to optimize the experience for every end-user.
