Inspiration

Our project is meant to help autistic people, among others, who struggle to understand social cues. Our team chose to pursue this topic because we both struggled to understand social cues while growing up, and this would have been an incredibly useful tool to have at the time.

Emotional intelligence is essential in the modern world, but it can be hard to develop if it does not come naturally. Feedback about social performance is often cryptic or nonexistent, as it can be difficult to genuinely discuss one's feelings with others. That's where Social Q's can help: Social Q's uses a Muse headband to detect a conversation partner's brainwaves during a conversation. By aligning those brainwaves with the conversation transcript (generated with the asynchronous Rev.ai SDK) in a clear, timestamped visual, Social Q's gives users quantitative data with which to gauge social situations.

What it does

Our software takes in an MP3 file of a conversation along with data from the Muse and produces a visual showing the conversation partner's brainwaves at each point in the conversation. From these trends, the user can interpret levels of engagement, disinterest, and even tension.

How we built it

We used the legacy Muse 2016 SDK to gather data with the Muse, Rev.ai to convert speech to text, and Python to automate the process.

Using the legacy Muse SDK, we read data from the Muse and process it with MuseIO. The readings are stored in a .csv file, which is then condensed into a more concise form with regular expressions.
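As an illustration, here is a minimal sketch of that condensing step. The OSC path and column layout in the pattern are assumptions about MuseIO's output format, not our exact code:

    import re

    # Keep only timestamped absolute band-power rows from the raw MuseIO CSV;
    # the exact OSC path and column layout here are assumptions.
    ROW = re.compile(r"^(\d+\.\d+),\s*/muse/elements/(alpha|beta|delta|theta)_absolute,(.+)$")

    def condense_csv(raw_path, out_path):
        with open(raw_path) as raw, open(out_path, "w") as out:
            out.write("timestamp,band,values\n")
            for line in raw:
                match = ROW.match(line)
                if match:
                    timestamp, band, values = match.groups()
                    out.write(f"{timestamp},{band},{values.strip()}\n")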

Placing the headband on the head simultaneously starts an audio recording. After the conversation, the recording is stopped and processed with Rev.ai to obtain the conversation transcript.
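The Rev.ai step looks roughly like the sketch below, using the official rev_ai Python SDK; the access token and polling interval are placeholders:

    import time
    from rev_ai import apiclient

    # Placeholder token; the polling interval is arbitrary.
    client = apiclient.RevAiAPIClient("YOUR_REV_AI_ACCESS_TOKEN")

    def transcribe(mp3_path):
        # Submit the recording as an asynchronous transcription job.
        job = client.submit_job_local_file(mp3_path)
        # Poll until Rev.ai finishes processing.
        while client.get_job_details(job.id).status.name == "IN_PROGRESS":
            time.sleep(5)
        # The transcript object carries per-word timestamps, which we
        # later align with the EEG data.
        return client.get_transcript_object(job.id)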

Once the transcript is obtained, the timestamps from the two files are aligned to match the text with the conversation partner's brainwaves, and all of the data is plotted using matplotlib.
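A simplified sketch of that alignment and plotting step follows. The EEG row format and the offset handling are our assumptions, while the transcript fields follow the rev_ai SDK's models (per-word type, value, and timestamp):

    import matplotlib.pyplot as plt

    def plot_session(eeg_rows, transcript, recording_start):
        # eeg_rows: (timestamp_seconds, band_power) pairs from the condensed CSV.
        times = [t - recording_start for t, _ in eeg_rows]
        power = [p for _, p in eeg_rows]

        fig, ax = plt.subplots(figsize=(12, 4))
        ax.plot(times, power, label="alpha power")

        # Annotate each spoken word at its Rev.ai timestamp so brainwave
        # trends line up with what was being said at that moment.
        for monologue in transcript.monologues:
            for element in monologue.elements:
                if element.type_ == "text":
                    ax.annotate(element.value, (element.timestamp, max(power)),
                                rotation=90, fontsize=6)

        ax.set_xlabel("time since recording start (s)")
        ax.set_ylabel("band power")
        ax.legend()
        plt.show()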

Challenges we ran into

We ran into many problems with the Muse headband. The company itself ceased to provide developer support, so we needed to rely on a deprecated SDK. The SDK was not well maintained, so figuring out how to use it was difficult.

Accomplishments that we're proud of

We're proud of getting a minimum viable product working, given our inexperience with the tools we were using and the absence of developer support for the Muse SDK.

What we learned

We learned more about brainwave theory and about working with the Muse and Rev.ai SDKs.

What's next for Social Q's

  • Build a full app and clean up the UI/UX
  • Add video and voice recording capabilities to the app
  • Perform file conversions within the app and call all of the supplementary files automatically
  • Research brainwave signals more extensively and match more attributes than the current set
  • Use a brainwave tool more precise than the Muse
  • Support two headbands so that both conversationalists' brainwaves are captured

Built With

Python, Rev.ai, the legacy Muse 2016 SDK, and matplotlib.
