Inspiration
Words carry little meaning without tone: when we speak to someone, they sometimes say one thing but mean another. We wanted to address a problem therapists face when talking to a client — emotional tension in the room can rise without the therapist being able to tell.
What it does
Introducing Sound Vibes, the ultimate emotion recognition tool. It can be used in many scenarios, including one-on-one therapy sessions, court hearings, and interrogation rooms. The tool recognizes the basic emotions that a tone of voice can convey: the microphone captures the sound waves and translates them into an emotion such as "sad", "happy", or "fearful". It uses the Deep Affect machine learning API to perform the emotion analysis.
How we built it
The Particle Photon samples ADC data from the Big Sound microphone module using DMA and sends the data to a web server over a TCP connection. The server encapsulates the raw samples as a .wav file, which is cleaned up with a basic low-pass filter. The .wav file is then run through the Deep Affect machine learning API, which extracts emotional data from the audio sample.
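The encapsulation step above can be sketched in a few lines of Node.js. This is a minimal illustration, not our exact server code: the function name `pcmToWav` is hypothetical, and the 8 kHz sample rate is an assumed value chosen for the example.

```javascript
// Sketch: wrap raw 16-bit little-endian mono PCM bytes in a minimal
// 44-byte RIFF/WAVE header. Assumes an 8 kHz sample rate (illustrative).
function pcmToWav(pcmBuffer, sampleRate = 8000) {
  const numChannels = 1;
  const bitsPerSample = 16;
  const byteRate = (sampleRate * numChannels * bitsPerSample) / 8;
  const blockAlign = (numChannels * bitsPerSample) / 8;

  const header = Buffer.alloc(44);
  header.write('RIFF', 0);                         // RIFF chunk ID
  header.writeUInt32LE(36 + pcmBuffer.length, 4);  // total chunk size
  header.write('WAVE', 8);
  header.write('fmt ', 12);                        // format subchunk
  header.writeUInt32LE(16, 16);                    // subchunk size (PCM)
  header.writeUInt16LE(1, 20);                     // audio format 1 = PCM
  header.writeUInt16LE(numChannels, 22);
  header.writeUInt32LE(sampleRate, 24);
  header.writeUInt32LE(byteRate, 28);
  header.writeUInt16LE(blockAlign, 32);
  header.writeUInt16LE(bitsPerSample, 34);
  header.write('data', 36);                        // data subchunk
  header.writeUInt32LE(pcmBuffer.length, 40);      // raw PCM byte count
  return Buffer.concat([header, pcmBuffer]);
}
```

Once the PCM stream received over TCP is collected into a single Buffer, a call like `pcmToWav(samples)` yields a file that audio tools and the Deep Affect API can consume.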
To use the device, the user presses a button to begin recording, says a line, and presses the button again to stop. When the recording ends, the sample is sent to the web server, where it is analyzed and the emotional data is displayed. The user can also listen to their audio clip by pressing the play-audio button.
Challenges we ran into
We discovered that the microphone is not well suited to recording voices: it picks up a tremendous amount of noise while recording.
The noise problem could have been solved with hardware filtering, but no resources were available at the hackathon to do so.
We found it difficult to run a backend Node.js server that also supported digital filtering techniques.
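For reference, the kind of basic low-pass filtering mentioned above can be done in a few lines of Node.js. This is a minimal single-pole filter sketch, not our exact implementation; the smoothing constant `alpha` is an illustrative value.

```javascript
// Sketch: single-pole (exponential moving average) low-pass filter over
// 16-bit PCM samples: y[n] = y[n-1] + alpha * (x[n] - y[n-1]).
// Smaller alpha means stronger smoothing; 0.2 is an illustrative choice.
function lowPass(samples, alpha = 0.2) {
  const out = new Int16Array(samples.length);
  let prev = 0; // filter state, kept as a float between samples
  for (let i = 0; i < samples.length; i++) {
    prev = prev + alpha * (samples[i] - prev);
    out[i] = Math.round(prev);
  }
  return out;
}
```

Feeding the filter a rapidly alternating (high-frequency) signal attenuates it sharply, which is exactly the behavior that helps suppress the microphone's broadband noise.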
Accomplishments that we're proud of
We're proud that we were able to create a product where the user can reliably record their voice at the press of a button and have it analyzed (though not always accurately).
What we learned
From this project, we learned how to integrate hardware and work with Node.js alongside a machine learning API. We also learned that although Node.js is a practical tool for connecting a web server to hardware, setting up the TCP server with a PHP or Python backend might be a better option.
What's next for Sound Vibes
We want to present the emotional data in a more user-friendly way and make the tool applicable to the general public.