Initially, the team's aim was to give those who are hard of hearing, or who have trouble reading social situations, a visual aid they could carry anywhere at any time, providing either real-time subtitles or social cues based on vocal intonation. Due to the challenge of identifying different voices and the limited range of our recording devices, we opted instead to create a personal wearable to aid speakers, whether public or private: a tool to help one improve how they deliver a message, or bring awareness to how others perceive their tone of speech.
What it does
TELME takes vocal recordings from a wearable, cellular-enabled IoT device and transmits the data to a server. The back-end sends this input to IBM Watson, using its Tone Analyzer to generate feedback from the transcribed text. The response is then passed to the front-end, which graphs the result based on the positive and negative emotions behind the words.
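As a minimal sketch of the last step, the back-end could reduce a Tone Analyzer response to the positive/negative totals the front-end graphs. The response shape below follows Watson Tone Analyzer v3's document-level output; the sample values and the category mapping (`POSITIVE`/`NEGATIVE`) are illustrative assumptions, not our exact code.

```python
# Illustrative sketch: collapse Watson Tone Analyzer v3 document tones
# into (positive, negative) totals for graphing. Tone-id grouping is an
# assumption made for this example.
POSITIVE = {"joy"}
NEGATIVE = {"anger", "fear", "sadness", "disgust"}

def split_tones(response: dict) -> tuple[float, float]:
    """Sum document-level tone scores into (positive, negative) totals."""
    pos = neg = 0.0
    for tone in response.get("document_tone", {}).get("tones", []):
        if tone["tone_id"] in POSITIVE:
            pos += tone["score"]
        elif tone["tone_id"] in NEGATIVE:
            neg += tone["score"]
    return pos, neg

# Hypothetical sample response for demonstration:
sample = {
    "document_tone": {
        "tones": [
            {"score": 0.62, "tone_id": "joy", "tone_name": "Joy"},
            {"score": 0.31, "tone_id": "sadness", "tone_name": "Sadness"},
        ]
    }
}
print(split_tones(sample))  # positive vs. negative totals for the graph
```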
How we built it
Lots of caffeine
Challenges we ran into
We faced many challenges throughout the development of this project. While building segments of the front-end and parts of the back-end, we ran into numerous errors caused by incompatible APIs, platforms, and library versions. On the hardware side, the provided microphones were not designed for recording voice and could not reproduce it well enough to be usable in a wearable for this hack. To mitigate this, we attempted to incorporate Bluetooth microphones, linked through a DragonBoard 410c that streamed the audio to the Particle Electron over serial Rx/Tx and I2C. Unfortunately, we were unable to get the serial link working: it would crash, likely due to the voltage difference between the boards' logic levels.
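One fix we could have tried for that voltage mismatch is a simple resistor divider stepping the Electron's 3.3 V TX line down toward the DragonBoard's RX pin (the 410c's low-speed GPIO runs at 1.8 V logic per the 96Boards spec). A quick sanity check of candidate resistor values, with the specific 10 kΩ / 12 kΩ pair chosen purely as an example:

```python
def divider_out(v_in: float, r_top: float, r_bottom: float) -> float:
    """Output of a resistive divider: Vout = Vin * R_bottom / (R_top + R_bottom)."""
    return v_in * r_bottom / (r_top + r_bottom)

# Stepping a 3.3 V TX line down toward a 1.8 V RX pin,
# e.g. with a 10 kΩ top / 12 kΩ bottom pair:
print(divider_out(3.3, 10_000, 12_000))  # ≈ 1.8 V
```

A divider only works one direction (high-to-low); the 1.8 V DragonBoard TX driving the Electron's RX would still need the Electron's input to register 1.8 V as logic high, or a proper level-shifter IC.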