Inspiration

Our inspiration for ToneSense is to create an application that gives individuals on the autism spectrum real-time feedback to help them interpret social cues and navigate interactions with confidence. By offering personalized support tailored to their emotional needs, from therapeutic interventions to communication aids, we envision fostering social growth and integration. Our goal is to illuminate pathways to inclusion and active participation in society for our users.

What it does

The ToneSense application is designed to analyze spoken or written content provided by the user and determine the predominant emotion conveyed within that content. Users can either record a piece of speech directly through the app or paste a segment of text into the interface.

How we built it

In building the ToneSense application, we leveraged machine learning to achieve accurate sentiment analysis. For text, we deployed the "roberta-base-go_emotions" model because it classifies input into a wide range of emotions (the 28 GoEmotions labels). For audio content, we employed a speech-to-text conversion tool to transcribe spoken words, then ran the resulting text through the same model. We tested the models on data spanning diverse emotions to evaluate their performance and identify strengths and weaknesses.
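The text pathway above can be sketched as a call to the model's hosted inference API followed by a reduction to the highest-scoring label. This is a minimal illustration rather than our exact integration; the endpoint, the model id ("SamLowe/roberta-base-go_emotions" on Hugging Face), and the token handling are assumptions.

```typescript
// Sketch: query a hosted go_emotions model and reduce the returned
// per-label scores to the predominant emotion. The endpoint and model
// id below are assumptions, not necessarily the ToneSense backend.

interface EmotionScore {
  label: string; // e.g. "joy", "anger", "neutral"
  score: number; // model confidence in [0, 1]
}

// Pick the highest-scoring label from the model's output.
export function topEmotion(scores: EmotionScore[]): string {
  if (scores.length === 0) throw new Error("empty result");
  return scores.reduce((best, s) => (s.score > best.score ? s : best)).label;
}

// Send text to the hosted model (hypothetical endpoint/token wiring).
export async function classifyEmotion(
  text: string,
  apiToken: string
): Promise<string> {
  const res = await fetch(
    "https://api-inference.huggingface.co/models/SamLowe/roberta-base-go_emotions",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ inputs: text }),
    }
  );
  if (!res.ok) throw new Error(`inference failed: ${res.status}`);
  // The API returns one array of {label, score} pairs per input.
  const [scores]: EmotionScore[][] = await res.json();
  return topEmotion(scores);
}
```

In the app itself, the same reduction runs on whatever the deployed API returns, so the interface only ever has to display a single emotion label.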

We developed a user-friendly application interface that seamlessly integrates with the models' APIs and handles the data conversion. Users can record live audio or paste text, and a single click identifies the emotion in the input, keeping the experience simple and accessible.
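The single-click flow can be sketched as: prefer the speech transcript when the user recorded audio, fall back to pasted text otherwise, and hand the result to the classifier. The names here (pickInput, analyzeClicked) are illustrative, not our actual component code.

```typescript
// Hypothetical one-click handler wiring for the ToneSense interface.

// Prefer the speech transcript when one exists; otherwise use pasted text.
export function pickInput(
  transcript: string | null,
  pastedText: string
): string {
  const spoken = (transcript ?? "").trim();
  return spoken.length > 0 ? spoken : pastedText.trim();
}

// A single click routes whichever input the user provided to the
// classifier (injected here so the sketch stays self-contained).
export async function analyzeClicked(
  transcript: string | null,
  pastedText: string,
  classify: (text: string) => Promise<string>
): Promise<string> {
  const input = pickInput(transcript, pastedText);
  if (input.length === 0) throw new Error("no input provided");
  return classify(input);
}
```

Injecting the classifier as a parameter mirrors how an Angular component would receive it as a service, and keeps the click handler trivially testable.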

Challenges we ran into

Learning Angular and TypeScript, a framework and language new to us, within this short span of time was a challenge. Finding a pre-trained machine learning model with high accuracy that also exposes a deployable API was another.

Accomplishments that we're proud of

We are proud of the accuracy and efficiency we achieved by leveraging machine learning models to analyze the sentiment of speech and text data. We are also proud of wrapping this capability in an application that offers smooth interaction and is easy to access and use.

What we learned

We learned how to assess, test, and deploy pre-trained models for various applications. We also learned how to integrate them with a visual interface built in Angular and TypeScript.

What's next for ToneSense

Our initial focus will be on implementing multimodal sentiment analysis, since relying solely on speech-to-text conversion for emotion recognition can overlook crucial non-verbal cues such as tone of voice. After that, we aim to enable real-time emotion prediction within the application.
