Inspiration

In many social situations it can be difficult to gauge the tone of the conversation or the emotions that others are currently experiencing. For people with ASD (Autism Spectrum Disorder) this difficulty is exacerbated: it becomes harder both to express their own emotions and to respond appropriately to the situation in front of them.

Our inspiration for this invention came from our regular interactions with fellow students in our school community who have ASD. The Planning for Independence (PIP) division at our school helps students with learning disabilities prepare for the real world while giving them a safe environment to learn in. Since independence in today’s society depends heavily on the ability to handle social situations, this poses a problem for many students with ASD, who often have difficulty understanding the emotional context of conversations and facial expressions. We realized we could help those with ASD succeed in these unfamiliar social situations by creating an application that reads the emotional context for them.

What it does

Our app, Feel, works by recording a clip of speech and analyzing it with a sentiment analysis API. The app then maps the value returned by the API to one of three basic emotions (positive, neutral, negative) and presents the result to the user as an emoticon. We limited it to three emotions to keep the output as simple as possible for the user. Feel also incorporates a facial recognition API: it captures an image of a person’s face, analyzes it with the sentiment analysis, and returns the result to the user in the same way as the speech flow described above.
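
The mapping from a raw sentiment score to one of the three emoticons is simple enough to sketch. Below is a minimal illustration in Dart; it assumes the sentiment API returns a numeric score roughly in the range -1 to 1, and the threshold value and helper names are our own illustrative choices rather than the app’s exact code.

```dart
// A minimal sketch of the score-to-emoticon mapping described above.
// Assumption: the sentiment API returns a numeric score roughly in [-1, 1];
// the 0.25 cut-off and the helper names are illustrative, not the app's exact code.
enum Emotion { negative, neutral, positive }

const emoticons = {
  Emotion.positive: '🙂',
  Emotion.neutral: '😐',
  Emotion.negative: '🙁',
};

// Buckets a raw sentiment score into one of the three basic emotions.
Emotion classify(double score, {double threshold = 0.25}) {
  if (score > threshold) return Emotion.positive;
  if (score < -threshold) return Emotion.negative;
  return Emotion.neutral;
}

void main() {
  const score = 0.6; // stand-in for a value returned by the sentiment API
  print(emoticons[classify(score)]); // 🙂
}
```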

How we built it

- Used Flutter, a UI software development kit created by Google, which allowed for seamless testing on Samsung phones and on the provided virtual machine
- Used a speech-to-text API together with a sentiment analysis API to output an appropriate emotion (a rough sketch of this pipeline follows below)
- To expand on this idea, Microsoft Azure and its emotion recognition software could be integrated with a camera
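
To make the pipeline concrete, here is a rough Dart sketch of the speech-to-text → sentiment step. It assumes the community speech_to_text Flutter plugin and the http package; the sentiment endpoint URL, request body, and response field are placeholders rather than the exact services we wired up.

```dart
// Rough wiring of the speech → text → sentiment step. Assumes the community
// speech_to_text Flutter plugin and the http package; the sentiment endpoint
// URL, request body, and 'score' response field are placeholders, not the
// exact services used in the app.
import 'dart:convert';

import 'package:http/http.dart' as http;
import 'package:speech_to_text/speech_to_text.dart';

final SpeechToText _speech = SpeechToText();

Future<void> listenAndAnalyze() async {
  // Request microphone/speech permissions and set up the recognizer.
  final available = await _speech.initialize();
  if (!available) return;

  // Record a short clip; when the final transcript arrives, score it.
  _speech.listen(onResult: (result) async {
    if (result.finalResult) {
      final score = await analyzeSentiment(result.recognizedWords);
      print('sentiment score: $score'); // would feed the emoticon mapping
    }
  });
}

// Hypothetical sentiment call: POST the transcript, read back a score.
Future<double> analyzeSentiment(String text) async {
  final response = await http.post(
    Uri.parse('https://example.com/sentiment'), // placeholder endpoint
    headers: {'Content-Type': 'application/json'},
    body: jsonEncode({'text': text}),
  );
  final body = jsonDecode(response.body) as Map<String, dynamic>;
  return (body['score'] as num).toDouble(); // assumed response shape
}
```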

Challenges we ran into

During the hackathon we faced many challenges and struggles that we were, for the most part, able to overcome and find solutions to.

The first major challenge we encountered was learning how to use the various APIs our idea required (the sentiment analysis, facial recognition, and speech-to-text APIs). This included setting up and getting comfortable with the speech-to-text function and then running its output through the sentiment analysis API. The same went for facial recognition, as we spent a sizeable amount of time learning how to use and apply the facial recognition API.

The second major challenge we faced was developing a clean user interface. This meant developing a deeper understanding of how to configure the layout of the application, ensuring that it actually ran and was fully operational, and learning more about design, aesthetics, and what users visually look for in an attractive application. Beyond those general factors, we also looked for a design that would specifically align with the needs of our target audience, people with ASD. Through careful planning and a clear idea of what we wanted the app to look like, we were able to achieve a clean UI.

Accomplishments that we're proud of

We’re really proud of the functionality of our app and of our overall ability to bring our idea to reality. We’re especially proud of the work we put into configuring the APIs: we overcame our unfamiliarity with them, integrated them into the final app, and got everything working in the end.

In addition to our work on the functionality of the app, we are also very proud of our work on the UI. By sticking with our vision for the app, we managed to produce a clean, minimalistic design that ensures a wonderful user experience by concentrating on the details: we stuck to the red colour family, as studies have shown that those with ASD respond more positively toward the colour red, and we kept the design simple so as not to cause sensory overload.
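
Since the colour choice is concrete, here is a tiny Flutter sketch of what such a theme could look like, assuming a Material 3 colour scheme seeded from a red-family colour; the specific hex value and widgets are illustrative rather than the app’s actual code.

```dart
import 'package:flutter/material.dart';

// Illustrative only: a seed colour from the red family and a deliberately
// plain layout, in line with the design goals above. The exact shades and
// widgets in the app may differ.
final ThemeData feelTheme = ThemeData(
  useMaterial3: true,
  colorScheme: ColorScheme.fromSeed(seedColor: const Color(0xFFD32F2F)),
);

void main() {
  runApp(MaterialApp(
    theme: feelTheme,
    home: const Scaffold(
      body: Center(child: Text('Feel', style: TextStyle(fontSize: 32))),
    ),
  ));
}
```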

What we learned

Throughout the course of this hackathon, we learned a lot. This was the first time we had ever used facial recognition or sentiment analysis APIs, so we learned a great deal about how to use them and, most importantly, how to integrate them into our app, Feel. Funnily enough, this was also our first time making an app, and we learned a lot from designing it all the way from ideation to the final product. Overall it was an incredible experience: it gave us a feel for software development and helped us build crucial industry skills.

What's next for feel

In the future we hope to expand the three basic emotions Feel currently recognizes into a wider range of emotions, to meet the needs of those with ASD who have more difficulty recognizing and understanding complex emotions.

Furthermore, we hope to improve the convenience and portability of Feel by making it available on wearable platforms such as smartwatches and Google Glass. This would benefit the user by making their use of Feel more inconspicuous.

Built With

Flutter, a speech-to-text API, and a sentiment analysis API
