Affective

af·fec·tive /əˈfektiv/

Relating to moods, feelings or their expression

Inspiration

Human communication goes far beyond words. We often convey more with our facial expressions, body language, and tone than with the actual words we say.

With many disorders, it becomes almost impossible to decipher the meaning and emotion behind non-verbal communication, especially facial expressions. In frontotemporal disorders (a subset of dementias), strokes, schizoid personality disorder, traumatic brain injuries (TBIs), and other conditions, an agnosia can develop in which a person loses the ability to produce and recognize facial expressions. This can cause daily turmoil: sadness mistaken for anger, or shock mistaken for happiness. Their absent or inappropriate facial expressions also make it harder for them to communicate with their loved ones.

In similar disorders that cause a loss of recognition, there is often training to help improve and monitor the patient’s skills. We see this often in relearning to read or write after a stroke or with aphasia. Our group thought the same principles of relearning and monitoring could be applied to facial expressions!

We sought to design an initial web application for learning and creating facial expressions based on emotions. With gamification and metric tracking, we plan to create tools for people to practice non-verbal skills lost to illness.

What it does

Affective is a web application with two games for practicing recognizing and expressing emotions through facial expressions.

In Game 1: Emotion Recognition, users are presented with an image of someone making a facial expression. The user then selects the emotion they think the image portrays from a list of emotions (angry, happy, sad, surprised, and neutral). A correct identification earns a point, and the game ends after 3 mistakes.
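The scoring rules above are simple enough to sketch. The real Game 1 is written in JavaScript; this is an illustrative Python equivalent, with names of our own invention rather than the actual code:

```python
# Illustrative sketch of Game 1's rules: one point per correct
# answer, and the game ends after 3 mistakes.
EMOTIONS = ["angry", "happy", "sad", "surprised", "neutral"]

class RecognitionGame:
    MAX_MISTAKES = 3

    def __init__(self):
        self.score = 0
        self.mistakes = 0

    @property
    def over(self):
        # The game ends once the mistake budget is spent.
        return self.mistakes >= self.MAX_MISTAKES

    def answer(self, guess, correct_emotion):
        """Record one round; returns True if the guess was right."""
        if guess == correct_emotion:
            self.score += 1
            return True
        self.mistakes += 1
        return False
```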

In Game 2: Emotion Expression, users are presented with an emotion and prompted to take and upload a picture of themselves showing that emotion with their face. The application supports taking a picture with a webcam or uploading a preexisting photo via an upload button. The web application then uses an API to check whether the image matches the prompted emotion. A correct match earns a point, and the game ends after 3 mistakes.

Both games also offer the option of using colors instead of words for the emotions, since difficulty reading is a common comorbidity of these disorders. If a user struggles to read the word, they can rely on the color of the button instead. This makes the application more inclusive of users with a variety of disabilities.

The web app also supports creating accounts and tracking scores over time.

How we built it

We created a backend with Django to store scores, manage accounts and sessions, serve the various pages, and connect the two games. Game 1 was built with JavaScript/HTML/CSS. It takes user input via button clicks, marks each answer correct or incorrect, and adjusts the lives and score based on the result. Game 1 reports current and best scores to the backend.

Game 2 was built with Python, React, and a facial recognition API. It gives users an emotion and prompts them to take or upload a photo with that emotion expressed on their face. After the user submits a photo, the script compares the emotion the API detects in the photo to the prompted emotion to determine whether they match, then adjusts the lives and score based on the result.
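The match check at the heart of Game 2 can be sketched as follows. The response shape here is a hypothetical stand-in for what a face-analysis API returns (a confidence score per emotion); the real API's field names may differ:

```python
# Sketch of Game 2's match check, assuming a hypothetical response
# like {"emotions": {"happy": 0.91, "sad": 0.02, ...}}.
def detected_emotion(api_response: dict) -> str:
    """Pick the highest-confidence emotion from the API response."""
    scores = api_response["emotions"]
    return max(scores, key=scores.get)

def matches_prompt(api_response: dict, prompted: str) -> bool:
    """True if the detected emotion equals the prompted one."""
    return detected_emotion(api_response) == prompted
```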

Challenges we ran into

Writing code as a team of beginners with no shared programming language

We are all new students with varied backgrounds and no programming language in common. We spent most of the first night mulling over the tech stack, trying to decide what to code in. We overcame this by splitting the backend and frontend across two different languages, watching tutorials, and just giving it a shot regardless of our familiarity with the syntax.

Time Constraints

We are all post-bacc students with obligations such as jobs and classes, so we didn’t have all of Thursday through Sunday to work on this, and there was a ton to get done to make an application we could be proud of. To beat our constraints, we broke the project down into requirements, prioritized by whether they were necessary for the experience we wanted to create or were stretch goals to address if we had free time. Regular check-ins helped us share what we were working on, get feedback, and prioritize tasks. We each tackled different aspects of the application when working alone and used overlaps in free time to collaborate with tools like Glitch.

Merging the Games and Backend

We spent most of the last day getting our games and backend to work well together, and a lot of time on StackOverflow. We ran into merge and permissions issues that we had to troubleshoot together. In particular, it took a long time to get React working with our backend.

Accomplishments that we're proud of

  1. The live camera option in Game 2. We really think this improves its usability.

  2. Accomplishing all of our requirements and even some stretch goals

What we learned

  1. How to create a website frontend with Javascript, HTML, CSS, and React

  2. How to use Django and our knowledge of python to build a website backend

  3. How to incorporate an API and use live camera input

  4. How to code as a team rather than as individuals

What's next for Affective

More features, more metrics, and more analysis! The more data we can collect about results, the more useful that data could be for users, their caregivers, or even their doctors. A big feature for the future is measuring how long it takes a user to identify an emotion after the stimulus appears. Tracking this response time provides a metric for monitoring improvement, and it opens the application up to use in research studies, where recording response time is often necessary.
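Since this is a planned feature rather than existing code, here is only a minimal sketch of how per-round response time could be captured, with illustrative names: timestamp the moment the stimulus appears and the moment the user answers.

```python
import time

class TimedRound:
    """One game round that records its own response time."""

    def __init__(self):
        # monotonic() is unaffected by system clock changes,
        # so the difference is a reliable duration.
        self.shown_at = time.monotonic()  # stimulus displayed

    def answered(self) -> float:
        """Seconds between showing the stimulus and the answer."""
        return time.monotonic() - self.shown_at
```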

We would also like to add graphs of score progress and breakdowns of scores by emotion. A visual, rather than just a list of scores, would help users understand the data and their progress. This will require us to accomplish our stretch goal of separating the scores by emotion.

We also want to find a better model/API for our application. In our testing, we discovered that the API in Game 2 has a high error rate in identifying emotions, so we hope to research more reliable alternatives. The current API is also missing fear and disgust, two of the six basic emotions; a better API would let us incorporate these two emotions, which are often considered harder to recognize, and obtain more well-rounded data.
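The per-emotion breakdown described above could be computed from round history along these lines. This is a sketch under an assumed data shape, not existing code:

```python
from collections import defaultdict

def accuracy_by_emotion(rounds):
    """Compute per-emotion accuracy for graphing.

    rounds: iterable of (emotion, was_correct) pairs, e.g. the
    history of answers a user has given across sessions.
    """
    totals = defaultdict(int)
    correct = defaultdict(int)
    for emotion, ok in rounds:
        totals[emotion] += 1
        if ok:
            correct[emotion] += 1
    # Fraction correct per emotion, ready to plot.
    return {e: correct[e] / totals[e] for e in totals}
```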

Another avenue for improvement is making Affective a mobile application for iOS and Android. This could increase the app’s accessibility, particularly for younger audiences. A mobile version with a child-friendly UI could open the same technology up to children with autism, who can often have difficulty identifying their own emotions and describing them accurately. The app could be a safe way for them to practice this skill without social repercussions or fear.

Citations

Arsa Technology. Face Detection and Analysis API. Retrieved January 14, 2023, from https://rapidapi.com/arsa-technology-arsa-technology-default/api/face-detection-and-analysis/details.

Olszanowski, M., Pochwatko, G., Kuklinski, K., Scibor-Rylski, M., Lewinski, P., & Ohme, R. K. (2015). Warsaw set of emotional facial expression pictures: a validation study of facial display photographs. Frontiers in psychology, 5, 1516. https://doi.org/10.3389/fpsyg.2014.01516
