Inspiration

Our inspiration was to leverage multiple libraries and technologies to apply machine learning models to real-world data while maximizing the breadth of impact. We saw the potential for facial recognition and emotion analysis to improve how individuals' reactions to stimuli are analyzed across many fields. We captured this idea in a system that helps therapists analyze their patients' emotional trends as expressed through facial and verbal cues. By storing this data, therapists can build a comprehensive database for future analysis, improving patient care through detailed emotional tracking. A patient's profile, compiled across sessions with their therapist, can surface previously unnoticed trends or allow known trends to be analyzed further by professionals.

What it does

Mood Map recognizes a patient's emotions through facial recognition and analyzes their spoken words during a therapy session. This data is then stored, enabling therapists to track and reference emotional patterns and verbal expressions as they evolve over time. The result is a tool that helps therapists better understand their patients' mental states, improving diagnosis and treatment planning.

How we built it

We used Chakra UI, a component library for React, for the user interface; OpenCV to capture a live video feed; DeepFace to analyze facial features; and Supabase to quickly and reliably establish a relational database with three tables and two foreign keys for effective data storage and retrieval. WebSockets provide interoperability between our Python-based facial recognition service and the dynamic front end. The DeepFace model detects emotions from facial expressions, while Whisper handles natural language processing to transcribe and analyze the patient's speech. We developed the backend using Python and Flask, along with libraries like pandas and NumPy to prepare data for graphing. Chart.js was used to visualize emotional data in real time, giving therapists a clear, interactive view of the patient's state.
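To illustrate the kind of aggregation described above, here is a minimal sketch of turning per-frame emotion scores (shaped like the per-emotion scores DeepFace reports) into a session-level summary. The frame values and helper names here are illustrative assumptions, not the project's actual code or data.

```python
from collections import Counter
from statistics import mean

# Hypothetical per-frame emotion scores, shaped like DeepFace's
# per-emotion output (values are made up for illustration).
frames = [
    {"happy": 0.7, "sad": 0.1, "neutral": 0.2},
    {"happy": 0.4, "sad": 0.2, "neutral": 0.4},
    {"happy": 0.1, "sad": 0.6, "neutral": 0.3},
]

def dominant_emotion(scores):
    """Return the highest-scoring emotion label for one frame."""
    return max(scores, key=scores.get)

def session_summary(frames):
    """Aggregate per-frame scores into an average score per emotion
    and a count of how often each emotion was dominant."""
    avg = {e: mean(f[e] for f in frames) for e in frames[0]}
    counts = Counter(dominant_emotion(f) for f in frames)
    return avg, counts

avg, counts = session_summary(frames)
print(counts.most_common(1)[0][0])  # emotion dominant most often
```

A summary like this is what would be stored per session and later plotted with Chart.js to show emotional trends over time.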

Challenges we ran into

Our team knew little about facial recognition or its associated models, so developing an application built around it was challenging. Having different operating systems among our team members also slowed progress at times, since results varied between platforms. We collaborated to find ways of displaying data that would be most useful to an end user, dedicating time to thoroughly analyzing tradeoffs while optimizing the development process. While facial and audio-based emotion recognition data is interesting on its own, having an adequate way of presenting it is what empowers the information to drive innovation and discovery.

Accomplishments that we're proud of

Mood Map neatly gathers data through a UI backed by a facial recognition model that is roughly 75% accurate. We showcased this data in a digestible format for potential healthcare professionals, with a dedicated team connecting the backend and frontend. The Mood Map platform provides therapists with a patient dashboard, a live recording page, and an analytics page, all in the name of health. The real-time graphical display of emotional fluctuations demonstrates how a therapist's analysis may improve with this added context.

What we learned

We learned how to utilize AI to streamline the process of making an idea tangible, driving curiosity and creativity to new heights. By integrating a facial recognition model and analytical interpretation into a web-based user interface, we learned how to make data flow between parts of a system that handle and share data on different platforms. We opened a WebSocket for communication between client and server so that the facial recognition program could run in tandem with a real-time camera feed. We also learned how to design a system and re-optimize it as needed while it grew.
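The WebSocket communication described above can be sketched as a small message-serialization step: the Python backend packages each frame's result as JSON text before pushing it over the socket to the browser. The message type and field names below are assumptions for illustration, not the project's actual protocol.

```python
import json
import time

def make_emotion_message(emotion, confidence, ts=None):
    """Serialize one frame's result as the JSON text a WebSocket
    server could push to the front end (field names are assumed)."""
    payload = {
        "type": "emotion_update",
        "emotion": emotion,
        "confidence": confidence,
        "timestamp": ts if ts is not None else time.time(),
    }
    return json.dumps(payload)

# The browser side would json.loads() the same text and feed it to
# Chart.js; a round trip checks both directions agree on the shape.
msg = make_emotion_message("happy", 0.82, ts=1700000000.0)
decoded = json.loads(msg)
print(decoded["emotion"])
```

Keeping the payload to a flat JSON object made it easy for the React front end and the Python backend to stay in sync despite being on different platforms.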

What's next for Mood Map

Our prototype demonstrates the effectiveness of this extra assistance as a tool for both therapist and client. Mood Map will be provided to healthcare providers so that they are equipped with tools to help improve the well-being of others. The next step is to train a custom model on more data to achieve higher emotional accuracy for both voice and video. We also plan to improve processing by moving the platform to the cloud. These steps would help improve overall mental health, as the profiles built to better understand patients improve with each session. The internal framework that differentiates moods and produces transcriptions also has the potential to be used in many fields, e.g., UX testing, marketing, 3D graphics optimization, special needs accommodations, and more!
