Inspiration
The inspiration for EmoSense Therapeutics stemmed from a deep commitment to improving the rehabilitation process and providing a more holistic approach to patient care. Witnessing the challenges faced by healthcare professionals in gauging patient responses to medications and treatments, we sought to integrate advanced technologies to fill this gap. The idea was to harness the power of facial recognition using deep learning to capture subtle emotional cues, providing valuable insights into the patient experience during rehabilitation.
What it does
EmoSense Therapeutics uses deep-learning-based facial emotion detection to monitor and analyze the emotional responses of individuals in a rehabilitation center. The system focuses on tracking how patients react to medications, giving healthcare professionals real-time information to improve the effectiveness of rehabilitation programs. It also generates weekly reports that summarize emotional trends, contributing to a more personalized understanding of each patient's journey.
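The weekly report boils down to aggregating per-frame emotion detections over time. A minimal sketch of that aggregation step, in plain Python (the function name and the `(timestamp, label)` input format are our own illustrative assumptions, not the project's actual code):

```python
from collections import Counter
from datetime import datetime

def daily_mood_summary(detections):
    """Summarize timestamped emotion detections into one dominant mood per day.

    detections: list of (iso_timestamp, emotion_label) pairs, e.g. the
    per-frame outputs of the facial emotion model.
    Returns {date: most frequent emotion that day} - the kind of aggregate
    a weekly report (or a mood-over-time graph) can be built from.
    """
    per_day = {}
    for ts, emotion in detections:
        day = datetime.fromisoformat(ts).date()
        per_day.setdefault(day, Counter())[emotion] += 1
    return {day: counts.most_common(1)[0][0]
            for day, counts in per_day.items()}
```

For example, a day with two "Happy" detections and one "Sad" detection would be summarized as "Happy"; running this over seven days of detections yields the raw data for a weekly overview.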
How we built it
The foundation of our project is the FER2013 dataset and transfer learning: we adapted the MobileNetV2 architecture by removing its final layer and adding three new layers tailored to our objective. The final layer uses a softmax activation, and the loss function is sparse_categorical_crossentropy. With this configuration, the model is trained to recognize seven facial emotions: Angry, Disgust, Fear, Sad, Happy, Surprise, and Neutral. Using the Haar cascade algorithm, we locate the face as the Region of Interest (ROI). Initial tests with sample image inputs validated the emotion detection capability. We then used a webcam, as a stand-in for the CCTV cameras a rehabilitation center would use, to run real-time facial emotion recognition over an extended period (intended to span about one week in deployment). The captured data forms the basis of a comprehensive report, including a graph of the individual's mood throughout the day, equipping healthcare professionals with valuable insights to assess the efficacy of treatment plans.
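The transfer-learning setup described above can be sketched in Keras as follows. This is a minimal illustration, not our exact training script: the sizes of the new head layers are assumptions (the write-up does not specify them), and `weights=None` is used here only to keep the sketch self-contained; in practice, transfer learning would start from `weights="imagenet"`.

```python
import tensorflow as tf

NUM_CLASSES = 7  # Angry, Disgust, Fear, Sad, Happy, Surprise, Neutral

def build_emotion_model(input_shape=(48, 48, 3)):
    # Load MobileNetV2 without its original classification layer
    # (include_top=False removes the head we are replacing).
    base = tf.keras.applications.MobileNetV2(
        input_shape=input_shape, include_top=False, weights=None)
    base.trainable = False  # freeze the backbone during initial training

    # Replace the removed head with new layers ending in a softmax
    # over the seven emotion classes.
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

With sparse_categorical_crossentropy, labels are integer class indices (0-6) rather than one-hot vectors, which matches how FER2013 labels are typically stored.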
Challenges we ran into
A significant challenge was the limited number of images depicting the emotion of disgust: only 436 instances. This scarcity made it difficult for the model to accurately recognize and distinguish that particular emotion. To address the imbalance, we implemented a data augmentation strategy, expanding the existing disgust images through techniques such as rotation, flipping, and scaling. Separately, we were unable to access the local webcam from Google Colab for testing, which prompted us to switch to Jupyter Notebook as an alternative platform. Despite these challenges, finding creative workarounds kept the model's accuracy improving and the project on track.
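The augmentation idea can be illustrated with a NumPy-only sketch that multiplies a minority class with label-preserving transforms. This is illustrative rather than our actual pipeline (which also included rotation and scaling, e.g. via Keras's `ImageDataGenerator` or `scipy.ndimage`); the function name and the specific jitter values are assumptions:

```python
import numpy as np

def augment_minority_class(images):
    """Expand a small class (e.g. the 436 'disgust' images) ~6x using
    simple label-preserving transforms: flips, brightness jitter, and
    small translations. Images are 2-D grayscale arrays in [0, 255]."""
    out = []
    for img in images:
        out.append(img)                                   # original
        out.append(np.fliplr(img))                        # horizontal flip
        out.append(np.clip(img * 1.2, 0, 255))            # brighter
        out.append(np.clip(img * 0.8, 0, 255))            # darker
        out.append(np.roll(img, 3, axis=1))               # small x-shift
        out.append(np.roll(np.fliplr(img), -3, axis=0))   # flip + y-shift
    return out
```

Each transform keeps the face recognizably "disgust", so the class label carries over, while the added variety helps the model generalize beyond the few original samples.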
Accomplishments that we're proud of
The accomplishment we take pride in is the remarkable accuracy achieved during training, reaching approximately 98%, while maintaining a robust testing accuracy of 92%. This achievement underscores the effectiveness of EmoSense Therapeutics as a powerful tool for healthcare professionals. The system's real-time insights into patient emotional responses, coupled with the generation of informative weekly reports, highlight its potential to significantly impact the rehabilitation process. EmoSense Therapeutics, driven by a sophisticated deep learning model, introduces a promising avenue for personalized care within rehabilitation centers, offering the potential to enhance overall patient outcomes.
What we learned
During the development of EmoSense Therapeutics, our technical journey primarily revolved around the training and testing of a deep learning model for facial recognition. Emphasizing the importance of a well-curated dataset, we navigated challenges associated with model training and fine-tuning. Experimentation with hyperparameters, model architectures, and training strategies played a pivotal role in achieving optimal performance. Additionally, we delved into the intricacies of real-time processing, aiming to strike a balance between model accuracy and computational efficiency. Though the implementation is currently at the training and testing stage, this experience provides foundational technical insights applicable to the potential future deployment of EmoSense Therapeutics in practical settings. Our focus on continuous improvement and adaptability reflects the dynamic nature of working with advanced technologies in the realm of emotion recognition and healthcare applications.
What's next for EmoSense Therapeutics
In the upcoming phases of EmoSense Therapeutics, we envision advancing emotion detection by not only identifying emotions but also gauging their intensity through the integration of biosignals such as ECG. This expansion into biosignal analysis will provide a more comprehensive understanding of emotional states, adding a vital dimension to the assessment. By incorporating quantitative measures of emotional intensity from physiological signals, we aim to build a nuanced picture of each patient's emotional well-being, empowering healthcare professionals to judge, from concrete biosignal metrics, whether immediate intervention is required.
Built With
- deep-learning
- google-colab
- jupyter-notebook
- machine-learning
- python
- transfer-learning