Hello! I am Tanmay Bogguram Vasudev, a junior at Liberty High School in Frisco, TX.
Hi, I am Sathvik Yechuri, also a junior at Liberty High School. First of all, we want to thank the AIFA Foundations for hosting this hackathon and giving us the opportunity to learn and explore the foundations of AI. Our project for this hackathon is an app called EmMotivate!
In today's world, with the rise of social media platforms such as Facebook and Instagram, mental health is under attack more than ever. We welcome you to the future! Leveraging ChatGPT and our AI coding experience, we embarked on a project to develop a motivational bot. Here is how it works: the algorithm analyzes a provided image to recognize a human face and detect its facial expression. This is done with the help of the facial_emotion_recognition Python package, which gives us faster results. After this step, we use the ChatGPT API, specifically the GPT-3.5-Turbo model, to prompt for emotional feedback based on the recognized emotion. In other words, it is like a therapy session, but with a trained AI model that adjusts its feedback to your emotions.
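As a rough sketch of the feedback step (the function name and prompt wording here are our illustrative assumptions, not the exact project code), the detected emotion is turned into a chat request for the gpt-3.5-turbo model:

```python
# Sketch: build the chat messages sent to GPT-3.5-Turbo from a detected
# emotion. Prompt wording and the helper name are illustrative assumptions.

def build_messages(emotion: str) -> list[dict]:
    """Turn a detected emotion into a chat-completion message list."""
    return [
        {"role": "system",
         "content": "You are a supportive, therapist-like motivational bot."},
        {"role": "user",
         "content": f"I seem to be feeling {emotion}. "
                    "Please give me short, encouraging feedback."},
    ]

# The request itself would then go through the OpenAI client, for example:
#   openai.ChatCompletion.create(model="gpt-3.5-turbo",
#                                messages=build_messages("sad"))
```

Keeping the message construction separate from the API call made it easy to tweak the "therapist" persona without touching the rest of the pipeline.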
Looking at the steps specifically, we start by taking an image from the user and detecting the position of the human face. We then feed this cropped image into the facial_emotion_recognition Python model, which was trained on a global database of labeled images, to obtain a probability for each recognized emotion. We pick the most likely emotion and display it to the user. As an additional step, we feed this analysis into the gpt-3.5-turbo model, which interprets it and synthesizes a "therapist-like" response.
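The "pick the most likely emotion" step can be sketched as follows (the probability values below are made up for illustration; the real ones come from the facial_emotion_recognition model's output):

```python
# Sketch: select the most likely emotion from per-class probabilities.
# The example probabilities are invented for demonstration purposes.

def top_emotion(probabilities: dict[str, float]) -> str:
    """Return the emotion label with the highest probability."""
    return max(probabilities, key=probabilities.get)

example = {"happy": 0.62, "neutral": 0.21, "sad": 0.12, "angry": 0.05}
print(top_emotion(example))  # → happy
```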
Initially, our plan was to train our own AI model to recognize human emotions from patterns in key facial features. However, we realized how time-consuming this process would be. To obtain the emotion from an image, our algorithm had to parse the image multiple times to detect patterns among the facial landmarks. Additionally, the image resolution greatly affected accuracy and increased the algorithm's time complexity exponentially. Therefore, with the limited time we had, we decided to use a prebuilt model trained on a global set of images specifically for emotion detection.
This was our first time participating in an AI-related hackathon, and it has elevated us to new heights. With the clock counting down on this submission, we thank the AI For All Foundation for introducing us to this topic. This experience will stick with us for many years to come, remembered as a moment that opened us up to this new field. Thank you so much!