Inspiration

Our fascination with the intersection of human emotions, language, and art was the main driving force behind this project. We wanted to create a platform where users can express their emotions in text and have them visualized creatively by AI. The idea was to bring words to life and make emotions more tangible and expressive.

What it does

Our project is an interactive platform that takes user input in the form of text, deciphers the underlying emotion, and mirrors it back visually. It uses a state-of-the-art language model to understand the emotion behind the words, and an image model then renders the corresponding emotion in real time on a human face.

How we built it

We combined two AI models: GPT-3.5 Turbo for sentiment analysis and a convolutional neural network (CNN) for facial emotion recognition. GPT-3.5 Turbo, which excels at understanding human language, was fine-tuned to decipher emotions from text, while the CNN was trained on a large dataset of faces expressing various emotions. A middleware translator connects the two, converting the language model's output into a form the image model can consume.
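
As a rough illustration of the text side of the pipeline, here is a minimal sketch assuming the OpenAI Python SDK; the prompt wording, the emotion label set, and the classify_emotion helper are illustrative assumptions, not our exact code:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

EMOTIONS = ["happy", "sad", "angry", "fearful", "surprised", "disgusted", "neutral"]

def classify_emotion(text: str) -> str:
    """Ask GPT-3.5 Turbo for a single emotion label describing the text."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Classify the emotion of the user's text. "
                        "Reply with exactly one of: " + ", ".join(EMOTIONS) + "."},
            {"role": "user", "content": text},
        ],
        temperature=0,  # deterministic labels are easier for the middleware to consume
    )
    label = response.choices[0].message.content.strip().lower()
    return label if label in EMOTIONS else "neutral"  # fall back on unexpected output

print(classify_emotion("I can't believe we pulled this off!"))  # e.g. "happy"
```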

Challenges we ran into

We faced several challenges. Training the language model to accurately recognize specific emotions was tricky, and gathering a diverse set of facial images showing different emotions for the CNN was quite a task. Combining the language and image models was another hurdle, since the two produce very different outputs. Finally, deploying the combined system on a website for real-time user interaction was demanding in its own right.
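
To give a feel for the output-mismatch problem, the translation layer between the two models can be as simple as the sketch below; the class ordering (modeled on FER2013-style emotion datasets) and the alias table are illustrative assumptions:

```python
# The language model emits free-form strings; the CNN side works with fixed class
# indices. This class ordering is an assumption modeled on FER2013-style datasets.
CNN_CLASSES = ["angry", "disgusted", "fearful", "happy", "neutral", "sad", "surprised"]

# Normalize synonyms the language model might produce to the CNN's vocabulary.
ALIASES = {"joyful": "happy", "scared": "fearful", "mad": "angry", "shocked": "surprised"}

def to_cnn_class(label: str) -> int:
    """Translate a text-side emotion label into the image model's class index."""
    label = ALIASES.get(label.strip().lower(), label.strip().lower())
    if label not in CNN_CLASSES:
        label = "neutral"  # unknown labels degrade gracefully
    return CNN_CLASSES.index(label)

assert to_cnn_class("Joyful") == CNN_CLASSES.index("happy")
```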

Accomplishments that we're proud of

Successfully integrating the two models and creating a real-time interactive platform is an accomplishment we're really proud of. Overcoming the challenges in training, integration, and deployment has been an enriching journey. But the cherry on top is seeing our platform in action, bringing words to life.

What we learned

We learned a lot about fine-tuning language models, data augmentation techniques for image datasets, handling overfitting in neural networks, integrating disparate models, and deploying AI models in a real-time environment. Each challenge we encountered was a learning opportunity that will undoubtedly prove valuable in our future endeavors.
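
As an example of the augmentation techniques mentioned, here is a minimal sketch assuming torchvision; the particular transforms and the 48x48 input size are illustrative assumptions:

```python
import torchvision.transforms as T

# Augmentations that vary the image while preserving its emotion label.
train_transforms = T.Compose([
    T.Grayscale(num_output_channels=1),          # emotion CNNs often use grayscale faces
    T.RandomHorizontalFlip(p=0.5),               # faces are roughly symmetric
    T.RandomRotation(degrees=10),                # tolerate small head tilts
    T.RandomResizedCrop(48, scale=(0.9, 1.0)),   # 48x48 is a common input size
    T.ToTensor(),
])
```

Augmentations like these effectively enlarge an image dataset without collecting new labels, which also helps with the overfitting issues we mentioned.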

What's next for Untitled

The next step for our project is to refine our models for more accurate emotion recognition. We also plan to add features such as support for more languages and a broader range of emotions, and we're considering an app version of the platform for an even more accessible user experience.

Built With

GPT-3.5 Turbo, convolutional neural network (CNN)