Inspiration
This project was inspired by one of our little cousins, who has ASD and really struggles to understand emotions. We wanted to create something that could make a real difference—not just for him, but for other kids like him. That’s how the idea for this project was born: an interactive, fun way for kids to learn about facial expressions and emotions while practicing at their own pace.
What it does
Our project is an interactive, educational website designed to help kids with ASD and alexithymia learn to recognize, understand, and practice facial expressions. It features:
Facial Recognition Practice: Kids can mimic pre-selected expressions (like happy, sad, angry, surprised) and receive real-time feedback using DeepFace AI to analyze how closely their expressions match the target images.
Learning Activities: Interactive exercises teach kids about different facial features and how they combine to form various emotions. These activities help kids build a deeper understanding of facial expressions in a simple, engaging way.
User-Friendly Design: The website is designed with kids in mind, offering a straightforward and accessible interface to ensure ease of use.
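Under the hood, the matching feedback can be sketched roughly like this (a simplified sketch using the open-source DeepFace library; the helper names and file path are ours, not necessarily the project's exact code):

```python
def match_score(emotion_scores: dict, target: str) -> float:
    """Confidence (0-100) that the detected face shows the target emotion."""
    return float(emotion_scores.get(target, 0.0))

def score_frame(image_path: str, target: str) -> float:
    # Deferred import: DeepFace downloads model weights on first use.
    from deepface import DeepFace
    # analyze() returns one result per detected face; each result carries
    # an 'emotion' map of per-emotion confidence percentages.
    faces = DeepFace.analyze(img_path=image_path, actions=["emotion"],
                             enforce_detection=False)
    return match_score(faces[0]["emotion"], target)
```

The practice loop would then compare something like `score_frame("webcam_frame.jpg", "happy")` against a threshold to decide whether the child's expression matches the target.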
How we built it
We used a mix of tools and technologies to bring our idea to life:
- Frontend: HTML, CSS, and JavaScript for the interface.
- Backend: Python with Flask to serve the app and handle requests from the browser.
- AI Integration: DeepFace to analyze facial expressions and provide real-time feedback.
Challenges we ran into
As a two-person team participating in our very first hackathon, we faced several challenges:
- Time Management: Dividing tasks and staying on track within the 36-hour timeframe was tough, especially with so much to learn.
- Learning DeepFace: Understanding and implementing DeepFace was time-intensive, which took away from the time we could have spent polishing the frontend.
- GitHub Collaboration: Working together on GitHub for the first time posed challenges like managing merge conflicts and syncing changes effectively.
- Balancing Features: We wanted to include a lot of learning activities and fun features, but the limited time made it difficult to incorporate everything we envisioned.
Accomplishments that we're proud of
We are incredibly proud that we managed to complete the main features of our project on time and ensure they work as expected. As a two-person team participating in our first-ever hackathon, this achievement feels especially rewarding.
We're also proud of how much we learned and grew in such a short time. From tackling a steep learning curve with DeepFace to figuring out how to collaborate effectively on GitHub, we pushed ourselves outside our comfort zones and came out with valuable new skills.
But most of all, we're proud of creating something that has the potential to make a real difference in the lives of kids with ASD and alexithymia. Knowing that our project could help individuals better understand and connect with emotions is the biggest accomplishment of all.
What we learned
Throughout this journey, we learned a lot, especially since this was our first hackathon. Some of the key takeaways include:
- DeepFace: One of the biggest learning curves was understanding and implementing DeepFace, an AI-powered facial recognition library, to analyze facial expressions.
- Collaborating on GitHub: We gained valuable experience managing repositories, resolving merge conflicts, and collaborating effectively on a shared codebase.
- Learning Activities Design: We explored creative ways to make learning about facial features engaging and accessible for kids.
What's next for EmotionQuest
We have big plans for EmotionQuest to make it even more impactful and engaging for kids.
Expanding Emotional Range: Currently, we focus on five core emotions, but we want to include a broader range of emotions, like fear, disgust, or confusion, to help kids develop a more nuanced understanding of facial expressions.
Gamifying the Learning Section: To keep kids engaged and prevent the content from feeling repetitive, we plan to introduce gamified elements to the learning activities. These could include earning badges, completing challenges, or progressing through levels as they master different skills.
Improved Feedback from Facial Recognition: We want the facial recognition feature to go beyond simple matching by offering more detailed feedback. For example, it could highlight specific areas (like raising eyebrows or widening eyes) to help kids refine how they express and interpret emotions.
Emotion Understanding Tools: We aim to incorporate features that help kids not only mimic emotions but also understand what they mean and how they relate to different social situations. This could involve brief, AI-generated explanations or scenarios to give context to the emotions being practiced.
Polishing and Expanding What We Have: Finally, we want to refine and fully integrate the features we've built so far, ensuring everything runs smoothly and provides a seamless experience for users.