Inspiration

Have you ever woken from the most vivid dream, where the events play out in your head like a film and you're the star, only for it all to disappear a minute later like smoke in the wind? Dreaming is a universal human experience that has been a subject of elusive fascination and wonderment since the advent of human communication. From the mystic to the scientific, it is only natural that we ask the basic question: "What do my dreams mean?"

Research from the McGovern Institute at MIT theorizes that dreaming is a byproduct of the biological process that reorganizes memories in our brain. While the researchers state that dreams "aren't instilled with meaning, symbolism, and wisdom in the way we've always imagined," dreams involve so much emotion and sensory experience that a look into our dreams may also be a look into ourselves.

To bridge the gap between the fleeting nature of dreams and our desire to preserve and understand them, many people turn to dream journals. Regularly recording dreams can significantly improve recall, helping individuals identify recurring themes, emotions, and imagery. However, the process of documenting dreams, especially right after waking, can be tedious. Some try using voice memos as a quicker method, but reviewing and transcribing these recordings often becomes a chore in itself. This inspired us to build WonderLandAI.

What it does

WonderLandAI is a mobile application that streamlines this process: users simply record themselves describing their dreams upon waking, and the app automatically transcribes the audio into text and generates a dreamy, watercolor-styled comic strip based on the description.

How we built it

We used a multi-layered LLM API approach built on our beloved sponsor Google's Gemini API. First, audio is recorded on the user's device, and the resulting mp3 file is passed to Gemini for transcription. Next, the transcript is parsed into a detailed summary along with an emotion and sentiment analysis of the dream. That description is then chunked into six dream chapters, and each chapter is passed to the text-to-image API with a prompt template that produces watercolor-style comic panels of the dream. The backend was built in Node.js; as first-time hackers, we used Cursor to help us set it up. Our designer created a UI that mirrored the dreamlike quality of the generated images, using flowing visuals and a gentle color palette to evoke the feeling of being between sleep and memory. Meanwhile, our front-end developer bridged the gap between the user interface and the back-end logic, ensuring the seamless communication that brought the entire experience to life.
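The chapter-chunking step of the pipeline can be sketched roughly as follows. This is a minimal illustration, not our exact code: the function names, the sentence-splitting heuristic, and the prompt wording are all hypothetical stand-ins for the summary-to-chapters and prompt-template logic described above.

```javascript
// Hypothetical sketch: split a dream summary into 6 roughly equal
// "chapters" at sentence boundaries, then wrap each chapter in a
// watercolor-comic prompt template for the text-to-image API.

function chunkIntoChapters(summary, chapterCount = 6) {
  // Naive sentence split: runs of text ending in ., !, or ?
  const sentences = summary.match(/[^.!?]+[.!?]+/g) || [summary];
  const perChapter = Math.ceil(sentences.length / chapterCount);
  const chapters = [];
  for (let i = 0; i < sentences.length; i += perChapter) {
    chapters.push(sentences.slice(i, i + perChapter).join(' ').trim());
  }
  return chapters;
}

function toImagePrompt(chapter, index) {
  // Illustrative prompt template; the real wording would be tuned.
  return `Dreamy watercolor comic panel ${index + 1}: ${chapter}. ` +
    'Soft pastel palette, flowing brushstrokes, no text.';
}

const summary =
  'I was flying over a glass city. The buildings hummed. ' +
  'A river of stars ran below me. I landed on a cloud. ' +
  'Someone called my name. Then everything dissolved into light.';

const prompts = chunkIntoChapters(summary).map(toImagePrompt);
```

Each string in `prompts` would then be sent as one image-generation request, producing the six panels of the comic strip.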

Challenges we ran into

We are new to hackathons, so we faced various challenges. The biggest was converting the web design into a mobile app frontend. We also had trouble connecting the backend to the frontend, and we struggled with API quota, since we only had $5 worth of Gemini API credits.

Accomplishments that we're proud of

Three of our four team members are first-time hackers, and no one on our team had prior experience in mobile development. From setting up the development environment to integrating speech-to-text and image-generation models through Gemini, the learning curve was steep but ultimately rewarding: we troubleshot, pivoted, pivoted again, and found creative ways to work through a development space we had little knowledge of.

What we learned

Throughout the hackathon, one of the biggest learning curves was mobile app development. None of us had prior experience building a mobile app from scratch, so we had to quickly familiarize ourselves with mobile frameworks, UI/UX design principles, and the intricacies of debugging across front-end and back-end codebases. We experimented with different toolkits, read through documentation, and learned how to design user-friendly interfaces that felt intuitive and engaging. This hands-on crash course not only taught us how to bring an idea to life on a mobile platform, but also gave us a deep appreciation for the design process.

We also discovered the power of collaboration tools like Cursor AI, which helped us streamline our codebase, troubleshoot bugs more efficiently, and even learn from AI-generated suggestions that sped up our development. Beyond the technical skills, perhaps the most important thing we learned was how to stay motivated and support one another. When things didn't work, and they often didn't, we reminded each other of our goals for this project and why we were here, and realigned over cookies and energy drinks. In moments of frustration or burnout, it was our mutual encouragement, shared vision, and late-night breakthroughs that kept us moving. This experience wasn't just about building an app; it was about learning how to build as a team.

What's next for WonderLandAI

Looking ahead, our next goal is to build out user account functionality so the public can securely save and revisit their dream logs within the app. This will allow users to build a personal dream archive and track patterns or changes over time. One feature we were especially excited about, but didn’t have time to implement, was a conversational AI component. We envision that a future iteration of the app would include an interactive chat that would generate thoughtful, reflective prompts based on a user's dream content. This feature would encourage deeper introspection and help users explore the emotional layers of their dreams in a more guided and meaningful way.
