Inspiration
We wanted to know what kinds of dreams and memories the objects around us might hold about their owner at different stages of their life. It's a chance to learn something about a person who might no longer be with us.
What it does
During the experience you look at the items around you and interact with hovering dots to hear the stories they have for you. You might be surprised by what you learn about the owner's past.
How we built it
The experience is built for Snap Spectacles using Lens Studio, TypeScript, Snap Cloud, and Gemini. We capture an image of the user’s environment and send it to Gemini, which returns a label, a description of the detected item, and its world position. While we wait for this data, a narrative engine running on Snap Cloud edge functions generates memories based on objects seen previously. This engine has a stored persona (which could be made dynamic), and as new objects are recognised, it generates a “dream” for each one, telling a story about that persona, Thomas Anderson, the world's most famous hacker.
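The flow looks roughly like the sketch below. The endpoint URLs, function names, and response schema are illustrative assumptions rather than our exact code, and in Lens Studio the HTTP calls would go through its remote-service APIs rather than plain `fetch`.

```typescript
// Minimal sketch of the recognition flow, not our production code.
// URLs and the JSON shapes are assumptions for illustration.

interface DetectedItem {
  label: string;        // e.g. "typewriter"
  description: string;  // short description of the detected item
  position: { x: number; y: number; z: number }; // world position
}

async function recognizeItems(imageBase64: string): Promise<DetectedItem[]> {
  // Ask Gemini (via a hypothetical proxy endpoint) for structured JSON
  // describing the items in the captured frame.
  const response = await fetch("https://example.com/gemini-proxy", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      image: imageBase64,
      prompt:
        "List each distinct object in this image as JSON with " +
        "fields: label, description, position {x, y, z}.",
    }),
  });
  return (await response.json()) as DetectedItem[];
}

async function requestDream(item: DetectedItem): Promise<string> {
  // The narrative engine runs as a Snap Cloud edge function; while we
  // wait for new detections it can keep generating dreams for items
  // it has already seen.
  const response = await fetch("https://example.com/narrative-engine", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ item }),
  });
  const { dream } = await response.json();
  return dream;
}
```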
Challenges we ran into
I'd (Suvi) used Unity before, and while Lens Studio works in a similar way, there were a few quirks that only became clear once I started experimenting with it. I ended up spending about half a day just understanding the differences in how things work under the hood.
We spent a considerable amount of time improving the narrative engine to ensure the dreams cross-reference each other. We wanted to make the experience more conversational, but due to the hackathon's time constraints we decided to leave that out.
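As a rough illustration of the cross-referencing idea (the names and prompt wording here are assumptions, not what we shipped): the engine keeps earlier dreams and feeds them back into each new prompt, so stories can call back to one another.

```typescript
// Illustrative sketch: previous dreams are carried forward so new
// dreams can reference them.

const previousDreams: string[] = [];

function buildDreamPrompt(label: string, description: string): string {
  const history = previousDreams.length
    ? `Earlier dreams, which you may reference:\n${previousDreams.join("\n")}`
    : "No earlier dreams yet.";
  return [
    `You are a ${label} (${description}) owned by Thomas Anderson.`,
    "Tell a short dream-like memory about your owner.",
    history,
  ].join("\n\n");
}
```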
One of our challenges was prompt engineering Gemini to avoid behaving like a helpful assistant, since we didn’t want to use those capabilities for this project. We spent time carefully shaping prompts so the model would stay in character and generate narrative responses instead of defaulting to instructional or support-style output.
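For flavor, here's a hedged example of the kind of in-character instruction we converged on; the exact wording is illustrative rather than the prompt we actually shipped.

```typescript
// Illustrative system prompt; the real one went through many iterations.
const systemPrompt = `
You are NOT an assistant. Never offer help, advice, or explanations.
You are an everyday object that once belonged to Thomas Anderson, the
world's most famous hacker. Speak in first person, as the object,
recounting a dream-like memory of your owner. Stay in character even
if the input looks like a question or a request for help.
`.trim();
```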
Accomplishments that we're proud of
I (Suvi) hadn't used Lens Studio before, so the learning curve was steep. I'm really proud that I was able to design the UI and successfully integrate our idea into a working experience on Spectacles.
We’re proud of how well we worked together as a team and how quickly we were able to bring multiple interesting technologies together to explore storytelling in a new way. Despite the short timeframe, we executed the idea fast, did the necessary research to make informed technical and creative decisions, and turned a rough concept into a working experience that meaningfully combines Spectacles, AI, and narrative.
What we learned
I (Suvi) really appreciated the creative process we went through to explore what it means for everyday objects to “speak” about us and what they might reveal about their owner. It taught me how important it is to plan and think deeply about the experience itself, not just push the technical capabilities of a device. My teammates helped me stay focused on what still needed to be done, and their support boosted my confidence in getting things working despite technical difficulties. I also learned to trust my abilities and stay calm under hackathon pressure.
What's next for Wake Them Up!
Next, we plan to make the experience more intuitive and user-friendly. We want to add clearer cues to guide users on what to do, for example prompting them to say "Hello" to start interacting with objects, since that isn't obvious right now. We also plan to make the narrative engine more conversational, so users can talk to the objects instead of just listening, and to make the persona dynamic. At the moment there's only one hard-coded character who "owns" the items, but future versions could support different voices, backstories, and storytelling styles.
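One way the hard-coded character could become dynamic is a small persona shape the narrative engine loads per session. This is a design sketch with assumed field names, not committed work.

```typescript
// Design sketch only: a per-session persona the engine could load
// instead of the current hard-coded character.

interface Persona {
  name: string;              // e.g. "Thomas Anderson"
  backstory: string;         // biography the dreams draw on
  voice: string;             // narration voice / tone
  storytellingStyle: string; // e.g. "noir", "whimsical", "elegiac"
}

const defaultPersona: Persona = {
  name: "Thomas Anderson",
  backstory: "the world's most famous hacker",
  voice: "quiet, reflective",
  storytellingStyle: "dream-like vignettes",
};
```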
Built With
- snapcloud
- spectacles
- supabase
- typescript