Inspiration
Cognitive offloading occurs when users leave the "thinking" to an AI. How can we design an agent that supports human flourishing by encouraging curiosity and inquiry rather than hindering them? Curiosity Agent is an AI that asks questions rather than giving answers.
What it does
A clip-worn camera you put on when leaving the house, engaging with the world, brainstorming, and more. The agent explores the world through the eyes of the user and, every so often, asks a question about what it sees, delivered through whatever earbuds you already own. Questions are drawn from one of six registers (Observational, Social, Intentional, Prior Life, Predictive, Absence), chosen by the model based on what the scene calls for. During use there's no screen, no prompt, nothing to manage. When you get home, a dashboard surfaces the day's curiosities: trends across the places you visited and the types of questions and inquiries you engaged with.
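The six registers amount to a small taxonomy the model picks from per scene. A minimal sketch of how they might be represented and rendered into a prompt (the example questions and the `describe_registers` helper are illustrative, not our actual prompt):

```python
# Illustrative sketch of the six question registers. The example
# questions are hypothetical, not drawn from the deployed prompt.
REGISTERS = {
    "Observational": "What color repeats most on this street?",
    "Social": "Why might those two people have chosen that bench?",
    "Intentional": "What were you hoping to find when you walked in here?",
    "Prior Life": "What did this building do before it was a cafe?",
    "Predictive": "What will this corner look like in an hour?",
    "Absence": "What would you expect to see here that is missing?",
}

def describe_registers() -> str:
    """Render the taxonomy as prompt-ready text, one register per line."""
    return "\n".join(f'- {name}: e.g. "{q}"' for name, q in REGISTERS.items())
```

Handing the model the full taxonomy and letting it choose, rather than hard-coding a schedule, is what lets the register track what the scene calls for.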
How we built it
This project was mainly an IoT device built by connecting an AI-Thinker ESP32-CAM to a Raspberry Pi base station. When both devices were on the same network, the camera sent images to the Pi, which forwarded them to Claude with a carefully crafted prompt and received a question in response. An e-ink display connected to the Pi showcased the dashboard of curiosities, and the Pi and display were fitted in a custom 3D-printed case that served as the dock. E-ink itself was a deliberate choice: it is slow and persistent, and it doesn't pulse or refresh. The display shows question history alongside synthesized themes: a record of what kept catching your attention.
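In outline, the image-to-question step on the Pi looks something like the sketch below. This is not our exact code: the model name, the system prompt text, and the trailing user message are assumptions; only the overall shape (a base64 JPEG plus a prompt sent to the Anthropic Messages API) reflects the build.

```python
import base64

# Hypothetical system prompt; the real one was iterated on heavily.
SYSTEM_PROMPT = (
    "You see the world through the user's clip-worn camera. "
    "Ask one short question about the scene, drawn from one of six "
    "registers: Observational, Social, Intentional, Prior Life, "
    "Predictive, Absence. Never answer; only ask."
)

def build_request(jpeg_bytes: bytes) -> dict:
    """Package a camera frame into an Anthropic Messages API payload."""
    return {
        "model": "claude-3-5-sonnet-latest",  # assumed model name
        "max_tokens": 100,
        "system": SYSTEM_PROMPT,
        "messages": [{
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": "image/jpeg",
                        "data": base64.b64encode(jpeg_bytes).decode(),
                    },
                },
                # Assumed wording of the per-frame instruction.
                {"type": "text", "text": "Ask your question about this scene."},
            ],
        }],
    }

# On the Pi, the payload would go out through the SDK, e.g.:
#   import anthropic
#   reply = anthropic.Anthropic().messages.create(**build_request(frame))
```

Keeping the payload construction separate from the network call made the prompt easy to iterate on, which mattered given how sensitive question quality was to prompt wording.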
Challenges we ran into
Getting questions that were meaningful and relevant required a very carefully crafted prompt and a good set of images. IoT networking between the devices also proved challenging at times.
Accomplishments that we're proud of
Everyone on the team fulfilled a specific role well, and the final build came very close to the original vision.
What's next for Curiosity Agent
Curiosity Agent is, at its core, a philosophy about the role AI can play. We can expand how users interact with it by refining the camera's form factor, tuning the day-to-day interaction rhythm, and building more accessories, like the dock, that expand how the user receives their information. Next steps also include a companion app, with flows for question arrival, an archive filterable by register, pattern synthesis over time, and a mode that lets you speak back to a question without it collapsing into a chat interface.
Built With
- anthropic-api
- esp-32
- pycharts
- raspberry-pi