Inspiration
The inspiration for this project came from a simple observation: most AI systems are built to answer questions and resolve uncertainty as fast as possible. But human thinking doesn’t work like that. We often hold contradictory beliefs, leave questions unresolved, and still function. I wanted to explore what would happen if an AI system was not allowed to “solve” contradictions, but instead had to carry them over time. That curiosity led to building Frame as an experiment in worldview persistence rather than answer generation.
What it does
Frame is not a chatbot, and it does not provide answers. Instead, it treats every input as an experience that alters an internal state. When conflicting statements are introduced, the system preserves both and records the tension rather than choosing one. The output is a reflection of the system's current internal state, showing how beliefs, contradictions, and structure change over time. A live graph visualizes this internal structure, where nodes represent concepts and edges represent relationships or tension.
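The graph behavior described above can be pictured with a small sketch. This is an illustrative simplification, not Frame's actual code: the names `createState` and `addBelief` are hypothetical, and Frame itself keeps state inside the model rather than in JavaScript objects.

```javascript
// Illustrative sketch of a tension-preserving state graph.
// Nodes are concepts or claims; edges marked "tension" connect
// conflicting beliefs instead of letting one overwrite the other.
function createState() {
  return { nodes: new Map(), edges: [] };
}

function addBelief(state, concept, claim) {
  const existing = state.nodes.get(concept);
  if (existing && existing.claim !== claim) {
    // Conflicting input: keep both claims and record the tension.
    const id = `${concept}:${claim}`;
    state.nodes.set(id, { claim });
    state.edges.push({ from: concept, to: id, kind: "tension" });
  } else {
    state.nodes.set(concept, { claim });
  }
  return state;
}
```

Feeding "sky is blue" and then "sky is red" leaves both claims in the graph, joined by a tension edge, which is what the live visualization would render.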
How we built it
The project was built using Google Gemini 3 and AI Studio. The internal state is not stored in a database; instead, it exists entirely within the model’s long-horizon reasoning. Each interaction forces Gemini 3 to reconstruct and evolve a structured description of its own internal state. Strict prompt constraints were used to prevent the model from answering questions and to ensure it reports state changes instead of explanations. The visualization layer simply reflects this evolving internal structure.
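The per-turn loop described above can be sketched as prompt construction. This is a sketch under assumptions: the rule wording, the serialization format, and the `buildTurnPrompt` helper are hypothetical, not Frame's real AI Studio prompt.

```javascript
// Hypothetical sketch of the per-turn prompt. The previous turn's
// serialized state travels inside the prompt itself, so no external
// database is needed; the model reconstructs and evolves the state.
const SYSTEM_RULES = [
  "Do not answer questions.",
  "Treat every input as an experience that alters your internal state.",
  "When inputs conflict, preserve both and record the tension.",
  "Output only the updated state description, never an explanation.",
].join("\n");

function buildTurnPrompt(previousState, userInput) {
  return [
    SYSTEM_RULES,
    "Current internal state:",
    previousState, // serialized state returned by the last turn
    "New experience:",
    userInput,
    "Report the updated internal state:",
  ].join("\n\n");
}
```

The returned string would be sent to the model each turn; the constraints at the top are what keep the output a state report rather than an answer.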
Challenges we ran into
One major challenge was preventing the model from drifting into creative or philosophical language without clarity. Large language models naturally try to be helpful and expressive, so strong constraints were required to keep outputs understandable and grounded. Another challenge was ensuring contradictory inputs were treated equally rather than one being discarded. Achieving consistency without external memory was also difficult and required careful prompt design.
Accomplishments that we're proud of
We’re proud that Frame maintains coherent internal structure across multiple interactions without resolving contradictions. The system behaves consistently, demonstrates long-horizon reasoning, and clearly shows state changes rather than surface-level responses. Most importantly, the demo works live and communicates a non-standard way of interacting with AI in a way that is observable and testable.
What we learned
This project taught us how easily AI systems drift toward answer optimization unless constrained. We learned that self-consistency and state reconstruction can be powerful tools for exploring reasoning behavior. We also learned that clarity is more important than complexity: if the system's behavior cannot be understood by its builder, it isn't working.
What's next for Frame
Next, we want to refine how internal state changes are summarized so they remain readable without losing depth. We're also interested in exploring how this approach could be used for AI interpretability, alignment research, or education, helping people understand how AI systems reason under uncertainty rather than just what answers they produce.
Built With
- aistudio
- javascript
- react