Inspiration

Why do we keep thinking the same thoughts?

Not because we lack information. Not because the answer is hard. But because something we can't articulate — an avoidance, a tacit assumption, a somatic pull — keeps dragging us back to the same place.

In cognitive science, this is called rumination — the mind's tendency to circle the same territory endlessly. We all do it. "Should I quit my job?" "Was that the right decision?" "Why does this keep happening to me?" The thoughts feel different each time, but trace them upward through layers of abstraction, and they converge on the same root. Every time.

The terrifying part? You can't see the walls. The structure of your circular thinking is invisible to you — by definition. If you could see the loop, you'd step out of it.

DoDoCircuit was born from one question: Can we make the invisible walls visible?

Not to stop circular thinking — it isn't a bug. But to externalize it. To watch it unfold in real time. To see the moment two completely different starting points converge on the same root concept, and realize: that convergence is not coincidence. It's a fingerprint of how you think.

What It Does

DoDoCircuit generates multiple parallel chains of philosophical thought in real time using Amazon Nova, visualizing them as an evolving graph on a Canvas.

Agent Mode — Autonomous Exploration

DoDoCircuit's default mode is not a timer-driven loop — it's an autonomous agent. Every few seconds, the agent receives the full graph state (all chains, convergence history, working memory, past discoveries) and decides what to do next using tool_use:

  • deepen_chain(chainId, reason): explore a specific chain deeper, with a stated rationale
  • check_convergence(chainA, chainB, hypothesis): test whether two chains share a root concept
  • reflect(observation, hypothesis): pause, record an insight, and continue reasoning (multi-turn)
  • conclude(summary, nextPriority): end the current step and declare what to prioritize next

The agent maintains a working memory buffer of its last 10 insights, carries bias profiles across sessions, and adapts its strategy based on accumulated personality state. It can reflect internally for up to 3 iterations before committing to an action — genuine multi-turn reasoning, not single-shot generation.
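The working-memory buffer can be sketched in a few lines. This is an illustrative sketch only; the Insight shape and class name are assumptions, not DoDoCircuit's actual types:

```typescript
// Sketch of a bounded working-memory buffer (names are illustrative).
interface Insight {
  step: number;
  observation: string;
  hypothesis?: string;
}

class WorkingMemory {
  private buffer: Insight[] = [];
  constructor(private capacity = 10) {}

  // Record a new insight, evicting the oldest once capacity is exceeded.
  push(insight: Insight): void {
    this.buffer.push(insight);
    if (this.buffer.length > this.capacity) this.buffer.shift();
  }

  // Serialize recent insights for inclusion in the next agent prompt.
  toPromptContext(): string {
    return this.buffer
      .map(i => `[step ${i.step}] ${i.observation}${i.hypothesis ? ` / ${i.hypothesis}` : ""}`)
      .join("\n");
  }

  get size(): number {
    return this.buffer.length;
  }
}
```

On each cycle, the serialized buffer would be prepended to the graph-state prompt, so the model sees its own recent reasoning before choosing a tool.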

Users can toggle between Agent mode (autonomous) and Manual mode (round-robin) at any time.

Solo Mode — One Mind, Many Chains

                    ┌─ Origin ─┐
                   /             \
              Principles      Principles
              /                     \
          Values                   Values
          /                             \
      Structure                     Structure
      /                                     \
  Context                               Context
  /                                             \
Concrete  ───── "Why do cities grow?" ─────  Concrete
            ── "Why do families argue?"
            ── "Why do empires fall?"

Multiple chains explore different starting points simultaneously:

  • Altitude layers map thoughts from concrete (edge) to abstract (center) across 6 levels
  • Convergence detection reveals when separate chains arrive at the same root concept — the "aha" moment when unrelated questions share a hidden connection
  • Fire mechanism — when concepts collide across chains, side-nodes activate, spreading influence like sparks
  • Bias profiling tracks your interaction patterns — what you click, what you ignore — building a personality that shapes future thinking
  • Loop narration generates poetic summaries of detected circular paths, with Amazon Polly TTS playback
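In the real system, convergence is tested by the model via check_convergence; as a structural sketch (with assumed type and function names), the check amounts to comparing the root concepts of two chains at high altitude:

```typescript
// Simplified sketch of convergence detection between two chains.
// The actual system delegates this judgment to the model; this only
// shows the shape of the comparison.
interface ThoughtNode {
  concept: string;
  altitude: number; // 0 = concrete (edge), 5 = most abstract (center)
}

// Two chains converge when their high-altitude nodes share a root concept.
function detectConvergence(a: ThoughtNode[], b: ThoughtNode[]): string | null {
  const rootsA = new Set(
    a.filter(n => n.altitude >= 4).map(n => n.concept.toLowerCase())
  );
  for (const n of b) {
    if (n.altitude >= 4 && rootsA.has(n.concept.toLowerCase())) return n.concept;
  }
  return null;
}
```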

Dialogue Mode — Two Loops Collide

Two AI agents, each carrying a distinct circular pattern, engage in structured dialogue. Three interference patterns emerge:

  • Resonance: both loops intersect at a shared concept, revealing a hidden agreement neither agent intended
  • Divergence: one agent's words crack the other's circular pattern, the moment an external perspective breaks a loop
  • Double Helix: both agents orbit parallel loops without merging, two minds thinking in circles, side by side, unable to connect

Seven preset dialogue themes explore tensions like "Quality vs. Cost," "Freedom vs. Safety," and "Privacy vs. Transparency," each grounded in culturally distinct agent configurations.

Personality System — Your Cultural Thinking Profile

A 3-step onboarding flow creates your cultural thinking fingerprint:

  1. Country selection — loads Hofstede cultural dimension scores (12 countries)
  2. Vantage point — where you stand (factory, university, hospital, farm, office, market, home, school)
  3. Headline selection — pick 2–3 AI-generated headlines to seed initial concepts and negative space

The resulting PersonalityState includes culturally predisposed avoidance domains, concept seeds, negative space tracking, and somatic markers that evolve over time.
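A hedged sketch of how Hofstede scores might seed the avoidance domains described above. The field names, thresholds, and mappings here are placeholders for illustration, not the project's actual types or validated values:

```typescript
// Hypothetical shapes for the onboarding output (illustrative only).
interface HofstedeScores {
  powerDistance: number;
  individualism: number;
  uncertaintyAvoidance: number;
}

interface PersonalityState {
  country: string;
  vantagePoint: string;
  conceptSeeds: string[];
  avoidanceDomains: string[];
}

// Derive culturally predisposed avoidance hints from Hofstede dimensions.
// Thresholds are placeholders, not validated values.
function seedAvoidance(scores: HofstedeScores): string[] {
  const domains: string[] = [];
  if (scores.powerDistance > 70) domains.push("questioning authority");
  if (scores.uncertaintyAvoidance > 70) domains.push("open-ended ambiguity");
  if (scores.individualism < 30) domains.push("individual dissent");
  return domains;
}

// Assemble the initial PersonalityState from the 3-step onboarding inputs.
function initPersonality(
  country: string,
  vantagePoint: string,
  scores: HofstedeScores,
  conceptSeeds: string[]
): PersonalityState {
  return { country, vantagePoint, conceptSeeds, avoidanceDomains: seedAvoidance(scores) };
}
```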

How We Built It

Architecture

┌─────────────────────────────────────────────┐
│  React + TypeScript Frontend (Vite)         │
│  ├── Canvas API — real-time graph rendering  │
│  ├── Agent Loop — autonomous decision cycle  │
│  │   ├── Working Memory (10-entry buffer)    │
│  │   ├── Bias Profile (decay + EMA)          │
│  │   └── Agent/Manual mode toggle            │
│  ├── PersonalitySetup — onboarding flow      │
│  ├── Dialogue UI — agent conversation view   │
│  └── LoopPlayer — TTS narration playback     │
├─────────────────────────────────────────────┤
│  Supabase Edge Functions                    │
│  ├── nova-generate — AI engine              │
│  │   (Amazon Nova 2 Lite via Bedrock        │
│  │    Converse w/ tool_use)                  │
│  │   (Claude 4.5 Haiku fallback)           │
│  │   Modes: init / regress / agent_step /    │
│  │          headlines / loop_narration        │
│  ├── dialogue-generate — dialogue engine     │
│  └── loop-tts — Amazon Polly TTS            │
├─────────────────────────────────────────────┤
│  Four-Layer Data Model                      │
│  ├── Flow — volatile in-memory graph         │
│  ├── Tracks — localStorage statistics        │
│  │   ├── Bias profile (concept weights       │
│  │   │   w/ 30-min half-life decay)          │
│  │   ├── PersonalityState (4 layers)         │
│  │   └── Cross-session discovery recall      │
│  ├── Patterns — Supabase (convergences)      │
│  └── Fossils — user-saved loops              │
└─────────────────────────────────────────────┘
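The bias profile's 30-minute half-life decay reduces to a pure function. Function names and the EMA blending factor below are assumptions for illustration, not the project's actual code:

```typescript
// Half-life decay for bias-profile concept weights (30-minute half-life,
// as in the data model above). Names and alpha are illustrative.
const HALF_LIFE_MS = 30 * 60 * 1000;

// Decay a stored weight toward zero based on elapsed time since last update.
function decayWeight(weight: number, elapsedMs: number): number {
  return weight * Math.pow(0.5, elapsedMs / HALF_LIFE_MS);
}

// Blend a new interaction signal into the decayed weight (simple EMA).
function updateWeight(prev: number, elapsedMs: number, signal: number, alpha = 0.3): number {
  const decayed = decayWeight(prev, elapsedMs);
  return decayed + alpha * (signal - decayed);
}
```

Because decay is a function of elapsed time rather than a background timer, weights stored in localStorage stay correct across sessions: they are simply decayed on read.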

Amazon Nova Integration

  • nova-generate (agent_step): Amazon Nova 2 Lite (Bedrock Converse w/ tool_use), with Claude 4.5 Haiku (tool_use) as fallback; autonomous exploration decisions
  • nova-generate (init/regress): Amazon Nova 2 Lite (Bedrock Converse), with Claude 4.5 Haiku as fallback; thought generation
  • dialogue-generate: Amazon Nova 2 Lite (Bedrock Converse), with Claude 4.5 Haiku as fallback; dialogue utterance generation
  • loop-tts: Amazon Polly; text-to-speech narration

Amazon Nova 2 Lite generates each thought step as structured JSON containing the philosophical question, paired concepts, and abstraction altitude. In agent mode, the system uses Bedrock Converse with tool_use to enable multi-turn autonomous reasoning. The system sends approximately 25 requests per minute during active exploration, with rate limiting (50 free requests/session, 5-second cooldown).
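The rate limiting described above (50 free requests per session, 5-second cooldown) reduces to a small guard. The class and field names are assumptions, not the project's actual code:

```typescript
// Sketch of per-session rate limiting: a request budget plus a cooldown.
// Names are illustrative.
class SessionRateLimiter {
  private used = 0;
  private lastCallMs = -Infinity;
  constructor(
    private maxRequests = 50,
    private cooldownMs = 5000
  ) {}

  // Returns true if a request may fire now, and records it if so.
  tryAcquire(nowMs: number): boolean {
    if (this.used >= this.maxRequests) return false;
    if (nowMs - this.lastCallMs < this.cooldownMs) return false;
    this.used++;
    this.lastCallMs = nowMs;
    return true;
  }
}
```

With a BYOK key, the same guard could be constructed with a larger budget or a shorter cooldown, which is how the speed controls (1x/2x/4x/8x) would plug in.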

Development

  • 14 days of development, 117 commits, ~4,700 lines of TypeScript
  • React + TypeScript with Canvas API for real-time rendering at 500+ nodes
  • Supabase Edge Functions (Deno) for serverless AI pipeline
  • Claude Code as primary development partner

The Honest Realization

Here is where we need to be candid.

Phase 1 called Amazon Nova Pro as a stateless function every few seconds. The surrounding context created an illusion of continuity, but the AI itself never decided what to do next. We admitted this openly:

"Prompt play can generate circles. But a stateless function cannot ruminate."

That admission drove us to build what we now have: an autonomous agent architecture.

What changed:

The system now maintains state. Working memory accumulates across steps. Bias profiles persist across sessions. The agent receives graph state, past discoveries, and personality data — and chooses its own next action through tool_use. It can reflect internally before acting. It prioritizes convergence when chains reach sufficient depth. It avoids re-discovering loops it has already found.

What we can now claim:

  • Autonomous judgment (the system decides what to do next): ❌ Phase 1 was a timer-driven round-robin → ✅ the agent now selects from 4 tools with stated rationale
  • State accumulation (past decisions shape future behavior): ❌ stateless calls → ✅ working memory + bias profile + past discoveries
  • Tool use (the system acts on the world, not just generates text): ❌ fill-in-the-blank → ✅ Bedrock Converse tool_use (deepen / converge / reflect / conclude)
  • Multi-turn reasoning (the system thinks before acting): ❌ single-shot → ✅ up to 3 reflect loops per step

What we still can't claim:

The agent doesn't truly avoid topics — it receives avoidance hints in its prompt, but nothing prevents it from going there if it chooses to. The personality shapes exploration, but doesn't constrain it the way human psychology constrains human thought. The invisible walls are still semi-transparent.

We've moved from "a beautiful toy" to "a genuine agent with personality-shaped exploration." But genuine rumination — where loops emerge from architectural inability rather than prompt guidance — remains the horizon.

Challenges We Ran Into

The Illusion of Depth

It is disturbingly easy to make an LLM output look deep. The graph is beautiful. The convergences feel meaningful. It took deliberate critical analysis to see through our own system's performance. We almost fooled ourselves.

Defining "Agent" — Then Building One

The industry uses "agent" loosely. We forced ourselves to define it precisely — autonomous judgment, tool use, observe-adapt loops — and initially admitted our system met none of these criteria. That honesty clarified everything: it became our roadmap. We then implemented a Bedrock Converse tool_use architecture where the AI genuinely selects actions, reflects on results, and adapts. The system now meets 3 of our 4 criteria. The fourth — genuine avoidance emerging from architecture rather than instruction — remains an open problem.

Cultural Modeling Complexity

Hofstede dimensions provide a starting framework, but translating cultural tendencies into "invisible walls" that an AI naturally bumps against — without explicit instruction to avoid — is an open research problem. Culture shapes thought, but encoding that into a system is harder than encoding language.

Language Bleeding

I think and communicate in Japanese, but all code, documentation, and user-facing text needed to be in English. The AI would occasionally mix Japanese into source files. This required constant vigilance and explicit steering.

Rate Limiting vs. User Experience

Each thought step costs an API call. At intervals of a few seconds with multiple chains running simultaneously, we hit rate limits quickly. Balancing exploration speed with API costs while maintaining a smooth UX required careful engineering — cooldowns, BYOK (bring your own key) options, and speed controls (1x/2x/4x/8x).

Accomplishments We're Proud Of

Development

  • 14 days, ~4,700 lines of TypeScript, 117 commits
  • Real-time Canvas rendering handling 500+ nodes smoothly
  • Two fully functional modes (Solo + Dialogue) with distinct interaction paradigms

Agent Architecture

  • Multi-turn autonomous agent with 4-tool decision space (deepen / converge / reflect / conclude)
  • Working memory buffer persisting across exploration steps
  • Cross-session discovery recall — the system remembers what loops it found before
  • Graceful fallback from Amazon Nova 2 Lite to Claude 4.5 Haiku when Bedrock is unavailable
  • Agent/Manual mode toggle — users can switch between autonomous and guided exploration

Amazon Nova Integration

  • Nova 2 Lite as primary AI backbone with Bedrock Converse tool_use for agent reasoning
  • Amazon Polly TTS for loop narration — hearing your circular thoughts read aloud
  • Structured JSON generation pipeline optimized for Nova's response format

Intellectual Honesty

  • Publicly admitted our Phase 1 was "prompt play, not genuine rumination"
  • Drew the distinction between generating circles and getting stuck in them
  • Let that honesty drive the architecture of the agent system

The Achievement

  • A system that makes the invisible structure of circular thinking visible
  • Proved that AI can externalize cognitive patterns you can't see on your own

What We Learned

1. Beautiful Output ≠ Genuine Intelligence

The most important lesson: a beautiful, philosophical-looking output can emerge from a stateless API call on a timer. The illusion is perfect. Breaking through that illusion — and admitting what you've actually built — is the hardest part of working with AI.

2. Cultural Context Changes Everything

Country selection isn't decoration. Hofstede's cultural dimensions (Power Distance, Individualism, Masculinity, Uncertainty Avoidance, Long-Term Orientation, Indulgence) create meaningfully different thinking patterns. A user from Japan and a user from the US, given the same starting point, generate genuinely different convergences.

3. The Gap Between "Shaped by" and "Constrained by"

Building the agent taught us the difference between personality that shapes exploration and personality that constrains it. Our agent receives avoidance hints and tends to follow them — but it can ignore them. Human circular thinking isn't like that. You don't receive a hint to avoid a topic; you literally cannot see it. Bridging this gap — from soft guidance to hard architectural blindness — is the core challenge ahead.

4. The Architecture of Loops

Circular thinking requires personality — biases, blind spots, assumptions, attractions. Without personality, there is no avoidance. Without avoidance, there is no invisible wall. Without an invisible wall, loops are accidental, not structural. This insight changed our entire roadmap.

Personal Reflection: Seeing Your Own Walls

Building a system that visualizes invisible walls made me see my own. The project itself became circular — every design decision led back to the same question: "What makes thinking genuinely circular?" The answer was always: something you can't articulate.

AI doesn't just assist — it reveals. Through this hackathon, I discovered that building with AI is not just about what the system produces, but about what the process of building teaches you about your own thinking.

What's next for DoDoCircuit

  • Phase 2 (Agent) ✅ Implemented: autonomous agent with tool_use, working memory, cross-session recall, bias profiles
  • Phase 2.5 (Personality) 🔧 Partial: PersonalityState with avoidance domains implemented; true architectural avoidance still prompt-guided
  • Phase 3 🔜 Next: "Unspoken Engine", external detection of blind spots, visualized as fog on the canvas
  • Phase 4 🔜 Planned: hard avoidance, where loops emerge from architectural limitations, not prompt hints
  • Phase 5 🔜 Planned: cross-session memory evolution, where the system watches your loops change over time

The fundamental bet: if we give the system a personality — with all its biases, blind spots, and gravitational pulls — genuine circular thinking will emerge as a natural consequence, not as a programmed behavior.

Built With
