Inspiration
The average experience of a vibe coder: give an agent your prompt, play a round of League of Legends while it generates code, skim the agent's summarized changelog, test the code and hit errors (let's be honest, code rarely works on the first try), ask the agent to debug, and repeat.
The problem is that digesting an agent's changelog can be clunky. It's like reading a slide deck stuffed with long bullet-point lists. A common tip for delivering information effectively is to rely on visuals and minimal text instead. Following that principle, IterViz has agents explain their changes with visuals instead of blocks of text.
What it does
At its core, IterViz creates a visual orchestration layer between you and AI coding agents. Instead of walls of text, you get an interactive architecture graph that shows exactly what the agent plans to build.
The workflow:
- Prompt the agent with your project idea (specifics help ground the model in what you actually want)
- Tool generates the plan as a graph — nodes represent components (services, databases, APIs), edges show how data flows between them
- After some processing, each component node can be clicked to see its implementation breakdown: the specific functions, tests, types, and config files that will be created
- Clicking the "implement" button shows the implementation happening in real time
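As a concrete (and entirely hypothetical) illustration of what such a plan graph might look like, here is a sketch in Python; the node and edge field names are assumptions for illustration, not IterViz's actual schema:

```python
# Hypothetical plan graph for a prompt like "build a to-do app".
# Field names ("id", "kind", "source", ...) are illustrative only.
plan_graph = {
    "nodes": [
        {"id": "api", "label": "FastAPI service", "kind": "service"},
        {"id": "db", "label": "SQLite store", "kind": "database"},
        {"id": "ui", "label": "React frontend", "kind": "client"},
    ],
    "edges": [
        {"source": "ui", "target": "api", "label": "HTTP/JSON"},
        {"source": "api", "target": "db", "label": "SQL queries"},
    ],
}

# A basic sanity check: every edge must reference nodes that exist.
node_ids = {n["id"] for n in plan_graph["nodes"]}
assert all(e["source"] in node_ids and e["target"] in node_ids
           for e in plan_graph["edges"])
```

Each node here would become a clickable component in the canvas, and each edge a data-flow arrow between components.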
How we built it
Frontend (React/TypeScript/Vite):
- React Flow — powers the interactive graph canvas with custom node and edge components
- D3-force simulation — provides physics-based layout with node gravity and collision detection

The LLM integration uses structured output (Pydantic models) to ensure the AI returns valid graph data, not free-form text that might break the UI.
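A minimal sketch of the structured-output idea, assuming hypothetical model and field names (not the project's real schema): by validating the LLM's reply against Pydantic models, a malformed response fails loudly on the backend instead of breaking the UI.

```python
from pydantic import BaseModel, ValidationError

# Illustrative contract models; field names are assumptions.
class Node(BaseModel):
    id: str
    label: str
    kind: str  # e.g. "service", "database", "api"

class Edge(BaseModel):
    source: str
    target: str
    label: str = ""

class ArchitectureGraph(BaseModel):
    nodes: list[Node]
    edges: list[Edge]

# A well-formed LLM response parses into typed graph data...
raw = {"nodes": [{"id": "db", "label": "SQLite", "kind": "database"}],
       "edges": []}
graph = ArchitectureGraph.model_validate(raw)

# ...while a malformed one raises instead of reaching the frontend.
try:
    ArchitectureGraph.model_validate({"nodes": "oops", "edges": []})
    bad_response_caught = False
except ValidationError:
    bad_response_caught = True
```

Libraries like instructor (listed under Built With) automate exactly this pattern: they retry the LLM call until its output validates against the Pydantic model.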
Backend (Python/FastAPI):
- Architect Agent — takes your prompt and generates a structured "contract" (the architecture graph) using Claude 4.5
- Subgraph Generator — breaks each component into concrete implementation tasks (functions, tests, types)
- Orchestrator — coordinates multiple agents to implement nodes in parallel
- WebSocket server — pushes real-time updates to the frontend as agents work
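The orchestrator-plus-updates idea can be sketched with asyncio; this is a toy version under assumed names, with an in-memory list standing in for the real WebSocket connection and a no-op sleep standing in for the LLM call:

```python
import asyncio

# Each component node gets its own "agent" coroutine; progress events are
# pushed to the frontend as they happen (collected in a list here).
async def implement_node(node_id: str, send) -> None:
    await send({"node": node_id, "status": "started"})
    await asyncio.sleep(0)  # stand-in for the actual agent/LLM work
    await send({"node": node_id, "status": "done"})

async def orchestrate(node_ids, send) -> None:
    # Run all node implementations concurrently, streaming updates.
    await asyncio.gather(*(implement_node(n, send) for n in node_ids))

events = []

async def send(event):
    events.append(event)

asyncio.run(orchestrate(["api", "db", "ui"], send))
```

In a real FastAPI backend, `send` would wrap a WebSocket's send method so the frontend can animate node states as each agent starts and finishes.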
Challenges we ran into
Creating a clean graph visual is hard. Challenges include minimizing edge overlaps, deciding how to organize nodes when there are many nodes and edges, and choosing spacing between nodes to balance compactness against readability.
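The node-spacing problem can be reduced to a toy example. The real layout runs in d3-force on the frontend; this is just the core idea in Python, under assumed parameters: repeatedly push overlapping node pairs apart until every pair roughly respects a minimum distance.

```python
import math

MIN_DIST = 1.0  # assumed minimum spacing between node centers

def relax(positions, iterations=50, step=0.1):
    """Simple repulsion pass: overlapping pairs push each other apart."""
    pos = [list(p) for p in positions]
    for _ in range(iterations):
        for i in range(len(pos)):
            for j in range(i + 1, len(pos)):
                dx = pos[j][0] - pos[i][0]
                dy = pos[j][1] - pos[i][1]
                d = math.hypot(dx, dy) or 1e-9
                if d < MIN_DIST:  # too close: move both nodes apart
                    push = step * (MIN_DIST - d) / d
                    pos[i][0] -= dx * push
                    pos[i][1] -= dy * push
                    pos[j][0] += dx * push
                    pos[j][1] += dy * push
    return pos

# Three nodes starting almost on top of each other get spread out.
laid_out = relax([(0.0, 0.0), (0.1, 0.0), (0.0, 0.1)])
```

Real force layouts add attraction along edges and gravity toward the center on top of this repulsion term, which is where the balancing act between compactness and overlap comes from.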
Outside of implementation, I ran into a time problem: I chose a project idea that was too ambitious, and not even known to be workable in theory, which burned time that could have gone into polishing a higher-quality submission.
Accomplishments that we're proud of
I'm at least proud of ending up with decent visuals and a good-looking outline of the project.
What we learned
I learned more about agentic AI tools such as Devin, the idea of running agents in parallel (one approach being multiple terminals managed with tmux), and better prompting practices in general. I also learned how to generate good-looking graphs, an improvement over the mess of a graph I made at my last hackathon.
What's next for IterViz
Getting the multi-agent parallel implementation working would be nice to see. Beyond that, turning this tool into an add-on for existing wrappers such as Cursor (the IDE) would let it merge with the strengths those wrappers already provide.
Built With
- anthropic
- dagre
- fastapi
- instructor
- node.js
- opus4.5
- pydantic
- pytest
- python
- react
- sqlite
- tailwind
- typescript
- uvicorn