About the project

Inspiration

Pitch Arena came from a brutally common hackathon problem: teams often have a decent build, but when it is time to pitch, the story falls apart. The idea is rarely the real bottleneck. The bottleneck is turning sketches, README notes, half-formed talking points, and judge anxiety into a sharp, memorable demo narrative.

I wanted something that felt less like a generic chatbot and more like a tough, useful judge coach sitting next to you during crunch time. That meant the product needed to work from the same messy material teams already have: a whiteboard sketch, a rough pitch, screenshots, judging criteria, and last-minute edits.

How we built it

Pitch Arena is a standalone Next.js 16 app built around a single live workspace. The UI follows a full-screen studio pattern: a tldraw canvas in the center, a persistent project title, a transcript overlay, an upload flow, and structured artifact tabs for Rubric Score, Better Pitch, Judge Q&A, Demo Script, One-Liner, and Roast.

On the backend, Gemini powers:

- Live voice session tokens
- Uploaded file analysis for README notes, PDFs, screenshots, and judging criteria
- Canvas description for architecture or demo sketches
- Structured artifact generation grounded in rubric criteria

The app keeps everything local for the MVP. Session state is stored in localStorage, uploaded blobs live in IndexedDB, and the canvas persists client-side. When a Gemini key is not configured, the app still works in a deterministic demo mode so the full flow remains testable.
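The local-only persistence and the demo-mode fallback can be sketched roughly as below. All names here (`saveSession`, `loadSession`, `generateArtifact`, the storage key) are illustrative assumptions, not the app's real API; the point is the split between lightweight session state in localStorage and a model call that degrades to deterministic output when no key is present.

```typescript
// Hypothetical shape of the client-side session state (illustrative only).
type SessionState = {
  projectTitle: string;
  transcript: string[];
  artifacts: Record<string, string>;
};

const SESSION_KEY = "pitch-arena/session"; // assumed key name

// Lightweight session state round-trips through localStorage.
// Accepting a Storage-like object keeps the sketch testable off-browser.
function saveSession(state: SessionState, storage: Pick<Storage, "setItem">): void {
  storage.setItem(SESSION_KEY, JSON.stringify(state));
}

function loadSession(storage: Pick<Storage, "getItem">): SessionState | null {
  const raw = storage.getItem(SESSION_KEY);
  return raw ? (JSON.parse(raw) as SessionState) : null;
}

// Demo-mode fallback: with no Gemini key configured, artifact requests
// resolve to deterministic canned output so the full flow stays testable.
function generateArtifact(
  prompt: string,
  apiKey: string | undefined,
  callModel: (p: string, k: string) => Promise<string>,
): Promise<string> {
  if (!apiKey) {
    return Promise.resolve(`[demo] ${prompt}`);
  }
  return callModel(prompt, apiKey);
}
```

Uploaded blobs would go through IndexedDB rather than localStorage, since localStorage is string-only and size-limited; that path is omitted here.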

Challenges we faced

The hardest challenge was avoiding the usual AI-demo trap, where the model says something interesting but the product state does not actually change in any meaningful way. I wanted every important result to land in structured UI, not disappear into a transcript.
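One way to make "results land in structured UI" concrete is artifact-driven state: each model result is routed into a typed slot for its tab instead of being appended to the transcript. The tab names below come from the writeup; the reducer shape itself is an illustrative sketch, not the app's actual state code.

```typescript
// Tabs named in the writeup; the state/reducer shape is hypothetical.
type ArtifactKind =
  | "rubricScore" | "betterPitch" | "judgeQA"
  | "demoScript" | "oneLiner" | "roast";

type Artifact = { kind: ArtifactKind; content: string; updatedAt: number };

type WorkspaceState = {
  transcript: string[];
  artifacts: Partial<Record<ArtifactKind, Artifact>>;
};

type Action =
  | { type: "transcript/append"; line: string }
  | { type: "artifact/set"; artifact: Artifact };

function reduce(state: WorkspaceState, action: Action): WorkspaceState {
  switch (action.type) {
    case "transcript/append":
      return { ...state, transcript: [...state.transcript, action.line] };
    case "artifact/set":
      // A model result replaces the artifact for its tab rather than
      // scrolling away inside the transcript.
      return {
        ...state,
        artifacts: { ...state.artifacts, [action.artifact.kind]: action.artifact },
      };
  }
}
```

The design choice this encodes: the transcript is ephemeral conversation, while artifacts are the durable product state the judges' tabs render from.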

Another challenge was keeping the app greenfield and lightweight while still borrowing the best parts of a larger reference system. That meant deliberately choosing the session workspace pattern without dragging along auth, dashboards, or backend persistence that were irrelevant to the hackathon use case.

The last big challenge was grounding outputs in reality. A judge coach is only useful if it stays specific. So the system had to continuously refresh project context from the pitch itself, uploaded files, and canvas snapshots instead of defaulting to generic startup language.
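The grounding approach described above can be sketched as a small context builder that is re-run before every artifact request, merging the latest pitch transcript, uploaded-file summaries, and the canvas description into one prompt block. All names here are illustrative assumptions.

```typescript
// Hypothetical context shape; the real app's fields may differ.
type ProjectContext = {
  pitchTranscript: string[];
  fileSummaries: { name: string; summary: string }[];
  canvasDescription: string | null;
};

// Regenerated before each artifact request so outputs stay grounded in
// the project's actual material instead of generic startup language.
function buildPromptContext(ctx: ProjectContext): string {
  const parts: string[] = [];
  if (ctx.pitchTranscript.length > 0) {
    parts.push(`## Pitch so far\n${ctx.pitchTranscript.join("\n")}`);
  }
  for (const f of ctx.fileSummaries) {
    parts.push(`## Uploaded: ${f.name}\n${f.summary}`);
  }
  if (ctx.canvasDescription) {
    parts.push(`## Canvas sketch\n${ctx.canvasDescription}`);
  }
  // If nothing specific is available, say so explicitly rather than
  // letting the model invent details.
  return parts.length > 0 ? parts.join("\n\n") : "## No project context yet";
}
```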

What we learned

I learned that the strongest AI workflow here is not “ask a chatbot for advice.” It is “convert messy multimodal project context into the next concrete deliverable.” That pushed the product toward artifact-driven state, tighter schemas, and more opinionated prompting.
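"Tighter schemas" in practice means validating the model's output before it can touch UI state. A minimal hand-rolled sketch for one artifact type is below; the `RubricScore` shape is an assumption for illustration, and a real app might use a schema library instead.

```typescript
// Hypothetical schema for one artifact type (illustrative only).
type RubricScore = { criterion: string; score: number; rationale: string };

// Parse and validate raw model output; reject anything malformed so
// only schema-conforming data ever reaches the Rubric Score tab.
function parseRubricScores(raw: string): RubricScore[] {
  const data: unknown = JSON.parse(raw);
  if (!Array.isArray(data)) throw new Error("expected a JSON array");
  return data.map((item, i) => {
    const r = item as Record<string, unknown> | null;
    if (
      typeof r !== "object" || r === null ||
      typeof r.criterion !== "string" ||
      typeof r.score !== "number" ||
      typeof r.rationale !== "string"
    ) {
      throw new Error(`rubric item ${i} does not match the schema`);
    }
    return { criterion: r.criterion, score: r.score, rationale: r.rationale };
  });
}
```

Failing loudly at the parse boundary keeps bad model output out of the artifact state entirely, rather than rendering half-valid results.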

I also learned how much product quality comes from narrowing scope. Keeping the MVP local-only, single-route, and ruthlessly focused on pitch improvement made the whole app sharper and faster to validate.
