Inspiration

Every year, millions of people sit across from a mechanic, holding a quote they can't read, under pressure to approve work they don't fully understand. A well-cited NBER field experiment found that women are quoted significantly higher prices than men for the same repair when they're perceived as uninformed: not because they're less capable, but because information asymmetry is real and exploitable.

Most people don't know what to ask. Most people don't know what can wait. Most people just say yes.

I wanted to fix that. And I wanted to build it in a way that felt empowering rather than clinical — which is where the F1 inspiration came from. In Formula 1, no driver makes a pit stop decision alone. They have a race engineer on the wall reading the data, managing the risk, and calling the play. That's what every driver deserves at the repair shop.


What It Does

PitWall acts as your AI race engineer during a repair visit. You paste mechanic notes, upload a photo of your estimate, or run one of the built-in demo scenarios. In seconds you get:

  • Evidence-based urgency ratings — PIT NOW / NEXT LAP / MONITOR / UNCLEAR, assigned by what the mechanic can actually prove, not what they claim
  • Price reasonableness flags — typical national cost range per repair item so you know if you're being overcharged
  • Community approval rates — see how many other drivers approved or declined each repair and what they actually paid
  • Questions to ask — specific, respectful, evidence-demanding questions tailored to your exact quote
  • What to Say Next — a calm, confident script so you never feel stuck in the moment
  • Shareable briefing URLs — forward your analysis to a friend or family member before approving anything

How I Built It

The stack is a FastAPI Python backend paired with a Vite + React + TypeScript frontend, styled with a custom Mercedes W14 pit wall design system — near-black carbon background, Petronas teal accents, Space Mono for all data values.

AI is powered by Groq — Llama 3.3 70B handles text analysis, Llama 4 Scout 17B handles image/vision analysis for uploaded repair photos. The core prompt engineering challenge was making the AI skeptical: urgency is only assigned when the mechanic provides verifiable evidence (measurements, photos, test results), not just claims. A mechanic saying "this is dangerous" without proof gets flagged UNCLEAR, not PIT NOW.
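The evidence-gated rubric can be sketched as a system-prompt builder. This is a hypothetical reconstruction for illustration, not the shipped prompt; the label definitions mirror the rules described above, but the exact wording is mine.

```python
# Illustrative sketch of the skeptical-analyst system prompt.
# Each label is gated on verifiable evidence, not on the mechanic's claims.
URGENCY_LABELS = {
    "PIT NOW": "Safety-critical AND backed by a verifiable finding "
               "(a measurement, photo, or test result in the quote).",
    "NEXT LAP": "Genuine wear item with evidence it is near its service "
                "limit; safe to schedule within weeks.",
    "MONITOR": "Preventive or cosmetic; no evidence of imminent failure.",
    "UNCLEAR": "Urgency is asserted ('dangerous', 'won't pass inspection') "
               "but no measurement, photo, or test result is provided.",
}

def build_system_prompt() -> str:
    """Assemble the system prompt that forces evidence-gated ratings."""
    rubric = "\n".join(f"- {label}: {rule}"
                       for label, rule in URGENCY_LABELS.items())
    return (
        "You are a skeptical repair analyst. Rate each line item's urgency "
        "using ONLY these labels, assigned by what the mechanic can prove, "
        "never by the strength of their claims:\n"
        f"{rubric}\n"
        "If urgency is claimed without verifiable evidence, you MUST use "
        "UNCLEAR. Respond with raw JSON only, no markdown fences."
    )
```

The key design move is that UNCLEAR is defined as the *default* for unproven claims, so the model has an explicit escape hatch instead of inheriting the mechanic's urgency language.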

Data is stored in Supabase. Every analysis is saved with a UUID that powers shareable briefing URLs. The community outcomes feature — approval rates and average prices paid — is built on a separate repair_outcomes table seeded with 600 realistic submissions across 30 repair types.


Challenges

The biggest prompt engineering challenge was getting the AI to be genuinely skeptical rather than just echoing the mechanic's urgency language. Early versions would see "dangerous" in a quote and immediately rate everything high risk. The breakthrough was redefining the urgency labels from the ground up: PIT NOW requires specific verifiable findings, and UNCLEAR is the correct label when urgency is claimed without proof.

On the technical side, the Groq vision model occasionally returned empty strings or markdown-wrapped JSON instead of clean output, requiring guard logic and a retry mechanism. The Supabase Python SDK also returned UUIDs as Python uuid.UUID objects rather than plain strings, a subtle serialization bug that made the share button silently disappear until I traced it.


Accomplishments

Building a full-stack AI application with image upload, a shareable URL system, crowdsourced community data, and a polished custom design system in a single hackathon session felt genuinely ambitious. The feature I'm most proud of is the community outcomes layer — it's the thing that makes PitWall more than a GPT wrapper. Every user who reports back what they paid makes the next analysis more accurate. That's a real network effect, built in eight hours.


What I Learned

Prompt design is product design. The difference between a useful AI tool and a useless one isn't the model — it's the framing. Defining urgency by evidence rather than assertion completely changed the quality of the output, and that was a writing problem, not an engineering problem.

Also: always cast your database UUIDs to strings before they touch your API layer.
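That habit fits in a few lines, e.g. as a JSON default hook (an illustrative sketch; FastAPI and Pydantic offer their own serialization hooks, this just shows the cast):

```python
import json
import uuid

def json_default(value):
    """json.dumps fallback: cast UUIDs to strings, reject anything else."""
    if isinstance(value, uuid.UUID):
        return str(value)
    raise TypeError(f"not JSON serializable: {type(value).__name__}")

# Without default=json_default, json.dumps raises TypeError on uuid.UUID,
# which is exactly the failure that silently hid the share button.
def to_api_payload(record: dict) -> str:
    return json.dumps(record, default=json_default)
```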


What's Next for PitWall

  • Shop reputation scoring — aggregate upsell rates by shop location so you can see which garages have a pattern of unnecessary recommendations before you even walk in
  • Pre-visit mode — describe your symptoms before going to the shop, get a prediction of what they'll recommend and what's actually likely to be needed
  • Negotiation assistant — real-time back-and-forth mode for when you're on the phone with the service advisor
  • Mobile app — snap a photo of the estimate in the waiting room and get your briefing before you sign
