Nightlamp — The Story Behind the Build
What Inspired Us
It started with a frustration every builder in this room has felt.
A founder ships their app on a Tuesday. It works perfectly. They go to sleep. By Thursday, a Stripe webhook signing secret rotated automatically, an OpenAI API key hit its monthly quota, and an axios dependency silently stopped handling auth headers correctly. Three things broke. Zero alerts fired. The first signal was a customer support ticket that said "your app is broken."
The founder had no idea where to start. Sentry showed a stack trace they couldn't read. UptimeRobot said the site was "up." The AI tool that built the app offered no help keeping it alive.
That gap — between "it broke" and "I know what to do" — had no product.
We built Nightlamp to be that product.
What We Built
Nightlamp is an AI-powered diagnostic engine purpose-built for apps generated by tools like Cursor, Bubble, Lovable, and Bolt.
You give it three inputs:
- A live URL
- A GitHub repo
- An error log
Gemini reads your entire stack — live endpoint status, dependency versions, error signatures — matches them against a diagnostic playbook of the six most common AI-app failure modes, and returns a health report in plain English with:
- 🔴 Red — active failures breaking users right now
- 🟡 Yellow — warnings likely to cause a breakage within 30 days
- 🟢 Green — modules confirmed healthy
Every red and yellow issue comes with a plain-English explanation, step-by-step fix instructions, and an optional code patch — written for founders, not engineers.
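For illustration, a health report shaped like the one described above might look as follows. The field names here are our sketch, not the documented production schema:

```javascript
// Hypothetical shape of a Nightlamp health report — field names are
// illustrative, not the exact production schema.
const healthReport = {
  modules: [
    {
      name: "Stripe webhooks",
      status: "red",                   // active failure breaking users now
      pattern: "BROKEN_WEBHOOK",
      explanation: "Stripe rejects events because the signing secret rotated.",
      fix: [
        "Copy the new signing secret from the Stripe dashboard.",
        "Update STRIPE_WEBHOOK_SECRET in your environment.",
      ],
    },
    { name: "Database", status: "green", pattern: null },
  ],
};

// The dashboard can bucket modules by color in one pass:
const byStatus = healthReport.modules.reduce((acc, m) => {
  (acc[m.status] ??= []).push(m);
  return acc;
}, {});
```

Because the report is structured JSON rather than prose, rendering the red/yellow/green breakdown is a single grouping step.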
How We Built It
Stack
Frontend → React + Tailwind (custom design system)
Backend → Node.js + Express
AI Brain → Gemini 2.5 Flash via Google AI API
Architecture
The data flow is intentionally simple:
User Input (URL + Repo + Logs)
↓
Backend pings the live URL → captures HTTP status + latency
↓
Backend fetches package.json from GitHub → reads dependency versions
↓
All signals assembled into a single structured prompt
↓
Gemini 2.5 Flash analyzes against diagnostic playbook
↓
Structured JSON health report returned
↓
React dashboard renders red / yellow / green module breakdown
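The flow above can be sketched as one async function. `callGemini` and the exact signal shapes are our assumptions for illustration, not the production code; dependencies are injected so the pipeline stays testable:

```javascript
// Sketch of the diagnostic pipeline, with fetch and the Gemini client
// injected as parameters. Helper names are assumptions, not production code.

// Convert a GitHub repo URL to its raw package.json URL (main branch assumed).
function toRawUrl(repoUrl) {
  return repoUrl.replace("github.com", "raw.githubusercontent.com")
    + "/main/package.json";
}

async function diagnose({ url, repoUrl, logs }, { fetchFn, callGemini }) {
  // 1. Ping the live URL, capturing HTTP status and latency.
  const start = Date.now();
  const live = await fetchFn(url);
  const liveSignal = { status: live.status, latencyMs: Date.now() - start };

  // 2. Fetch package.json from GitHub; tolerate failure so diagnosis
  //    can continue on URL + logs alone.
  let pkg = null;
  try {
    const res = await fetchFn(toRawUrl(repoUrl));
    if (res.ok) pkg = (await res.text()).slice(0, 3000);
  } catch { /* diagnosis continues without dependency info */ }

  // 3–5. Assemble all signals into one structured prompt and let Gemini
  //      match them against the diagnostic playbook.
  const prompt = JSON.stringify({ liveSignal, pkg, logs });
  return callGemini(prompt); // 6. structured JSON health report
}
```

In practice this sits behind an Express route that parses the three user inputs from the request body and returns the report as JSON.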
Why Gemini is the brain — not just a wrapper
Most AI integrations use the model as a text generator. We use it as a pattern recognition engine.
The system prompt embeds a diagnostic playbook — six failure patterns that recur across every AI-built app:
$$ \text{Failure Patterns} = \{\texttt{BROKEN\_WEBHOOK},\ \texttt{EXPIRED\_TOKEN},\ \texttt{DEPRECATED\_DEP},\ \texttt{SCHEMA\_DRIFT},\ \texttt{RATE\_LIMIT},\ \texttt{SILENT\_FAIL}\} $$
Gemini's job is to match real signal (live HTTP status, dependency versions, error log signatures) against known patterns and output structured JSON — not prose, not guesses. This is what makes the output actionable rather than just descriptive.
We enforced structured output with:
- A strict JSON schema in the system prompt
- thinkingBudget: 0 to disable Gemini 2.5's reasoning preamble
- Aggressive JSON extraction via regex as a fallback
const jsonMatch = text.match(/\{[\s\S]*\}/);
if (!jsonMatch) throw new SyntaxError("No JSON found in response");
What We Learned
1. The hard part isn't the AI call — it's the output contract.
Getting Gemini to return consistent, parseable JSON every time was the real engineering challenge. Gemini 2.5 Flash has a thinking mode that prepends reasoning text before the JSON, which breaks JSON.parse(). We solved it with thinkingBudget: 0 plus regex extraction as a safety net.
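The safety net can be sketched as a small helper built around the same regex shown earlier (our illustration of the approach, not the exact production code):

```javascript
// Extract the first-to-last {...} span from a model response and parse it.
// This survives the case where thinking-mode text still leaks in ahead of
// the JSON, which would otherwise break a bare JSON.parse().
function extractJson(text) {
  const jsonMatch = text.match(/\{[\s\S]*\}/);
  if (!jsonMatch) throw new SyntaxError("No JSON found in response");
  return JSON.parse(jsonMatch[0]);
}
```

Note the regex is greedy: it grabs from the first `{` to the last `}`, which is the right behavior when the JSON object is the final thing in the response.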
2. Real signal makes Gemini dramatically better.
When we added live URL pinging and GitHub package.json reading, the precision of Gemini's diagnosis improved immediately. It's not guessing anymore — it's diagnosing.
3. The diagnostic playbook is the real product.
The prompts aren't just instructions — they're an encoded knowledge base. Each failure pattern we documented is a pattern we've seen break real apps. The playbook grows with every app diagnosed. That compounding knowledge base is the moat, not the UI.
4. Non-technical language is harder than technical language.
Writing explanations that a non-technical founder can act on at 2am — without dumbing them down to the point of uselessness — required more iteration than the backend architecture did.
Challenges We Faced
🔴 Model availability on free tier
The first model we targeted (gemini-1.5-pro-latest) returned a 404. The Gemini API had renamed its models between our planning and build phases. We burned an hour debugging a one-word fix.
Solution: We built a model fallback chain — three models tried in sequence. If the primary hits quota, the next fires automatically. The demo never crashes.
const MODELS = [
"gemini-2.5-flash",
"gemini-2.5-flash-lite",
"gemini-2.0-flash-lite"
];
🔴 Gemini 2.5 thinking mode breaking JSON parsing
gemini-2.5-flash defaults to returning a thinking preamble before its output. This is invisible in the playground but catastrophic when you're calling JSON.parse() on the raw response.
Solution:
generationConfig: {
thinkingConfig: { thinkingBudget: 0 }
}
🟡 Rate limits on free tier
Rapid testing during development burned through per-minute quotas quickly. We hit 429 errors repeatedly during the build phase.
Solution: Added a 2-second debounce before API calls and switched to gemini-2.5-flash-lite for development testing, reserving the primary model for demo runs.
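A minimal sketch of that spacing logic, as a wrapper that enforces a minimum gap between successive calls (our illustration; the production throttle may differ):

```javascript
// Wrap an async function so successive calls are at least minGapMs apart.
// Here 2000 ms matches the 2-second debounce described above.
function makeThrottled(fn, minGapMs = 2000) {
  let last = 0;
  return async (...args) => {
    const wait = last + minGapMs - Date.now();
    if (wait > 0) await new Promise((resolve) => setTimeout(resolve, wait));
    last = Date.now();
    return fn(...args);
  };
}
```

Wrapping the Gemini call once at startup means every caller automatically respects the per-minute quota without sprinkling delays through the codebase.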
🟡 GitHub raw URL conversion
Converting a standard GitHub repo URL to its raw package.json URL is straightforward — unless the repo uses master instead of main, or doesn't have a package.json at root level.
Solution: Silent fail with graceful degradation — if the fetch fails, Nightlamp runs diagnosis on URL + logs alone without crashing.
try {
const raw = repoUrl
.replace("github.com", "raw.githubusercontent.com")
+ "/main/package.json";
const pkg = await fetch(raw);
if (pkg.ok) packageJsonData = (await pkg.text()).slice(0, 3000);
} catch (e) {} // silent — diagnosis continues without it
What's Next
The hackathon MVP validates the core diagnostic loop. The next layer is what makes it a company:
- Auto-fix via GitHub PR — Gemini generates the patch, opens a PR, founder approves with one click
- Continuous monitoring — scheduled scans every 6 hours, not just on-demand
- Playbook database — every resolved failure pattern stored, making the 100th diagnosis faster than the 10th
- Native integrations — Bubble, Lovable, Bolt connect directly instead of requiring manual URL input
Built With
React · Node.js · Express · Google Gemini 2.5 Flash · Google AI API · Tailwind CSS · Space Mono · Outfit
Nightlamp was built in one night. The problem it solves has been keeping founders up for years.