🧪 WhatIf_

Inspiration

Every major shift in history started as a question someone was afraid to ask out loud:

  • What if the internet became mainstream?
  • What if a pandemic shut down the world?
  • What if clean energy became cheaper than coal?

We wanted to build a tool that lets anyone ask those questions seriously — not just as thought experiments, but as structured simulations backed by AI reasoning.

The name came first: WhatIf_ — the underscore intentionally left open, waiting for the user to define the future.

Visually, we were inspired by the feeling of standing inside a time machine — not a clean sci-fi UI, but something dynamic, alive, and actively reshaping timelines.


What it does

WhatIf_ takes any hypothetical scenario or real-world news and simulates its future consequences across multiple dimensions:

⏱ Timeline

  • Short-term: 0–12 months
  • Mid-term: 1–3 years
  • Long-term: 3+ years

🌍 Domain Impact

  • Technology
  • Economy
  • Society

⚠️ Risk Level

$$ r \in \{\text{Low},\ \text{Medium},\ \text{High}\} $$

🧠 Executive Summary

A concise 2–3 sentence synthesis designed for decision-making.

The system always returns a result. Even if the AI fails, a contextual fallback ensures the simulation never breaks — making it reliable for real-world use and demos.
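The output above can be pictured as a typed result object. A minimal sketch, assuming illustrative field names (the project's actual types may differ), including the always-return guarantee for the risk field:

```typescript
// Hypothetical shape of a simulation result; field names are illustrative,
// not the project's actual types.
type RiskLevel = "Low" | "Medium" | "High";

interface SimulationResult {
  timeline: {
    shortTerm: string; // 0–12 months
    midTerm: string;   // 1–3 years
    longTerm: string;  // 3+ years
  };
  domains: { technology: string; economy: string; society: string };
  riskLevel: RiskLevel;
  summary: string; // 2–3 sentence executive summary
}

// Coerce a free-form model answer into a valid risk level,
// defaulting to "Medium" so a result is always returned.
function coerceRisk(raw: unknown): RiskLevel {
  const s = String(raw ?? "").trim().toLowerCase();
  if (s.startsWith("low")) return "Low";
  if (s.startsWith("high")) return "High";
  return "Medium";
}
```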


How we built it

We kept the stack lean and fast for hackathon execution:

Layer       Technology
Frontend    Next.js (App Router), React
Styling     Tailwind CSS + custom dark theme
Backend     Next.js API Routes
AI (Cloud)  Gemini API
AI (Local)  Ollama (Gemma models)
Language    TypeScript (strict)

🧠 Architecture

The system follows a structured pipeline:

User Input
↓
Orchestrator (lib/agents.ts)
↓
LLM Provider (Gemini → Ollama<Gemma4>)
↓
Response Parser (JSON validation + coercion)
↓
Simulation Result → API → UI

We originally designed a multi-agent system, but optimized it into a single structured prompt for speed and reliability.
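The "Response Parser" stage can be sketched as follows — a hedged example, assuming the LLM may wrap its JSON in markdown fences or surrounding prose (function name is illustrative):

```typescript
// Minimal sketch of the "Response Parser" step: extract the first JSON
// object from raw LLM output (which may be wrapped in markdown fences)
// and return null if nothing parses.
function parseLLMJson(raw: string): Record<string, unknown> | null {
  // Strip ```json ... ``` fences if present.
  const cleaned = raw.replace(/```(?:json)?/g, "").trim();
  // Grab the outermost {...} span in case the model added prose around it.
  const start = cleaned.indexOf("{");
  const end = cleaned.lastIndexOf("}");
  if (start === -1 || end <= start) return null;
  try {
    return JSON.parse(cleaned.slice(start, end + 1));
  } catch {
    return null;
  }
}
```

A `null` here is what triggers the coercion and fallback layers downstream.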


🔄 Fallback System

We built a multi-layer fallback pipeline:

$$ \text{result} = \begin{cases} \text{Gemini}(m_i) & \text{if any cloud model succeeds} \\ \text{Ollama (local)} & \text{if cloud fails} \\ \text{Contextual fallback} & \text{if all fail} \end{cases} $$

This ensures:

  • Works online or offline
  • Works with or without API key
  • Demo never breaks
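The cascade can be sketched in a few lines — assuming each provider is an async function that either returns a result or throws (provider names and signatures are illustrative, not the project's actual code):

```typescript
// Sketch of the three-layer fallback. Each provider returns a result
// string or throws; the contextual fallback is precomputed and never fails.
type Provider = () => Promise<string>;

async function simulateWithFallback(
  cloudModels: Provider[],   // e.g. several Gemini models, tried in order
  local: Provider,           // Ollama running on the user's machine
  contextualFallback: string // always available, even offline
): Promise<string> {
  for (const model of cloudModels) {
    try { return await model(); } catch { /* try next cloud model */ }
  }
  try { return await local(); } catch { /* local path also failed */ }
  return contextualFallback; // demo never breaks
}
```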

Challenges we ran into

🔀 Merge conflicts from parallel agents

We built the frontend, backend, and AI pipeline in parallel branches. When we merged, the conflicts weren't just syntactic; they were logic-level mismatches, so we had to reconstruct intent rather than apply line-by-line fixes.

⚡ API rate limits

Gemini free-tier limits caused inconsistent failures. We solved this by:

  • cascading across multiple models
  • adding Ollama local fallback

🧠 Multi-agent overhead

Originally, we made 7 LLM calls per simulation, which triggered rate limits and added latency.
We consolidated them into 1 structured prompt — roughly $\frac{1}{7}$ the API cost, with faster responses and higher reliability.

🎨 UI geometry (unexpectedly hard)

Even visuals required math:

$$ \text{left} = -\frac{w}{2}, \quad \text{top} = 50\% $$

And for orbit animations:

$$ \text{transform-origin}_x = -r \quad \text{or} \quad 100\% + r $$
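The two formulas above translate directly into small helpers. A sketch, assuming pixel units and illustrative function names:

```typescript
// Illustrative helpers for the centering and orbit math above.
// A w-wide element is centered by offsetting half its width; an element
// orbiting at radius r gets its transform-origin pushed r pixels beyond
// its own left or right edge.
function centerLeft(w: number): string {
  return `${-w / 2}px`; // left = -w/2, paired with left: 50% on the parent
}

function orbitOrigin(r: number, side: "left" | "right"): string {
  return side === "left" ? `${-r}px` : `calc(100% + ${r}px)`;
}
```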


Accomplishments that we're proud of

  • 💪 The app never breaks thanks to a three-layer fallback system
  • 🎨 A fully animated UI built entirely with CSS (no canvas, no WebGL)
  • 🧠 A clean, type-safe architecture despite heavy merge conflicts
  • 🌐 A local-first AI system that works even without internet

What we learned

  • 🧠 One strong prompt beats many weak agents
  • 🔄 Fallback design is product design — reliability matters more than perfection
  • 🎨 UI and animations require real math, not just aesthetics
  • 🧩 Strong typing (TypeScript) saves time under pressure

What's next for WhatIf_?

  • 🔀 Scenario comparison (“What if we act vs don’t act?”)
  • 📚 RAG-based grounding with real historical data
  • 📊 Confidence intervals from multiple simulations:

$$ \hat{r} = \text{mode}(r_1, r_2, \ldots, r_n), \quad \sigma_r = \text{entropy}(P(r)) $$

  • 📤 Export/share simulations (PDF or link)
  • 🎤 Voice input for faster ideation
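The mode/entropy aggregation sketched in the formula above could look like this — a hedged sketch of planned future work, not shipped code:

```typescript
// Sketch of the planned aggregation: run n simulations, take the modal
// risk level as the point estimate and the entropy of the empirical
// distribution as an uncertainty score (0 bits = all runs agree).
function aggregateRisk(samples: string[]): { mode: string; entropy: number } {
  const counts = new Map<string, number>();
  for (const s of samples) counts.set(s, (counts.get(s) ?? 0) + 1);
  let mode = samples[0];
  let best = 0;
  let entropy = 0;
  for (const [level, c] of counts) {
    if (c > best) { best = c; mode = level; }
    const p = c / samples.length;
    entropy -= p * Math.log2(p);
  }
  return { mode, entropy };
}
```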

🚀 WhatIf_ turns curiosity into simulation — and simulation into insight.

Built With

  • claude
  • gemma4
  • google-gemini-api
  • next.js
  • ollama
  • react
  • tailwind-css
  • typescript