Inspiration

For big life decisions, we wanted to build something more useful than a generic chatbot answer. People do not just want advice; they want to understand how different choices might ripple across time, money, career, stress, and personal growth. That idea led us to Reality Fork, an agent-powered decision simulator that explores multiple possible futures side by side.

What it does

Reality Fork lets a user describe a crossroads, such as choosing a major, taking an internship, switching careers, or making a relationship decision. The app then creates multiple "what-if" paths and simulates how each one might unfold over time.

Instead of returning one block of text, it generates:

  • parallel decision forks
  • narrated step-by-step timelines
  • side-by-side comparisons
  • metric charts across relevant dimensions like finances, career, psychology, health, time, and relationships

It can also ask clarifying questions when the prompt is ambiguous, save decision history, and store editable memory summaries of important past decisions.

How we built it

We built the project with Next.js 16, React 19, TypeScript, and Tailwind CSS 4. On the backend, we designed a multi-agent pipeline with shared Zod schemas to keep every step structured and type-safe.
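
As a sketch of what those shared contracts look like (the field names here are hypothetical, not the project's actual schema), a simulated fork might be typed like this:

```typescript
import { z } from "zod";

// Hypothetical shared schema: every agent in the pipeline produces or
// consumes data validated against contracts along these lines.
export const DimensionScoreSchema = z.object({
  dimension: z.enum(["finances", "career", "psychology", "health", "time", "relationships"]),
  score: z.number().min(0).max(100),
  rationale: z.string(),
});

export const ForkStepSchema = z.object({
  label: z.string(),     // e.g. "Month 6"
  narrative: z.string(), // what happens at this point in the timeline
  scores: z.array(DimensionScoreSchema),
});

export const ForkSchema = z.object({
  title: z.string(),     // e.g. "Take the internship"
  steps: z.array(ForkStepSchema),
});

export type Fork = z.infer<typeof ForkSchema>;
```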

The flow works like this, with a simplified code sketch after the list:

  1. A planner agent reads the user's decision and chooses the right time horizon, number of steps, relevant dimensions, and possible forks.
  2. For each fork, specialized agents simulate that path across different dimensions in parallel.
  3. A narrator agent combines those outputs into a cohesive timeline.
  4. The frontend renders the results in a comparison view with charts, history, and memory features.
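
In code, the orchestration looks roughly like the following. `callModel` is a hypothetical stand-in for a provider call, and `PlanSchema` mirrors the shared-schema sketch above; the real pipeline has more structure than this:

```typescript
import { z } from "zod";

// Hypothetical provider call; in practice this routes to Gemini, MiniMax,
// or Ollama Cloud depending on the agent.
declare function callModel(agent: string, input: unknown): Promise<unknown>;

const PlanSchema = z.object({
  horizonMonths: z.number(),
  dimensions: z.array(z.string()),
  forks: z.array(z.object({ title: z.string() })),
});

async function runSimulation(decision: string) {
  // 1. Planner picks the time horizon, dimensions, and forks to explore.
  const plan = PlanSchema.parse(await callModel("planner", { decision }));

  // 2. Each fork, and each dimension within it, is simulated in parallel.
  return Promise.all(
    plan.forks.map(async (fork) => {
      const dimensionResults = await Promise.all(
        plan.dimensions.map((dimension) =>
          callModel("simulator", { decision, fork, dimension })
        )
      );
      // 3. Narrator merges the per-dimension output into one timeline,
      //    which the frontend then renders in the comparison view (step 4).
      return callModel("narrator", { fork, dimensionResults });
    })
  );
}
```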

We also added support for uploading supporting files like PDFs, DOCX files, and text files so users can provide richer context.
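
The write-up does not name the extraction stack, but a minimal sketch, assuming the commonly used `pdf-parse` and `mammoth` packages, might look like:

```typescript
import pdf from "pdf-parse";   // assumption: library not confirmed by this write-up
import mammoth from "mammoth"; // assumption: library not confirmed by this write-up

// Extract plain text from an uploaded file so it can be appended to the
// planner's context alongside the user's decision prompt.
async function extractText(buffer: Buffer, mimeType: string): Promise<string> {
  if (mimeType === "application/pdf") {
    return (await pdf(buffer)).text;
  }
  if (
    mimeType ===
    "application/vnd.openxmlformats-officedocument.wordprocessingml.document"
  ) {
    return (await mammoth.extractRawText({ buffer })).value;
  }
  return buffer.toString("utf-8"); // plain-text fallback
}
```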

Challenges we ran into

One of the biggest challenges was making multi-agent output reliable. Different model providers handle structured output differently, so we had to build around schema validation, retries, and provider-specific behavior.
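
A minimal version of that pattern, assuming a hypothetical `generate` callback that prompts the model, is a validate-then-retry loop that feeds schema errors back into the next attempt:

```typescript
import { z } from "zod";

// Hedged sketch: call a provider, validate the JSON against a Zod schema,
// and retry with the validation errors folded into the follow-up prompt.
async function callWithValidation<T>(
  schema: z.ZodType<T>,
  generate: (feedback?: string) => Promise<string>,
  maxAttempts = 3
): Promise<T> {
  let feedback: string | undefined;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const raw = await generate(feedback);
    try {
      const parsed = schema.safeParse(JSON.parse(raw));
      if (parsed.success) return parsed.data;
      feedback = `Fix these schema errors: ${parsed.error.message}`;
    } catch {
      feedback = "Response was not valid JSON; return JSON only.";
    }
  }
  throw new Error(`Model output failed validation after ${maxAttempts} attempts`);
}
```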

Another challenge was speed and orchestration. A full simulation can trigger many LLM calls, so we had to parallelize carefully, track progress in the UI, and handle rate limits with HTTP 429 backoff, all without making the experience feel frozen.
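
As an illustration (not the project's exact code), jittered exponential backoff plus a small concurrency gate covers both concerns; note that how a 429 surfaces on the error object varies by provider SDK:

```typescript
// Retry on HTTP 429 with jittered exponential backoff.
async function withBackoff<T>(fn: () => Promise<T>, maxRetries = 4): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err: any) {
      const isRateLimit = err?.status === 429; // assumption: SDK-dependent shape
      if (!isRateLimit || attempt >= maxRetries) throw err;
      const delayMs = 500 * 2 ** attempt + Math.random() * 250;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Concurrency gate: at most `limit` model calls in flight at once.
function createLimiter(limit: number) {
  let active = 0;
  const queue: (() => void)[] = [];
  return async function run<T>(task: () => Promise<T>): Promise<T> {
    if (active >= limit) await new Promise<void>((wake) => queue.push(wake));
    active++;
    try {
      return await task();
    } finally {
      active--;
      queue.shift()?.(); // wake the next waiter, if any
    }
  };
}
```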

We also had to think hard about product design: how deep should a simulation go, how many forks are useful, when should the app ask follow-up questions, and how do we present speculative results responsibly?

What we learned

We learned that building an AI product is not just about calling a model. Good results came from strong system design: schema-first contracts, agent coordination, rate limiting, persistence, and a UI that helps users trust and navigate complex output.

We also learned a lot about balancing ambition with usability. The best version of this idea was not "more AI"; it was better structure, clearer comparisons, and a smoother experience from input to reflection.

Built With

  • google-gemini-api
  • minimax-api
  • next.js-16
  • ollama-cloud
  • postgresql
  • prisma
  • react-19
  • recharts
  • tailwind-css-4
  • typescript
  • vercel
  • zod