Inspiration

Humans have always wondered, “What if?”
What if the Roman Empire had never fallen? What if electricity had been discovered in the Middle Ages? What if climate change policies had started in 1900?
We wanted to build a tool that doesn’t just answer those questions, but lets people explore alternate timelines as if they were real.

This inspiration came from a mix of science fiction, strategy games, and worldbuilding communities. By combining the power of open-source large language models with generative visualization, we set out to create a living multiverse simulator.


What it does

Multiverse Simulator allows anyone to:

  • Enter a “What if?” scenario and instantly generate an alternate world.
  • Explore a timeline of events in that universe.
  • Meet characters (agents) who evolve with goals and memory.
  • Ask freeform questions like “What’s science like in this world?” or “Who rules the empire in 2025?”
  • (Optional) View artifacts of the world such as maps, flags, or even AI-generated music to bring the universe to life.

In short: it’s an interactive sandbox of alternate realities.


How we built it

  • Frontend: React + TailwindCSS for a clean, futuristic design.
  • Core Engine: gpt-oss-120b as the worldbuilder, generating timelines, characters, and narratives.
  • Agent Memory: Vector embeddings to give characters evolving goals and histories.
  • Exploration: A conversational Q&A interface powered by gpt-oss for real-time world deep-dives.
  • (Optional Multimedia): Hooks for generative image/audio APIs to render artifacts of the alternate universe.

The result is a modular system that can simulate entire worlds in minutes.
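The agent-memory piece can be sketched as a tiny vector store: every event a character experiences is embedded, and the memories most similar to the current query are retrieved and fed back into the prompt. The sketch below is illustrative rather than our production code — it uses L2-normalized word-count vectors as a stand-in for real model embeddings, and the `AgentMemory`/`embed` names are ours for this example.

```python
import math
from collections import Counter

def embed(text: str) -> dict[str, float]:
    """Toy embedding: L2-normalized word counts.
    A real system would call an embedding model here instead."""
    counts = Counter(text.lower().split())
    norm = math.sqrt(sum(c * c for c in counts.values())) or 1.0
    return {word: c / norm for word, c in counts.items()}

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    """Cosine similarity between two sparse unit vectors."""
    return sum(v * b.get(w, 0.0) for w, v in a.items())

class AgentMemory:
    """Keeps a character's history and recalls the events
    most relevant to the current query."""

    def __init__(self) -> None:
        self.events: list[tuple[str, dict[str, float]]] = []

    def remember(self, event: str) -> None:
        self.events.append((event, embed(event)))

    def recall(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        ranked = sorted(self.events, key=lambda e: cosine(q, e[1]),
                        reverse=True)
        return [text for text, _ in ranked[:k]]
```

Prepending the recalled events to a character's prompt is what keeps their answers consistent across separate queries.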


Challenges we ran into

  • Scope vs. Time: Alternate-history simulation can spiral infinitely. We had to design guardrails to keep worlds coherent.
  • Memory & Persistence: Getting characters to remain consistent across multiple queries was tricky.
  • Balancing Creativity with Plausibility: The model sometimes generated wild outcomes (like “Roman Empire inventing interstellar travel in 400 AD”). We had to tune prompts to keep results fun but believable.
  • Performance: Running large models with complex prompts meant optimizing API calls and caching results.

Accomplishments that we're proud of

  • Building a working multiverse exploration interface in a short hackathon timeframe.
  • Watching coherent alternate histories unfold in seconds.
  • Seeing emergent behaviors when characters with goals and memory started influencing timelines.
  • Designing a project that is both educational and entertaining, with potential real-world impact in classrooms, research, and creative industries.

What we learned

  • How to prompt and structure gpt-oss for worldbuilding and agent simulation.
  • The importance of UI/UX when dealing with infinite complexity — good design makes the simulator usable.
  • Techniques to balance imagination with logic, so results are fun but grounded.
  • How open-source LLMs can power tools that go far beyond chat — into dynamic, evolving simulations.

What's next for Multiverse Simulator

  • Education: Partnering with teachers to use it as an interactive history & futures-thinking tool.
  • Research: Adapting the simulator for scenario planning in climate change, policy, and global foresight.
  • Community Sandbox: Allowing users to share, remix, and expand universes.
  • Multimedia Expansion: Generating AI art, maps, music, and 3D worlds to make the simulation fully immersive.
  • Mobile/Offline Support: Bringing the multiverse to low-connectivity areas for maximum accessibility.

Our vision: a future where anyone can step into alternate worlds, not just imagine them. 🌌
