Inspiration

Most AI tools today tell users what to do.
They give advice, suggestions, or answers — but they rarely show the consequences of a decision over time.

In real life, decisions are constrained, stateful, and irreversible.
Once you choose a path, you don’t get to rewind.

I built Decide.World to explore a different idea:

What if AI didn’t give advice — but instead simulated outcomes inside a controlled world?


What it does

Decide.World is a decision simulation engine, not a chatbot.

Users interact with predefined, real-world scenarios (such as Student Budget Survival or Corporate Crisis).
They do not type prompts.

Instead, users:

  • Observe a structured world state
  • Select decisions via buttons or cards
  • Watch the system update state variables over time

Each scenario maintains a finite world state, for example:

  • Budget
  • Stress
  • Capability
  • Opportunity index
  • Time step
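A world state like this can be represented as a plain object. The field names and starting values below are illustrative assumptions for a "Student Budget Survival"-style scenario, not taken from the actual engine:

```javascript
// Illustrative world state; names and values are assumptions for this sketch.
const initialState = {
  budget: 1200,         // remaining money for the term
  stress: 20,           // 0-100 scale
  capability: 10,       // skills accumulated so far
  opportunityIndex: 50, // how many doors are currently open
  timeStep: 0,          // current week
};

// Freezing the snapshot mirrors the core rule: past states cannot be rewound.
Object.freeze(initialState);
```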

Every decision:

  • Has irreversible consequences
  • Updates only allowed variables
  • Advances time forward
  • Produces an explanation tied directly to state changes
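One way to encode those guarantees is to make each decision declare exactly which variables it may touch, and to have the apply step return a new frozen state. This is a minimal sketch; the decision name and effect values are hypothetical:

```javascript
// A decision declares which variables it may change and by how much.
// The label and deltas here are illustrative.
const takePartTimeJob = {
  label: "Take a part-time job",
  effects: { budget: +300, stress: +15, capability: +5 },
};

// Applying a decision returns a NEW frozen state (the old one is never
// mutated), touches only the variables listed in `effects`, and advances time.
function applyDecision(state, decision) {
  const next = { ...state, timeStep: state.timeStep + 1 };
  for (const [key, delta] of Object.entries(decision.effects)) {
    next[key] = state[key] + delta;
  }
  return Object.freeze(next);
}

const s0 = { budget: 1200, stress: 20, capability: 10, opportunityIndex: 50, timeStep: 0 };
const s1 = applyDecision(s0, takePartTimeJob);
// s1.budget is 1500 and s1.timeStep is 1, while s0 is left untouched.
```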

How it works

At the core of Decide.World is Gemini 3, used as a constrained reasoning engine.

Gemini 3 is responsible for:

  • Accepting the current world state
  • Applying predefined rules and constraints
  • Reducing state deterministically
  • Explaining why each variable changed
  • Highlighting tradeoffs and side effects
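Enforcing that contract means checking every model reply against the scenario template before it touches the world state. The reply shape below (`stateDeltas` plus `explanation`) is a hypothetical contract for this sketch, not the engine's actual schema:

```javascript
// Variables the model is allowed to modify, per scenario template.
const ALLOWED_VARS = ["budget", "stress", "capability", "opportunityIndex"];

// Reject any model reply that steps outside the contract:
// unknown variables or a missing explanation of the state change.
function validateModelReply(reply) {
  const keys = Object.keys(reply.stateDeltas ?? {});
  const unknown = keys.filter((k) => !ALLOWED_VARS.includes(k));
  if (unknown.length > 0) {
    return { ok: false, reason: `disallowed variables: ${unknown.join(", ")}` };
  }
  if (typeof reply.explanation !== "string" || reply.explanation.length === 0) {
    return { ok: false, reason: "explanation missing" };
  }
  return { ok: true };
}
```

A reply that tries to invent a new mechanic (say, a `luck` variable) fails validation and never reaches the state.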

Gemini does not:

  • Generate free-form advice
  • Invent new mechanics
  • Override constraints
  • Behave like a conversational assistant

This keeps simulations:

  • Comparable
  • Repeatable
  • Explainable

How I built it

The system is designed around a strict simulation flow:

  1. Fixed scenario template
  2. Controlled parameter selection
  3. Initial world state
  4. User decision
  5. Gemini 3 state reduction
  6. Updated world state + explanation
  7. Time progression
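Stitched together, steps 3 through 7 form a simple loop. In this sketch `reduceState` stands in for the Gemini 3 call (step 5) and is stubbed with a deterministic rule so it runs offline; the decision objects are illustrative:

```javascript
// Stub for the Gemini 3 state reduction (step 5). The real engine calls the
// model with the scenario rules; a fixed rule here keeps the sketch runnable.
function reduceState(state, decision) {
  const next = { ...state, timeStep: state.timeStep + 1 };
  for (const [key, delta] of Object.entries(decision.effects)) {
    next[key] = next[key] + delta;
  }
  return {
    state: next,
    explanation: `${decision.label}: ${Object.keys(decision.effects).join(", ")} changed.`,
  };
}

// Steps 3-7: run a sequence of user decisions against an initial state.
function runSimulation(initialState, decisions) {
  let state = initialState;
  const log = [];
  for (const decision of decisions) {
    const { state: nextState, explanation } = reduceState(state, decision);
    log.push(explanation);
    state = nextState; // irreversible: the old state is never revisited
  }
  return { state, log };
}
```

Each pass through the loop yields both an updated state (step 6) and an explanation tied to that transition, with time advancing exactly once per decision (step 7).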

The UI deliberately avoids:

  • Chat interfaces
  • Prompt boxes
  • Open-ended text input

All interactions are decision-driven, reinforcing the idea that this is a system, not an assistant.


Challenges I faced

The hardest part was resisting chatbot behavior.

Left unconstrained, large language models tend to:

  • Give advice
  • Explain too much
  • Introduce new ideas

I had to carefully constrain Gemini 3 so that:

  • It only modified allowed variables
  • All reasoning stayed within scenario rules
  • Every output was tied to state transitions

Designing a system that feels intelligent without feeling conversational was the core challenge.


What I learned

  • AI becomes more trustworthy when it operates inside constraints
  • State matters more than text
  • Decisions feel more meaningful when consequences are permanent
  • Gemini 3 is extremely powerful when used as a reasoning engine, not a chatbot

What’s next

Future extensions include:

  • More curated real-world scenarios
  • Counterfactual comparisons between decision paths
  • Educational and training-focused simulations
  • Enterprise decision modeling use cases

Decide.World doesn’t tell users what to do —
it shows them what happens.

Built With

  • gemini-3-api
  • google-ai-studio
  • javascript
  • simulation
  • state-based
  • ui
  • web