Inspiration

Most people learn investing from the internet — short videos, personal anecdotes, hype cycles, and confident opinions. The problem isn't a lack of information; it's that experience is expensive. By the time people develop good instincts, they've often already made costly mistakes.

What stood out to me was this gap: there are plenty of tools that teach what to do, but almost none that help people understand how they actually make decisions under pressure.

I was inspired by the idea of experience compression — giving users a safe way to practice high-stakes decisions, feel realistic stress, make mistakes, and then reflect on why those decisions felt right at the time. The goal wasn't to predict markets or give advice, but to train judgment before real money is involved.

That's how Is It Legit? was born.

What It Does

Is It Legit? trains how people make decisions under uncertainty — not what decisions they make.

The platform places users into realistic, high-pressure market simulations and instruments how decisions are made rather than optimizing for profit.

Here's what that looks like in practice:

A user buys HYPECOIN at $0.52 after seeing it trend on social media, doubles down at $0.70 on pure momentum, and panic-sells at $0.28. They lose $1,800 — but the reflection doesn't just show the loss. It shows them why each decision felt rational at the time, that their process score was 25/100, that FOMO was detected at 80% confidence, and what a professional trader would have done differently. Their next simulation then adapts to specifically target this FOMO pattern — and over 20 sessions, their process score climbs from 25 to 88.

Specifically, the app:

  • Simulates realistic markets with 14 realism features — from GARCH volatility clustering and circuit breaker halts to crowd behavior modeling, margin pressure, and news latency
  • Tracks decision behavior in real time — timing, attention patterns, risk allocation, confidence levels, order types, and stated rationale
  • Separates outcomes from decision quality — a user can profit and still receive a low process score
  • Uses Google Gemini to generate evidence-backed reflection and counterfactual analysis grounded strictly in the user's behavioral data
  • Challenges reasoning before commitment — users can ask the AI to critique their rationale and score it prior to executing a trade
  • Adapts future simulations to target the user's specific behavioral patterns, creating a personalized training loop

The result is a system that trains process over outcomes, helping users build better instincts rather than chase correct answers.
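The outcome/process separation can be sketched in a few lines. This is a hypothetical scoring function, not the app's actual rubric — the `Decision` fields, weights, and thresholds below are illustrative:

```python
# Hypothetical sketch: scoring decision process independently of outcome.
# The names (Decision, score_process) and weights are illustrative only.
from dataclasses import dataclass

@dataclass
class Decision:
    rationale_given: bool      # did the user state a reason before acting?
    position_fraction: float   # fraction of bankroll committed (0..1)
    seconds_deliberated: float # time spent before confirming the order
    pnl: float                 # profit/loss -- deliberately ignored below

def score_process(decisions: list[Decision]) -> int:
    """Return a 0-100 process score that never looks at pnl."""
    total = 0.0
    for d in decisions:
        pts = 0.0
        pts += 40 if d.rationale_given else 0           # reasoned, not impulsive
        pts += 30 if d.position_fraction <= 0.25 else 0 # sized the risk sanely
        pts += 30 if d.seconds_deliberated >= 10 else 0 # didn't snap-trade
        total += pts
    return round(total / max(len(decisions), 1))

# A profitable but reckless trade can still score zero:
lucky = [Decision(rationale_given=False, position_fraction=0.9,
                  seconds_deliberated=2.0, pnl=+1500.0)]
print(score_process(lucky))  # 0 -- profit, yet a rock-bottom process score
```

The key design point is that `pnl` is present in the data but excluded from the score, which is how a user can profit and still be told their process was poor.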

How I Built It

System Architecture

The application is built as an orchestrated agent system, not a prompt-based interface.

  • Frontend: React 18 + Vite + Tailwind CSS
  • Backend: FastAPI (Python)
  • Database: PostgreSQL (simulation runs, decision logs, behavior profiles)
  • AI Engine: Google Gemini (server-side only)
  • Real-time Streaming: Server-Sent Events (SSE) with auto-reconnect
  • Testing: 158 backend tests (pytest) + 22 frontend component tests (Vitest)
  • CI: GitHub Actions pipeline with backend tests, frontend tests, and build verification
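The SSE auto-reconnect behavior in the stack above comes down to the `id:` field in each frame: browsers resend the last seen id as the `Last-Event-ID` header when they reconnect, so the server can replay missed events. A stdlib-only sketch (in the real app a FastAPI `StreamingResponse` would wrap a generator like this; the payload fields are illustrative):

```python
# Sketch of SSE framing with resume-on-reconnect; payloads are illustrative.
import json
from typing import Iterator

def format_sse(event_id: int, data: dict) -> str:
    """One SSE wire frame: id line, data line, blank-line terminator.
    The id is what lets a reconnecting client resume where it left off."""
    return f"id: {event_id}\ndata: {json.dumps(data)}\n\n"

def tick_stream(prices: list[float], last_event_id: int = -1) -> Iterator[str]:
    """Replay ticks after `last_event_id`, so auto-reconnect loses nothing."""
    for seq, price in enumerate(prices):
        if seq > last_event_id:
            yield format_sse(seq, {"seq": seq, "price": price})

# A client that dropped after event 1 resumes from event 2:
frames = list(tick_stream([0.52, 0.70, 0.28], last_event_id=1))
print(len(frames))                # 1
print(frames[0].splitlines()[0])  # id: 2
```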

Deterministic Simulation Engine

All market mechanics are fully code-controlled. Gemini does not influence prices, events, or outcomes.

The 1,070-line simulation engine implements 14 realism features including bid-ask spreads, transaction costs, liquidity constraints, volatility regimes, circuit breaker halts, order types, time-pressure fills, news latency, correlated assets, margin/leverage, drawdown limits, macro indicators, and a behavioral crowd model. Scenarios progressively increase complexity — simpler runs activate 3–4 features, while advanced scenarios stack all 14.
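As one example of how these features work, volatility clustering can be modeled with a GARCH(1,1) recursion, where today's variance feeds on yesterday's shock — so large moves arrive in bursts. The parameters below are illustrative, not the engine's actual values:

```python
# Illustrative GARCH(1,1) volatility-clustering sketch -- the shape of one
# realism feature, not the engine's actual code or parameters.
import random

def simulate_garch(n: int, omega: float = 1e-5, alpha: float = 0.1,
                   beta: float = 0.85, seed: int = 42):
    """Return (returns, variances). Requires alpha + beta < 1 so the
    long-run variance omega / (1 - alpha - beta) is finite."""
    rng = random.Random(seed)
    var = omega / (1 - alpha - beta)   # start at the long-run variance
    returns, variances = [], []
    for _ in range(n):
        r = rng.gauss(0.0, var ** 0.5)            # shock scaled by current vol
        var = omega + alpha * r * r + beta * var  # GARCH(1,1) recursion
        returns.append(r)
        variances.append(var)
    return returns, variances

rets, vols = simulate_garch(500)
```

Because each day's variance inherits most of the previous day's (`beta = 0.85` here), a single large shock keeps volatility elevated for many subsequent ticks — the clustering users actually feel during a simulated crash.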

Gemini as a Reasoning Agent

Importantly, Gemini does not control the simulation. All market behavior is deterministic; Gemini is used strictly to reason over what the user did.

Gemini operates in six distinct analytical roles:

  1. Behavior Profiler — Infers patterns such as impulsivity, social proof reliance, loss aversion, and overconfidence
  2. Counterfactual Generator — Shows alternate timelines ("What if you sold 30 seconds earlier?")
  3. Reflection Synthesizer — Explains decision quality with evidence tied to specific timestamps
  4. Bias Explainer — Maps decisions to likely cognitive biases using behavioral psychology
  5. Coaching Engine — Adapts tone and feedback based on a persistent behavior profile
  6. Scenario Planner — Generates AI-tailored simulations targeting the user's weakest behavioral pattern
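Because the market path is deterministic and fully logged, role 2 can be grounded in a simple replay: re-run the recorded prices with one decision shifted in time. A minimal sketch with hypothetical names, reusing the HYPECOIN numbers from earlier:

```python
# Illustrative counterfactual replay -- function name and price path are
# hypothetical, echoing the HYPECOIN example from the writeup.
def counterfactual_pnl(prices: list[float], buy_idx: int, sell_idx: int,
                       qty: float, shift: int = 0) -> float:
    """P&L if the sell had happened `shift` ticks earlier (negative shift)
    or later, clamped to stay after the buy and inside the recorded path."""
    sell = max(buy_idx + 1, min(sell_idx + shift, len(prices) - 1))
    return (prices[sell] - prices[buy_idx]) * qty

path = [0.52, 0.61, 0.70, 0.55, 0.40, 0.28]   # the recorded ride down
actual  = counterfactual_pnl(path, buy_idx=0, sell_idx=5, qty=5000)
earlier = counterfactual_pnl(path, buy_idx=0, sell_idx=5, qty=5000, shift=-3)
print(round(actual))   # -1200
print(round(earlier))  # 900
```

The point is that the alternate timeline is computed from the same deterministic log the user lived through — Gemini narrates it, but arithmetic like this produces it.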

Verification & Safety

  • All Gemini outputs are constrained to strict JSON schemas and validated with Pydantic
  • Every insight must reference supporting behavioral evidence from the simulation logs
  • The model is allowed to abstain when confidence is low
  • Heuristic/mock fallbacks exist for all Gemini calls so the system functions without API access
  • Gemini responses are cached to avoid redundant calls and reduce latency
  • Rate limiting protects all AI-calling endpoints (10/min standard, 3–5/min for heavy operations)
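The first three bullets can be sketched with Pydantic v2. The schema fields and the 0.5 abstention threshold here are illustrative, not the project's actual schema:

```python
# Sketch of schema-constrained AI output with abstention, assuming Pydantic v2.
# Field names and the confidence threshold are illustrative.
from pydantic import BaseModel, Field, ValidationError

class BiasInsight(BaseModel):
    bias: str                                  # e.g. "FOMO"
    confidence: float = Field(ge=0.0, le=1.0)  # model-reported confidence
    evidence: list[str] = Field(min_length=1)  # must cite simulation logs

def parse_insight(raw_json: str, min_confidence: float = 0.5):
    """Validate a model response; return None (abstain) on malformed output
    or low confidence instead of surfacing an ungrounded claim."""
    try:
        insight = BiasInsight.model_validate_json(raw_json)
    except ValidationError:
        return None                 # malformed -> fall back to heuristics
    if insight.confidence < min_confidence:
        return None                 # the model is allowed to be unsure
    return insight

ok = parse_insight('{"bias": "FOMO", "confidence": 0.8,'
                   ' "evidence": ["bought 40s after social spike"]}')
print(ok.bias)                              # FOMO
print(parse_insight('{"bias": "FOMO"}'))    # None -- missing required fields
```

Requiring at least one `evidence` entry at the schema level is what makes "every insight must reference supporting behavioral evidence" enforceable rather than aspirational.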

Challenges I Ran Into

  • Avoiding "LLM as a wrapper": I enforced a strict separation where simulations run identically with or without AI. Gemini is invoked only to analyze completed behavior.
  • Grounding AI feedback: Bias detection required triangulating timing, attention, and stated rationale rather than relying on text alone.
  • Preventing memorization: Scenario families vary surface details while preserving behavioral traps; adaptive generation ensures no fixed solution paths.
  • Balancing stress and usability: Multiple iterations were needed to apply pressure without overwhelming users. A briefing screen now contextualizes each scenario before the clock starts.
  • Market realism without overload: Progressive disclosure was critical — advanced realism is earned, not forced.
  • Managing AI reliability: Rate limits and latency required caching, cooldowns, and heuristic-first fallbacks so the system degrades gracefully.
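The last point's cache-then-fallback pattern looks roughly like this — a sketch assuming a hypothetical `call_gemini` callable and an in-process cache, not the app's actual plumbing:

```python
# Sketch of graceful degradation: cache first, then the API, then heuristics.
# `call_gemini` and the TTL are hypothetical stand-ins.
import time

_cache: dict[str, tuple[float, str]] = {}
CACHE_TTL = 300.0  # seconds

def analyze(prompt_key: str, call_gemini, heuristic) -> str:
    """Serve cached AI output when fresh; otherwise try the API, and degrade
    to a deterministic heuristic so the product works without API access."""
    now = time.monotonic()
    hit = _cache.get(prompt_key)
    if hit and now - hit[0] < CACHE_TTL:
        return hit[1]                        # cached: no redundant API call
    try:
        result = call_gemini(prompt_key)
        _cache[prompt_key] = (now, result)
        return result
    except Exception:
        return heuristic(prompt_key)         # graceful degradation

def failing_call(key: str) -> str:
    raise TimeoutError("simulated rate limit")

# With the API unavailable, the heuristic still answers:
print(analyze("session-42", failing_call,
              lambda k: "heuristic: possible FOMO (timing-based)"))
# heuristic: possible FOMO (timing-based)
```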

Accomplishments I'm Proud Of

  • Demonstrating that profitable outcomes can still reflect poor decision-making
  • Designing a feedback loop focused on why decisions felt right at the time
  • Using Gemini as a constrained reasoning agent with six distinct analytical roles
  • Implementing counterfactual timelines users immediately understand
  • Building a deterministic, 1,070-line simulation engine with 14 realism features
  • Creating a real-time reasoning challenge before trade commitment
  • Conveying realistic market pressure without overwhelming first-time users

What I Learned

  • The most effective AI systems improve feedback loops, not answers
  • Behavioral signals like timing and attention often matter more than explicit input
  • Reflection works best when it is evidence-based and non-judgmental
  • Strict schema validation significantly improves AI reliability and trust
  • Progressive complexity is essential for onboarding under uncertainty
  • Separating "did it work?" from "was the process good?" changes how users think about their own decisions

What's Next for Is It Legit?

  • Expanding scenario libraries across market regimes and asset classes
  • Longitudinal tracking of decision patterns with trend analysis
  • Live, time-synchronized multiplayer stress simulations
  • Classroom and educational integrations with instructor dashboards
  • Applying the behavioral training framework to non-financial domains such as hiring, crisis response, and operational decision-making

Is It Legit? isn't about predicting the future. It's about understanding how you think when the future is uncertain.

Built With

React 18, Vite, Tailwind CSS, FastAPI (Python), PostgreSQL, Google Gemini, Server-Sent Events, pytest, Vitest, GitHub Actions
