From Space Simulator to Cognitive Platform
In the last two updates we shared what SGBrain looked like at deployment and the quality work that followed. What started as a multi-scenario orbital simulator — 16 scenarios, Kepler + Cowell engines, real-time 3D — has since grown into something fundamentally different. This update summarizes the architectural transformation that has taken shape over the past month.
SGBrain is no longer just a simulation tool. It is now a platform where physical systems are observed, interpreted, and remembered.
What Changed: The Architecture
The original codebase had 3 Django applications: core (auth), simulator (physics + entities + everything else), and synai (the AI brain). Nearly all domain logic — entities, events, contexts, physics — lived inside simulator, tightly coupled.
Today the backend consists of 7 specialized applications, each with a clear biological analogy that guides where new functionality belongs:
| App | Role | Biological Analogy |
|---|---|---|
| core | Auth, permissions, abstract models | Infrastructure |
| holonto | Entity ontology — what exists | Anatomy / Body |
| contexter | Scientific context — in what universe | Physics / Natural Laws |
| simulator | Physics execution — how things move | Metabolism / Life |
| brainaxis | Impulse routing — how things are perceived | Nervous System |
| synai | AI cognition — how the system thinks | Cerebral Cortex |
| ontodex | Immutable history — what was learned | Biography / Soul |
This is not just renaming folders. Each application has strict import rules, its own models, services, and tests. The dependency graph is enforced: holonto never imports from simulator, simulator never imports from synai, nobody imports from ontodex. Communication between domains happens through Django signals and a formal event bus.
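One way such dependency rules can be enforced is a test that statically scans each app's modules for forbidden imports. The sketch below is our illustration of the idea, not SGBrain's actual mechanism; only the app names and the rules they imply come from the architecture described above.

```python
import ast

# Forbidden dependency edges (importer -> forbidden targets), per the
# rules above: holonto never imports from simulator, simulator never
# imports from synai, nobody imports from ontodex.
FORBIDDEN = {
    "holonto": {"simulator", "synai", "brainaxis", "ontodex"},
    "simulator": {"synai", "ontodex"},
}

def forbidden_imports(app: str, source: str) -> list[str]:
    """Return the disallowed modules imported by `source` within `app`."""
    banned = FORBIDDEN.get(app, set())
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            hits += [a.name for a in node.names if a.name.split(".")[0] in banned]
        elif isinstance(node, ast.ImportFrom) and node.module:
            if node.module.split(".")[0] in banned:
                hits.append(node.module)
    return hits

# A holonto module importing from simulator violates the dependency graph:
print(forbidden_imports("holonto", "from simulator.engine import propagate"))
# → ['simulator.engine']
# Importing shared infrastructure is always allowed:
print(forbidden_imports("holonto", "import core.models"))
# → []
```

Running a check like this inside the test suite turns the dependency graph from a convention into a gate.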
What SGBrain Can Do Now
1. Entity Ontology (holonto)
Entities are no longer just "things in space." They have structured composition (an ISS has modules, each with instruments, crew, fuel tanks), hierarchical properties that propagate upward (total mass = sum of all component masses), and a full audit trail of every change. 21 dynamic category types with physics behaviors, render behaviors, and hierarchy roles — all database-driven and extensible without code changes.
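Upward property propagation (total mass as the sum of component masses) can be sketched with a recursive composition. The real holonto models are Django-based; the `Entity` class and masses below are illustrative only.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """Illustrative composed entity; not the actual holonto model."""
    name: str
    mass_kg: float = 0.0
    components: list["Entity"] = field(default_factory=list)

    def total_mass(self) -> float:
        # Own mass plus the propagated mass of every nested component.
        return self.mass_kg + sum(c.total_mass() for c in self.components)

iss = Entity("ISS", components=[
    Entity("Zarya", 19_300),
    Entity("Zvezda", 20_320, components=[Entity("fuel", 860)]),
])
print(iss.total_mass())  # → 40480.0
```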
2. Pluggable Scientific Contexts (contexter)
The simulation engine no longer hardcodes orbital mechanics. A formal SimContext protocol defines what a "domain" needs to provide: categories, constants, formulas, import services. Space is the first (and currently only) implementation, but the architecture is ready for biology, economics, or any system that needs entities, properties, and evolution over time.
Universal physical constants (G, c, k_B) and shared formulas (gravitational parameter, orbital period, escape velocity) live in a shared layer that any context can use.
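The shared layer translates directly into code. The constants below are standard physical values; the function names are ours, assumed for illustration rather than taken from SGBrain's API.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of Earth, kg

def gravitational_parameter(M: float) -> float:
    """mu = G * M, in m^3/s^2."""
    return G * M

def orbital_period(a: float, mu: float) -> float:
    """Kepler's third law: T = 2*pi*sqrt(a^3 / mu), seconds."""
    return 2 * math.pi * math.sqrt(a**3 / mu)

mu = gravitational_parameter(M_EARTH)
# An ISS-like orbit, semi-major axis ~6,778 km:
print(round(orbital_period(6_778e3, mu) / 60, 1))  # minutes, ~92.6
```

Any context can consume these; a biology or economics context would simply register its own constants alongside them.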
3. Nervous System Architecture (brainaxis)
Every change in the physical world — a collision detected by the simulator, a fuel level crossing a threshold, a new entity created — is automatically transduced into a cognitive impulse. This is not a webhook or a queue: it is a biologically-inspired nervous system where the Impulse Transducer converts physical events into stimuli with calibrated energy levels, region targeting, and priority.
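The transduction step can be pictured as a lookup from event type to calibrated impulse. All field names, region names, and calibration values below are our assumptions; the post describes the concept, not this API.

```python
from dataclasses import dataclass
from enum import Enum

class Region(Enum):
    SURVIVAL = "survival"
    PLANNING = "planning"

@dataclass
class Impulse:
    source: str
    energy: float     # calibrated energy level
    region: Region    # region targeting
    priority: int     # 0 = most urgent

# Hypothetical calibration table: event type -> (energy, region, priority)
CALIBRATION = {
    "collision":      (0.9, Region.SURVIVAL, 0),
    "fuel_threshold": (0.6, Region.SURVIVAL, 1),
    "entity_created": (0.2, Region.PLANNING, 3),
}

def transduce(event_type: str, entity: str) -> Impulse:
    """Convert a physical event into a cognitive impulse."""
    energy, region, priority = CALIBRATION[event_type]
    return Impulse(f"{event_type}:{entity}", energy, region, priority)

print(transduce("collision", "ISS"))
```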
New in this period:
- Reflex System — pre-programmed rules that generate instant responses without engaging the AI engine. "If pressure > 90%, close valve" resolves in milliseconds.
- Organic Priority Queue — every impulse receives a friction score φ calculated from three dimensions: gradient (magnitude of change), survival (urgency), and connectivity (criticality of the affected entity). NOISE-band impulses are automatically discarded.
- Reflex Escalation — reflexes can suppress, execute, or escalate impulses. An escalated impulse gets boosted energy and fast-tracked to the cognitive engine.
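The organic priority queue can be sketched as follows. The post names the three dimensions of φ but not the combining function, so the simple average and the NOISE cutoff below are our assumptions.

```python
import heapq

NOISE_THRESHOLD = 0.1  # assumed cutoff; NOISE-band impulses are dropped

def friction(gradient: float, survival: float, connectivity: float) -> float:
    """Friction score phi; a plain average stands in for the real formula."""
    return (gradient + survival + connectivity) / 3.0

queue: list[tuple[float, str]] = []

def enqueue(name: str, gradient: float, survival: float, connectivity: float):
    phi = friction(gradient, survival, connectivity)
    if phi < NOISE_THRESHOLD:
        return  # discarded as noise, never reaches the cognitive engine
    heapq.heappush(queue, (-phi, name))  # negate phi for max-priority order

enqueue("pressure_spike", 0.9, 0.95, 0.8)
enqueue("minor_drift", 0.05, 0.02, 0.1)   # phi ~ 0.057 -> discarded
enqueue("fuel_low", 0.4, 0.6, 0.7)
print([name for _, name in sorted(queue)])
# → ['pressure_spike', 'fuel_low']
```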
4. Cognitive Economy (synai)
The AI brain now has a real metabolic economy:
- Unified Toll Formula — firing cost depends on region difficulty, model complexity, neuron fatigue, reputation, and pathway myelinization. Well-trained neurons on established connections cost less energy.
- Stimulus Gas Budget — each incoming impulse gets an individual energy budget ("gas"). When gas runs out, propagation stops for that stimulus without affecting the rest of the system. This prevents a single complex event from draining the entire brain.
- Forensic Thought Ledger — every stimulus processed generates an immutable trace with cryptographic hash chains (Merkle-DAG). You can reconstruct step by step which neurons fired, how much energy they consumed, and what decisions they made.
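The toll formula and the gas budget interact: each neuron firing costs energy, and propagation halts when a stimulus's own budget is spent. The post names the toll's factors but not how they combine, so the formula shape below is our assumption.

```python
def toll(difficulty: float, complexity: float, fatigue: float,
         reputation: float, myelin: float, base: float = 1.0) -> float:
    """Hypothetical toll: fatigue raises cost; reputation and
    myelinization (well-trained pathways) lower it."""
    return base * difficulty * complexity * (1 + fatigue) / (reputation * myelin)

def propagate(gas: float, neurons: list[dict]) -> int:
    """Fire neurons in order until this stimulus runs out of gas.
    Only this stimulus halts; the rest of the system is unaffected."""
    fired = 0
    for n in neurons:
        cost = toll(**n)
        if gas < cost:
            break
        gas -= cost
        fired += 1
    return fired

# A well-trained neuron on a myelinated pathway is cheap:
cheap = dict(difficulty=1.0, complexity=1.0, fatigue=0.0, reputation=2.0, myelin=2.0)
# A fatigued neuron on a weak, low-reputation pathway is not:
costly = dict(difficulty=2.0, complexity=2.0, fatigue=1.0, reputation=0.5, myelin=0.5)

print(toll(**cheap), toll(**costly))                          # → 0.25 32.0
print(propagate(gas=1.0, neurons=[cheap, cheap, cheap, costly]))  # → 3
```

The last call shows the containment property: three cheap firings succeed, the expensive fourth is cut off, and no other stimulus's budget is touched.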
5. Immutable History (ontodex)
The newest application. Every state transition in the platform — entity changes, simulation events, cognitive impulses — is automatically sealed into a Merkle-DAG hash chain per entity. Tampering with any past seal invalidates the entire chain forward. State snapshots capture before/after entity states. An audit index provides a lightweight, queryable log of all sealed events.
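The tamper-evidence property comes from each seal committing to the hash of the one before it. A minimal hash-chain sketch (field names illustrative; the real ontodex schema is Django-backed):

```python
import hashlib
import json

def seal(prev_hash: str, event: dict) -> str:
    """Seal an event by hashing it together with the previous seal."""
    payload = json.dumps({"prev": prev_hash, "event": event}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

GENESIS = "0" * 64
h1 = seal(GENESIS, {"entity": "rocket", "fuel_kg": 500})
h2 = seal(h1, {"entity": "rocket", "fuel_kg": 480})

# Tampering with the first event changes h1, which no longer matches
# what h2 committed to — the chain is invalidated forward.
tampered_h1 = seal(GENESIS, {"entity": "rocket", "fuel_kg": 9999})
print(tampered_h1 == h1)  # → False
```

A Merkle-DAG generalizes this from a single chain to a graph of such commitments, one per entity.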
Ontodex has no API endpoints for users. It cannot be reached from outside. Its "soul" is protected by design.
6. Domain-Agnostic Simulation Engine
The simulation core was decoupled from orbital mechanics. The engine now accepts any propagator and event detector registered by a context. Space scenarios work exactly as before — Kepler for analytical, Cowell for N-body, SGP4 for TLE, J2 + atmospheric drag + magnetic perturbations — but through a universal abstraction. A simulation now carries an explicit domain tag, enabling future mixed-domain scenarios.
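Context-registered propagators can be sketched as a simple registry the engine resolves at run time. The registry API and the placeholder "kepler" step below are our assumptions, not SGBrain's actual code.

```python
from typing import Callable

State = dict[str, float]
Propagator = Callable[[State, float], State]

PROPAGATORS: dict[str, Propagator] = {}

def register(name: str):
    """Decorator a context uses to register its propagators by name."""
    def wrap(fn: Propagator) -> Propagator:
        PROPAGATORS[name] = fn
        return fn
    return wrap

@register("kepler")
def kepler_step(state: State, dt: float) -> State:
    # Placeholder analytic step: advance mean anomaly at constant rate n.
    return {**state, "anomaly": state["anomaly"] + state["n"] * dt}

def simulate(name: str, state: State, dt: float, steps: int) -> State:
    """Domain-agnostic loop: the engine only knows the registry key."""
    step = PROPAGATORS[name]
    for _ in range(steps):
        state = step(state, dt)
    return state

result = simulate("kepler", {"anomaly": 0.0, "n": 0.001}, dt=10.0, steps=5)
print(round(result["anomaly"], 6))  # → 0.05
```

Swapping "kepler" for "cowell" or a future non-space propagator changes nothing in the loop, which is the point of the abstraction.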
Development Strategy: Two Pillars
Development is now organized into two sequential pillars:
Pillar 1 — Physical Layer: "What exists + how it behaves" (holonto → contexter → simulator). Validated through 5 progressive scenarios, from a standalone rocket engine (T1) to a complete spacecraft with composed entities, dynamic properties, context resolution, physics propagation, and maneuver planning (T5).
Pillar 2 — Intelligence Layer: "How it perceives + thinks + remembers" (brainaxis → synai → ontodex). Uses Pillar 1's validated scenarios as input data for impulse routing, cognitive processing, and immutable sealing.
The crown example that validates both pillars end-to-end: a rocket engine burn where entity creation → attribute definition → simulation activation → context resolution → physics propagation → fuel burn → mass decrease → trajectory change → event generation → impulse transduction → cognitive processing → immutable sealing all happen in a single, traceable chain.
By the Numbers
| Metric | Value |
|---|---|
| Backend applications | 7 |
| Django models | — |
| Services/Engines | — |
| Backend tests | 4,004 |
| Test coverage | 100% |
| Covered statements | — |
| Missing lines | — |
| Source files covered | — |
| Documentation files | — |
| Scenario templates | — |
Platform Capabilities Summary
What SGBrain offers today, beyond orbital simulation:
- Multi-scale space simulation — from 6-hour LEO operations to 1.5-billion-year galaxy collisions, across 17 validated scenarios with 3 physics engines
- Entity composition — hierarchical object modeling with structured contents, property propagation, and dynamic category types
- Pluggable scientific contexts — domain-agnostic architecture ready for expansion beyond space
- Autonomous nervous system — event transduction, organic prioritization, reflexes, and escalation — all without human intervention
- AI cognitive economy — metabolic energy management, neuron reputation, adaptive costs, and per-stimulus gas budgets
- Forensic traceability — every decision, every state change, every impulse sealed with cryptographic hash chains
- Production-grade infrastructure — 100% test coverage gate, role-based access, subscription tiers, OAuth, bot protection, 65+ documented API endpoints
What hasn't changed: SGBrain is built on Google Cloud (Cloud Run, Memorystore, Cloud SQL, Cloud Storage), uses Angular + Three.js for the frontend, Django + Celery for the backend, and Gemini 2.0 for AI inference. The 16 original scenarios remain live at sgbrain.io exactly as submitted.
What has changed: the platform now has a nervous system, a metabolic economy, an immutable memory, and a modular architecture that can grow beyond space — while staying rooted in it.
The Discovery: Biology as Software Architecture
This is perhaps the most unexpected outcome of the project. We didn't start with a biological metaphor and then build software to match it. We started with a simulation engine, kept decomposing responsibilities into smaller, more focused applications, and one day looked at the result and realized: we had built an organism.
The 6 domain applications form a hexagonal molecular structure where core sits at the center as shared infrastructure, and each app occupies a vertex with specific bonds to its neighbors:
```
              holonto
             ╱╱     ╲╲
          2 ╱╱       ╲╲ 2
           ╱╱         ╲╲
    ontodex ___________ contexter
      ││││ |  ╲     ╱ |    ││
    4 ││││ |   core   |    ││ 2
      ││││ |__╱___╲___|    ││
      synai            simulator
         ╲╲            ╱
        2 ╲╲          ╱ 1
           ╲╲        ╱
            brainaxis
```
Each connection has a measurable valence — the number of active communication channels between two apps. Some are bidirectional (holonto ↔ contexter: state feeds physics, physics returns computed results), some are intentionally monodirectional (simulator → brainaxis: telemetry flows in one direction only — the nervous system listens, it never instructs the physics engine).
The thickest bond in the system is between synai and ontodex — 4 channels. This is the connection between the mind and the memory: massive read/write of past states, hash comparisons, and immutable sealing. As noted above, ontodex exposes no external API: if it could be reached from outside, the system's history would be vulnerable to corruption.
What makes this model useful — not just poetic — is that it predicts system behavior. A hexagon with many bidirectional bonds (rigid) produces faster responses but consumes more cognitive energy. A hexagon with monodirectional bonds (flexible) is slower but more reflective. This is a real design knob: the ratio of rigid to flexible connections determines whether SGBrain behaves as a reactive system or a deliberative one. We didn't plan this property. It emerged from the structure.
Every application answers exactly one biological question:
- holonto (Anatomy): What exists?
- contexter (Natural Laws): In what context?
- simulator (Metabolism): How does it behave?
- brainaxis (Nervous System): How is it perceived?
- synai (Cerebral Cortex): How does it think?
- ontodex (Soul): What did it learn?
If a piece of functionality doesn't clearly answer one of these six questions, it either belongs in core or needs rethinking. Since we adopted it, this rule has settled every debate about where new code belongs.
A Note on Where We Actually Are
We want to be transparent: much of what's described above is architectural foundation, not battle-tested production behavior. The 7-app structure is real, the 4,004 tests pass, the import rules are enforced, and every service works in isolation. But the full cognitive pipeline — a physics event traversing the nervous system, being prioritized, triggering a reflex or reaching the AI brain, and having the entire chain sealed in an immutable ledger — has only been validated through unit and integration tests, not through sustained real-world usage.
The progressive scenario validation (Tier 1 through Tier 5) exists precisely because we know the gap between "it compiles and passes tests" and "it works reliably under real conditions" is significant. We're building the architecture with confidence, but we treat every end-to-end claim as a hypothesis until the scenarios prove it. The two-pillar strategy is our way of being methodical about it: validate the physical layer first, then layer intelligence on top — never the other way around.
We're building something that doesn't just simulate orbits. It observes, prioritizes, thinks, acts, and remembers. And every step is auditable. But getting there reliably will take iteration, and we're committed to earning that trust one validated scenario at a time.