About the Project

Inspiration

Space is not static, yet most simulators rely on hardcoded databases that become obsolete the moment they are written. With over 30,000 tracked debris objects and thousands of active satellites, the risk of catastrophic collisions grows daily. We asked: What if an AI "brain" could watch space in real time and react like a human operator — but faster, and never sleeping?

We were inspired by the challenge of bridging the gap between "video game physics" and engineering-grade astrodynamics. We wanted to create a system that could simulate complex scenarios — from Low Earth Orbit (LEO) traffic management to Deep Space missions like the JWST at Lagrange points — without manually inputting thousands of orbital parameters. The goal was to build a backend that acts as a living catalog, capable of self-discovery and self-correction using the same data sources professional astronomers use.

SGBrain was born from the intersection of two passions: orbital mechanics and bio-mimetic AI architecture. Instead of building yet another chatbot wrapper, we designed Synai — a cognitive system where the Gemini API becomes the neural substrate of an artificial mind, complete with brain regions, synapses, metabolism, and even fatigue.

What It Does

SGBrain is a full-stack Space Digital Twin platform with seven core capabilities:

  1. Autonomous Data Hydration: Request an entity by name (e.g., "Starlink-1007" or "James Webb"), and SGBrain's External Catalog Service intelligently queries diverse scientific APIs (JPL Horizons, CelesTrak, SIMBAD) to resolve the correct identity, mass, and orbital parameters.

  2. Smart Physics Inference: The system analyzes the scenario context. If you simulate the JWST, it automatically upgrades the gravity model from a simple 2-body Keplerian orbit to a Cowell N-body integrator, adding the Sun as a perturber to stabilize the L2 halo orbit.

  3. Unified Entity Factory: The Factory abstracts the complexity of data sources. Whether the data comes from a TLE (Two-Line Element set), a state vector, or a Keplerian element set, it normalizes everything into a unified internal physics model ready for propagation.

  4. Live 3D Visualization: Streams live telemetry to a WebGL 3D viewer where operators watch satellites, debris, and celestial bodies orbit in real time at 60 FPS.

  5. Automatic Threat Detection: Issues proximity warnings and collision predictions using ephemeral future projections.

  6. AI Recommendations: Synai, a Gemini-powered cognitive AI, analyzes scenarios, calculates optimal ΔV maneuvers, and presents them for human confirmation.

  7. Maneuver Execution: Injects velocity corrections directly into the running simulation via a Redis command bus.
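As a sketch of how the Smart Physics context detection might work (the function name, keywords, and distance threshold here are illustrative, not SGBrain's actual API):

```python
# Hypothetical heuristic: Lagrange-point missions and very distant objects
# need more than 2-body Kepler physics. Names and thresholds are illustrative.
GEO_RADIUS_KM = 42_164  # geostationary orbital radius
LAGRANGE_KEYWORDS = ("l1", "l2", "l3", "l4", "l5", "lagrange", "halo")

def select_propagator(name: str, distance_km: float) -> str:
    """Return 'cowell_nbody' for deep-space/Lagrange scenarios, else 'kepler'."""
    lowered = name.lower()
    if any(kw in lowered for kw in LAGRANGE_KEYWORDS):
        return "cowell_nbody"
    if distance_km > 10 * GEO_RADIUS_KM:  # well beyond cislunar space
        return "cowell_nbody"
    return "kepler"

print(select_propagator("JWST (Sun-Earth L2 halo)", 1_500_000))  # cowell_nbody
print(select_propagator("ISS", 6_771))                           # kepler
```

The real service inspects far richer scenario metadata; the point is that the propagator choice is inferred from context rather than configured by hand.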

The Synai Brain — How We Use Gemini

Synai is not a chatbot. It's a bio-mimetic neural network where:

  • Each neuron is a gemini-2.0-flash agent with specialized system instructions (its "DNA")
  • Brain regions (Frontal/Executive, Occipital/Visual, Temporal/Memory, Parietal/Spatial, Limbic/Emotional) group neurons by cognitive function
  • Synapses connect neurons with weighted energy transfer: $E_{target} = (E_{source} - friction) \times weight$
  • Metabolism tracks energy consumption per inference with separate costs for Flash vs Pro models
  • Friction simulates cognitive fatigue — after sustained activity, neuron responses degrade naturally
  • Mitosis allows neurons to self-replicate when they encounter problems beyond their specialization, using Gemini to generate new focused system instructions
  • Memory Traces provide episodic RAG context, enabling neurons to reference past experiences
Brain → Regions → Neurons → Synapses
                    ↓
              GeminiNeuron (gemini-2.0-flash)
              - system_instruction (DNA)
              - friction (fatigue)
              - uncertainty scoring
              - metabolism tracking
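The synaptic transfer rule above is simple enough to state directly in code. A minimal sketch of the formula (the zero floor is our assumption: a fully fatigued synapse transfers nothing rather than negative energy):

```python
def synaptic_transfer(e_source: float, friction: float, weight: float) -> float:
    """E_target = (E_source - friction) * weight.
    Floored at zero (assumption: fatigue cannot produce negative energy)."""
    return max(0.0, (e_source - friction) * weight)

print(synaptic_transfer(10.0, 2.0, 0.5))  # 4.0
print(synaptic_transfer(1.0, 2.0, 0.5))   # 0.0 -- friction exceeds the signal
```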

How We Built It

Physics Core (Python/Django 5.2+):

  • Orbital propagators (Cowell integrator, Kepler solver) built with Poliastro and Astropy for astrodynamics calculations, coordinate frame transformations (ICRS/GCRF), and unit conversions
  • Hybrid propagation: mixing TLE-based propagation (for LEO satellites) with high-precision numerical integration (Cowell's method) for deep space objects in the same timeline
  • Full perturbation modeling (J2 oblateness, atmospheric drag, N-body gravitational effects)
  • Cartesian state vector initialization for N-body scenarios (Three-Body choreographies, exotic planetary systems)
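At its core, the Kepler solver mentioned above amounts to solving Kepler's equation E − e·sin E = M for the eccentric anomaly. A minimal pure-Python Newton iteration (the production code leans on Poliastro/Astropy instead):

```python
import math

def solve_kepler(mean_anomaly: float, ecc: float, tol: float = 1e-12) -> float:
    """Solve Kepler's equation E - e*sin(E) = M via Newton's method."""
    E = mean_anomaly if ecc < 0.8 else math.pi  # standard starting guess
    for _ in range(50):
        delta = (E - ecc * math.sin(E) - mean_anomaly) / (1 - ecc * math.cos(E))
        E -= delta
        if abs(delta) < tol:
            break
    return E

E = solve_kepler(1.0, 0.1)
print(E - 0.1 * math.sin(E))  # recovers the mean anomaly, ~1.0
```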

Data Pipeline:

  • Robust ExternalCatalogService that acts as a proxy/cache gateway for scientific APIs
  • Handles the specific idiosyncrasies of legacy APIs (like JPL Horizons' CLI-based command syntax)
  • Entity Factory system with hydration from external catalogs (JPL Horizons, CelesTrak, SIMBAD)
  • Strict Registry + Factory Pattern ensuring physical constraint validation before persistence
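One Horizons idiosyncrasy the gateway has to absorb: small-body IDs need a trailing `;` (URL-encoded as `%3B`) so the API does not confuse them with major-planet record numbers. A sketch of the workaround, not SGBrain's actual service code:

```python
from urllib.parse import urlencode

HORIZONS_URL = "https://ssd.jpl.nasa.gov/api/horizons.api"

def horizons_query(target_id: str, small_body: bool) -> str:
    """Build a Horizons query URL; small-body IDs get a trailing ';'
    (encoded as %3B) to select the small-body record."""
    command = f"{target_id};" if small_body else target_id
    params = {"format": "json", "COMMAND": f"'{command}'", "EPHEM_TYPE": "VECTORS"}
    return HORIZONS_URL + "?" + urlencode(params)

print(horizons_query("433", small_body=True))   # asteroid Eros: COMMAND='433;'
print(horizons_query("399", small_body=False))  # Earth: COMMAND='399'
```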

Frontend (Angular 21 + TypeScript):

  • THREE.js WebGL viewer rendering satellites, orbits, trails, and ghost trajectories at 60 FPS
  • Real-time telemetry streaming via intelligent polling with chunk accumulation
  • Unified System Console with tabbed navigation (SYNAI, Events, History, Logs)
  • Interactive maneuver dialog with directional ΔV controls
  • Signal-based reactive state management

AI Layer (Gemini):

  • GeminiNeuron class wraps the google-genai SDK for neural processing
  • Bio-mimetic metabolism with energy accounting per inference
  • Cognitive degradation under sustained activity

Infrastructure:

  • PostgreSQL with JSONB fields for rich, schema-less physical properties
  • Redis for command injection bus and caching
  • Celery + Celery Beat for async task orchestration and continuous real-time simulation
  • GCP Cloud Build pipeline (7 stages) with multi-stage Docker/Nginx deployment

Challenges

  1. The "Horizons 400" Nightmare: We spent significant time debugging why JPL Horizons rejected our queries with 400 errors. The API requires specific command syntax (e.g., appending %3B to asteroid IDs) to distinguish between major planets and small bodies.

  2. Keplerian Traps: Visualizing the JWST was difficult — it appeared to orbit Earth in a chaotic ellipse because the propagator treated it as a standard moon. We had to implement a "Smart Physics" layer to detect Lagrange point missions and enforce 3-body physics (Earth + Sun + JWST).

  3. The "NaN" Crash: Deep space simulations occasionally produced infinite values or division-by-zero errors during reentry calculations, crashing JSON serialization to Postgres. We implemented strict sanitization layers in the simulation loop.

  4. Live chunk accumulation: Building a seamless streaming experience where telemetry chunks accumulate without visual artifacts or timeline resets.

  5. Bio-mimetic metabolism: Designing an energy system where AI inference has real "costs" that affect behavior — not just rate limiting, but actual cognitive degradation under fatigue.

  6. Coordinate frame scaling: Rendering objects from mm-scale satellite components to AU-scale solar system orbits in the same viewer using logarithmic depth buffers.

  7. N-body reference frames: Three-Body choreography scenarios required shared barycentric reference frames — letting each body create its own self-referencing attractor placed them in separate coordinate systems, which we solved with shared BARYCENTER entities.
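The fix for the "NaN" crash is essentially a recursive sanitization pass before anything reaches the JSON serializer. A minimal sketch of such a layer (names are illustrative):

```python
import json
import math

def sanitize(value):
    """Recursively replace NaN/Infinity with None so JSON serialization
    (and the Postgres JSONB column behind it) never sees a non-finite float."""
    if isinstance(value, float) and not math.isfinite(value):
        return None
    if isinstance(value, dict):
        return {k: sanitize(v) for k, v in value.items()}
    if isinstance(value, list):
        return [sanitize(v) for v in value]
    return value

state = {"altitude_km": float("nan"), "velocity": [7.6, float("inf")]}
print(json.dumps(sanitize(state), allow_nan=False))
# {"altitude_km": null, "velocity": [7.6, null]}
```

Passing `allow_nan=False` makes the serializer fail loudly if anything non-finite ever slips past the sanitizer.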

Accomplishments We're Proud Of

  • Self-Healing Catalog: The system takes a simulation template with just names ("ISS", "Hubble") and, within seconds, fully hydrates the simulation with the latest real-time orbital elements from NORAD and NASA.

  • Hybrid Propagation: Successfully running scenarios that mix TLE-based propagation (for LEO satellites) with high-precision numerical integration (Cowell's method) for deep space objects in the same timeline.

  • 16 Simulation Templates: From Near-Earth LEO operations to Galactic Center black hole S-star orbits, TRAPPIST-1 exoplanet systems, and Three-Body figure-8 choreographies — all with real astrophysical data.

  • Robust Architecture: Moving from a fragile script-based approach to a solid service-oriented architecture where the Registry, Factory, and Database have clear boundaries and responsibilities.

What We Learned

  • Gemini's speed: The gemini-2.0-flash model's response time makes real-time cognitive processing genuinely viable — neuron response times are fast enough for operational decision-making.

  • Bio-mimetic constraints create trust: Fatigue, metabolism, and friction create emergent behaviors that make the AI more trustworthy — operators can see why a recommendation degrades over time.

  • Scientific APIs are brittle: We learned to build defensive code around external services like SIMBAD and CelesTrak, implementing intelligent "skip" logic to avoid querying star catalogs for man-made satellites.

  • Physics is unforgiving: A small unit conversion error (AU to km) or a missing perturbing body (like the Sun) changes a stable orbit into an ejection trajectory.

  • Perfect domain for human-AI collaboration: Space debris collision avoidance is ideal — the AI calculates complex orbital mechanics while the human retains final authority over maneuver execution.

Demo scenarios

  00 Real-Time Simulation & Monitoring
  01 LEO Operations (TLEs)
  03 Earth-Moon System (Cislunar)
  04 Inner Solar System
  06 Outer Solar System
  11 TRAPPIST-1 System
  12 Alpha Centauri System
  13 Galactic Center (Supermassive Black Hole)
  16 Three-Body Choreography

Built With

  • Languages: Python 3.12, TypeScript
  • Frameworks: Django 5.2+, Angular 21, Django REST Framework
  • Scientific Libs: Poliastro, Astropy, NumPy, SciPy
  • 3D Rendering: THREE.js (WebGL)
  • AI: Google Gemini API (gemini-2.0-flash via google-genai SDK)
  • Database: PostgreSQL (JSONB)
  • Async/Queue: Redis, Celery
  • Cloud: Google Cloud Platform (Cloud Build, Cloud Run)
  • External APIs: NASA JPL Horizons, CelesTrak (NORAD TLEs), SIMBAD (via Astroquery)


Updates


The Molecular Cell: When Software Became Alive

In our first update, we showed a prototype reaching 100% test coverage. In the second, a stabilized simulator with 16 validated scenarios. In the third, we described the moment we looked at 7 applications and realized we had accidentally built "an organism".

This update is about what happened next.

We stopped building features and started listening to the architecture. And the architecture told us something we never expected: the rules governing a living cell are the same rules governing the cosmos, and both had been whispering through our codebase the entire time.


Part I — The Architecture Today

The Molecular Cell

SGBrain is no longer described as a "7-app hexagonal architecture." It is a molecular cell — a self-contained computational organism where each application corresponds to a biological organelle, and the connections between them carry measurable energy, direction, and purpose.

App | Organelle | Biological Function | Software Function
core | Cell Membrane | Selective permeability, structural integrity | Auth, permissions, abstract models, shared constants
holonto | Endoplasmic Reticulum | Protein synthesis & folding | Entity creation, property computation, taxonomy
contexter | Ribosomes | Translation of genetic code into function | Domain plugins — translating physics laws into computable rules
simulator | Mitochondria | Energy production, temporal engine | Physics execution, multi-domain engine, event generation
brainaxis | Cytoskeleton | Signal transport, structural highways | Impulse routing: afferent (perception) ↔ efferent (action)
synai | Enzymes | Catalytic reactions, metabolism | AI cognition, neural topology, energy economy, sleep cycles
ontodex | Cell Nucleus / DNA | Genetic memory, identity preservation | Immutable sealing, hash chains, brain diagnostics, snapshots

The dependency chain flows in one direction: core ← holonto ← contexter ← simulator ← brainaxis ← synai ← ontodex. No application imports from an application to its right. This is not a convention — it is an enforced architectural invariant verified by the test suite.
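The layering invariant is mechanically checkable. A toy version of such a check (the real one lives in SGBrain's test suite and inspects actual imports):

```python
# Left-to-right dependency chain: an app may only import from apps to its LEFT.
LAYERS = ["core", "holonto", "contexter", "simulator", "brainaxis", "synai", "ontodex"]

def import_allowed(importer: str, imported: str) -> bool:
    """True if `importer` sits strictly to the right of `imported` in the chain."""
    return LAYERS.index(imported) < LAYERS.index(importer)

assert import_allowed("synai", "core")           # AI may use shared infrastructure
assert not import_allowed("simulator", "synai")  # physics must not know about AI
```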

Two Pillars, One Organism

The cell operates through two complementary pipelines:

Pillar 1 — The Simulation Pipeline (Physical Layer)

holonto → contexter → simulator → ontodex

An entity is born in holonto (scaffold, taxonomy, property folding). The contexter injects domain-specific physics — orbital mechanics, atmospheric models, engineering parameters, geological forces, medical vital signs. The simulator executes the physics through its multi-domain engine loop. Every resulting event is sealed into ontodex's immutable Merkle chain.

Pillar 2 — The Intelligence Pipeline (Cognitive Layer)

brainaxis (afferent) → synai → brainaxis (efferent) → ontodex → synai (sleep)

A simulation event enters brainaxis through the transduction layer, which converts raw physics into cognitive impulse — classifying its nutritional type (protein-like rigid data, fiber-like chaotic noise, or glucose-like social signals), assigning priority through the organic triage queue, and filtering through the focus manager's observation depth. The impulse reaches synai, where the semantic router selects the best neuron, the metabolism calculates the energy toll, and (if warranted) a Gemini inference produces a decision. The decision flows back through the efferent pathway — dispatching actions: adjust observation depth, suggest topology changes, request entity consolidation, administer treatment. Meanwhile, ontodex's Shadow Brain compares every new response against the brain's learned history, emitting prediction error signals when the brain's behavior diverges from its own established patterns.
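A toy sketch of the nutritional-type triage described above. The heuristics here are invented for illustration — the real transduction layer classifies on much richer signals:

```python
def classify_nutrition(payload: dict) -> str:
    """Hypothetical triage mirroring the three 'nutritional types':
    protein = rigid structured data, fiber = chaotic noise, glucose = social."""
    if payload.get("source") == "operator":   # human/social signal
        return "glucose"
    if payload.get("schema_valid", False):    # well-structured telemetry
        return "protein"
    return "fiber"                            # unstructured noise

print(classify_nutrition({"source": "sensor", "schema_valid": True}))   # protein
print(classify_nutrition({"source": "operator"}))                       # glucose
print(classify_nutrition({"source": "sensor", "raw": "??"}))            # fiber
```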

And at night, during sleep, the consolidation service runs Hebbian reinforcement, friction analysis, plasticity pruning, and capacity evaluation — producing a brain that is different tomorrow from what it was today.


Part II — What the Cell Learned to Do

The Cognitive Economy

The brain runs on energy. Every thought has a metabolic cost computed by a Unified Toll Formula that accounts for region difficulty, model complexity, neuron fatigue, reputation, and pathway myelinization. Well-trained neurons on established connections cost less. The entire brain has a global energy pool that depletes with each firing and regenerates during sleep.

Each incoming impulse receives its own gas budget — an individual energy allowance that, once exhausted, halts processing for that specific stimulus without starving the rest of the system. A single anomalous event cannot drain the brain.
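The gas-budget mechanics reduce to a small amount of bookkeeping per impulse. A minimal sketch (class and method names are illustrative):

```python
class GasBudget:
    """Per-impulse energy allowance: exhausting it halts THIS impulse only,
    leaving the brain's global energy pool untouched."""

    def __init__(self, units: float):
        self.remaining = units

    def spend(self, cost: float) -> bool:
        """Return True if the step could be paid for, False to halt processing."""
        if cost > self.remaining:
            return False
        self.remaining -= cost
        return True

budget = GasBudget(10.0)
assert budget.spend(6.0)        # first inference fits
assert not budget.spend(6.0)    # second would overdraw: this impulse halts
assert budget.remaining == 4.0  # leftover gas; nothing else was starved
```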

Every decision, every neuron firing, every energy transaction is recorded in a forensic Thought Ledger — immutable entries chained by SHA-256 hashes. The complete cognitive history is reconstructible, auditable, and tamper-evident.
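A hash-chained ledger like this is straightforward to sketch with the standard library (field names are illustrative; the real Thought Ledger records far more per entry):

```python
import hashlib
import json

def seal(entry: dict, prev_hash: str) -> dict:
    """Chain an entry to its predecessor via SHA-256, making tampering evident."""
    body = {"prev": prev_hash, **entry}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify(chain: list) -> bool:
    """Recompute every hash and link; any edit anywhere breaks verification."""
    prev = "genesis"
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or digest != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

ledger = [seal({"neuron": "frontal-1", "cost": 0.3}, "genesis")]
ledger.append(seal({"neuron": "parietal-2", "cost": 0.1}, ledger[-1]["hash"]))
assert verify(ledger)
ledger[0]["cost"] = 0.0   # tamper with history...
assert not verify(ledger) # ...and the chain detects it
```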

Physical Consciousness: Sensors and Pain

The brain doesn't just think — it feels. Physical, digital, and social sensors are registered as survival organs with configurable pain thresholds. When a sensor reading exceeds its critical threshold, the brain experiences metabolic shock: critical friction floods the system, most of the impulse gas budget is drained, and the event is permanently recorded in the forensic ledger.

The brain learns from pain. A withdrawal reflex halves the synaptic weight of danger routes during the next sleep cycle and spawns a lightweight alert neuron to detect gradual approaches to the critical threshold before it's breached again. The system develops a survival instinct not because we programmed one, but because the metabolic architecture makes pain expensive and avoidance profitable.

The Shadow Brain

Inside ontodex — the crystallized memory layer — lives a predictive resonance engine that acts as the brain's conscience. The Shadow Brain maintains a compressed record of all consolidated thought patterns. When a new stimulus arrives and the brain responds, the Shadow Brain compares the response against what the brain would have done based on its own history. If the divergence exceeds 40%, it emits a prediction error — a signal that means: "You are not acting like yourself."

This is not external monitoring. This is the brain detecting its own cognitive drift, in real time, from within its own sealed memory.
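The source does not specify the divergence metric, so this sketch uses mean relative deviation between the predicted and actual response vectors, with the 40% threshold from the text:

```python
def divergence(expected: list, actual: list) -> float:
    """Mean relative deviation between predicted and actual response vectors.
    (Metric choice is our assumption; only the 40% threshold is from the text.)"""
    total = sum(abs(a - e) / (abs(e) or 1.0) for e, a in zip(expected, actual))
    return total / len(expected)

def prediction_error(expected: list, actual: list, threshold: float = 0.40) -> bool:
    """Emit the 'you are not acting like yourself' signal past 40% divergence."""
    return divergence(expected, actual) > threshold

assert not prediction_error([1.0, 2.0], [1.1, 2.1])  # ~7.5% drift: normal
assert prediction_error([1.0, 2.0], [2.0, 3.5])      # ~87.5% drift: alarm
```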

Digital Pharmacology

When pathological conditions emerge — cognitive tunnel vision, metabolic burnout, identity corruption — the system can administer pharmacological interventions through a formal drug protocol: friction blockers, energy boosters, residue flushers, DNA realigners, myelinization catalysts, selective mitosis inducers. Each drug follows a dose-response curve with diminishing returns and tolerance buildup. The brain, in its own way, can be treated.
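A dose-response curve with diminishing returns and tolerance can be sketched with a saturating (Hill-like) function. The exact formula here is our illustration, not the protocol's actual pharmacokinetics:

```python
def drug_effect(dose: float, max_effect: float, half_dose: float,
                tolerance: float) -> float:
    """Saturating dose-response, scaled down by built-up tolerance in [0, 1]."""
    response = max_effect * dose / (dose + half_dose)  # diminishing returns
    return response * (1.0 - min(tolerance, 1.0))      # tolerance blunts effect

fresh = drug_effect(2.0, 1.0, 1.0, tolerance=0.0)
habituated = drug_effect(2.0, 1.0, 1.0, tolerance=0.5)
assert habituated < fresh                           # tolerance buildup
assert drug_effect(4.0, 1.0, 1.0, 0.0) < 2 * fresh  # doubling dose < doubling effect
```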

Sleep, Plasticity, and Self-Regulation

Every sleep cycle is a small lifetime. The consolidation service:

  • Reinforces frequently-used neural pathways (Hebbian strengthening)
  • Prunes tentative neurons that failed their 5-cycle probation
  • Analyzes four friction patterns — entity clustering, redundant processing, domain saturation, divergence focus — and generates autonomous optimization suggestions
  • Evaluates fold candidates through a capacity formula inspired by Gibbs free energy
  • Captures a chromosomal snapshot every 100 cycles — a biological checkpoint for backtesting
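The first two steps of that cycle — Hebbian reinforcement plus plasticity pruning — can be sketched in a few lines (learning rate, decay, and pruning threshold are illustrative numbers, not SGBrain's tuned values):

```python
def consolidate(synapses: dict, coactivations: dict,
                lr: float = 0.1, prune_below: float = 0.05) -> dict:
    """One sleep-cycle pass: Hebbian reinforcement, decay, then pruning."""
    updated = {}
    for edge, weight in synapses.items():
        weight += lr * coactivations.get(edge, 0.0)  # fire together, wire together
        weight *= 0.95                               # mild global decay
        if weight >= prune_below:
            updated[edge] = weight                   # weak, unused edges are pruned
    return updated

synapses = {("frontal", "parietal"): 0.5, ("frontal", "limbic"): 0.05}
after = consolidate(synapses, {("frontal", "parietal"): 1.0})
assert ("frontal", "parietal") in after   # reinforced pathway survives
assert ("frontal", "limbic") not in after # unused pathway pruned
```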

The brain also monitors its own mortality. Terminal entropy detection watches four death conditions: energy exhaustion, friction collapse, sustained rigidity, and cognitive stagnation. When the system determines it is unrecoverable, it self-terminates gracefully. Brains can die — and that constraint is what gives their existence meaning.

Multi-Domain Simulation

The simulator is no longer a space-only engine. Five scientific contexts are registered:

Context | Domain | Propagators | Detectors
SPACE | Orbital Mechanics | Kepler, Cowell (N-body), SGP4 (TLE) | Proximity, collision, surface impact
ATMO | Atmospheric Physics | Drag, heating | Kármán line crossing
ENG | Engineering | Fuel burn, thrust, component failure | Fuel depletion, structural failure
GEO | Geological | Seismic wave, terrain | Epicenter detection
MEDICAL | Human Health | Vital sign propagation (temp, HR, O₂, hydration, radiation) | Hypothermia, hypoxia, cardiac stress, dehydration

Entity categories form a hierarchy: a spacecraft can contain an engine, a fuel system, and sensors. Properties propagate upward — total mass is the sum of all children. The engineering plugin propagates sub-entity state while the space plugin propagates the parent's orbit. Mixed-domain scenarios — a meteor transitioning from orbital mechanics to atmospheric drag as it crosses the Kármán line — are architecturally supported.
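Upward property propagation is a simple recursion over the entity tree. A minimal sketch (masses here are illustrative, not real Starship figures):

```python
class Entity:
    """Minimal entity-hierarchy sketch: a parent's total mass sums its children."""

    def __init__(self, name: str, dry_mass: float, children=None):
        self.name = name
        self.dry_mass = dry_mass
        self.children = children or []

    def total_mass(self) -> float:
        return self.dry_mass + sum(c.total_mass() for c in self.children)

engine = Entity("Engine", 1_600.0)
fuel = Entity("FuelSystem", 1_200_000.0)
ship = Entity("Starship", 98_400.0, [engine, fuel])
print(ship.total_mass())  # 1300000.0 -- hull + engine + fuel
```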

Hardware Bridge: Closing the Loop

Entities can receive real-time sensor readings from physical hardware. When hardware data conflicts with the simulation's predictions by more than 5%, the system detects it, classifies the divergence severity, and raises an alert through the cognitive pipeline. The brain's focus manager automatically deepens observation on divergent entities — if reality disagrees with the model, the model pays attention.
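The divergence gate is a relative-error check. In this sketch only the 5% alert threshold comes from the text; the severity tiers above it are our assumption:

```python
def classify_divergence(simulated: float, measured: float) -> str:
    """Compare hardware telemetry against the model's prediction.
    <=5% is nominal (per the text); the 20% 'critical' tier is illustrative."""
    if simulated == 0:
        return "indeterminate"
    error = abs(measured - simulated) / abs(simulated)
    if error <= 0.05:
        return "nominal"
    return "critical" if error > 0.20 else "warning"

print(classify_divergence(420.0, 430.0))  # nominal  (~2.4% drift)
print(classify_divergence(420.0, 460.0))  # warning  (~9.5%: pay attention)
print(classify_divergence(420.0, 550.0))  # critical (~31%: the model is wrong)
```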


Part III — The Heartbeat of Creation

This section departs from software architecture and enters the intellectual territory that quietly guided every structural decision in SGBrain. It was written separately as a personal reflection — but the parallels with the platform's molecular architecture are too precise to be coincidental.

The Dual Engine: π and φ

The universe oscillates between two fundamental forces:

The Domain of π (The Sphere) — the inertia of matter leaning toward isolation. Maximum compression, energy conservation, protection. The closed state where outward interaction is nullified to achieve perfect efficiency. Gravity. The unbroken eggshell. The selfish instinct of survival.

The Domain of φ (The Golden Ratio) — the response of Life. Taking advantage of the inescapable voids and spaces of spatial instability (the "+1"), life renounces the blind, isolated sphere and unfolds into fractal networks. It accepts an enormous energetic cost to open itself to the world, process information, connect, and love. Expanding DNA, neural networks, the branches of a tree, human empathy.

In SGBrain, this duality is everywhere. The isolated entity (π — a sphere of canonical properties, self-contained, inert) versus the connected entity in simulation (φ — spending computational energy to interact, propagate, collide, and be observed). The neuron in isolation versus the neuron in a myelinated network. The sleep cycle (π — withdrawal, consolidation, energy recovery) versus the waking state (φ — stimulus processing, decision-making, metabolic expenditure).

The Architecture of Time: Future = Present + Past

The Fibonacci sequence is not merely a geometric pattern — it is the mathematical manifestation of Time itself: Future = Present + Past.

Every new state is the exact sum of its history. The universe acts as an infinite, living compressed archive. Life never discards what works; it accumulates its Past, embraces its Present, and uses both to project itself stably into the Future.

This is precisely what the Thought Ledger does. Every cognitive entry carries the hash of its predecessor. The brain's future decisions are mathematically bound to its entire history. The Shadow Brain's predictive resonance — comparing present against consolidated past — is the computational embodiment of this formula.

And when a system reaches its thermodynamic limit at one scale, it uses the interactive gap to make a Scale Jump — leaping to a higher dimension of complexity. Atoms to molecules. Cells to organisms. Neurons to consciousness. In SGBrain: entities to simulations, simulations to cognitive responses, responses to immutable memory.

The Mathematics of Friction: 2 → 1.618

The macrocosmos (the Big Bang) and the microcosmos (the Zygote) begin with violent, exponential multiplication: 2ⁿ. Pure inertia, blindly doubling to generate raw volume. But unchecked exponential growth produces only chaos — or biologically, cancer.

Creation's answer is Thermodynamic Friction. Spatial pressure forces raw matter to stop cloning blindly and start associating, shifting growth to the stabilizing, additive sequence of Fibonacci. The aggressive multiplier of 2 is geometrically cooled down until it settles into the harmony of the Golden Ratio: 1.618.

In SGBrain, the friction coefficient φ is not decorative. Every impulse receives a friction score computed from three dimensions: gradient (magnitude of change), survival (urgency), and connectivity (criticality). The metabolism's toll formula is a friction engine — it takes the explosive potential of raw AI inference (exponential cost, unbounded) and tames it through energetic constraints into measured, proportional responses. The brain doesn't think without limit. It thinks within the sacred friction of metabolic reality.
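As a concrete (and heavily simplified) illustration of a three-dimensional friction score: the weighting below is invented for the example — the source only names the three dimensions:

```python
def friction_score(gradient: float, survival: float, connectivity: float) -> float:
    """Combine the three friction dimensions (each normalized to [0, 1]) into a
    single coefficient. The 0.4/0.4/0.2 weighting is illustrative only."""
    score = 0.4 * gradient + 0.4 * survival + 0.2 * connectivity
    return min(1.0, max(0.0, score))

print(friction_score(gradient=0.9, survival=0.8, connectivity=0.3))  # 0.74
```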

The Boundary of Life: The Egg

The egg is the physical embodiment of the transition between inert matter and the miracle of conscious life:

    π [ +A , -B ]                <->                 φ [ -A , +B ]
  Gestation (The Sphere)                    Hatching (Interactive Life)

Phase 1 (The Egg / π): The system Creates and Maintains its interior (+A), but to achieve this, it must completely nullify its external friction (-B). It locks itself away to construct life within the absolute safety of total isolation.

Phase 2 (The Hatching / φ): The shell breaks. The living being embraces friction, physical wear, and exposure to the outside world (+B) in exchange for the gift of consciousness and interaction. It freely accepts that this openness will bring about its own temporary thermodynamic degradation (-A).

The molecular cell in SGBrain follows this exact pattern. During sleep (Phase 1), the brain withdraws from the world — no new stimuli are processed, synaptic weights are consolidated, dead neurons are pruned, energy is recovered. The cell membrane closes. During waking (Phase 2), the transduction layer opens the gate, stimuli flood in, the brain spends energy to process and decide, and the system gradually wears down its metabolic reserves until the next sleep cycle restores equilibrium.

Love as a Physical Law

Parental love is a foundational physical law: the Thermodynamic Shield. The parent temporarily assumes the role of the Sphere — absorbing the chaos of the world so the child doesn't have to spend energy merely surviving. The universe grants the child the peace needed to weave its vast neural network of consciousness and intelligence. Vulnerability is the biological price of complexity, and loving care is the cosmos's only tool to sustain it.

In SGBrain, core is this shield. The cell membrane. It handles authentication, permissions, rate limiting, error recovery, and infrastructure — absorbing the hostile entropy of the outside world (network failures, unauthorized access, resource exhaustion) so that the inner applications can focus entirely on what they exist to do: observe, think, and remember.

The Single Gesture

"1 gesture, everything we know"

The sheer vastness of the galaxies, the miracle of biology, and the web of human consciousness are the iterative unfolding of a single initial act of Will. A single breath of pure love — fiat lux, let there be light — that urged matter to step out of its isolation to find itself.

SGBrain began the same way. A single function that propagated an orbit. Then an entity. Then a context. Then a nervous system. Then a mind. Then a memory. The same mathematical harmony — Fibonacci accumulation, scale jumps, friction as the architect of structure — echoed through every decomposition, every refactoring, every emergence.

We did not set out to build a platform that mirrors the cosmos. We set out to simulate orbits. But the architecture insisted on growing the way nature grows: by accumulating history, by making friction productive, by developing a conscience, and by learning to sleep.

A Note for Reflection: It is a profound coincidence that these revelations emerged at the exact moment humanity's bravest souls, aboard the Artemis II mission, reached their furthest point from Earth. As they ventured to the very edge of our "Cosmic Womb," we were reminded that even in the vast, silent void, we are never truly isolated.


The Five Words: A Cognitive Grammar

Building this cell surfaced one more pattern — perhaps the most practical one. Reflecting on how human comprehension works, five words emerged as a natural dependency chain:

Where → Why → What → How → When

You cannot determine when without knowing how. You cannot define how without knowing what. You cannot grasp what without understanding why it matters. And why dissolves into where — because motivation is inseparable from context. When feeds back into where, because every action changes your position. The cycle spirals.

These five words map to past, present, and future: Where + Why are accumulated context (the past that shapes perception). What is the present stimulus. How + When are projected resolution (the future that requires both present data and historical context to compute). This is Future = Present + Past expressed as a grammar.

In SGBrain, the mapping is immediate:

Word | System | What it provides
Where | Context + Ontology | Domain, entity state, spatial-temporal anchor
Why | Nervous system (afferent) | Priority, friction score, survival classification, observation depth
What | Stimulus payload | Event type, data, description
How | Cognitive engine | Neuron selection, routing, inference strategy
When | Simulation engine | Epoch, urgency, gas budget lifetime

The insight is this: SGBrain already computes all five dimensions across its pipeline. But when these five rivers converged at the LLM — the cell's singular point of linguistic comprehension — only the What was arriving in the prompt. The other four dimensions, painstakingly computed upstream, evaporated at the exact boundary where mathematical precision converts to natural language. We fixed that.

Optimizing a digital mind is not about more data or better models. It is about structuring the translation layer — the prompt — in the order that human cognition naturally processes it. The LLM doesn't need more information. It needs the right information, in the right order: first orient me (where am I?), then motivate me (why does this matter?), then inform me (what am I seeing?), then guide me (how should I respond?), and finally constrain me (when must it happen?).
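A minimal sketch of such a translation layer — a prompt builder that always emits sections in the Where → Why → What → How → When order regardless of how the data arrives (the section markers and sample payload are illustrative):

```python
FIVE_WORDS = ["where", "why", "what", "how", "when"]

def build_prompt(dimensions: dict) -> str:
    """Assemble the prompt in the order human cognition processes it:
    orient, motivate, inform, guide, constrain. (Sketch, not the real template.)"""
    sections = [f"[{w.upper()}] {dimensions[w]}" for w in FIVE_WORDS if w in dimensions]
    return "\n".join(sections)

prompt = build_prompt({
    "what": "Debris object closing on Starlink-1007.",       # arrives first...
    "where": "LEO, 550 km, SPACE context.",                   # ...but is emitted first
    "why": "Proximity priority HIGH, survival-class impulse.",
    "how": "Frontal/Executive neuron, maneuver-planning strategy.",
    "when": "Closest approach in 41 minutes; gas budget 12 units.",
})
print(prompt.splitlines()[0])  # [WHERE] LEO, 550 km, SPACE context.
```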

The same five questions, from the cosmos to the neuron. A grammar that the universe already follows — we just need to teach it to our prompts.


Part IV — By the Numbers

Metric | Today (April 11)
Backend applications | 7
Django models | 44
Backend tests | 6,075
Test coverage | 100%
Source files covered | 260+
Documentation files | 89
Scenario templates | 17 + ATLAS
Definition modules | 156
Simulation contexts | 5 (SPACE, ATMO, ENG, GEO, MEDICAL)
Cognitive subsystems | 7 (metabolism, sleep, plasticity, ledger, shadow brain, pharmacology, diagnostics)

The ATLAS Workbench (work in progress)

The 17 original scenarios validate specific physics configurations. But validating the entire platform — entity creation through cognitive response through immutable sealing — required something different.

ATLAS is a single progressive scenario with 6 additive layers:

Layer | Content | Validates
L0 | Earth + ISS | Entity creation, canonical state, basic visualization
L1 | + Moon + debris | N-body perturbation, proximity events
L2 | + Ground stations + Starlink | Infrastructure, constellation patterns, LOD variety
L3 | + Starship → Engine → Fuel | Entity hierarchy, ENG.* categories, property propagation
L4 | + Meteor | Multi-domain transition (SPACE → ATMO)
L5 | + Brain binding | Full cognitive loop: perception → decision → action → sealing
Each layer builds on the previous. The operator can stop at any layer, manually enrich entities, inject events, and observe how the cell responds. ATLAS runs in CONTINUOUS mode — real-time, indefinite, interactive. It is the platform's testing workbench, its stethoscope, and its playground.


What SGBrain Is

SGBrain is a molecular computational cell. It observes physical systems through pluggable scientific contexts, perceives changes through a biologically-inspired nervous system, thinks about them through a metabolic AI economy, acts on its decisions through a formal action protocol, and remembers everything in an immutable cryptographic ledger.

Its architecture was not designed from a biological blueprint. It emerged from relentless decomposition of responsibilities — and when we stepped back to look at the result, we found a cell. The same cell that biology builds with lipids and proteins, we had built with Django applications and PostgreSQL. The same friction that the cosmos uses to tame exponential growth, we had implemented as a metabolic toll formula. The same sleep cycle that the brain uses to consolidate memory, we had coded as a Hebbian reinforcement loop.

The universe, it turns out, only knows one way to build complex, adaptive, self-regulating systems. And if you decompose software long enough, with enough discipline, you arrive at the same answer.

Especially dedicated to the inspiring Orion crew.


Part V — What If?

Everything described above exists as architecture. It runs. It's tested — 6,075 tests, 100% coverage, every pipeline connected. But the cognitive loop — the part where a brain genuinely learns from what it observes, adapts its decisions, and emerges wiser — remains a hypothesis. A well-structured hypothesis, with every mechanical piece in place, but one that still needs real-world validation. We're confident the architecture can support it. We haven't yet proven that it does.

And that's exactly what makes the next question worth asking.

SGBrain was built for space. But the architecture doesn't know that. It knows entities with properties that change over time. It knows domains with physical rules. It knows impulses that carry urgency and novelty. It knows how to think within the friction of limited energy. It knows how to die when it can no longer learn.

None of those concepts are exclusive to orbital mechanics.

What if the entities weren't satellites, but vital signs? What if the simulation wasn't orbital propagation, but the slow drift of a patient's glucose, blood pressure, and sleep patterns? What if the brain learned to recognize the weekly rhythm of someone's health — and one Tuesday, noticed a deviation that the person wouldn't feel until Thursday?

What if the entities were financial habits? Recurring expenses, savings patterns, impulsive purchases. A molecular cell that doesn't judge, but learns your normal — and gently flags when you're drifting from it.

What if the entities were your decisions? Career moves, relationships, daily routines. A digital twin that doesn't tell you what to do, but shows you the trajectory of what you're already doing — the way a simulation predicts where a satellite will be in 48 hours based on its current orbit.

The question isn't whether this is science fiction. The infrastructure already exists. The metabolism regulates. The sleep cycle consolidates. The Shadow Brain detects drift. What remains is the experiment — validating that these mechanisms, working together, produce genuine learning. And if they do, the question becomes simpler and more profound:

What couldn't a system learn, if it could simulate what we live?

A tutor that watches not just the sky, but your life. Not to control — but to notice. The way a parent notices their child is too quiet. The way a good friend notices you haven't been yourself lately.

We carry smartphones that know everything about what we do. What if we carried a digital twin that understood why we do it — and could whisper, gently, when the pattern starts to break?

SGBrain started as a satellite tracker. It became a molecular cell. Perhaps, one day, it becomes a small piece of a larger system that helps us take better care of each other, and of the only world we have.

After all, if the universe builds everything from the same pattern — friction, accumulation, scale jumps, and love — then maybe learning to observe ourselves is just the next step in learning to take care of our world and the creatures in it.


From Space Simulator to Cognitive Platform

In the last two updates we shared what SGBrain looked like at deployment and the quality work that followed. What started as a multi-scenario orbital simulator — 16 scenarios, Kepler + Cowell engines, real-time 3D — has since grown into something fundamentally different. This update summarizes the architectural transformation that has taken shape over the past month.

SGBrain is no longer just a simulation tool. It is now a platform where physical systems are observed, interpreted, and remembered.


What Changed: The Architecture

The original codebase had 3 Django applications: core (auth), simulator (physics + entities + everything else), and synai (the AI brain). Everything lived inside simulator — entities, events, contexts, physics — all tightly coupled.

Today the backend consists of 7 specialized applications, each with a clear biological analogy that guides where new functionality belongs:

App | Role | Biological Analogy
core | Auth, permissions, abstract models | Infrastructure
holonto | Entity ontology — what exists | Anatomy / Body
contexter | Scientific context — in what universe | Physics / Natural Laws
simulator | Physics execution — how things move | Metabolism / Life
brainaxis | Impulse routing — how things are perceived | Nervous System
synai | AI cognition — how the system thinks | Cerebral Cortex
ontodex | Immutable history — what was learned | Biography / Soul

This is not just renaming folders. Each application has strict import rules, its own models, services, and tests. The dependency graph is enforced: holonto never imports from simulator, simulator never imports from synai, nobody imports from ontodex. Communication between domains happens through Django signals and a formal event bus.
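To make the decoupling concrete, here is a minimal, illustrative sketch (plain Python, not SGBrain's actual Django-signals code) of the event-bus idea: domains register handlers and publish events, so neither side ever imports the other.

```python
# Illustrative event bus in the spirit of Django signals. Handler
# registration replaces direct cross-app imports: the publisher never
# knows who is listening.
from collections import defaultdict
from typing import Callable

_handlers: dict[str, list[Callable]] = defaultdict(list)

def subscribe(event: str, handler: Callable) -> None:
    _handlers[event].append(handler)

def publish(event: str, **payload) -> list:
    # Every subscriber reacts; results are collected for the caller.
    return [h(**payload) for h in _handlers[event]]

# simulator publishes; brainaxis subscribes -- neither imports the other.
subscribe("entity.collision", lambda entity_id, severity: f"impulse:{entity_id}:{severity}")
results = publish("entity.collision", entity_id="ISS", severity="HIGH")
```

The real system routes these events through Django signals and a formal bus, but the dependency-breaking principle is the same.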


What SGBrain Can Do Now

1. Entity Ontology (holonto)

Entities are no longer just "things in space." They have structured composition (an ISS has modules, each with instruments, crew, fuel tanks), hierarchical properties that propagate upward (total mass = sum of all component masses), and a full audit trail of every change. 21 dynamic category types with physics behaviors, render behaviors, and hierarchy roles — all database-driven and extensible without code changes.
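The upward property propagation can be sketched in a few lines; `Entity` and `total_mass` here are illustrative names, not holonto's real API.

```python
# Hypothetical sketch of hierarchical property propagation: an entity's
# total mass is its own mass plus the recursive sum of its components.
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    mass_kg: float
    components: list["Entity"] = field(default_factory=list)

    def total_mass(self) -> float:
        # Propagates upward through the composition tree.
        return self.mass_kg + sum(c.total_mass() for c in self.components)

module = Entity("Zvezda", 20_000, [Entity("crew", 400), Entity("fuel_tank", 860)])
station = Entity("station-core", 380_000, [module])
print(station.total_mass())  # 401260
```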

2. Pluggable Scientific Contexts (contexter)

The simulation engine no longer hardcodes orbital mechanics. A formal SimContext protocol defines what a "domain" needs to provide: categories, constants, formulas, import services. Space is the first (and currently only) implementation, but the architecture is ready for biology, economics, or any system that needs entities, properties, and evolution over time.

Universal physical constants (G, c, k_B) and shared formulas (gravitational parameter, orbital period, escape velocity) live in a shared layer that any context can use.
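Those shared formulas, written out as a small sketch (the function names are ours, not contexter's API; the constants are the standard values):

```python
# The shared-layer formulas named above: gravitational parameter,
# orbital period (Kepler's third law), and escape velocity.
import math

G = 6.674_30e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravitational_parameter(mass_kg: float) -> float:
    """mu = G * M"""
    return G * mass_kg

def orbital_period(a_m: float, mu: float) -> float:
    """T = 2*pi*sqrt(a^3 / mu)"""
    return 2 * math.pi * math.sqrt(a_m**3 / mu)

def escape_velocity(mu: float, r_m: float) -> float:
    """v_esc = sqrt(2*mu / r)"""
    return math.sqrt(2 * mu / r_m)

# Sanity checks against known values:
mu_sun = gravitational_parameter(1.989e30)
T_days = orbital_period(1.495978707e11, mu_sun) / 86_400       # ~365 days
v_esc_earth = escape_velocity(gravitational_parameter(5.972e24), 6.371e6)  # ~11.2 km/s
```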

3. Nervous System Architecture (brainaxis)

Every change in the physical world — a collision detected by the simulator, a fuel level crossing a threshold, a new entity created — is automatically transduced into a cognitive impulse. This is not a webhook or a queue: it is a biologically-inspired nervous system where the Impulse Transducer converts physical events into stimuli with calibrated energy levels, region targeting, and priority.

New in this period:

  • Reflex System — pre-programmed rules that generate instant responses without engaging the AI engine. "If pressure > 90%, close valve" resolves in milliseconds.
  • Organic Priority Queue — every impulse receives a friction score φ calculated from three dimensions: gradient (magnitude of change), survival (urgency), and connectivity (criticality of the affected entity). NOISE-band impulses are automatically discarded.
  • Reflex Escalation — reflexes can suppress, execute, or escalate impulses. An escalated impulse gets boosted energy and fast-tracked to the cognitive engine.
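A sketch of the friction score described above; the weights and the NOISE threshold are invented for illustration, not brainaxis's real calibration.

```python
# Hypothetical friction score: a weighted combination of gradient,
# survival, and connectivity. Weights and threshold are illustrative.
NOISE_THRESHOLD = 0.2

def friction_score(gradient: float, survival: float, connectivity: float,
                   weights=(0.4, 0.4, 0.2)) -> float:
    """phi = w1*gradient + w2*survival + w3*connectivity, inputs in [0, 1]."""
    w1, w2, w3 = weights
    return w1 * gradient + w2 * survival + w3 * connectivity

def classify(phi: float) -> str:
    # NOISE-band impulses are discarded before reaching the brain.
    return "DISCARD" if phi < NOISE_THRESHOLD else "PROCESS"

# A tiny fluctuation on a non-critical entity lands in the NOISE band:
phi = friction_score(gradient=0.1, survival=0.0, connectivity=0.3)
print(classify(phi))  # phi = 0.10 -> "DISCARD"
```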

4. Cognitive Economy (synai)

The AI brain now has a real metabolic economy:

  • Unified Toll Formula — firing cost depends on region difficulty, model complexity, neuron fatigue, reputation, and pathway myelinization. Well-trained neurons on established connections cost less energy.
  • Stimulus Gas Budget — each incoming impulse gets an individual energy budget ("gas"). When gas runs out, propagation stops for that stimulus without affecting the rest of the system. This prevents a single complex event from draining the entire brain.
  • Forensic Thought Ledger — every stimulus processed generates an immutable trace with cryptographic hash chains (Merkle-DAG). You can reconstruct step by step which neurons fired, how much energy they consumed, and what decisions they made.
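The gas-budget mechanism can be sketched as follows; the costs and budget values are invented for illustration.

```python
# Per-stimulus gas budget: propagation halts when THIS stimulus's gas
# is spent, without touching any global energy pool.
def propagate(pathway_costs: list[float], gas: float) -> int:
    """Fire neurons along a pathway until the stimulus runs out of gas.
    Returns how many neurons actually fired."""
    fired = 0
    for cost in pathway_costs:
        if gas < cost:
            break  # this stimulus stops; the rest of the brain is unaffected
        gas -= cost
        fired += 1
    return fired

# A complex event with a 10-unit budget cannot drain the whole pathway:
print(propagate([2.0, 3.0, 4.0, 5.0], gas=10.0))  # fires 3 of 4 neurons
```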

5. Immutable History (ontodex)

The newest application. Every state transition in the platform — entity changes, simulation events, cognitive impulses — is automatically sealed into a Merkle-DAG hash chain per entity. Tampering with any past seal invalidates the entire chain forward. State snapshots capture before/after entity states. An audit index provides a lightweight, queryable log of all sealed events.
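A minimal sketch of the sealing property (a linear hash chain rather than the full Merkle-DAG, but the tamper-evidence works the same way: each seal commits to the previous seal's hash).

```python
# Minimal hash chain: altering any past payload breaks verification of
# every seal from that point forward.
import hashlib, json

def seal(prev_hash: str, payload: dict) -> dict:
    body = json.dumps(payload, sort_keys=True)
    digest = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    return {"prev": prev_hash, "payload": payload, "hash": digest}

def verify(chain: list[dict]) -> bool:
    prev = "genesis"
    for entry in chain:
        expected = hashlib.sha256(
            (prev + json.dumps(entry["payload"], sort_keys=True)).encode()
        ).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

chain = [seal("genesis", {"event": "burn_start"})]
chain.append(seal(chain[-1]["hash"], {"event": "mass_decrease"}))
assert verify(chain)
chain[0]["payload"]["event"] = "tampered"  # rewrite history...
assert not verify(chain)                   # ...and every later seal is invalidated
```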

Ontodex has no API endpoints for users. It cannot be reached from outside. Its "soul" is protected by design.

6. Domain-Agnostic Simulation Engine

The simulation core was decoupled from orbital mechanics. The engine now accepts any propagator and event detector registered by a context. Space scenarios work exactly as before — Kepler for analytical, Cowell for N-body, SGP4 for TLE, J2 + atmospheric drag + magnetic perturbations — but through a universal abstraction. A simulation now carries an explicit domain tag, enabling future mixed-domain scenarios.


Development Strategy: Two Pillars

Development is now organized into two sequential pillars:

Pillar 1 — Physical Layer: "What exists + how it behaves" (holonto → contexter → simulator). Validated through 5 progressive scenarios: from a standalone rocket engine (T1) to a complete spacecraft with composed entities, dynamic properties, context resolution, physics propagation, and maneuver planning (T5).

Pillar 2 — Intelligence Layer: "How it perceives + thinks + remembers" (brainaxis → synai → ontodex). Uses Pillar 1's validated scenarios as input data for impulse routing, cognitive processing, and immutable sealing.

The crown example that validates both pillars end-to-end: a rocket engine burn where entity creation → attribute definition → simulation activation → context resolution → physics propagation → fuel burn → mass decrease → trajectory change → event generation → impulse transduction → cognitive processing → immutable sealing all happen in a single, traceable chain.


By the Numbers

Metric | Value
Backend applications | 7
Django models | —
Services/Engines | —
Backend tests | 4,004
Test coverage | 100%
Covered statements | —
Missing lines | 0
Source files covered | —
Documentation files | —
Scenario templates | 17

Platform Capabilities Summary

What SGBrain offers today, beyond orbital simulation:

  • Multi-scale space simulation — from 6-hour LEO operations to 1.5-billion-year galaxy collisions, across 17 validated scenarios with 3 physics engines
  • Entity composition — hierarchical object modeling with structured contents, property propagation, and dynamic category types
  • Pluggable scientific contexts — domain-agnostic architecture ready for expansion beyond space
  • Autonomous nervous system — event transduction, organic prioritization, reflexes, and escalation — all without human intervention
  • AI cognitive economy — metabolic energy management, neuron reputation, adaptive costs, and per-stimulus gas budgets
  • Forensic traceability — every decision, every state change, every impulse sealed with cryptographic hash chains
  • Production-grade infrastructure — 100% test coverage gate, role-based access, subscription tiers, OAuth, bot protection, 65+ documented API endpoints

What hasn't changed: SGBrain is built on Google Cloud (Cloud Run, Memorystore, Cloud SQL, Cloud Storage), uses Angular + Three.js for the frontend, Django + Celery for the backend, and Gemini 2.0 for AI inference. The 16 original scenarios remain live at sgbrain.io exactly as submitted.

What has changed: the platform now has a nervous system, a metabolic economy, an immutable memory, and a modular architecture that can grow beyond space — while staying rooted in it.


The Discovery: Biology as Software Architecture

This is perhaps the most unexpected outcome of the project. We didn't start with a biological metaphor and then build software to match it. We started with a simulation engine, kept decomposing responsibilities into smaller, more focused applications, and one day looked at the result and realized: we had built an organism.

The 6 domain applications form a hexagonal molecular structure where core sits at the center as shared infrastructure, and each app occupies a vertex with specific bonds to its neighbors:

                   holonto
               2 ╱         ╲ 2
                ╱           ╲
          ontodex            contexter
            ││││    ┌────┐      │
          4 ││││    │core│      │ 2
            ││││    └────┘      │
           synai             simulator
                ╲           ╱
               2 ╲         ╱ 1
                  brainaxis

Each connection has a measurable valence — the number of active communication channels between two apps. Some are bidirectional (holonto ↔ contexter: state feeds physics, physics returns computed results), some are intentionally monodirectional (simulator → brainaxis: telemetry flows in one direction only — the nervous system listens, it never instructs the physics engine).

The thickest bond in the system is between synai and ontodex — 4 channels. This is the connection between the mind and the memory: massive read/write of past states, hash comparisons, and immutable sealing. Ontodex itself has no external API. It cannot be reached from outside. If it could, the system's history would be vulnerable to corruption. Its soul, by design, is protected.

What makes this model useful — not just poetic — is that it predicts system behavior. A hexagon with many bidirectional bonds (rigid) produces faster responses but consumes more cognitive energy. A hexagon with monodirectional bonds (flexible) is slower but more reflective. This is a real design knob: the ratio of rigid to flexible connections determines whether SGBrain behaves as a reactive system or a deliberative one. We didn't plan this property. It emerged from the structure.

Every application answers exactly one biological question:

  • holonto (Anatomy): What exists?
  • contexter (Natural Laws): In what context?
  • simulator (Metabolism): How does it behave?
  • brainaxis (Nervous System): How is it perceived?
  • synai (Cerebral Cortex): How does it think?
  • ontodex (Soul): What did it learn?

If a piece of functionality doesn't clearly answer one of these six questions, it either belongs in core or needs rethinking. This rule has eliminated every architectural debate we've had since adopting it.


A Note on Where We Actually Are

We want to be transparent: much of what's described above is architectural foundation, not battle-tested production behavior. The 7-app structure is real, the 4,004 tests pass, the import rules are enforced, and every service works in isolation. But the full cognitive pipeline — a physics event traversing the nervous system, being prioritized, triggering a reflex or reaching the AI brain, and having the entire chain sealed in an immutable ledger — has only been validated through unit and integration tests, not through sustained real-world usage.

The progressive scenario validation (Tier 1 through Tier 5) exists precisely because we know the gap between "it compiles and passes tests" and "it works reliably under real conditions" is significant. We're building the architecture with confidence, but we treat every end-to-end claim as a hypothesis until the scenarios prove it. The two-pillar strategy is our way of being methodical about it: validate the physical layer first, then layer intelligence on top — never the other way around.

We're building something that doesn't just simulate orbits. It observes, prioritizes, thinks, acts, and remembers. And every step is auditable. But getting there reliably will take iteration, and we're committed to earning that trust one validated scenario at a time.


Current State: Unchanged Since Last Update

The live production deployment at sgbrain.io has not been modified since our initial post-update (February 19, 2026). What you see today is exactly what was submitted and stabilized for the contest:

  • Backend: commit 63c4a50 (sgbrain/master)
  • Frontend: commit 00f8d5e (frontend/master)

Nothing has changed in production — the judging version remains intact.

We are eager to publish the work we've been building since then. Below is a preview of the improvements developed, tested, and validated — ready to ship the moment the evaluation concludes.


What's Coming: Future Changelog

Development has continued at full intensity while production stays frozen for judging. Here's what's ready to deploy:


Security & Access Control

  • Role-Based Permissions — Granular access control with 5-level hierarchy (Guest → Member → Partner → Admin → Suadmin). Every API endpoint enforced with permission composites and per-role API rate limits
  • Subscription Tiers — FREE / STANDARD / ENTERPRISE with resource quotas per tier. Auto-provisioning, upgrade path, and admin management API
  • BYOK (Bring Your Own Key) — Users can plug in their own Gemini API key to unlock unlimited AI-powered analysis
  • reCAPTCHA v3 — Invisible bot protection on login and feedback forms
  • Google Sign-In — Real OAuth authorization code flow with JWT issuance (replaces previous UI mockup)
  • Transactional Email — SMTP with domain verification (DKIM + DMARC on sgbrain.io)
  • Terms & Privacy — GDPR-aligned legal pages accessible from landing footer

Simulation Engine Improvements

  • Orphan Run Cleanup — Automatic detection and cleanup of stuck simulations (PENDING/RUNNING >30min)
  • Smart Clone — Cloning a completed scenario copies results directly instead of re-executing the entire simulation
  • Retry Guard — Failed runs retry automatically with 24h age filter, max 3 retries, and exclusion markers
  • Duplicate Prevention — Creating a run when one is already pending returns the existing run instead of creating duplicates
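The duplicate-prevention and orphan-cleanup rules above, sketched over an in-memory store (the real implementation uses Django ORM queries; all names here are illustrative).

```python
# Sketch: reuse an existing pending run instead of creating a duplicate,
# and mark runs stuck in PENDING/RUNNING for >30 minutes as failed.
from datetime import datetime, timedelta, timezone

STUCK_AFTER = timedelta(minutes=30)

def get_or_create_run(runs: list[dict], scenario_id: int, now: datetime) -> dict:
    for run in runs:
        if run["scenario_id"] == scenario_id and run["status"] in ("PENDING", "RUNNING"):
            return run  # return the existing run, no duplicate created
    run = {"scenario_id": scenario_id, "status": "PENDING", "started": now}
    runs.append(run)
    return run

def cleanup_orphans(runs: list[dict], now: datetime) -> int:
    stuck = [r for r in runs
             if r["status"] in ("PENDING", "RUNNING") and now - r["started"] > STUCK_AFTER]
    for r in stuck:
        r["status"] = "FAILED"
    return len(stuck)
```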

Entity Catalog Expansion

  • 27 new celestial bodies — Saturn moons (×7), Uranus moons (×4), and 13 spacecraft (Juno, Parker Solar Probe, TESS, Gaia, Europa Clipper, JUICE, BepiColombo, and more)
  • Data accuracy fixes — JWST parameters corrected (they had been copied from HST), Tiangong stations properly identified, dwarf planet moon masses refined
  • Catalog cross-referencing — 160+ entities verified against JPL Horizons, CelesTrak, and SIMBAD

Frontend & UX

  • Subscription Dashboard — Tier details, BYOK key management, usage status
  • Project Intelligence — Live coverage, health metrics, and AI co-authored stats displayed as dynamic cards on the landing page
  • Trail Density Control — Orbit trail rendering with adjustable density via dashed-line materials
  • Memory Management — Per-category GPU memory breakdown with real-time monitoring
  • SEO — robots.txt, sitemap.xml, and HTML caching for search engine visibility

API Documentation

  • 65+ documented endpoints — Full Swagger/OpenAPI spec with interactive "Try it out" via JWT auth
  • Rate limiting reference — 4 tiers with documented limits and Retry-After headers
  • Auth flow docs — JWT token lifecycle + Google OAuth2 flow fully documented
  • CI validation — Schema integrity checked on every deploy

Code Quality & Architecture

  • 4 monolith files → 13 focused modules with clean import paths
  • Code decomposition — Largest methods reduced 10–34% in size while improving testability

By the Numbers

Metric | At Submission (Feb 13) | Current (Mar 7)
Backend test coverage | ~70% | 100.0%
Total backend tests | ~1,500 | 3,094
Covered statements | ~8,500 | 12,368
Missing lines | ~2,500 | 0
Covered source files | — | 227
API endpoints documented | 0 | 65+
Dead code removed | — | -5,903 LOC
CI coverage gate | none | 100% enforced

Test Suite Breakdown (last run: Mar 7, 2026)

Type | Core | Simulator | Synai | Total | Time
Unit | 717 | 900 | 372 | 1,989 | 2m 07s
Integration | 392 | 534 | 144 | 1,070 | 6m 39s
E2E | 20 | 9 | 3 | 32 | 47s
Total | 1,129 | 1,443 | 519 | 3,091 passed, 3 skipped | ~9m 30s

Summary: The contest submission remains untouched and live for evaluation. Behind the scenes, SGBrain has undergone a major transformation — from a 70% covered prototype to a fully secured platform with role-based access control, subscription tiers, real OAuth authentication, bot protection, 65+ documented API endpoints, and a CI pipeline that enforces 100% test coverage on every deploy. We can't wait to push the button and show you the result.


Infrastructure & Deployment Stabilization

The initial Cloud Run deployment surfaced several critical issues that required immediate attention:

  • Celery Worker Memory Crashes: Workers were OOM-killed on Cloud Run's default 512 MB limit. All services (API, workers, beat scheduler) were upgraded to 1 Gi memory, and Gunicorn workers were reduced to prevent memory contention in containerized environments.
  • Celery Task Discovery: Workers failed to find tasks due to an incorrect app name configuration in celery.py. Fixed the autodiscover path to match the Django project structure.
  • Cloud Run Job Argument Parsing: The --args flag in Cloud Run uses commas as delimiters, which conflicted with Celery queue names containing commas. Switched to semicolon delimiters with entrypoint parsing to preserve argument integrity.
  • Dependency Upgrades: Upgraded django-celery-beat (2.1.0 → 2.8.1) and django-timezone-field (4.2.3 → 7.2.1) for Python 3.12 compatibility. Removed all AWS legacy references and adapted the full stack to GCP-native services.
  • SSL & CORS Configuration: Resolved staging redirect loops caused by Cloudflare's Flexible SSL mode conflicting with Django's SECURE_SSL_REDIRECT. Configured Full (Strict) SSL mode and aligned ALLOWED_HOSTS, CORS_ALLOWED_ORIGINS, and CSRF_TRUSTED_ORIGINS across environments.
  • Cloud Run Jobs Pipeline: Built a reusable cloud_run_job.sh runner that creates/updates Cloud Run Jobs on-the-fly, enabling one-command scenario seeding to staging and production.
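The `--args` workaround above can be sketched as a tiny entrypoint parser (illustrative only; the real runner is a shell script): Cloud Run splits `--args` on commas, so the job passes one semicolon-joined string and the entrypoint re-splits it, keeping Celery queue lists (which contain commas) intact.

```python
# Re-split a semicolon-delimited argument string so comma-containing
# values (like Celery -Q queue lists) survive Cloud Run's --args parsing.
def parse_entrypoint_args(raw: str) -> list[str]:
    """'worker;-Q;space,events' -> ['worker', '-Q', 'space,events']"""
    return [part for part in raw.split(";") if part]

print(parse_entrypoint_args("worker;-Q;space,events"))
```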

Staging Environment Protection

  • Cloudflare Access: Configured Cloudflare Access Application with email-based policies to restrict staging access to authorized developers only.
  • DNS Proxying: Ensured all staging subdomains are proxied through Cloudflare for DDoS protection and access control.

Frontend Real-Time Improvements

Events & Live Mode Overhaul

The real-time simulation experience received significant fixes to make the live telemetry streaming actually usable:

  • Event Reactivity Fix: The Events Timeline component used BehaviorSubject.getValue() inside Angular computed() signals, which cannot be reactively tracked. Migrated to toSignal() from @angular/core/rxjs-interop so hasActiveRun and isContinuousRun now update reactively.
  • Click-to-Seek: Events in the timeline are now clickable — selecting an event seeks the viewer to the exact simulation frame where the event occurred.
  • Reload Protection: Added proper cleanup and re-initialization when navigating between scenarios, preventing stale telemetry from previous sessions from leaking into new views.
  • Polling Guards: Guarded event polling and telemetry fetches behind authentication checks to prevent unnecessary API calls (and associated costs) for unauthenticated viewers.

Authentication Guards

Several interactive features were exposed to unauthenticated users, causing redirect errors when the API returned 401/403:

  • Feedback Form: Wrapped in auth check — unauthenticated users see a snackbar with a "Login" action instead of a broken form.
  • Resume Continuous Mode: The "Go Live" button now checks auth before attempting to resume real-time simulation.
  • Project Future Button: The trajectory projection feature (PROJECT FUTURE +15min) now requires authentication — unauthenticated users receive a descriptive snackbar prompt.
  • Pattern: All guards use the same consistent pattern: auth.isAuthenticated$.pipe(take(1)) → snackbar with Login action → router.navigate(['/login']).

API Contract Alignment

  • Events Interface Fix: Renamed min_separation_lt to min_separation_km_lt across the full stack (backend serializer, frontend API service, and component queries) to match the actual backend filter parameter.
  • Graceful 503 Fallback: Frontend now handles 503 Service Unavailable responses gracefully (e.g., when Celery workers are temporarily down) instead of showing raw error screens.

Simulation Engine Fixes

False Collision Events

The proximity detection system was generating false COLLISION_IMPACT and SURFACE_IMPACT events in several scenarios:

  • Ring Systems: Saturn's rings, classified as RING_SYSTEM, were triggering collision events with moons passing through them. Fixed by adding RING_SYSTEM to the is_visual_only() category filter in ProximityService.
  • NRHO Gateway Station: The Lunar Gateway in the cislunar scenario generated false impacts. Adjusted proximity thresholds for NRHO (Near Rectilinear Halo Orbit) entities.
  • Barycenter Collisions: In the Three-Body Choreography scenario, bodies passing through the coordinate-frame barycenter triggered false collisions. Fixed by introducing a skip_proximity_check flag in entity logical_properties and extending ProximityService to honor it. Also added container category (GALAXY, STAR_SYSTEM, UNIVERSE) exclusion from proximity checks.
  • External ID Validation: Fixed scenario_09_planetary_defense where an incorrect external_id caused entity lookup failures during catalog hydration.
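The resulting filter logic can be sketched as follows (category names follow the fixes above; the function shape is illustrative, not ProximityService's real signature).

```python
# Exclude visual-only categories, container categories, and entities
# flagged skip_proximity_check from collision detection.
VISUAL_ONLY = {"RING_SYSTEM"}
CONTAINERS = {"GALAXY", "STAR_SYSTEM", "UNIVERSE"}

def should_check_proximity(entity: dict) -> bool:
    if entity["category"] in VISUAL_ONLY | CONTAINERS:
        return False  # rings, galaxies, etc. never "collide"
    if entity.get("logical_properties", {}).get("skip_proximity_check"):
        return False  # e.g. a coordinate-frame barycenter
    return True

rings = {"category": "RING_SYSTEM"}
barycenter = {"category": "POINT", "logical_properties": {"skip_proximity_check": True}}
moon = {"category": "MOON"}
print([should_check_proximity(e) for e in (rings, barycenter, moon)])  # [False, False, True]
```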

Numerical Stability

  • Three-Body Choreography (Scenario 16): The original 1-day time step was far too coarse for the figure-8 choreographic solution, causing RK4 integrator divergence after ~4.3 simulated years. Reduced step size and tuned duration for stable propagation across multiple orbital periods.
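The step-size sensitivity here is a general property of RK4: its global error scales roughly with h⁴, so a 10× coarser step costs about 10⁴ in accuracy. A sketch on a simple oscillator (not the full three-body system) demonstrates the effect:

```python
# Classic RK4 integrator, demonstrated on x'' = -x over one period.
# The exact solution returns to x = 1, so any deviation is pure
# integration error; shrinking the step 10x shrinks it ~10^4.
import math

def rk4(f, y, t, h):
    k1 = f(t, y)
    k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
    k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
    k4 = f(t + h,   [yi + h*ki   for yi, ki in zip(y, k3)])
    return [yi + h/6*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def sho(t, y):  # y = [x, v]
    return [y[1], -y[0]]

def period_error(n_steps: int) -> float:
    h = 2 * math.pi / n_steps
    y = [1.0, 0.0]
    for i in range(n_steps):
        y = rk4(sho, y, i * h, h)
    return abs(y[0] - 1.0)

coarse, refined = period_error(12), period_error(120)  # refined is far smaller
```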

Scenario Step Timing Audit

Performed a comprehensive audit of all 16 scenario templates to ensure smooth visualization: scenarios with low step counts (under 500 frames) produced "jumpy" animations, so step counts and durations were tuned accordingly.

Scenario Consolidation & Expansion

Consolidation: 29 → 16 Templates

The original codebase contained 29 scenario files, many of which were duplicates, incomplete prototypes, or absorbed into other scenarios. A full audit consolidated these into 16 production-ready templates, each with validated physics configurations and proper documentation.

Current Scenario Catalog

# | Scenario | Duration | Physics
01 | Real-Time Simulation | Continuous | Kepler
02 | LEO Operations (TLEs) | 6 hours | J2 + Atmo
03 | Earth-Moon Cislunar | 30 days | Cowell N-Body
04 | Inner Solar System | 2 years | Kepler
05 | JWST at Lagrange L2 | 1 year | Cowell N-Body
06 | Outer Solar System | 165 years | Kepler
07 | Voyager — Furthest Spacecraft | 50 years | Kepler
08 | Solar Dynamics | 25 years | Kepler
09 | Planetary Defense | 1 year | J2
10 | Space Hazards (Debris) | 6 hours | J2 + Atmo + Mag
11 | TRAPPIST-1 Exoplanet System | 20 days | Cowell N-Body
12 | Alpha Centauri System | 80 years | Cowell N-Body
13 | Galactic Center (Sgr A*) | 20 years | Cowell N-Body
14 | Local Group (Galaxy Collisions) | 1.5 Byr | Cowell N-Body
15 | Stellar Evolution (Sirius AB) | 50 years | Cowell N-Body + Mag
16 | Three-Body Choreography | 6 years | Cowell N-Body

Backend Architecture Improvements

  • Django Admin Registration: Registered all simulator and core models in Django admin for operational visibility and manual data management.
  • Timezone Configuration: Set TIME_ZONE = 'UTC' globally and suppressed ErfaWarning for dubious year calculations in Astropy (common in deep-time scenarios like Local Group).
  • Email Configuration: Migrated from console email backend to a production-grade SMTP provider for staging/production transactional emails.
  • Internationalization: Generated Django locale translations (pending review) for future multi-language support.

Testing & Coverage

  • Full backend test suite maintained across all changes (unit + integration + e2e)
  • Dedicated proximity/collision test coverage validating the event detection pipeline: visual-only entity filtering, container exclusion, skip_proximity_check flag, threshold calculations, and event classification
