Inspiration
Artificial intelligence is scaling rapidly, and the infrastructure that powers it is approaching real physical limits. Data centers already consume roughly four percent of total electricity in the United States, and by 2030 demand from AI and cloud workloads is expected to at least double. Many facilities also rely on water-intensive cooling systems in regions already facing water stress.
Space offers a fundamentally different energy environment. Outside Earth's atmosphere, a platform in low Earth orbit receives approximately 1,361 watts per square meter of direct solar radiation. However, orbital platforms operate under strict constraints: an eclipse period on nearly every ninety-minute orbit, limited battery storage, extreme thermal swings, and intermittent ground communication windows.
We were inspired by a simple question: what if compute in space were scheduled around physics instead of assuming unlimited power and cooling? Instead of treating sunlight, temperature, and connectivity as side conditions, we wanted to make them the core of decision making. That vision became AetherNode.
What it does
AetherNode is a physics-aware orchestration system for orbital edge computing. Rather than assuming compute can run continuously, it treats sunlight availability, thermal headroom, battery state, and communication windows as hard operational constraints.
Our system simulates an orbital data center and continuously generates telemetry such as temperature, power levels, eclipse versus sunlight phase, and ground station connectivity. The orchestration engine evaluates these constraints in real time and determines whether high-intensity AI workloads can safely execute.
If temperature approaches critical limits or simulated battery levels drop during eclipse, workloads are paused or deferred. When conditions improve, compute resumes automatically. The system behaves like a mission control layer that dynamically adapts to environmental constraints.
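The pause/resume behavior described above can be sketched as a threshold check with hysteresis, so the system does not flap between states near a limit. The model, thresholds, and orbital numbers below are illustrative stand-ins, not our exact simulator:

```python
from dataclasses import dataclass

ORBIT_S = 5400          # ~90-minute LEO orbit (illustrative)
ECLIPSE_FRAC = 0.38     # ~35 minutes of shadow per orbit (illustrative)

@dataclass
class Telemetry:
    in_sunlight: bool
    temp_c: float        # radiator temperature
    battery_pct: float

def simulate(t: float, battery_pct: float, running: bool) -> Telemetry:
    """Toy telemetry model: sunlight from orbital phase; battery and
    temperature respond to the phase and to whether compute is running."""
    phase = (t % ORBIT_S) / ORBIT_S
    in_sun = phase >= ECLIPSE_FRAC
    # Battery charges in sunlight, drains in eclipse (faster while computing).
    delta = 0.02 if in_sun else (-0.05 if running else -0.02)
    battery = min(100.0, max(0.0, battery_pct + delta))
    # Temperature rises while compute runs, falls otherwise (toy numbers).
    temp = 45.0 + (15.0 if running else 0.0) + (5.0 if in_sun else -10.0)
    return Telemetry(in_sun, temp, battery)

# Hysteresis thresholds: pause early, resume only with comfortable margin.
TEMP_PAUSE, TEMP_RESUME = 70.0, 55.0
BATT_PAUSE, BATT_RESUME = 30.0, 50.0

def decide(tel: Telemetry, running: bool) -> bool:
    """Return whether the high-intensity workload may run this tick."""
    if running:
        # Pause or defer if either constraint is violated.
        return tel.temp_c < TEMP_PAUSE and tel.battery_pct > BATT_PAUSE
    # Resume only once both constraints have healthy margin again.
    return tel.temp_c <= TEMP_RESUME and tel.battery_pct >= BATT_RESUME
```

The gap between the pause and resume thresholds is what keeps the scheduler from oscillating: a workload paused at 70 °C stays paused until the radiator cools back below 55 °C.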
A live dashboard streams telemetry and scheduling decisions over WebSockets, allowing users to observe mission control logic operating in real time.
How we built it
We built AetherNode with a Python backend powered by FastAPI. The system uses asynchronous execution through asyncio to continuously evaluate constraints and update scheduling decisions without blocking other tasks.
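In sketch form, the non-blocking evaluation loop looks like the following. This is simplified from our actual FastAPI service; the evaluate/apply callbacks here are toy stand-ins:

```python
import asyncio

async def control_loop(evaluate, apply, ticks: int, period_s: float = 0.01) -> list:
    """Periodically evaluate constraints and apply the scheduling decision.
    Sleeping with asyncio yields the event loop, so other coroutines
    (e.g. WebSocket pushes) keep running between ticks."""
    decisions = []
    for _ in range(ticks):
        decision = evaluate()          # cheap, synchronous constraint check
        apply(decision)                # pause or resume the workload
        decisions.append(decision)
        await asyncio.sleep(period_s)  # non-blocking wait until the next tick
    return decisions

# Usage with a toy evaluator that flips the run state on every tick.
state = {"run": False}

def evaluate() -> bool:
    return not state["run"]

def apply(decision: bool) -> None:
    state["run"] = decision

decisions = asyncio.run(control_loop(evaluate, apply, ticks=4))
# decisions alternates True/False as the toy evaluator flips state.
```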
Telemetry values are generated and processed by the backend, then passed through scheduling logic that determines whether workloads should run, pause, or resume based on defined thresholds and state transitions.
The frontend dashboard was built using HTML, CSS, and JavaScript. It connects to the backend via WebSockets to visualize system state, including thermal conditions, orbital phase, power status, and job execution decisions.
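The dashboard itself connects to a FastAPI WebSocket endpoint; the producer/consumer streaming pattern underneath it can be sketched framework-free with plain asyncio, where `fake_send` stands in for the real connection's send call:

```python
import asyncio
import json

async def telemetry_source(queue: asyncio.Queue, n: int) -> None:
    """Produce n telemetry snapshots (stand-in for the orbital simulator)."""
    for i in range(n):
        await queue.put({"tick": i, "temp_c": 50 + i, "running": i % 2 == 0})
    await queue.put(None)  # sentinel: stream finished

async def stream(queue: asyncio.Queue, send) -> None:
    """Drain the queue and push JSON frames to a client, as a WebSocket
    handler would with its send call."""
    while (item := await queue.get()) is not None:
        await send(json.dumps(item))

async def main() -> list:
    frames = []

    async def fake_send(text: str) -> None:  # stands in for the connection
        frames.append(text)

    queue: asyncio.Queue = asyncio.Queue()
    # Producer and consumer run concurrently on the same event loop.
    await asyncio.gather(telemetry_source(queue, 3), stream(queue, fake_send))
    return frames

frames = asyncio.run(main())
```

The queue decouples telemetry generation from delivery, so a slow client never stalls the simulator.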
Challenges we ran into
One of the biggest challenges we faced was simply how new this field is. Orbital data centers are still largely theoretical, and only in the past few years have major players like Google, Starlink, and Blue Origin begun seriously exploring large-scale space infrastructure. There is no established playbook for how compute should actually be orchestrated in orbit, no mature frameworks to study, and no production systems to benchmark against. That meant we could not rely on existing case studies or best practices. We had to dig through research papers, mission reports, and technical documentation to piece together realistic constraints around orbital mechanics, eclipse cycles, solar radiation, thermal dissipation, and ground station visibility. Because this ecosystem is still emerging, we were building from first principles rather than adapting an existing cloud architecture.
Another challenge was thinking carefully about differentiation. Large aerospace and technology companies are exploring space infrastructure, satellite networks, and launch systems. However, most focus on hardware, connectivity, or transportation. We had to clearly define how AetherNode is different. Instead of competing on hardware or launch capability, we focused on the orchestration layer: the software intelligence that schedules compute around physics itself. That required us to think deeply about what layer of the stack remains unsolved and where software innovation can create meaningful leverage.
Balancing ambition with practicality was also difficult. We wanted the concept to feel technically grounded and differentiated from massive aerospace players, while still building something achievable within a hackathon timeframe. This forced us to make disciplined design decisions and focus on demonstrating the core insight clearly.
Accomplishments that we're proud of
We are proud that we transformed a highly theoretical concept into a working, interactive prototype. Instead of simply discussing orbital compute at a high level, we built a functioning orchestration layer that dynamically responds to simulated sunlight cycles, thermal limits, and connectivity windows in real time.
We successfully created a live mission control system that streams telemetry and scheduling decisions through an asynchronous backend. The system continuously evaluates environmental constraints and adapts compute behavior automatically, demonstrating that physics-driven scheduling can be implemented in software today.
Most importantly, we approached space-based computing as a systems orchestration problem rather than purely a hardware challenge. That conceptual clarity is something we are especially proud of.
What we learned
We learned that constraint-driven design is fundamentally different from traditional cloud architecture. Most modern infrastructure assumes stable power, cooling, and connectivity. Designing for orbital environments forced us to treat variability as the default state rather than the exception.
We also learned how to architect asynchronous backend systems that evaluate state continuously while maintaining real-time user interface updates. Coordinating telemetry generation, scheduling logic, and live dashboard streaming deepened our understanding of distributed systems thinking.
Beyond technical skills, we learned how to scope an ambitious systems-level idea into a focused, demonstrable prototype while maintaining conceptual depth.
What's next for AetherNode
The next step is integrating real orbital data so scheduling decisions align with actual satellite trajectories and eclipse predictions. This would ground the system in real world orbital dynamics rather than purely simulated cycles.
We also plan to move from threshold-based logic to predictive optimization models that forecast thermal and power trends, allowing compute to be scheduled proactively instead of reactively.
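A minimal version of that shift, assuming a simple least-squares trend fit rather than a full optimization model (function names here are hypothetical), might look like:

```python
def forecast(samples: list[float], horizon: int) -> float:
    """Least-squares linear extrapolation of a telemetry series (e.g.
    radiator temperature) `horizon` steps past the last sample.
    Assumes at least two samples."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    slope = (
        sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
        / sum((x - mean_x) ** 2 for x in xs)
    )
    # Predict at x = n - 1 + horizon on the fitted line.
    return mean_y + slope * (n - 1 + horizon - mean_x)

def should_pause_proactively(temps: list[float], limit: float, horizon: int) -> bool:
    """Pause before the thermal limit is hit, not after."""
    return forecast(temps, horizon) >= limit
```

With a warming trend of 2 °C per tick, `forecast([50, 52, 54, 56], 3)` projects 62 °C, so a 60 °C limit triggers a pause three ticks before a purely reactive threshold would.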
In the longer term, we envision expanding AetherNode to coordinate multiple orbital nodes, enabling distributed, energy-aware constellations that intelligently balance workloads across satellites.
Built With
- adafruit
- arduino
- claude
- css
- docker
- gemini
- github
- grok
- heygen
- html
- javascript
- openai
- python
- raspberry-pi
- vite