Inspiration
A multiplayer hacking simulation built on real AWS infrastructure.
Attacks are real UDP floods, capacity is real CPU, and nodes are actual Fargate tasks.
It’s a distributed systems project disguised as a game.
Inspired by the events of US-East-1, Packet Royale is a competitive learning experience about resilience, monitoring, and recovery. Rather than attacking real networks, teams deploy microservices into isolated, local cloud simulators and face chaos scenarios. The goal is to survive the longest while demonstrating graceful degradation, automated recovery, and clear observability. This teaches systems design, incident response, and ethical security practices in a fun, hands-on way.
What it does
Core Gameplay
- A hexagonal grid of AWS Fargate nodes.
- Players capture territory by UDP flooding enemy nodes.
- Combat uses real packet loss measured via ACKs.
- Capturing a capital triggers a “final kill” flood to the client.
- The last player standing wins.
Technical Highlights
- Distributed state managed by Raft consensus.
- On-demand ECS spawning and teardown.
- Rust backends with a TypeScript + Phaser.js frontend.
- Clients coordinate worker attacks; workers fight autonomously on the grid.
How we built it
Architecture Overview: Three-Tier Distributed System
1. Master Node (Infrastructure Control Plane)
- Orchestrates workers via AWS ECS.
- Exposes REST API to spawn/kill worker tasks.
- Built in Rust using AWS SDK.
2. Worker Nodes (Grid Combat Soldiers)
- Run Raft consensus and combat logic.
- Use UDP flooding + ACKs for fights.
- Leader node handles captures and overload detection.
- Two types:
- Regular: 256 CPU / 512 MB
- Capital: 512 CPU / 1024 MB
3. Client (Player Interface)
- Provides REST endpoints over HTTP plus WebSocket connections for player actions.
- Coordinates with the master node to spawn workers and join games.
- Executes the final kill WebSocket flood.
Event-Driven Architecture
- State is derived from events such as `PlayerJoin`, `NodeCaptured`, and `NodeMetricsReport`.
- Enables replay, auditability, and fault tolerance.
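The event-sourcing idea above can be sketched in Rust: every state change is an event, and replaying the log rebuilds the game state deterministically. The event payloads here are illustrative assumptions, not the project's actual schema.

```rust
use std::collections::HashMap;

/// The three event kinds named above; payload fields are illustrative.
#[derive(Clone, Debug)]
enum GameEvent {
    PlayerJoin { player: u32 },
    NodeCaptured { node: (i32, i32), by: u32 },
    NodeMetricsReport { node: (i32, i32), cpu_pct: f32 },
}

#[derive(Default, Debug)]
struct GameState {
    players: Vec<u32>,
    owners: HashMap<(i32, i32), u32>, // grid coordinate -> owning player
    load: HashMap<(i32, i32), f32>,   // grid coordinate -> last reported CPU %
}

impl GameState {
    fn apply(&mut self, e: &GameEvent) {
        match e {
            GameEvent::PlayerJoin { player } => self.players.push(*player),
            GameEvent::NodeCaptured { node, by } => {
                self.owners.insert(*node, *by);
            }
            GameEvent::NodeMetricsReport { node, cpu_pct } => {
                self.load.insert(*node, *cpu_pct);
            }
        }
    }

    /// Replaying the full log from scratch is what makes the state
    /// auditable and recoverable after a crash.
    fn replay(events: &[GameEvent]) -> Self {
        let mut s = Self::default();
        for e in events {
            s.apply(e);
        }
        s
    }
}

fn main() {
    let log = vec![
        GameEvent::PlayerJoin { player: 1 },
        GameEvent::NodeCaptured { node: (0, 0), by: 1 },
        GameEvent::NodeMetricsReport { node: (0, 0), cpu_pct: 87.5 },
    ];
    let state = GameState::replay(&log);
    assert_eq!(state.owners[&(0, 0)], 1);
    println!("replayed {} events -> {:?}", log.len(), state.players);
}
```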
Algorithms we're proud of
RAFT Consensus
- Library: OpenRaft with RPC
- Event-sourced game state on a linearizable Raft log
- Only the leader applies entries; followers replicate
- Split-brain safety, tolerance of minority failures, ~50 ms leader failover
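The leader-applies, majority-commit rule can be illustrated with a toy sketch (this is a simplification, not OpenRaft's actual API): the leader appends locally, "replicates" to whichever peers are reachable, and only commits once a strict majority of the full cluster holds the entry, which is what makes a minority partition safe.

```rust
/// One Raft log entry; a string stands in for a serialized game event.
#[derive(Clone, Debug, PartialEq)]
struct Entry(String);

struct Peer {
    log: Vec<Entry>,
}

/// Leader appends locally, replicates to the reachable peers, and
/// reports whether the entry is committed (majority of the whole
/// cluster, counting the leader itself). The in-place push stands in
/// for an AppendEntries RPC.
fn replicate(
    leader: &mut Peer,
    reachable: &mut [&mut Peer],
    cluster_size: usize,
    e: Entry,
) -> bool {
    leader.log.push(e.clone());
    let mut acks = 1; // the leader counts itself
    for p in reachable.iter_mut() {
        p.log.push(e.clone());
        acks += 1;
    }
    // Strict majority => committed even if a minority of nodes is lost.
    acks * 2 > cluster_size
}

fn main() {
    let mut leader = Peer { log: vec![] };
    let mut f1 = Peer { log: vec![] };
    let mut f2 = Peer { log: vec![] };
    // 5-node cluster, 2 followers reachable: 3 of 5 acks => committed.
    let ok = replicate(
        &mut leader,
        &mut [&mut f1, &mut f2],
        5,
        Entry("NodeCaptured".into()),
    );
    assert!(ok);
    println!("committed with {} replicas", 3);
}
```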
Packet Loss Measurement
- Sequence ACKs on UDP floods
- Track sent/acked; loss = (sent − acked)/sent
- ACK every 100ms
- Aggregates multiple attacks
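The loss formula above is simple enough to show directly. A minimal sketch of the counters, including the aggregation across concurrent attacks (sum the counters first, then divide — the struct and function names are ours, not the project's):

```rust
/// Per-flow counters for one UDP flood: datagrams sent vs. ACKed.
struct FlowStats {
    sent: u64,
    acked: u64,
}

impl FlowStats {
    /// loss = (sent − acked) / sent, as described above.
    fn loss(&self) -> f64 {
        if self.sent == 0 {
            return 0.0;
        }
        (self.sent - self.acked) as f64 / self.sent as f64
    }
}

/// Multiple simultaneous attacks aggregate by summing raw counters
/// before dividing, so big flows weigh more than small ones.
fn aggregate_loss(flows: &[FlowStats]) -> f64 {
    let sent: u64 = flows.iter().map(|f| f.sent).sum();
    let acked: u64 = flows.iter().map(|f| f.acked).sum();
    FlowStats { sent, acked }.loss()
}

fn main() {
    let flows = [
        FlowStats { sent: 1000, acked: 900 }, // 10% loss
        FlowStats { sent: 1000, acked: 700 }, // 30% loss
    ];
    // Combined: 400 lost of 2000 sent = 20%.
    println!("aggregate loss: {:.1}%", aggregate_loss(&flows) * 100.0);
}
```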
Network Manager (Auto-Discovery)
- Maps each ECS task's public IP to its grid coordinate
- Opens a UDP channel to the attacker on demand; no preconfiguration
- 1:1 grid combat
- Optional final-kill fan-out
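One sequence/ACK round-trip over plain UDP sockets can be sketched with the standard library. Loopback addresses stand in for the ECS task IPs here (an assumption for the sketch; the real game resolves public IPs through discovery):

```rust
use std::net::UdpSocket;
use std::time::Duration;

/// One attack round-trip: the attacker fires a sequence-numbered
/// datagram, the defender echoes the sequence number back as an ACK.
/// Sequences that never come back count toward packet loss.
fn roundtrip(seq: u32) -> std::io::Result<u32> {
    let defender = UdpSocket::bind("127.0.0.1:0")?;
    let attacker = UdpSocket::bind("127.0.0.1:0")?;
    attacker.set_read_timeout(Some(Duration::from_secs(2)))?;
    defender.set_read_timeout(Some(Duration::from_secs(2)))?;

    // Attacker sends to the defender's discovered address.
    attacker.send_to(&seq.to_be_bytes(), defender.local_addr()?)?;

    // Defender ACKs the sequence number back to whoever sent it —
    // no preconfiguration, just the source address of the datagram.
    let mut buf = [0u8; 4];
    let (n, from) = defender.recv_from(&mut buf)?;
    defender.send_to(&buf[..n], from)?;

    // Attacker records the ACK.
    let mut ack = [0u8; 4];
    attacker.recv_from(&mut ack)?;
    Ok(u32::from_be_bytes(ack))
}

fn main() -> std::io::Result<()> {
    assert_eq!(roundtrip(7)?, 7);
    println!("seq 7 acked");
    Ok(())
}
```

Replying to the datagram's source address is what makes the "no preconfig" bullet work: the defender never needs to know attacker addresses ahead of time.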
Dynamic Task Orchestration
- Spawn/kill Fargate tasks on demand
- Task families: regular/capital
- IP discovery, auto-registration, peer discovery, lazy init
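The two task families map directly to the resource sizes listed earlier. A small sketch of that mapping (the family name strings are illustrative, not the project's actual ECS task definition names):

```rust
/// The two worker flavors from the architecture section.
#[derive(Clone, Copy, PartialEq, Debug)]
enum NodeKind {
    Regular,
    Capital,
}

/// Returns (task family name, CPU units, memory MiB) for a node kind,
/// matching the sizes in the write-up: 256/512 regular, 512/1024 capital.
fn task_spec(kind: NodeKind) -> (&'static str, u32, u32) {
    match kind {
        NodeKind::Regular => ("worker-regular", 256, 512),
        NodeKind::Capital => ("worker-capital", 512, 1024),
    }
}

fn main() {
    // The orchestrator would pass these values to ECS RunTask when
    // spawning a Fargate task on demand.
    let (family, cpu, mem) = task_spec(NodeKind::Capital);
    println!("spawning {} with {} CPU / {} MiB", family, cpu, mem);
}
```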
Aggregated Metric Reporting
- Leader aggregates node metrics
- Timed captures; overload tracking; bandwidth via sliding window
- Raft events for metrics/captures
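The sliding-window bandwidth idea can be sketched as a deque of timestamped byte counts, evicting samples older than the window on each insert. This is a generic sketch of the technique, not the project's exact implementation:

```rust
use std::collections::VecDeque;

/// Sliding-window byte counter: how many bytes moved in the last
/// `window_ms` milliseconds.
struct Bandwidth {
    window_ms: u64,
    samples: VecDeque<(u64, u64)>, // (timestamp_ms, bytes)
}

impl Bandwidth {
    fn new(window_ms: u64) -> Self {
        Self { window_ms, samples: VecDeque::new() }
    }

    /// Record a sample and evict anything that fell out of the window.
    fn record(&mut self, now_ms: u64, bytes: u64) {
        self.samples.push_back((now_ms, bytes));
        while let Some(&(t, _)) = self.samples.front() {
            if now_ms.saturating_sub(t) > self.window_ms {
                self.samples.pop_front();
            } else {
                break;
            }
        }
    }

    fn bytes_in_window(&self) -> u64 {
        self.samples.iter().map(|&(_, b)| b).sum()
    }
}

fn main() {
    let mut bw = Bandwidth::new(1_000);
    bw.record(0, 100);
    bw.record(500, 200);
    bw.record(1_500, 300); // evicts the t=0 sample
    println!("bytes in last second: {}", bw.bytes_in_window());
}
```

The leader would fold per-node readings like this into the aggregated metrics it reports via Raft events.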
What we learned
- Rust: Async consensus, static typing, and memory safety.
- Raft: Leader election, replication, and membership changes.
- AWS ECS: Fargate networking, task lifecycle, and cost control.
- Cross-stack integration: CORS, graceful degradation, and type-safe APIs.
- Distributed systems: Fault tolerance, consensus, and event sourcing.
- Networking: ACK patterns, bandwidth control, and packet loss.
- Rapid prototyping: Real infrastructure, measurable systems, and efficient deployment.