Project Submission: HyperRoute AI

Inspiration

Global supply chains are fragile. A single disruption, whether a port strike, a fuel spike, or a natural disaster, can cost businesses millions within hours. Traditional tools run simulations sequentially on single machines, taking hours to yield results. We wanted to democratize High-Performance Computing (HPC) for supply chain risk analysis, making it instant, accessible, and intelligent. We were inspired to build a tool that doesn't just "report" data but actively "crunches" thousands of future scenarios to find the safest path forward.
What it does

HyperRoute AI is a distributed supply chain optimization platform. It allows users to:
- Talk to their Supply Chain: Use natural language to define complex simulation parameters (e.g., "Run 250 scenarios varying fuel prices by ±60%").
- Simulate Risks at Scale: Instantly spin up a distributed compute grid to run Monte Carlo simulations on thousands of routings in parallel.
- Optimize for Resilience: Find not just the cheapest route, but the most robust one, balancing cost, speed, and reliability under volatility.
- Visualize Real-Time Progress: Watch worker nodes spin up and crunch numbers in real time, delivering actionable insights via a live dashboard.

How we built it

We embraced "Vibe Coding": leveraging advanced AI agents to rapidly scaffold, iterate, and refine a complex distributed architecture.
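As a rough illustration of the Monte Carlo scoring described above, here is a minimal single-process sketch. The route names, cost model, and the volatility penalty are illustrative assumptions, not the project's actual code:

```python
import random
import statistics

def simulate_route(base_cost: float, n_scenarios: int = 250,
                   fuel_spread: float = 0.60, seed: int = 42) -> dict:
    """Score one routing under fuel-price and delay volatility (hypothetical model)."""
    rng = random.Random(seed)
    costs = []
    for _ in range(n_scenarios):
        fuel_factor = 1.0 + rng.uniform(-fuel_spread, fuel_spread)  # fuel varies by ±60%
        delay_factor = 1.0 + max(0.0, rng.gauss(0.0, 0.15))         # disruptions only add cost
        costs.append(base_cost * fuel_factor * delay_factor)
    mean, stdev = statistics.mean(costs), statistics.stdev(costs)
    # Resilience score: penalize volatility, not just average cost.
    return {"mean_cost": mean, "stdev": stdev, "score": mean + 2.0 * stdev}

# Hypothetical routes: base cost per shipment in dollars.
routes = {"suez": 10_000, "cape_of_good_hope": 13_000}
scores = {name: simulate_route(cost) for name, cost in routes.items()}
best = min(scores, key=lambda name: scores[name]["score"])
```

In the real system each scenario batch would run as a Celery task on a worker container, so the loop above executes in parallel across the grid rather than sequentially.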
- Frontend: React + Tailwind CSS for a responsive, modern interface.
- Backend: FastAPI handling REST and WebSocket connections.
- Compute Engine: A custom distributed grid using Celery workers with Redis as the message broker.
- Infrastructure: Docker Compose for orchestration, with a custom Autoscaler service that monitors queue depth and dynamically scales worker containers up or down based on load.
- AI Integration: LLMs (Claude via OpenRouter) translate user intent into mathematical simulation plans.

Challenges we ran into

- Distributed State Management: Keeping the frontend in sync with 3+ asynchronous worker nodes was tough. We solved it by implementing a robust WebSocket + polling fallback mechanism.
- Docker Networking & Race Conditions: Workers would try to restart before fully shutting down, or the autoscaler would kill active workers. We had to implement graceful shutdowns and smarter health checks.
- Serialization Nightmares: Passing complex Pydantic models between the API and Celery workers led to serialization errors, which we fixed by strictly defining our data contracts and using primitive types for task payloads.

Accomplishments that we're proud of

- True Parallelism: We aren't faking the progress bar; multiple Docker containers actually run separate simulation processes in parallel.
- Self-Healing Architecture: If a worker crashes or a task hangs, the system detects it and retries.
- Seamless UX: We hid the immense complexity of distributed computing behind a simple chat interface. The user just "asks," and the cluster "works."

What we learned

- "Vibe Coding" requires solid foundations: You can't just "vibe" your way through distributed systems without understanding the underlying constraints of Docker and Celery. The AI helps write code fast, but system design proficiency is still critical.
- Latency matters: When simulating thousands of scenarios, even a 10 ms delay in result aggregation adds up.
Streaming updates are superior to batch polling.

What's next for HyperRoute AI

- Kubernetes Integration: Moving from Docker Compose to K8s to scale across actual physical servers, not just one machine.
- Live Data Feeds: Integrating real-time fuel, weather, and port congestion APIs to make the simulations "live."
- Multi-Agent Negotiation: Having AI agents that represent "suppliers" and "buyers" automatically negotiate contracts based on simulation results.
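The queue-depth autoscaler mentioned under "How we built it" can be sketched roughly like this. The thresholds and tasks-per-worker ratio are assumptions; the real service would read the depth from Redis and start or stop containers via the Docker API:

```python
import math

def desired_workers(queue_depth: int, tasks_per_worker: int = 10,
                    min_workers: int = 1, max_workers: int = 8) -> int:
    """Pick a worker count proportional to pending tasks, clamped to a safe range."""
    if queue_depth <= 0:
        return min_workers
    target = math.ceil(queue_depth / tasks_per_worker)
    return max(min_workers, min(max_workers, target))
```

Scaling down should use a warm shutdown (Celery workers finish their current task on SIGTERM before exiting); killing containers outright is exactly the race condition described under Challenges.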
Built With
- next.js
- python