Inspiration
The modern power grid operates on a fragile knife-edge. During summer heatwaves, when millions of people return home at 5:00 PM and turn on their air conditioning, the grid experiences massive demand spikes. Currently, utility companies are mostly reactive—they wait for a transformer to blow, then send a truck to fix it.
We asked ourselves: What if we could see the "heart attack" coming and administer the cure before the grid collapses? This led us to build Grid-Pulse, a proactive, AI-driven Digital Twin that stabilizes urban infrastructure before a blackout can occur.
What it does
Grid-Pulse is an Operational Digital Twin of a 64-node urban micro-grid. It combines a real-time thermodynamic physics simulation with a dual-model machine learning architecture to achieve one goal: Automated Peak Shaving.
- The 3D Simulation: An interactive, isometric visualization of a neighborhood. As you adjust the ambient temperature slider, you watch houses transition from a stable state (cyan) to a critical thermal load (pulsing orange).
- The Dual-Model AI:
  - The Forecaster (GBR): Analyzes the last 15 minutes of grid data to project the load curve 30 minutes into the future.
  - The Classifier (RF): Calculates the probability, as a percentage, of a catastrophic transformer failure.
- Protocol Alpha (Peak Shaving): When the AI predicts an anomaly with >80% certainty, it automatically diverts virtual battery reserves and throttles non-essential infrastructure. This "shaves" the peak off the demand curve, saving the transformer, reducing emergency CapEx, and avoiding the heavy CO2 emissions of dirty "peaker plants."
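A minimal sketch of how the Protocol Alpha trigger could be wired up, assuming already-trained `forecaster` (GBR) and `classifier` (RF) models; the function and field names are illustrative, not the actual Grid-Pulse code:

```python
# Hypothetical sketch of the Protocol Alpha decision step.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestClassifier


def protocol_alpha(forecaster, classifier, recent_window, threshold=0.80):
    """Decide whether to shave the peak based on the dual-model output."""
    features = np.asarray(recent_window).reshape(1, -1)
    projected_load = forecaster.predict(features)[0]           # load 30 min ahead
    failure_prob = classifier.predict_proba(features)[0][1]    # P(transformer failure)
    action = "shave_peak" if failure_prob > threshold else "monitor"
    return {"action": action,
            "projected_load": projected_load,
            "failure_prob": failure_prob}
```

In the real system the `"shave_peak"` branch would dispatch the battery-diversion and throttling commands; here it just reports the decision.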
How we built it
We split the architecture into a high-performance rendering frontend and a continuous-loop physics backend.
The Physics Engine: Built in Python/Flask, the backend simulates the grid using a localized thermodynamic formula:
$$Load_{total} = \sum_{i=1}^{64} \left( B_i + V_i \cdot \max(0, T_{amb} - T_{target}) \right) + \epsilon$$
Where $B_i$ is the base load, $V_i$ is the weather sensitivity coefficient, $T_{amb}$ is ambient temperature, and $\epsilon$ is random noise.
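Concretely, the formula above can be sketched in Python. The target temperature and noise scale below are illustrative assumptions, not the project's tuned values:

```python
import random


def total_load(base, sensitivity, t_amb, t_target=21.0, noise_scale=0.5):
    """Sum over nodes of B_i + V_i * max(0, T_amb - T_target), plus noise.

    base and sensitivity are per-node lists (64 entries in Grid-Pulse);
    t_target and noise_scale are illustrative placeholders.
    """
    epsilon = random.gauss(0, noise_scale)  # the random-noise term
    return sum(b + v * max(0.0, t_amb - t_target)
               for b, v in zip(base, sensitivity)) + epsilon
```

Below the target temperature the weather term vanishes and only the base loads remain; every degree above it adds `V_i` watts per node.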
The Machine Learning Pipeline: We utilized scikit-learn for the intelligence layer. To avoid the "Cold Start" problem where AI outputs garbage data on boot, we engineered a pre-warming function that generates 1,000 simulated historical ticks on startup, ensuring the models are fully trained and lightning-fast the second the app goes live.
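The pre-warming idea can be sketched as follows; the synthetic load curve and labeling rule are hypothetical stand-ins for whatever the real simulator produces:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestClassifier


def prewarm_models(n_ticks=1000, window=15, seed=42):
    """Train both models on simulated history before serving any requests.

    Avoids the cold-start problem: the models are fit once on synthetic
    ticks so the first real request already gets a meaningful prediction.
    """
    rng = np.random.default_rng(seed)
    # Simulated load history: daily-cycle sine wave plus noise (illustrative).
    t = np.arange(n_ticks + window + 30)
    load = 50 + 20 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 2, t.size)

    # Sliding windows of the last `window` ticks as features.
    X = np.array([load[i:i + window] for i in range(n_ticks)])
    y_future = load[np.arange(n_ticks) + window + 29]          # load 30 ticks ahead
    y_fail = (y_future > np.percentile(load, 90)).astype(int)  # "failure" label

    forecaster = GradientBoostingRegressor().fit(X, y_future)
    classifier = RandomForestClassifier().fit(X, y_fail)
    return forecaster, classifier
```

Because training happens once at startup, request handlers only ever call `predict`, which keeps per-tick latency low.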
The Frontend: We designed a strict "Dark Mode Excellence" industrial command center using React and Vite. The 3D neighborhood was built with react-three-fiber. We utilized zustand for state management and bypassed the standard React render cycle, injecting our live telemetry directly into the 60FPS WebGL useFrame loop.
Challenges we ran into
- WebGL Memory Leaks & DOM Thrashing: Initially, piping 64 nodes of live telemetry data through standard React props caused massive garbage-collection stutters and jittery charts. We had to rewrite the 3D meshes to read state directly inside the WebGL render loop, bypassing React's virtual DOM entirely for a flawless 60FPS. We also had to safeguard against NaN poisoning that silently crashed our WebGL context.
- Serverless Execution Limits: We originally tried to deploy the backend to Vercel, but quickly learned that its serverless functions kill long-running background Python `while` loops. We had to pivot our deployment architecture, moving the backend to a persistent Render web service while keeping the frontend on Vercel.
- Thread Deadlocking: Running a continuous loop for the physics engine alongside a Flask REST API led to severe mutex deadlocks when trying to reset the simulation. We solved this by carefully separating the state-initialization logic from the threading lock (self._lock).
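The fix can be sketched as: build new state without holding the lock, and take `self._lock` only for the swap. This is a minimal illustration of the pattern, not the exact Grid-Pulse code:

```python
import threading


class GridSimulation:
    """Continuous physics loop plus a REST-facing reset, without deadlocking."""

    def __init__(self):
        self._lock = threading.Lock()
        self._state = self._build_initial_state()  # built outside any lock

    def _build_initial_state(self):
        # Initialization happens WITHOUT holding the lock, so the physics
        # thread is never blocked (or deadlocked) during a reset.
        return {"tick": 0, "loads": [0.0] * 64}

    def reset(self):
        new_state = self._build_initial_state()  # no lock held here
        with self._lock:                         # lock only for the swap
            self._state = new_state

    def step(self):
        with self._lock:
            self._state["tick"] += 1
            return self._state["tick"]
```

Since the lock is held only for the brief dictionary swap, a reset request from the Flask API can never contend with a long-running initialization inside the physics thread.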
Accomplishments that we're proud of
- Zero-Latency Interactions: Moving the slider and instantly seeing the local ML models react, the charts update, and the 3D meshes glow feels incredibly satisfying and professional.
- Real AI, Not Faked: We didn't just hardcode a math formula or wrap an LLM prompt. We successfully implemented, trained, and served real Gradient Boosting and Random Forest models that evaluate time-series data locally.
- Translating Tech to Business: We are incredibly proud of our ESG impact dashboard. We didn't just show "watts saved"—we calculated the exact dollars in CapEx saved and pounds of CO2 prevented, proving the real-world business value of our system.
What we learned
- How to orchestrate complex spatio-temporal data between a continuous Python backend and a 3D JavaScript frontend.
- The stark architectural differences between deploying persistent background processes versus serverless edge functions.
- Advanced React rendering optimizations—specifically, why you should never use prop-drilling for rapidly changing continuous data in a 3D canvas.
What's next for Grid-Pulse: Urban Energy AI
The next step is hardware integration. We plan to replace our Python physics simulator with live data streams from actual IoT smart meters (like the ESP32) and connect the "Peak Shaving" protocol to real-world battery storage APIs (like Tesla Powerwall). We also want to implement a multi-agent Reinforcement Learning system to allow the grid to dynamically optimize its own baseline load distribution over time without human intervention.