About the Project

What Inspired Me

I didn’t start this thinking “I’m gonna build something for power grids.”

It honestly came from two things I wanted to push harder: running LLMs locally (no cloud at all) and playing with multi-agent systems through Fetch / Agentverse. Then it became more of a question like… what kind of problem actually makes those worth it?

Power grids just clicked. It’s one of those systems where decisions actually matter, like, immediately. If something goes wrong, it doesn’t just sit there; it cascades. And at the same time, most of what exists is either dashboards or fully manual control. There’s not really a clean middle ground where a system helps you decide and still stays safe.

So yeah, this project is basically me exploring that gap.


How I Built It

The whole system is built around one idea: don’t let the AI actually control anything.

Instead, I have a local LLM take in a request and turn it into a structured decision. Not just text, but something like “do this action on these nodes with these constraints.” But that output isn’t trusted.

Everything goes through a deterministic controller that checks if it’s valid, enforces constraints, and blocks anything unsafe. On top of that, there’s a simulation layer that keeps track of the grid and lets the system reason about what might happen next.
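To make that concrete, here’s a minimal sketch of what a deterministic gate like that could look like in Python. The action names, node IDs, and limits are all hypothetical placeholders, not the project’s real schema:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical allow-list and safety limit; the real system would load these
# from its own grid model and constraint config.
ALLOWED_ACTIONS = {"shed_load", "reroute", "isolate_node"}
MAX_SHED_MW = 50.0

@dataclass
class Proposal:
    """Structured decision parsed from the LLM's output: not trusted yet."""
    action: str
    nodes: list
    amount_mw: float = 0.0

def validate(proposal: Proposal, known_nodes: set) -> Optional[Proposal]:
    """Deterministic controller: return the proposal only if every check passes."""
    if proposal.action not in ALLOWED_ACTIONS:
        return None  # unknown action -> blocked
    if not set(proposal.nodes) <= known_nodes:
        return None  # references a node that doesn't exist in the grid model
    if proposal.action == "shed_load" and proposal.amount_mw > MAX_SHED_MW:
        return None  # exceeds the hard safety limit
    return proposal

# The AI suggests; the controller decides what actually happens.
p = Proposal(action="shed_load", nodes=["bus_7"], amount_mw=120.0)
final = validate(p, known_nodes={"bus_7", "bus_9"})
print(final)  # None -> blocked, nothing reaches the grid
```

The point of the sketch is that the LLM never touches the grid state directly; its output is just data until the validator lets it through.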

So the flow is basically:

$$ \text{Final Action} = \text{Validate}(\text{LLM Proposal}) $$

The AI suggests. The system decides what actually happens.

Also, everything runs locally, no cloud. That wasn’t just for fun; it makes this feel more like something you could actually deploy instead of just demo.


Challenges I Faced

LLMs are messy. Getting consistent structured outputs was honestly more annoying than expected. I had to add strict validation and fallback logic just so the system doesn’t break on weird formatting.
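The fallback logic looks roughly like this, a sketch with made-up example output, not the exact parser from the project. The idea is to salvage JSON from whatever the model wraps it in, and default to a safe no-op instead of crashing:

```python
import json

def parse_llm_output(raw: str) -> dict:
    """Try to pull a JSON object out of messy LLM text; fall back to a no-op."""
    # Local models often wrap JSON in prose or markdown fences,
    # so locate the outermost braces before parsing.
    start, end = raw.find("{"), raw.rfind("}")
    if start != -1 and end > start:
        try:
            return json.loads(raw[start:end + 1])
        except json.JSONDecodeError:
            pass
    # Fallback: a safe default action instead of breaking the control loop.
    return {"action": "no_op", "nodes": []}

messy = 'Sure! Here is the plan:\n```json\n{"action": "reroute", "nodes": ["bus_3"]}\n```'
print(parse_llm_output(messy))  # {'action': 'reroute', 'nodes': ['bus_3']}
```

Anything the fallback catches still goes through the same downstream validation; it just keeps malformed text from ever becoming an exception.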

Another thing was separating simulation from control. It’s really easy to accidentally mix “this could happen” with “this is allowed to happen,” and that causes a lot of subtle bugs.
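The fix that worked for me was keeping the two questions in separate functions with no shared state. A toy illustration (numbers and names invented for the example):

```python
def simulate_overload(load_mw: float, capacity_mw: float) -> bool:
    """Simulation layer: answers 'could this line overload?' Pure prediction."""
    return load_mw > capacity_mw

def is_permitted(action: str, approved_actions: set) -> bool:
    """Control layer: answers 'is this action allowed?' Pure policy."""
    return action in approved_actions

# A predicted overload does NOT itself authorize anything;
# the control layer still has to independently approve the response.
if simulate_overload(load_mw=95.0, capacity_mw=80.0):
    if is_permitted("shed_load", approved_actions={"shed_load", "reroute"}):
        print("controller may shed load")
```

Once "could happen" and "is allowed" can’t reach into each other’s state, that whole class of subtle bugs mostly disappears.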

And then keeping everything feeling real-time while doing all that validation and planning wasn’t trivial either.


What I Learned

The biggest takeaway is how AI actually fits into real-time systems.

It’s not about letting it take over. That’s honestly where things break. In systems like this, especially during emergencies, you don’t want something unpredictable making final calls.

What actually works is using AI to suggest decisions under pressure, while a separate system makes sure those decisions are safe.

That matters a lot in real-time situations where:

  • things are moving fast
  • you don’t have time to think through every scenario
  • and a bad decision can cascade

AI is good at quickly proposing options or pointing out what might happen next. But you still need something deterministic to enforce constraints and keep everything stable.

I also realized that once you force that boundary, the system becomes way more usable. It’s less about “can the model be perfect” and more about “can the system handle imperfect suggestions safely.”

And yeah, running everything locally made that even clearer. You start thinking less like “call an API and hope it works” and more like “this actually has to hold up in a real environment.”

So overall, it’s not really about AI replacing anything. It’s about using it as a layer that helps you think faster in situations where mistakes actually matter.
