Inspiration

Most of the "AI for industry" projects we looked at assumed clean REST APIs and modern infrastructure. The reality in manufacturing plants is messier. Data lives in SQL databases nobody wants to touch, in sensors that output raw hex, and in PLC systems that predate the internet. The gap between that world and what modern AI can do felt like an obvious problem worth solving. We wanted to see if an agentic system could actually reach into that kind of environment and do something useful, without pretending the legacy layer doesn't exist.

What it does

AegisCore lets a plant operator type a natural language question and get a grounded answer pulled from real systems. It can cross-reference a live inventory against a legacy SQLite database, poll industrial sensor telemetry, and surface discrepancies or anomalies in plain language. For any action that would write or modify something, the system stops and waits for explicit human approval before proceeding.
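The cross-referencing step can be sketched as a plain comparison between a live snapshot and the legacy database. This is a minimal illustration, not AegisCore's actual code: the table name, column names, and part IDs are invented for the example.

```python
# Hypothetical sketch: compare a live inventory snapshot against a legacy
# SQLite table and describe any mismatches in plain language.
# Schema (table "inventory" with columns part/qty) is assumed, not real.
import sqlite3

def find_discrepancies(live_counts, db_path):
    """Return human-readable mismatches between live counts and the legacy DB."""
    conn = sqlite3.connect(db_path)
    issues = []
    for part, live_qty in live_counts.items():
        row = conn.execute(
            "SELECT qty FROM inventory WHERE part = ?", (part,)
        ).fetchone()
        if row is None:
            issues.append(f"{part}: present live but missing from legacy DB")
        elif row[0] != live_qty:
            issues.append(f"{part}: live count {live_qty} vs recorded {row[0]}")
    conn.close()
    return issues
```

In the real system the model sees these discrepancy strings as tool output and summarizes them for the operator.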

How we built it

The backend is a Python WebSocket bridge that routes between the frontend and Amazon Bedrock's Nova 2 Lite via the Converse API. Tool use is handled through a custom MCP server that translates legacy SQL rows and raw sensor values into clean JSON the model can reason over. The frontend is a React dashboard that shows the agent's reasoning steps live as it works through a query. The whole backend is containerised with Docker.
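The shape of a Converse API call with tool use looks roughly like the payload below. This is a sketch under assumptions: the model ID and the `query_inventory` tool are illustrative placeholders, and the real bridge assembles this inside its WebSocket loop with tools supplied by the MCP server.

```python
# Minimal sketch of a Converse API request with one tool definition.
# Model ID and tool schema are assumptions for illustration only.
def build_converse_request(user_text):
    """Assemble a Converse payload: user message plus a tool spec."""
    return {
        "modelId": "amazon.nova-lite-v1:0",  # illustrative model ID
        "messages": [
            {"role": "user", "content": [{"text": user_text}]}
        ],
        "toolConfig": {
            "tools": [{
                "toolSpec": {
                    "name": "query_inventory",  # hypothetical MCP-backed tool
                    "description": "Look up stock levels in the legacy SQLite DB.",
                    "inputSchema": {"json": {
                        "type": "object",
                        "properties": {"part": {"type": "string"}},
                        "required": ["part"],
                    }},
                }
            }]
        },
    }

# The payload unpacks straight into boto3:
#   client = boto3.client("bedrock-runtime")
#   response = client.converse(**build_converse_request("How many valves are left?"))
```

Keeping the request as plain data makes it easy to log every turn of the agent loop before it ever reaches Bedrock.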

Challenges we ran into

Getting reliable tool-call sequencing out of the model took more iteration than expected. Early versions would occasionally skip a tool call or misinterpret the output format from the MCP server. We also spent a fair amount of time on the human-in-the-loop gate — it sounds simple, but deciding exactly when to pause, what to surface, and how to block execution cleanly without breaking the agent loop required careful handling.
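One way to sketch the gate described above: every tool call passes through a single choke point that lets reads through but blocks anything classified as a write until a human approval callback returns true. The tool names and the exception are hypothetical, not AegisCore's real API.

```python
# Hedged sketch of a human-in-the-loop gate. Tool names are invented.
WRITE_TOOLS = {"update_inventory", "acknowledge_alarm"}  # assumed write actions

class ApprovalDenied(Exception):
    """Raised when the human operator rejects a pending write action."""

def gated_execute(tool_name, args, execute, request_approval):
    """Run read-only tools directly; pause writes for explicit human sign-off."""
    if tool_name in WRITE_TOOLS:
        # Surface the pending action and block until the human decides.
        if not request_approval(tool_name, args):
            raise ApprovalDenied(f"Human rejected {tool_name}({args})")
    return execute(tool_name, args)
```

Raising instead of silently skipping matters: the agent loop sees the denial as a tool error and can report it back to the operator rather than continuing as if the write happened.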

Accomplishments that we're proud of

The HITL gate works the way we wanted it to. The agent genuinely stops, shows its reasoning, and does nothing until a human signs off. That was the hardest part to get right and the most important. We're also happy with how the MCP layer turned out — it handles the translation between raw legacy data and model-readable JSON cleanly enough that Nova can reason over it without hallucinating context it doesn't have.
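The kind of translation that layer does can be illustrated with a toy decoder: a raw hex frame in, model-readable JSON out. The frame layout here (2-byte sensor ID, 2-byte signed temperature in tenths of a degree Celsius) is invented for the example and is not AegisCore's actual wire format.

```python
# Hypothetical sensor-frame decoder: raw hex string -> JSON for the model.
# Frame layout (big-endian u16 id, s16 temperature in 0.1 C) is assumed.
import json
import struct

def decode_sensor_frame(hex_frame):
    """Turn a raw hex frame like '0001ff38' into model-readable JSON."""
    raw = bytes.fromhex(hex_frame)
    sensor_id, temp_tenths = struct.unpack(">Hh", raw)
    return json.dumps({
        "sensor_id": sensor_id,
        "temperature_c": temp_tenths / 10.0,
    })
```

The point is less the decoding itself than the contract: the model only ever sees named, unit-bearing fields, never raw hex, which is what keeps it from hallucinating context it doesn't have.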

What we learned

Tool-use models behave very differently from chat models. The OODA loop structure — observe, orient, decide, act — is a genuinely useful mental model for designing agentic systems, not just a marketing term. We also learned that grounding is the hard part. Getting the model to reason well is table stakes. Getting it accurate data to reason over is where the real engineering work is.
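The OODA framing maps naturally onto a plain loop. This is a schematic sketch, not AegisCore's implementation: each phase is a pluggable function, and the loop runs until `decide` produces no further action.

```python
# Schematic OODA-style agent loop; phase functions are caller-supplied.
def ooda_loop(observe, orient, decide, act, max_cycles=10):
    """Run observe -> orient -> decide -> act until there is nothing to do."""
    history = []
    for _ in range(max_cycles):
        raw = observe()                 # gather fresh data from tools/sensors
        context = orient(raw, history)  # ground it against what we already know
        action = decide(context)        # pick the next tool call, or None
        if action is None:
            break
        history.append(act(action))     # execute and record the result
    return history
```

The `max_cycles` cap is the boring-but-necessary part: without it, a model that keeps requesting tool calls can spin forever.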

What's next for AegisCore

Three things we want to build out. First, edge deployment — running the bridge on AWS Greengrass devices so the system can operate closer to the hardware. Second, computer vision integration using Nova's multimodal capabilities to process thermal camera feeds alongside sensor data. Third, automated PDF reporting for regulatory audits, so the system can produce a traceable log of every decision and approval without manual documentation.

Built With

amazon-bedrock · docker · mcp · python · react · sqlite · websockets
