Inspiration

The same news cycle kept repeating: an AI medical chatbot misdiagnoses a patient, an autonomous trading algorithm makes an unauthorized transaction, a healthcare tool recommends a dangerous treatment. Nobody could trace the decision. Nobody could hold the AI accountable.

It's 2026. AI agents are no longer simple chatbots. They initiate payments, manage credit, and make clinical decisions without human intervention. But the infrastructure to verify, enforce, and penalize their actions simply does not exist. We have "rogue autonomy" and "logic drift," where an agent's reasoning pattern shifts over time and quietly starts violating regulations or causing real harm.

I wanted to build something that flips the script: a system where an AI cannot act unless its decision is provable, enforceable, and economically accountable. That's why I built Aegis Morpheme X.

What it does

AMX turns every AI decision into an Executable Morpheme-X unit: a cryptographic proof that contains the AI's intent (hashed), which model version made the decision, the context (risk score, patient data fingerprint), and a trigger defining what action to take (payout, block, alert). This unit is submitted to the Hedera Consensus Service. Once the network confirms it (around 3 seconds), the system executes the trigger automatically and immutably.

A Meta-Sentinel watches every agent in real time. Using a 2-sigma anomaly detection rule, it spots when an agent behaves unexpectedly. If a diagnosis agent says "No risk" when the acoustic risk score is 0.9, the Sentinel blocks the action, logs the event to Hedera, and slashes the agent's stake. The system then automatically retrains the agent using the failed example, creating a self-improving accountability loop.

An adaptive parametric insurance engine uses One Health data (weather, livestock disease reports) to dynamically adjust payout thresholds. In a high-risk environment like Dhaka, a moderate cough can trigger an automatic micro-payout. In a stable environment like Singapore, the threshold is higher. The numbers shift based on real conditions, not hardcoded values.

Agents can also hire other agents from a decentralized registry (simulated HCS-10/OpenConvAI), paying them micro-amounts of HBAR. This turns AMX into a marketplace of verifiable intelligence rather than a closed box.

The bottom line: every decision is proven, enforced, and accountable, or it does not execute.
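The unit described above can be sketched as a small Python structure. This is a minimal, illustrative sketch, not the project's actual schema: the class name `MorphemeX`, its field names, and the `to_hcs_message` helper are all hypothetical stand-ins for what a hashed-intent decision unit might look like before submission to a Hedera Consensus Service topic.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class MorphemeX:
    """One verifiable decision unit (hypothetical field names)."""
    intent_hash: str       # SHA-256 of the agent's raw intent text
    model_version: str     # which model version made the decision
    risk_score: float      # context: e.g. the acoustic risk score
    data_fingerprint: str  # hash of the patient-data snapshot, not the data itself
    trigger: str           # action to take: "payout" | "block" | "alert"

    @staticmethod
    def create(intent: str, model_version: str, risk_score: float,
               patient_data: dict, trigger: str) -> "MorphemeX":
        return MorphemeX(
            intent_hash=hashlib.sha256(intent.encode()).hexdigest(),
            model_version=model_version,
            risk_score=risk_score,
            data_fingerprint=hashlib.sha256(
                json.dumps(patient_data, sort_keys=True).encode()
            ).hexdigest(),
            trigger=trigger,
        )

    def to_hcs_message(self) -> bytes:
        """Canonical JSON payload, suitable for an HCS topic message."""
        return json.dumps(asdict(self), sort_keys=True).encode()

unit = MorphemeX.create(
    intent="approve micro-payout for patient 42",
    model_version="diagnosis-v1.3",
    risk_score=0.9,
    patient_data={"cough_score": 0.9, "region": "Dhaka"},
    trigger="payout",
)
```

Hashing the intent and data fingerprint (rather than embedding raw patient data) keeps the on-ledger record verifiable without leaking sensitive content.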

How I built it

The system is built as three layers: an AI decision engine, a verification layer on Hedera's public ledger, and an economic penalty mechanism that automatically punishes bad agent behavior.

The backend runs on FastAPI and Uvicorn, with LangGraph orchestrating the agent mesh (Triage, Diagnosis, Finance, Epidemiology, Morpheme-X, Sentinel). Each agent is a Python node with deterministic logic.

For Hedera integration, I used the Hedera Python SDK. HCS topics store Morpheme-X messages and sentinel events. An HTS token (AMXSTAKE) handles agent staking and slashing. I built both a live mode for real testnet transactions and a simulation fallback for demo stability.

The frontend is built in React with WebSockets for real-time updates and Chart.js for anomaly visualization (rolling window with mean and 2-sigma bands). Every Morpheme-X transaction links to a clickable HashScan proof.

The backend is deployed on Render and the frontend on Vercel. For testing, I wrote 19 pytest cases covering the sentinel, finance, API, graph, and Hedera layers, plus React Testing Library tests for the frontend. GitHub Actions runs the full suite on every push.
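The live-mode/simulation-fallback pattern can be sketched generically. This is a hedged sketch of the idea, not the project's code: `submit_with_fallback` and its arguments are hypothetical names, and the real Hedera SDK call is abstracted behind an injected `live_submit` callable so the sketch stays SDK-agnostic.

```python
import hashlib
import logging
from typing import Callable

log = logging.getLogger("amx.hedera")

def submit_with_fallback(payload: bytes,
                         live_submit: Callable[[bytes], str],
                         live_mode: bool = True) -> dict:
    """Try a live testnet submission; fall back to a deterministic
    simulated receipt so the demo never crashes mid-recording."""
    if live_mode:
        try:
            tx_id = live_submit(payload)  # e.g. a real HCS topic submit
            return {"mode": "live", "tx_id": tx_id}
        except Exception as exc:
            log.warning("Hedera submit failed, simulating: %s", exc)
    # simulated receipt: derived from the payload, so reruns are stable
    fake_id = hashlib.sha256(payload).hexdigest()[:16]
    return {"mode": "simulated", "tx_id": f"sim-{fake_id}"}
```

Injecting the live submitter also makes the fallback path trivially testable: pass a callable that raises, and assert the simulated receipt comes back.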

Challenges I ran into

The first challenge was scope: the TinyML cough-to-risk model is a documented simulation. I made a deliberate decision to focus the innovation on the governance layer rather than ML accuracy, and I documented that clearly throughout the codebase.

Getting the 2-sigma anomaly detection loop right was harder than expected. Handling edge cases like window sizes under 3 or zero standard deviation, and integrating that with LangGraph's state machine without breaking the flow, took multiple iterations.

Hedera testnet was occasionally unpredictable. Transaction fees, topic creation, and token transfers sometimes failed with cryptic errors. Building a robust fallback to simulation mode meant the demo never crashed during recording.

WebSocket state synchronization also gave me trouble. Broadcasting agent decisions from the backend to the React dashboard in real time while keeping the event log clean required a dedicated ConnectionManager class.

There's also what I call the "executable trust gap." Judges will reasonably ask: who actually executes the trigger? The answer is that the orchestrator only acts after Hedera consensus confirms the Morpheme-X, making it a constrained executor rather than a trusted authority. Getting that documented clearly in both code and docs took real effort.
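The two edge cases above (windows under 3 samples, zero standard deviation) can be handled in a few lines. This is an illustrative sketch, assuming a rolling window over scalar risk scores; the class name `SigmaSentinel` and its exact policy choices are hypothetical, not the project's implementation.

```python
from collections import deque
from statistics import mean, stdev

class SigmaSentinel:
    """Rolling 2-sigma anomaly check (hypothetical name and policy)."""

    def __init__(self, window: int = 20, k: float = 2.0):
        self.values: deque = deque(maxlen=window)
        self.k = k

    def observe(self, x: float) -> bool:
        """Return True if x is anomalous relative to the current window."""
        if len(self.values) < 3:
            # too little history to judge; just seed the window
            self.values.append(x)
            return False
        mu, sigma = mean(self.values), stdev(self.values)
        # simplification: the new value joins the window either way
        self.values.append(x)
        if sigma == 0:
            # flat history: any deviation at all counts as anomalous
            return x != mu
        return abs(x - mu) > self.k * sigma
```

One policy choice worth noting: here the anomalous value is still appended to the window, which slowly adapts the baseline; a stricter sentinel might exclude flagged values to avoid contaminating its own statistics.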

Accomplishments I'm proud of

The first working prototype of an Executable Morpheme-X is the one I'm most proud of. It's not just a log entry. It's a cryptographic unit that can trigger real actions. The live Hedera transaction in the demo is something judges can verify themselves by clicking the HashScan link. That immutable proof being publicly visible matters.

The Meta-Sentinel actually blocks unsafe actions, slashes agent stakes, and schedules retraining. That loop is genuinely novel and I have not seen it in other AI governance projects. The adaptive parametric insurance threshold changes dynamically based on outbreak risk and poverty index using real OpenWeatherMap data.

The test suite is complete: 19 backend tests, frontend component tests, an end-to-end script, and CI/CD on GitHub Actions. The entire system runs on free tiers, no credit card required.
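The adaptive threshold idea can be sketched as a simple function. The formula below is purely illustrative, an assumption for the sketch rather than the project's actual weighting; the function name and the 0.3 floor are hypothetical.

```python
def adaptive_payout_threshold(base: float = 0.8,
                              outbreak_risk: float = 0.0,
                              poverty_index: float = 0.0) -> float:
    """Lower the payout trigger threshold as local outbreak risk and
    economic need rise. All inputs in [0, 1]; weights are illustrative."""
    adjustment = 0.5 * outbreak_risk + 0.2 * poverty_index
    # clamp: never below a safety floor, never above the base threshold
    return max(0.3, min(base, base - adjustment))

# high-risk environment (e.g. Dhaka): threshold drops to the floor
dhaka = adaptive_payout_threshold(outbreak_risk=0.9, poverty_index=0.8)
# stable environment (e.g. Singapore): threshold stays near the base
singapore = adaptive_payout_threshold(outbreak_risk=0.1, poverty_index=0.1)
```

The same cough risk score can then trigger a micro-payout in one region and not another, which is the point: the numbers shift with conditions rather than being hardcoded.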

What I learned

AI governance is not a feature you bolt on later. It's a necessity. The technical community is building increasingly powerful models without any real accountability layers, and this project taught me how to embed verifiability at the protocol level from day one.

Hedera turned out to be genuinely well-suited for agentic systems. Sub-3-second finality, fixed low fees, and carbon-negative operations make it a strong fit for real-time healthcare decisions. Being upfront about what's real versus what's simulated is something judges respect, and it let me focus on where the actual innovation was.

Testing saved my demo more than once. The CI pipeline caught a missing environment variable and a broken WebSocket reconnect right before recording. One concrete "wow moment" is worth more than ten slides of technical depth. The live Hedera explorer verification is what people remember walking away.

What's next for Aegis Morpheme X

The code is on GitHub under the MIT license. Next up is a technical whitepaper and an open contributor call. On the real-world side, I'm exploring deployment of a simplified AMX version for respiratory disease monitoring in low-resource settings, in early conversations with a Dhaka-based community health organization. On the technical roadmap: replacing the simulated cough model with a real TensorFlow Lite model that runs entirely on-device, moving the agent-to-agent commerce layer from simulation to a live HCS-10 OpenConvAI registry, and eventually expanding the same verifiable governance layer into supply chain AI, financial compliance, and autonomous vehicle systems.
