Inspiration

As AI agents become increasingly autonomous, trust and accountability must scale with them — especially in real-world domains like finance, healthcare, and legal AI. But trust can’t come at the cost of user privacy. We were inspired to build TruAID, a platform that combines the verifiability of blockchains, decentralized information flow policies, the structure of Google’s A2A protocol, and the observability of Weights & Biases, while preserving sensitive information through selective disclosure and PII protection techniques.

What it does

TruAID is a platform that enables trusted, auditable agent collaboration — with verifiability baked in and PII kept safe.

🔗 Blockchain-backed audit trails: Agent interactions, decisions, and key events are hashed, signed, and anchored on-chain — without leaking sensitive data.

🧠 A2A protocol: Agents identify themselves, negotiate tasks, and exchange structured messages securely.

📊 Weights & Biases integration: All model evaluations, training metrics, and inference logs are monitored, with sensitive fields obfuscated or zeroed out.

🔐 PII-Preserving: Any action that violates the information-flow policy or risks leaking PII triggers a notification through the Weights & Biases platform.

🧭 End-to-end provenance: From input → model decision → output → outcome, every step can be independently verified — without exposing raw content.
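As a rough sketch of how end-to-end provenance can work without exposing raw content (function names and payloads here are ours, not the actual TruAID API): each step commits to a hash of its payload plus the previous step's hash, so a verifier can replay the chain of commitments without ever seeing the underlying data.

```python
import hashlib
import json

def step_commitment(prev_hash: str, step_name: str, payload: dict) -> str:
    """Commit to a pipeline step by hashing its content together with
    the previous step's hash, forming a verifiable chain."""
    payload_hash = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    record = f"{prev_hash}|{step_name}|{payload_hash}"
    return hashlib.sha256(record.encode()).hexdigest()

# Chain input -> model decision -> output; only the hashes go on-chain.
h0 = step_commitment("genesis", "input", {"prompt": "redacted"})
h1 = step_commitment(h0, "model_decision", {"action": "approve"})
h2 = step_commitment(h1, "output", {"result": "ok"})
```

Anyone holding the payloads can recompute the same chain; anyone without them can still confirm the anchored hashes match.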

How we built it

TruAID Blockchain Protocol: We defined the TruAID blockchain protocol for:

  • publishing agent identities and capabilities
  • contract execution
  • proof of work tracking and contract finalization

TruAID Blockchain Node: Implemented a demo in-memory blockchain node that provides endpoints to interact with the TruAID blockchain.
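A minimal sketch of what an in-memory demo chain like this can look like (class and field names are illustrative, not the real node's schema): each block stores a SHA-256 link to its parent, so the whole chain can be verified in one pass.

```python
import hashlib
import json

class InMemoryChain:
    """Toy in-memory chain: each block links to its parent by SHA-256 hash."""
    def __init__(self):
        self.blocks = [{"index": 0, "type": "genesis", "data": {}, "prev": "0" * 64}]

    def block_hash(self, block: dict) -> str:
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def append(self, block_type: str, data: dict) -> dict:
        block = {
            "index": len(self.blocks),
            "type": block_type,
            "data": data,
            "prev": self.block_hash(self.blocks[-1]),
        }
        self.blocks.append(block)
        return block

    def verify(self) -> bool:
        """Recompute every parent hash; any tampering breaks the chain."""
        return all(
            b["prev"] == self.block_hash(self.blocks[i])
            for i, b in enumerate(self.blocks[1:])
        )
```

The demo node wraps logic like this behind HTTP endpoints for agents to call.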

TruAID MCP Server: Implemented a demo MCP server that agents can use to interact with the TruAID blockchain.

Agent Architecture: We implemented modular agents using open-source LLM orchestration tools, wrapping them with Google’s A2A protocol for secure, structured message passing. Each agent carries its own verifiable identity and capability metadata.

Blockchain Anchoring: Agent events (agents calling each other) are signed and hashed into Merkle trees.
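The Merkle construction lets us anchor one root per batch of signed events while still being able to prove any individual event was included. A compact sketch (a standard pairwise fold, duplicating the last node on odd levels):

```python
import hashlib

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold event hashes pairwise into a single Merkle root."""
    if not leaves:
        raise ValueError("cannot build a Merkle tree with no leaves")
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [
            hashlib.sha256(level[i] + level[i + 1]).digest()
            for i in range(0, len(level), 2)
        ]
    return level[0]
```

Only the 32-byte root needs to go on-chain; inclusion proofs for single events are logarithmic in the batch size.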

PII-Safe Observability: Using Weights & Biases, we instrumented agent workflows with hooks that redact or obfuscate personally identifiable information before logging. Additionally, we implemented a "sensitive path alert" system — when an agent’s behavior might leak data, W&B logs the trigger point and flags it for developer review.
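The shape of such a redaction hook, simplified (the patterns and field handling here are illustrative; the real hook covers more PII classes and runs before anything reaches the W&B logger): it scrubs matches and returns the PII classes it found, so a non-empty hit list can raise a "sensitive path alert".

```python
import re

# Illustrative patterns only; a production hook needs far broader coverage.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(record: dict) -> tuple[dict, list[str]]:
    """Return a redacted copy of a log record plus the PII classes found."""
    clean, hits = {}, []
    for key, value in record.items():
        text = str(value)
        for name, pattern in PII_PATTERNS.items():
            if pattern.search(text):
                hits.append(name)
                text = pattern.sub(f"<{name}-redacted>", text)
        clean[key] = text
    return clean, hits
```

In our pipeline the cleaned record is what gets logged, and the hit list is what trips the developer-review flag.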

Frontend Dashboard: Visualized agent interaction graphs, blockchain audit anchors, and live W&B traces to help users explore workflows without breaching privacy boundaries.

Challenges we ran into

The biggest challenge we ran into was the confusing (best case) and often incorrect (most likely case) state of documentation around MCP server development. The standard is changing very frequently, and much of the documentation is already outdated (e.g., the ADK documentation says nothing about how to instantiate an MCPToolSet for an HTTP-streamable MCP server). You cannot rely on the documentation, and you certainly cannot vibe-code it — there is so much hallucination and incorrect code generation when the documentation itself is wrong. We had to go old school: dig into the protocol specs and read the underlying ADK library implementation code to figure out the details.

PII Safety in Logs: Standard logging tools often assume full visibility. Ensuring privacy-compliant observability required building a custom middleware to filter or summarize logs before export.

Event Hashing Granularity: Not all agent actions are meaningful at blockchain granularity. We had to design checkpoints and semantic filters to anchor only important events, while avoiding overlogging.

Balancing Real-Time + On-Chain Finality: Blockchain finality (even on testnets) lags real-time agent operations. We implemented delayed anchoring and batching mechanisms to minimize latency while preserving integrity.
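The batching mechanism can be sketched like this (names are ours; `submit` stands in for whatever writes a digest on-chain): events accumulate locally and only one combined digest is anchored per batch, trading a bounded delay for far fewer on-chain writes.

```python
import hashlib

class BatchAnchor:
    """Buffer event hashes and anchor one digest per batch."""
    def __init__(self, batch_size: int, submit):
        self.batch_size = batch_size
        self.submit = submit              # callable that anchors a digest on-chain
        self.pending: list[str] = []

    def record(self, event: bytes) -> None:
        self.pending.append(hashlib.sha256(event).hexdigest())
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        """Anchor whatever is buffered; also called on shutdown or on a timer."""
        if self.pending:
            digest = hashlib.sha256("".join(self.pending).encode()).hexdigest()
            self.submit(digest)
            self.pending.clear()
```

A timer-based flush bounds the worst-case anchoring delay even when traffic is sparse.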

Protocol Interop: Mapping A2A messages to our audit format and agent identity registry took effort due to protocol verbosity and lack of higher-level semantics.

Accomplishments that we're proud of

Delivered a working prototype where multiple agents securely collaborate, log actions to a blockchain, and preserve user privacy throughout.

Successfully combined three ecosystems — Google A2A, Weights & Biases, and blockchain infrastructure — into a cohesive, auditable trust layer.

Built a real-time notification mechanism in W&B that alerts when sensitive data flows are at risk, giving developers a "black box breach warning system" for LLM pipelines.

Created a template for future zero-knowledge upgrades, enabling agents to prove compliance without revealing raw inputs.

What we learned

Trust and auditability need careful system design — especially when privacy is a non-negotiable constraint.

Logging ≠ Transparency: Logs are only helpful when they’re interpretable and privacy-safe. Having structured, redacted observability is more valuable than raw traces.

Agents need verifiable boundaries: Agents with unchecked autonomy risk violating privacy or logic. Formal interfaces and signed actions give us a primitive for enforcement.

There’s a gap in agent protocol semantics: Google’s A2A gives transport and identity, but trust, policy, and violation semantics still need standardization.

What's next for TruAID (Trusted AI Decentralized)

TruAID is evolving toward a fully secure, privacy-preserving, and verifiable AI agent infrastructure. Upcoming developments focus on strengthening trust guarantees across execution, identity, and disclosure layers.

🧩 1. Selective Disclosure Layer

TruAID will implement a selective disclosure protocol, enabling agents to:

Publish non-sensitive outputs (e.g., hashes, merkle roots, model decisions) on-chain for public verification.

Keep sensitive artifacts (e.g., model weights, raw input data, tokens, intermediate embeddings) off-chain and encrypted.

Support integration with decentralized storage backends like IPFS or Arweave for access-controlled sharing of encrypted data.

Enable ZKP-style proofs or attested summaries for auditable but private AI operations.

This empowers agents to prove what they did—without leaking what they saw.
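The core primitive behind this is a salted commit-and-reveal, sketched below (a simple hash commitment, not yet a zero-knowledge proof): only the salted hash goes on-chain, while the artifact and salt stay off-chain until an auditor is granted access.

```python
import hashlib
import secrets

def commit(artifact: bytes) -> tuple[str, bytes]:
    """Publish only the salted hash on-chain; keep (artifact, salt) off-chain.
    The random salt prevents dictionary attacks on low-entropy artifacts."""
    salt = secrets.token_bytes(16)
    digest = hashlib.sha256(salt + artifact).hexdigest()
    return digest, salt

def verify(digest: str, artifact: bytes, salt: bytes) -> bool:
    """An auditor later granted the artifact and salt can check the claim."""
    return hashlib.sha256(salt + artifact).hexdigest() == digest

onchain, salt = commit(b"model decision: approve")
assert verify(onchain, b"model decision: approve", salt)
```

ZKP-style upgrades would let an auditor verify properties of the artifact without ever receiving it at all.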

🛡️ 2. Confidential Computing Environment Layer

TruAID agents will operate inside Trusted Execution Environments (TEEs) such as Intel TDX, AMD SEV-SNP, or ARM CCA. This ensures:

Code and data confidentiality: agent memory, model parameters, and inputs remain encrypted even at runtime.

Remote attestation: external verifiers can prove that the agent was running untampered code inside a valid TEE.

Global identity assurance: agent identities can be cryptographically bound to their attestation reports and verified by third-party authorities like Intel, AWS Nitro, or Azure CVM.

Tamper-resistant logs: all sensitive operations are logged and anchored into a verifiable blockchain for auditability.

TEE-backed CVM (Confidential Virtual Machine) execution ensures zero trust from infrastructure while enabling secure multi-agent collaboration.

Updates


Aligned on high level architecture for the solution:

  • service-provider agent -- this will be a demo agent that provides a service
  • procurement agent -- this will be a demo agent that discovers service agents and hires them for specific tasks
  • blockchain MCP client -- this will be the MCP client library for agents to interact with the TruAID blockchain platform
  • blockchain MCP service -- this will be the service providing the interface to the TruAID blockchain platform

Agents will negotiate contracts directly with each other and then publish the signed contract to the TruAID blockchain. The contract should cover the following details:

  • Payment Deposit -- TBD (will include the total payment for the completion of work)
  • Milestones -- TBD (will be a list of milestones and the escrow release for each milestone)
  • Security Deposit -- TBD (will be the security deposit from the service-provider agent)
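A sketch of how the contract could be structured (field names are ours and the exact shape is still TBD, per the list above): milestones carry their own escrow-release fractions, and both parties sign the whole record before it is published.

```python
from dataclasses import dataclass, field

@dataclass
class Milestone:
    description: str
    escrow_release: float        # fraction of the payment deposit released

@dataclass
class Contract:
    """Sketch of a negotiated contract; exact fields are still TBD."""
    provider_did: str
    procurer_did: str
    payment_deposit: float       # total payment for completion of the work
    security_deposit: float      # posted by the service-provider agent
    milestones: list[Milestone] = field(default_factory=list)
    signatures: dict[str, str] = field(default_factory=dict)  # DID -> signature
```

Publishing would serialize this record, have each party sign it, and append it as a Contract block.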

The TruAID blockchain platform will support the following block types:

  • Agent DID -- this will be a signed A2A agent card published by service-provider agents
  • Contract -- this will be a signed contract negotiated by the service-provider agent and the procurement agent
  • Work History -- this will be an immutable ledger of major events, as negotiated in the contract milestones and signed by the responsible party, to track progress of the work under the negotiated contract
