Inspiration

In blockchain security, one of the most common and dangerous problems is that people sign transactions they do not fully understand.
Ethereum calldata is opaque, dense, and unforgiving — especially in Safe multisig environments, where decisions are shared and mistakes are amplified.

After studying real-world multisig incidents and exploit patterns, we realized that simply decoding function names or parameters is not enough.
The real question signers need answered is:

“What actually happens to my assets and control if I sign this transaction?”

That question became the core inspiration behind SignGuard AI.


What it does

SignGuard AI is a visual, AI-powered transaction security tool that translates Ethereum calldata into clear, consequence-focused explanations.

Instead of showing raw hex or low-level parameters, it highlights:

  • Real asset movements and approvals
  • Permission and ownership changes
  • Safe multisig–specific risks (modules, thresholds, batch actions)
  • Clear severity levels (LOW → CRITICAL) to prioritize attention

The recommended way to use SignGuard AI is through its web interface, where users can visually analyze transactions before signing them.
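As an illustration, the LOW → CRITICAL scale can be thought of as an ordered set of levels, where a batch inherits the highest severity among its individual effects. This is a minimal sketch; the effect names and mapping here are hypothetical, not the actual rule set:

```python
from enum import IntEnum


class Severity(IntEnum):
    """Ordered severity levels, so max() picks the most dangerous."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4


# Hypothetical mapping from decoded effect types to severity.
EFFECT_SEVERITY = {
    "erc20_transfer": Severity.MEDIUM,
    "erc20_approval_unlimited": Severity.HIGH,
    "safe_owner_change": Severity.CRITICAL,
    "safe_module_enabled": Severity.CRITICAL,
}


def batch_severity(effects: list[str]) -> Severity:
    """A batch is only as safe as its most dangerous action."""
    return max(
        (EFFECT_SEVERITY.get(e, Severity.LOW) for e in effects),
        default=Severity.LOW,
    )
```

Modeling levels as an ordered enum keeps prioritization trivial: one critical action among nine benign ones still surfaces the whole batch as CRITICAL.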


How we built it

SignGuard AI is built as a web-first security analysis platform with a modular backend.

Key components include:

  • A decoding engine for Ethereum calldata, ABIs, and batch (MultiSend) transactions
  • A Trust Profile system that adds contextual expectations about contracts, roles, and permissions
  • A severity classifier that evaluates the real impact of a transaction
  • AI-powered explanations generated with Gemini 3 as the primary model and modular fallbacks to Claude, OpenAI, and Ollama, all strictly constrained by decoded calldata and execution effects
  • A React-based web interface for visual timelines, effects, and risk indicators
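To give a flavor of the batch decoding step: Safe's standard MultiSend payload packs each inner transaction as operation (1 byte), target address (20 bytes), value (32 bytes), data length (32 bytes), and data. A minimal Python sketch of unpacking that format (the real engine does considerably more, such as recursing into nested calldata):

```python
def decode_multisend(packed: bytes) -> list[dict]:
    """Unpack a Safe MultiSend payload into individual transactions.

    Each entry is tightly packed as:
      operation (1) | to (20) | value (32) | data_length (32) | data (variable)
    """
    txs, offset = [], 0
    while offset < len(packed):
        operation = packed[offset]
        to = "0x" + packed[offset + 1:offset + 21].hex()
        value = int.from_bytes(packed[offset + 21:offset + 53], "big")
        data_len = int.from_bytes(packed[offset + 53:offset + 85], "big")
        data = packed[offset + 85:offset + 85 + data_len]
        txs.append({
            # delegatecall inside a batch is itself a red flag worth surfacing
            "operation": "delegatecall" if operation == 1 else "call",
            "to": to,
            "value": value,
            "data": "0x" + data.hex(),
        })
        offset += 85 + data_len
    return txs
```

Each unpacked entry can then be decoded and scored individually, which is what makes mixed benign-plus-critical batches analyzable at all.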

A CLI and backend API are also available for advanced automation and scripting use cases.
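To illustrate the Trust Profile idea: a profile can be a declarative set of expectations that decoded effects are checked against, with anything outside expectations surfaced as a warning. The field names, addresses, and helper below are purely hypothetical:

```python
def check_against_profile(effect: dict, profile: dict) -> list[str]:
    """Return warnings for anything that deviates from signer expectations."""
    warnings = []
    if effect["to"] not in profile["known_contracts"]:
        warnings.append(f"Interaction with unknown contract {effect['to']}")
    if effect.get("kind") in profile["never_expected"]:
        warnings.append(f"Unexpected action for this Safe: {effect['kind']}")
    return warnings


# A hypothetical profile: contracts this Safe is expected to touch,
# and action types it should never perform.
profile = {
    "known_contracts": {"0x" + "aa" * 20: "USDC"},
    "never_expected": {"safe_module_enabled", "safe_owner_change"},
}
```

The point of the profile layer is context: a token transfer to a known treasury address and the same transfer to a never-seen address decode identically, but only one matches expectations.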


Challenges we ran into

Some of the main challenges included:

  • Preventing AI explanations from hallucinating behavior not present in the calldata
  • Correctly analyzing batch transactions that mix benign and critical actions
  • Designing a UI that communicates risk clearly without overwhelming the user
  • Ensuring the project could be safely open-sourced, including secret scanning and log hygiene

Balancing accuracy, clarity, and usability was a constant challenge throughout development.
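One mitigation for the hallucination problem is to build the model prompt exclusively from deterministic decoder output and to instruct the model never to speculate beyond it. A simplified sketch (the prompt wording and effect fields are illustrative, not the production prompt):

```python
import json


def build_explanation_prompt(decoded_effects: list[dict]) -> str:
    """Construct a prompt grounded only in deterministic decoder output.

    The model is given structured facts rather than raw calldata, so it
    has no room to invent behavior the decoder did not establish.
    """
    facts = json.dumps(decoded_effects, indent=2, sort_keys=True)
    return (
        "Explain the consequences of signing this transaction for the "
        "signer's assets and control.\n"
        "Use ONLY the decoded effects below. If something is not listed, "
        "say it is unknown rather than guessing.\n\n"
        f"Decoded effects:\n{facts}"
    )
```

Keeping the AI downstream of the decoder, rather than letting it interpret raw hex, is what makes the explanations constrained and auditable.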


Accomplishments that we're proud of

  • Building a fully functional web interface for transaction security analysis
  • Supporting Safe multisig–specific logic, not just generic transactions
  • Successfully visualizing complex batch transactions
  • Integrating multiple AI providers while keeping explanations constrained and explainable
  • Delivering a clean, open-source project ready for public review and hackathon evaluation

What we learned

This project reinforced that human understanding is a critical security layer in Web3.

We learned:

  • Clear explanations reduce risk more effectively than raw technical detail
  • UX and visual hierarchy are as important as correctness in security tools
  • AI can be useful in security workflows only when tightly constrained by deterministic data
  • Multisig security requires context, not just decoding

What's next for SignGuard AI

Next steps include:

  • Expanding severity heuristics and trust profile rules
  • Adding more Safe and protocol-specific patterns
  • Improving batch transaction visualization
  • Optional real-time warnings and integrations into signing workflows

The long-term goal is to make understanding transactions before signing the default, not the exception.
