Inspiration

We live in an era where "seeing is no longer believing." With the explosion of generative AI, creating hyper-realistic deepfakes and fabricated news is terrifyingly easy. We realized that the biggest problem facing the internet isn't generating content; it's verifying it.

We wanted to build a "digital chain of custody" for AI media. We asked ourselves: What if every AI-generated image came with an unforgeable, cryptographic birth certificate anchored on the blockchain?

What it does

Our platform is a Verifiable AI Image Generator that bridges the gap between Web2 AI and Web3 immutability. It operates on four core principles:

  • Open Registry: We built a decentralized protocol where any image generation model provider can register themselves and their specific model versions on our blockchain.
  • Generation & Attestation: Whenever a registered provider generates an image, our system intervenes before the file is returned to the user. We cryptographically hash the image and submit that hash to a Smart Contract.
  • Immutable Proof: This creates a permanent on-chain record proving exactly when the image was created, which provider generated it, and which model was used.
  • Trustless Verification: We provide a public verification endpoint. Anyone can upload an image, and the system re-hashes the file and queries the smart contract. If even a single pixel has been tampered with, the hashes won't match and verification fails.
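The verification flow in the last bullet can be sketched in a few lines of Python. This is a minimal sketch, not our production code: `keccak_placeholder` stands in for Ethereum's Keccak-256 (which Python's standard library lacks; `hashlib.sha3_256` uses NIST SHA-3 padding instead), and the on-chain lookup is mocked as an in-memory set.

```python
import hashlib

def keccak_placeholder(data: bytes) -> str:
    # Stand-in digest: stdlib sha3_256 is NOT Ethereum's Keccak-256
    # (different padding); a real client would use a Keccak-256 library.
    return hashlib.sha3_256(data).hexdigest()

def verify(file_bytes: bytes, attested_hashes: set) -> bool:
    # Re-hash the uploaded bytes and check membership in the attested set
    # (in the real system, a read-only smart-contract query).
    return keccak_placeholder(file_bytes) in attested_hashes

original = b"\x89PNG...raw image bytes..."
on_chain = {keccak_placeholder(original)}  # recorded at generation time

assert verify(original, on_chain)                # untouched file verifies
assert not verify(original + b"\x00", on_chain)  # any altered byte fails
```

Because the check is a pure function of the file bytes and public chain state, no account, wallet, or trust in our servers is required.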

How we built it

We engineered a cohesive full-stack architecture that seamlessly merges generative AI with decentralized verification standards:

  • Decentralized Registry Protocol: We deployed a custom smart contract architecture on Ethereum. This layer acts as the immutable source of truth, maintaining a dynamic whitelist of authorized AI providers and securely storing the cryptographic fingerprints of every generated asset.
  • AI Integration Layer: For our reference implementation, we leveraged advanced multimodal models (Google Gemini via OpenRouter). This allows us to generate high-fidelity images while maintaining the flexibility to plug in other models or providers in the future.
  • Cryptographic Bridge: Our backend infrastructure serves as the secure orchestrator between Web2 and Web3. It manages the delicate prompt engineering required for consistent AI output, handles the precise cryptographic hashing of binary image data, and automates the signing of blockchain transactions to anchor proofs in real-time.
  • Byte-Perfect Verification: We built a strict validation engine that analyzes raw file uploads. By recreating the cryptographic hash from the user's file and cross-referencing it with the blockchain state, we ensure that the data being verified matches the original generation down to the very last byte.
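The registry and attestation layers above can be modeled in plain Python. This is an illustrative sketch only: the real contract is written in Solidity, and the class and method names here are assumptions for readability, not our actual ABI.

```python
class ProviderRegistry:
    """Plain-Python model of the on-chain registry logic (illustrative)."""

    def __init__(self):
        self.providers = {}     # provider address -> set of model IDs
        self.attestations = {}  # image hash -> (provider, model ID)

    def register_provider(self, provider: str, model_id: str) -> None:
        # Permissionless registration: any entity may list its own models.
        self.providers.setdefault(provider, set()).add(model_id)

    def attest(self, provider: str, model_id: str, image_hash: str) -> None:
        # Mirrors an onlyRegistered-style Solidity modifier: calls from
        # unregistered provider/model pairs revert.
        if model_id not in self.providers.get(provider, set()):
            raise PermissionError("provider/model not registered")
        self.attestations[image_hash] = (provider, model_id)

    def verify(self, image_hash: str):
        # Returns the attesting (provider, model ID), or None if unknown.
        return self.attestations.get(image_hash)

registry = ProviderRegistry()
registry.register_provider("0xProviderA", "model-v1")
registry.attest("0xProviderA", "model-v1", "deadbeef")
assert registry.verify("deadbeef") == ("0xProviderA", "model-v1")
```

The key design point carried over from the contract: attestation is gated on registration, while verification is an unrestricted read.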

Challenges we ran into

  • The "Byte-Perfect" Verification Problem: Reliably verifying images was harder than expected. Because a cryptographic hash changes if even a single bit is altered, data consistency became the central constraint: the bytes hashed and anchored on-chain had to be bit-for-bit identical to the file the user later downloads. We had to architect our backend to handle binary buffers with extreme precision, avoiding metadata corruption and encoding differences along the way.
  • The Zero-Knowledge Complexity Wall: Our initial ambition was to implement a full Zero-Knowledge (ZK) proof system to verify images without ever revealing their hashes on-chain. However, we quickly realized that generating ZK proofs for large inputs (like high-res image files) involves massive computational overhead and complex circuit design (e.g., Circom/Halo2). To ensure we shipped a working product within the hackathon timeline, we made the strategic decision to pivot to a Keccak256 commitment scheme for the MVP, effectively balancing privacy goals with delivery speed.
  • Architecting for Pluggability: It would have been easy to hardcode the system for just our own AI model. Making the product pluggable for any provider was significantly harder. We had to design the ProviderRegistry smart contract to be generic and permissionless, allowing different entities to register distinct model IDs and manage their own signing keys. This introduced complex state-management issues and forced us to rigorously test our Solidity modifiers to prevent unauthorized entities from polluting the registry.
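The byte-perfect challenge is easy to reproduce. The sketch below (using stdlib `sha3_256` as a stand-in digest; the lesson about byte-exactness applies identically to Keccak-256) shows why binary data must never pass through a text codec on its way to the hasher:

```python
import base64
import hashlib

# Binary image data contains bytes that are not valid UTF-8 text.
payload = bytes(range(256))

# Safe: a base64 round-trip is byte-exact, so the digest is stable.
safe = base64.b64decode(base64.b64encode(payload))
assert hashlib.sha3_256(safe).digest() == hashlib.sha3_256(payload).digest()

# Unsafe: decoding binary as text silently substitutes invalid bytes
# (U+FFFD replacement characters), producing a different byte stream,
# so on-chain verification of the result would fail.
mangled = payload.decode("utf-8", errors="replace").encode("utf-8")
assert hashlib.sha3_256(mangled).digest() != hashlib.sha3_256(payload).digest()
```

In practice this meant keeping image data in raw buffers end to end and only base64-encoding at well-defined transport boundaries.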

Accomplishments that we're proud of

  • Creating a Standard, Not Just a Tool: We didn't just build an image generator/verifier; we built a registry system that other developers can plug into.
  • Zero-Trust Architecture: The verification doesn't rely on our database. It relies entirely on the blockchain state. If our servers go down, the proof of the image still exists on Ethereum.
  • Zero-Cost Verification for Everyone: We engineered the verification process to be entirely free. Checking an image's authenticity requires no gas fees and no crypto wallet. This democratizes access to truth, allowing anyone with an internet connection to verify media instantly.
  • Privacy-Preserving Architecture: We successfully built a system that verifies the content, not the person. The on-chain attestation links the image strictly to the AI Model Provider, ensuring that neither the original creator’s identity nor the verifier’s personal details are ever exposed or stored on-chain.

What we learned

  • The Fragility of Smart Contract Security: We realized just how easy it is to leave silent vulnerabilities in Solidity code, where a single missing modifier can compromise the entire registry. We used Kairo to help audit our contracts, which allowed us to spot and fix critical permission gaps and logic errors that human review initially missed.
  • The "Data vs. Gas" Dilemma: We learned the hard way that the blockchain is a premium storage medium, not a database. With the expectation of handling terabytes of AI-generated content, we had to ruthlessly optimize our on-chain footprint. We learned to architect a system that stores only the bare minimum cryptographic proofs on-chain to keep gas fees viable at scale.
  • Designing Protocols, Not Just Products: We discovered that making a system "pluggable" is much harder than making it standalone. We learned to shift our engineering mindset from building a closed app to designing an open interoperability standard, ensuring that our registry is flexible enough for any developer or AI model provider to integrate without rewriting their existing stack.
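The "data vs. gas" lesson can be made concrete with back-of-the-envelope arithmetic. The figure below uses an approximate cost of 20,000 gas per fresh 32-byte storage slot (exact SSTORE pricing varies by EVM fork; the numbers are illustrative, not a quote of our deployment costs):

```python
# Approximate EVM storage cost model (illustrative; fork-dependent).
SSTORE_NEW_SLOT_GAS = 20_000
SLOT_BYTES = 32

def storage_gas(n_bytes: int) -> int:
    # Ceiling division: every started 32-byte slot is paid for in full.
    slots = -(-n_bytes // SLOT_BYTES)
    return slots * SSTORE_NEW_SLOT_GAS

hash_cost = storage_gas(32)           # one 32-byte hash commitment
image_cost = storage_gas(1_000_000)   # a ~1 MB image stored raw on-chain

print(hash_cost)               # 20000 gas
print(image_cost)              # 625000000 gas
print(image_cost // hash_cost) # 31250x more expensive
```

Three to four orders of magnitude per asset is what pushed us to store only the commitment on-chain and keep the media itself off-chain.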

What's next for TruePixel

  • Zero-Knowledge Content Privacy: We plan to implement Zero-Knowledge Proofs (ZK-SNARKs) to decouple the verification from the raw data. This would allow us to prove an image was generated by a specific model without ever storing the image’s hash on-chain, ensuring that even the metadata of sensitive or private generations remains completely hidden.
  • Resilient Verification (Perceptual Hashing): Current cryptographic hashes break if a single pixel changes. We aim to upgrade to Perceptual Hashing or robust watermarking algorithms. This will allow our system to verify images even after they have undergone benign edits like compression, cropping, or format conversion.
  • Incentivized Decentralized Network: To prevent centralization, we plan to design a crypto-economic model that rewards community members for hosting verification nodes and AI inference endpoints. This ensures the registry remains censorship-resistant and highly available.
  • Regulatory Standardization: We aim to collaborate with policymakers and industry bodies to establish this registry as a recognized technical standard. Our goal is to make this protocol a foundational layer for compliance with future regulations, such as the EU AI Act, ensuring that "TruePixel" becomes the global norm.
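The perceptual-hashing upgrade can be previewed with a toy average hash (aHash) over an 8x8 grayscale grid. This is a deliberately simplified sketch, not a candidate algorithm: real systems downscale and blur full images first, and production schemes use more robust transforms (e.g., DCT-based pHash).

```python
import hashlib

def average_hash(pixels):
    # Toy aHash: one bit per pixel, set when the pixel is brighter than
    # the mean of the grid.
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

img = [(i * 7) % 256 for i in range(64)]  # synthetic 8x8 "image"
brighter = [p + 3 for p in img]           # mild uniform brightness edit

# The cryptographic digest breaks on any change to the bytes...
assert hashlib.sha3_256(bytes(img)).digest() != \
       hashlib.sha3_256(bytes(brighter)).digest()

# ...but the perceptual hash survives this edit, because the mean
# shifts along with the pixels and the above/below-mean bits are unchanged.
assert average_hash(img) == average_hash(brighter)
```

Exact hash equality is the degenerate case; a real deployment would compare perceptual hashes by Hamming distance against a similarity threshold.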
