Inspiration
When you were younger, you’d hear about older people getting scammed over the phone or by email and think, that could never be me.
But AI changed the game. Now a voice call can sound like your friend, a video can look like a real event, and an image can “prove” something that never happened. The scam isn’t just getting smarter; it’s getting indistinguishable.
Deloitte predicts that generative AI could drive U.S. fraud losses from $12.3 billion in 2023 to $40 billion by 2027—a 32% annual growth rate.
That’s why we built Reclaim: a trust layer for AI media that puts the ball back in the creator’s court. Reclaim attaches tamper-evident, cryptographic provenance to images at the moment they’re created or shared, so anyone can instantly verify what’s real, what’s AI-generated, and what’s been altered, with proof, not guesswork.
What it does
Reclaim gives media the ability to prove itself.
When an image is created or uploaded, Reclaim attaches a tamper-evident authenticity stamp that records whether the content is real or AI-generated, who created it, and how it’s been modified. That information travels with the file, not the platform.
Anyone, from creators to viewers to platforms, can drop an image into Reclaim and instantly verify its authenticity, origin, and edit history in one check.
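To make the "one check" concrete, here is a minimal TypeScript sketch of what a verification result could look like. The `ProvenanceRecord` shape and `verifyImage` helper are illustrative assumptions, not Reclaim's actual API; the only real dependency is Node's built-in `crypto` module.

```typescript
import { createHash } from "crypto";

// Hypothetical shape of the provenance data that travels with a file.
interface ProvenanceRecord {
  origin: "camera" | "ai-generated";
  creatorWallet: string; // Solana address claimed by the creator
  contentHash: string;   // SHA-256 of the stamped bytes
  edits: string[];       // recorded modifications, oldest first
}

// Verify that the bytes we hold still match the stamped hash.
function verifyImage(bytes: Buffer, record: ProvenanceRecord) {
  const hash = createHash("sha256").update(bytes).digest("hex");
  return {
    authentic: hash === record.contentHash,
    origin: record.origin,
    editHistory: record.edits,
  };
}

// Example: an unmodified AI-generated image verifies cleanly.
const bytes = Buffer.from("example image bytes");
const record: ProvenanceRecord = {
  origin: "ai-generated",
  creatorWallet: "placeholder-wallet-address", // hypothetical value
  contentHash: createHash("sha256").update(bytes).digest("hex"),
  edits: [],
};
console.log(verifyImage(bytes, record).authentic); // true
```

Any edit to the bytes breaks the hash match, so tampering surfaces immediately even without inspecting the edit history.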
How we built it
Reclaim combines a few core layers to make authenticity easy to understand. We use OpenStego to embed an invisible steganographic signature into each image that marks whether it’s real or AI-generated. Each image is then linked to a Solana wallet, giving creators a verifiable way to claim authorship. C2PA metadata records creation and edit history so any tampering becomes visible, and MongoDB Atlas stores wallet and provenance references for fast verification. Together, this creates a simple system where anyone can check real vs AI and trace changes instantly.
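The stamping step of the pipeline above can be sketched as follows. This is a non-authoritative sketch: the file names, the `ProvenanceDoc` shape stored in MongoDB, and the helper function are assumptions, and we only build the OpenStego CLI command string rather than executing it (OpenStego's `embed` command takes a message file, a cover file, and an output stego file).

```typescript
// OpenStego hides a message file inside a cover image. Here we just
// assemble the CLI invocation; running it requires OpenStego installed.
function openStegoEmbedCommand(
  messageFile: string, // JSON stamp, e.g. { origin, creatorWallet, ... }
  coverImage: string,  // original image
  stampedImage: string // output image carrying the hidden stamp
): string {
  return `openstego embed -mf ${messageFile} -cf ${coverImage} -sf ${stampedImage}`;
}

// Illustrative document we would store in MongoDB Atlas so a later
// verification can look up provenance by content hash.
interface ProvenanceDoc {
  contentHash: string;   // SHA-256 of the stamped image
  creatorWallet: string; // Solana address that claims authorship
  origin: "camera" | "ai-generated";
  c2paManifest: string;  // reference to the C2PA manifest for this file
}

const cmd = openStegoEmbedCommand("stamp.json", "photo.png", "photo.stamped.png");
console.log(cmd);
```

The split of responsibilities matters: the steganographic stamp survives inside the pixels, the wallet signature proves authorship, C2PA carries the edit history, and the database is only an index for fast lookup, not the source of truth.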
Challenges we ran into
From the start, we wanted to build something in the space of AI responsibility and ethics, especially as deepfakes became a growing threat to privacy, trust, and safety. After extensive research, we realized a hard truth: even the best AI models struggle to reliably detect deepfakes at scale. That pushed us to pivot from detection to prevention and provenance. Instead of guessing whether something is fake, we focused on creating a standard that tags media at creation so authenticity can be verified with certainty later. Along the way, we were also overwhelmed by ideas, from authenticated art marketplaces to AI systems that analyze C2PA metadata and generate trust scores, which forced us to narrow our scope and focus on building the core foundation first.
Accomplishments that we're proud of
- Built an end-to-end system that stamps, links, and verifies media authenticity
- Created a working foundation for an open, industry-standard way to separate real from AI-generated content while empowering creators
- Successfully combined steganography, cryptography, and open standards into a single, understandable flow
- Designed Reclaim to be infrastructure-first, not a closed or platform-locked product
What's next for Reclaim
Next, we want to make Reclaim even more robust and seamless. We plan to expand our C2PA support to include more authenticity signals, giving clearer insight into how media was created and modified. We’re also exploring stronger steganographic models and end-to-end encryption so authenticity data stays secure from creation to verification. On the product side, we want to use Gumloop to automate the entire workflow, so media can be tagged and verified directly from your camera roll or digital art tools, without any manual steps. Our goal is to make authenticity invisible for creators, but obvious for everyone else.
Built With
- c2pa
- mongodb
- openstego
- phantom
- solana
- steganography
- typescript
