🎯 Inspiration
VerifAI was built entirely inside Google AI Studio, because it was the only tool that let me turn an artistic idea into a real working app without writing a single line of code outside the platform. I'm not a programmer — I'm a photographer. This was my first time building an app and my first hackathon!
I learned everything in real time while building it — deploying on Cloud Run, securing API keys with Secret Manager, and connecting multiple AI agents through Gemini 2.5 Flash.
This project proves that AI Studio empowers creators, not just coders. If someone with no technical background can build an ethical infrastructure for verifying human authorship, then AI Studio is doing exactly what it was meant to do.
We're entering an era where AI amplifies human creativity exponentially. But as creation accelerates, a critical question emerges:
How do we prove the human mind guided the process?
When a student uses AI to write an essay, a researcher to analyze data, or an artist to create—where's the boundary between human direction and AI execution?
VerifAI was born to answer this question with mathematical certainty.
Built entirely inside Google AI Studio using Gemini — 0 manual code, 100% compliant with the AI Studio Challenge rules. Deployed directly to Cloud Run.
🚀 What it does
VerifAI is an intelligent, dual-mode certification system that audits artifact quality and validates human agency in AI-assisted work.
Upload your work (and optional ledger) → Receive a verifiable certificate in seconds.
The certificate intelligently adapts to the level of proof you provide:
Mode 1: Artifact-Only (No Ledger Provided)
Audits the intrinsic quality of the work (Originality, Integrity, and academic signals like citations).
The score is ethically capped (e.g., VER-1) and the UI guides you to add a ledger for a higher score.
Mode 2: Hybrid Mode (Work + Ledger Provided)
Runs the full PPM-HAS analysis on your collaboration history.
Measures deep process metrics like Human Influence (HI) and Process Direction (PD).
The score can reach the Ethical Cap of 75 (VER-3), providing verifiable proof of authorship.
Mode 3: Mismatch (Invalid Ledger)
If the artifact and ledger are semantically unrelated, the system detects the mismatch.
The score is 0 (VER-0), protecting against fraudulent submissions (e.g., an image submitted with an unrelated ledger).
All certificates include a 🔐 Cryptographic SHA-256 Hash for immutable proof of authenticity.
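The notary step above can be sketched in a few lines of Node.js TypeScript. This is a minimal illustration, not VerifAI's actual code: the certificate fields (`certificateId`, `score`, `tier`, `issuedAt`) are hypothetical placeholders, and only the use of Node's built-in `crypto` module for SHA-256 is assumed.

```typescript
// Hypothetical sketch of sealing a certificate with SHA-256.
// Field names are illustrative, not VerifAI's actual schema.
import { createHash } from "node:crypto";

interface Certificate {
  certificateId: string;
  score: number;
  tier: string;     // e.g. "VER-3"
  issuedAt: string; // ISO timestamp
}

function sealCertificate(cert: Certificate): string {
  // Serialize with a stable key order before hashing, so the
  // same certificate always produces the same digest.
  const canonical = JSON.stringify(cert, Object.keys(cert).sort());
  return createHash("sha256").update(canonical).digest("hex");
}

const cert: Certificate = {
  certificateId: "demo-001",
  score: 75,
  tier: "VER-3",
  issuedAt: "2025-11-10T00:00:00Z",
};
console.log(sealCertificate(cert)); // 64-character hex digest
```

Because the input is canonicalized before hashing, anyone holding the certificate can recompute the digest and confirm it has not been altered.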
🏗️ How we built it
1. Architecture: The Conditional Multi-Agent System
VerifAI's "brain" is not a single prompt; it's a conditional policy engine built on Gemini 2.5 Flash and Google's Structured Outputs.
Agent 1 (The Triage): Detects if a valid ledger is present. It also runs a cosine similarity check to ensure the artifact and ledger are semantically related. If they mismatch, it triggers a VER-0 certificate.
Agent 2 (The Sensor): Based on the triage, this agent calls Gemini using one of two distinct JSON Schemas:
HybridSchema: (If ledger is valid) Instructs Gemini to run the full PPM-HAS analysis (HI, PD, ORG).
ArtifactSchema: (If no ledger) Instructs Gemini to only audit intrinsic qualities (ORG, INTEG, Citations) and forces HI and PD to be 0.
Agent 3 (The Calculator): A deterministic TypeScript engine that applies the correct formula and ethical cap (75 for Hybrid, ~60 for Artifact-Only) based on which mode was run.
Agent 4 (The Notary): Seals the final certificate with a SHA-256 hash, making it a permanent, verifiable asset.
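The triage policy described above can be sketched as a small pure function. This is a hedged illustration only: the similarity threshold (0.35) and the mode names are assumptions, and the real Agent 1 obtains its embeddings from Gemini, which is omitted here.

```typescript
// Illustrative triage policy: decide which schema/mode to run.
// Threshold and mode names are assumptions, not VerifAI's values.
type Mode = "HYBRID" | "ARTIFACT_ONLY" | "MISMATCH";

function triage(hasLedger: boolean, similarity: number): Mode {
  if (!hasLedger) return "ARTIFACT_ONLY";    // Mode 1 → ArtifactSchema
  if (similarity < 0.35) return "MISMATCH";  // Mode 3 → VER-0 certificate
  return "HYBRID";                           // Mode 2 → HybridSchema
}

// Cosine similarity over embedding vectors (e.g., of the artifact
// and the ledger), as used by the mismatch check.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}
```

Keeping this decision in plain code (rather than in a prompt) means the routing between schemas is auditable and cannot drift with model updates.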
2. Determinism vs. LLMs: LLMs are probabilistic. How do we achieve deterministic results?
Solution: the LLM is the sensor (it measures semantic signals); code is the judge (it applies a fixed formula). Gemini provides nine scores in [0, 1], and our TypeScript engine calculates the HAS deterministically.
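The "code as judge" step might look like the sketch below. The weights and the equal-weight formula are illustrative placeholders, not the actual PPM-HAS v0.3 formula; only the caps (75 for Hybrid, 60 for Artifact-Only) and the four named signals come from this writeup.

```typescript
// Hedged sketch: turn Gemini's [0,1] sensor signals into a capped
// Human Agency Score. The weighting is a placeholder, not PPM-HAS.
const ETHICAL_CAP_HYBRID = 75;
const ETHICAL_CAP_ARTIFACT = 60;

interface Signals {
  HI: number;    // Human Influence
  PD: number;    // Process Direction
  ORG: number;   // Originality
  INTEG: number; // Integrity
  // ...plus the remaining sensor signals
}

function humanAgencyScore(s: Signals, hybrid: boolean): number {
  // Placeholder: equal weights over the four named signals.
  const raw = 100 * (s.HI + s.PD + s.ORG + s.INTEG) / 4;
  const cap = hybrid ? ETHICAL_CAP_HYBRID : ETHICAL_CAP_ARTIFACT;
  // The ethical cap is enforced in code, so no prompt change
  // can ever make the system claim 100% certainty.
  return Math.min(cap, Math.round(raw));
}
```

The same sensor readings always yield the same score, which is what makes the certificate reproducible despite the LLM being probabilistic.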
3. Privacy: How do we verify work without storing sensitive data?
Solution: Process everything in-memory. Generate certificate + hash. Discard inputs immediately. Zero persistence = zero privacy risk.
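The zero-persistence flow can be illustrated with a small sketch. `analyze` here is a synchronous stub standing in for the (async) Gemini sensor pipeline; everything else is an assumption made for illustration.

```typescript
// Illustrative zero-persistence flow: analyze the upload in
// memory, emit a score plus hash, and write nothing to disk.
import { createHash } from "node:crypto";

function analyze(artifact: string): number {
  // Stub: the real system sends the artifact to Gemini and
  // receives structured scores back.
  return Math.min(75, artifact.length);
}

function verify(artifact: string): { score: number; hash: string } {
  const score = analyze(artifact);
  const hash = createHash("sha256").update(artifact).digest("hex");
  return { score, hash };
  // No database write: once this returns, `artifact` goes out of
  // scope and is garbage-collected. Zero persistence by design.
}
```

The privacy guarantee is structural: there is simply no storage layer for sensitive inputs to leak from.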
4. Real-world validation: We tested VerifAI on actual use cases:
- 6-month historical research paper (Gastón Gadín case) → Wikipedia article
- Creative projects (Arkan character design)
- Academic papers (PPM methodology itself)
The tool verified itself being built. Meta-validation achieved.
🏆 Accomplishments
✅ Working product in production (not just a demo)
✅ Novel methodology (PPM-HAS v0.3) understandable by any LLM
✅ Ethical framework coded into architecture (75% cap)
✅ Real users (ourselves—we use it daily)
✅ Self-validating (VerifAI verified its own creation process)
✅ Multi-agent architecture deployed on Cloud Run
✅ Zero infrastructure drama (Secret Manager + AI Studio deploy = 10 minutes)
✅ My first app! Now I can build apps for fellow photographers
✅ My community is already using the app for their work!
🕵️‍♂️ Case Study: The Researcher's Dilemma — VerifAI vs. GPT Zero
This case study demonstrates the core problem VerifAI solves, using a real-world, high-stakes example: the validation of original historical research.
The User: A historical researcher (the author) investigating a 110-year-old cold case, "El caso Gastón Gadín."
The Process: Over several months, the researcher conducted a deep investigation involving hemerographic analysis, genealogical cross-referencing, and critical deconstruction of historical narratives. AI (Gemini) was used extensively as a research assistant and writing partner to analyze sources, synthesize findings, and draft the final 40-page paper.
The Human-Led Discovery: The research culminated in a significant, original human discovery: correcting a century-old error by identifying the true identity of "Ana Mayeregger" as Ana Basilia Caballero Santa Cruz.
The Pain Point (The "Validation Crisis")
The final paper, despite being the product of months of human-led investigation and insight, was submitted to a leading AI detector.
The Result: The detector failed to see the human contribution. It returned a score of 69% AI-generated and only 20% Human.
The Impact: This result is a catastrophic failure of validation. It falsely punishes the researcher for using modern tools and, more importantly, invalidates the months of human work and the original historical discovery at its heart.
The Solution (VerifAI Validation)
The exact same 40-page paper was then submitted to VerifAI in "Artifact Only" mode (without a chat ledger, as the research was too extensive to fit in a single one).
The Result: VerifAI returned a Human Agency Score of 75/75, with a "VER-1" certification (as no ledger was provided).
Why? VerifAI's semantic sensors, powered by Gemini 2.5 Flash, are not designed to "detect" AI. They are designed to measure human intellect. The system correctly identified the paper's profound:
Originality (ORG): 95
Human Influence (HI): 98
Process Direction (PD): 95
Integrity (INTEG): 95
Conclusion
This case study provides definitive proof of concept. Where AI detectors see only patterns and falsely punish collaboration, VerifAI successfully identifies the deep markers of human agency—originality, critical judgment, and intellectual direction. It validates the researcher's work, restores their authorship, and provides the "Certificate of Authorship" they need to prove their work in a world of AI suspicion.
📚 What we learned
1. Generation + Verification = Complete Ecosystem
AI expands what's possible. VerifAI ensures its integrity. Neither is complete without the other.
2. Simplicity > Complexity
We spent 8+ hours trying custom Dockerfiles and complex backends. Final solution? AI Studio's "Deploy to Cloud Run" button + Secret Manager. Done in 10 minutes.
Lesson: Use Google's blessed path. It exists for a reason.
3. Ethics as Code
"No system should claim 100% certainty" isn't a policy document. It's `const ETHICAL_CAP = 75;` in production code. Philosophy → Implementation → Reality.
4. Privacy is Paramount
When you verify sensitive work (research, proprietary code), users need to trust you won't store it. Architecture decision: No database. Ever. Process → Certificate → Discard.
🔮 What's next
Phase 1 (Current): Basic certification
Phase 2 (Q1 2026): Enhanced insights
Integrate Gemini 2.5 Flash as a qualitative companion:
- Summarize collaboration dynamics
- Identify key turning points in creative process
- Distinguish between "Director Mode" (human leads) vs "Synergy Mode" (true collaboration)
- Provide actionable feedback: "Your iteration depth is strong, but consider more upfront direction"
Phase 3 (2026+): Ecosystem integration
- Academic institutions: Thesis submission verification
- Publishers: Manuscript authorship certification
- Portfolio platforms: "Verified Creator" badges
- Music industry: Royalty-eligible AI-assisted compositions
With the launch of Google Search Grounding (announced by Google on Nov 10th), we will immediately begin implementing our VER-4 "Peer Review Tier."
This new layer will use Grounding to power an anti-plagiarism and fact-checking agent that will:
Extract all citations, references, and key factual claims from the document.
Use Google Search to verify that every citation is real and not plagiarized.
Fact-check the core claims against primary sources.
VerifAI V2 (Today) validates the HUMAN. VerifAI V3 (Next) will validate the WORK.
Human Agency + Academic Integrity = Complete Trust.
Vision: Every significant AI-assisted work should have a VerifAI certificate—not as gatekeeping, but as transparency. Like nutrition labels for food, but for creative agency.
🛠️ Tech Stack
- Built and launched directly from AI Studio
- Cloud Run (serverless deployment)
- Gemini 2.5 Flash (semantic analysis)
- TypeScript (backend + calculator)
- React (frontend)
- Secret Manager (API key security)
- SHA-256 (cryptographic hashing)
- PPM-HAS v0.3 (our proprietary methodology)
🔗 Links
- Live Demo: https://verifai.astigarraga.art/
- GitHub: https://github.com/dargor1406/VerifAI-CR
- Video Demo: https://youtu.be/cs3ZDLeOpXc
🎬 Quick Start
- Visit https://verifai.astigarraga.art/
- Upload your work (text, image, or PDF)
- (Optional) Paste your AI collaboration history
- Click "Verify My Work"
- Receive certificate in ~20 seconds
No signup. No storage. Just verification.
🧠 The Philosophy
VerifAI exists because:
The future isn't "humans vs AI" or "AI replacing humans."
It's "humans directing AI."
And when that direction matters—for academic integrity, creative attribution, professional credibility—proof should exist.
Not to punish AI use. To celebrate human agency.
VerifAI: Where generation meets verification.
Built With
- aistudio
- base64
- cloud
- cloudrun
- encoding
- express.js
- gemini
- node.js
- react
- run
- typescript
- unicode