Inspiration

Deepfake scams, account takeovers, and social engineering attacks have surged as generative AI has become more accessible, and the tools banks give customers to fight back haven't kept pace. We kept asking: why does fraud detection only kick in after money moves? Why does "security" mean harassing every customer with one-time passwords and verification calls for every transfer, even in a high-stakes, time-sensitive concert ticket lottery?

What it does

Our app is a risk-gated biometric payment gateway that sits between a user and their money, asking one question per transaction: Does this actually need a challenge?

Every payment gets a real-time risk score and is judged using the transaction details and, if deemed necessary, a short video of the account owner.

How we built it

Gemini 2.0 Flash analyzes the transaction — amount, recipient familiarity, device history, velocity — and returns a risk percentage. Routine payments sail through instantly with zero friction.
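To make the signals concrete, here is a minimal rule-based stand-in for the scorer. The real scoring is done by Gemini 2.0 Flash; the `Transaction` fields mirror the features above, but the thresholds and weights are invented for this sketch:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float       # dollar amount
    known_payee: bool   # has the user paid this recipient before?
    known_device: bool  # has this device been seen before?
    tx_last_hour: int   # velocity: transactions in the past hour

def risk_score(tx: Transaction) -> int:
    """Toy 0-100 risk score combining the same signals the model sees."""
    score = 0
    if tx.amount > 1_000:
        score += 40
    if not tx.known_payee:
        score += 20
    if not tx.known_device:
        score += 15
    if tx.tx_last_hour > 5:  # burst of activity
        score += 25
    return min(score, 100)

routine = Transaction(amount=25.0, known_payee=True, known_device=True, tx_last_hour=1)
suspicious = Transaction(amount=5_000.0, known_payee=False, known_device=False, tx_last_hour=8)
print(risk_score(routine), risk_score(suspicious))  # low vs. high
```

In practice the LLM returns a richer structured verdict, but the feature set it reasons over is the same.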

When something looks off, we freeze the funds and issue a 3-second face video challenge. That challenge runs multiple independent ML layers simultaneously:

Deepfake Detection — analyzes frame-by-frame edge artifacts, skin-tone color anomalies, texture irregularities, and temporal inconsistency across frames. AI-generated faces flicker; real ones don't.
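The temporal-inconsistency cue can be illustrated with a toy flicker metric. This is a hypothetical stand-in for the detector, run on synthetic "video" arrays; `temporal_flicker` is not from the actual codebase:

```python
import numpy as np

def temporal_flicker(frames: np.ndarray) -> float:
    """Mean absolute frame-to-frame intensity change.
    frames: (T, H, W) grayscale video. Generated faces tend to show
    larger, less coherent frame-to-frame deltas ("flicker")."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))
    return float(diffs.mean())

rng = np.random.default_rng(0)
base = rng.uniform(0, 255, size=(1, 64, 64))
# "Real" video: a static face plus tiny sensor noise.
real = np.repeat(base, 30, axis=0) + rng.normal(0, 1.0, (30, 64, 64))
# "Fake" video: the same face with much stronger per-frame jitter.
fake = np.repeat(base, 30, axis=0) + rng.normal(0, 8.0, (30, 64, 64))
assert temporal_flicker(fake) > temporal_flicker(real)
```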

Optical Flow Liveness — measures non-rigid facial micro-motion and reads the user's actual heartbeat from their webcam by detecting sub-pixel green-channel oscillations in the skin caused by blood-volume changes (remote photoplethysmography, rPPG). A deepfake doesn't have a pulse.
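The pulse-from-pixels idea can be sketched with a toy rPPG estimator. The app uses Presage SmartSpectra for the real thing; `estimate_bpm` below is an illustrative assumption that just takes the FFT of a synthetic green-channel trace:

```python
import numpy as np

def estimate_bpm(green_means: np.ndarray, fps: float) -> float:
    """Estimate pulse rate from the mean green-channel value of a
    face region over time, via the dominant frequency in the
    physiologically plausible band."""
    signal = green_means - green_means.mean()          # remove DC offset
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)  # Hz
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)             # 42-240 bpm
    peak_hz = freqs[band][np.argmax(power[band])]
    return peak_hz * 60.0

fps = 30.0
t = np.arange(int(fps * 10)) / fps                     # 10 s of "video"
pulse = 0.5 * np.sin(2 * np.pi * 1.2 * t)              # 1.2 Hz = 72 bpm
trace = 120.0 + pulse + np.random.default_rng(1).normal(0, 0.1, t.size)
print(estimate_bpm(trace, fps))  # ~72
```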

Challenges we ran into

The bootstrapping trap. Our initial flagging logic was `if len(triggers) > 0: challenge`. But every first-time user has a new device and a new payee, so every single transaction triggered a challenge, including completely routine ones. We had to rethink the scoring entirely: a new device or new payee alone no longer flags. Only high velocity, or a high amount combined with novelty, triggers a challenge. And we seed demo users with realistic transaction history so Gemini has context to evaluate against.
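The revised trigger policy boils down to a few lines. A minimal sketch, with illustrative thresholds rather than the production values:

```python
def needs_challenge(amount: float, new_device: bool, new_payee: bool,
                    tx_last_hour: int) -> bool:
    """Revised policy: novelty alone no longer flags."""
    high_velocity = tx_last_hour > 5
    high_amount = amount > 1_000
    novelty = new_device or new_payee
    return high_velocity or (high_amount and novelty)

# First-time user, small routine payment: sails through.
assert not needs_challenge(25.0, new_device=True, new_payee=True, tx_last_hour=1)
# Large amount to a brand-new payee: challenge.
assert needs_challenge(5_000.0, new_device=False, new_payee=True, tx_last_hour=1)
# Burst of transactions, even small ones: challenge.
assert needs_challenge(10.0, new_device=False, new_payee=False, tx_last_hour=9)
```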

Accomplishments that we're proud of

Five AI systems coordinated as one pipeline in under 3 seconds. Gemini 2.0 Flash (transaction risk + biometric decision) and Presage SmartSpectra rPPG (optical-flow liveness) run against the same video concurrently, while Solana handles the on-chain bookkeeping. The result is a single structured decision with confidence scores and reasons, plus a clean audit trail.

What we learned

Biometric thresholds are deeply coupled to context. A liveness score of 0.4 means something completely different for a $10 transfer than for a $10,000 wire. Building a single threshold that works across the full amount and novelty space forced us to think in policy terms, not just ML terms.
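One way to express that policy framing in code is an amount-dependent pass bar. A hypothetical sketch (the tier boundaries and scores are invented for illustration):

```python
def liveness_threshold(amount: float) -> float:
    """Required liveness score scales with what's at stake."""
    if amount < 100:
        return 0.30
    if amount < 1_000:
        return 0.55
    return 0.80

def passes(liveness_score: float, amount: float) -> bool:
    return liveness_score >= liveness_threshold(amount)

assert passes(0.4, 10.0)          # $10 transfer: 0.4 is plenty
assert not passes(0.4, 10_000.0)  # $10,000 wire: same score fails
```

The same measurement yields opposite decisions depending on context, which is exactly why the threshold belongs in policy, not baked into the model.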

What's next for Can’t HackHer

Real deepfake model weights. The FakeModel heuristic works well enough for a demo, but a proper EfficientNet-B4 or ViT fine-tuned on FaceForensics++ (FF++) would push detection accuracy from "reasonable" to "production-grade." We have the pipeline; we just need the weights.
