TrustVerify - Deepfake fraud detection for KYC
What inspired us
We built TrustVerify because we realized something simple: a KYC system can be fooled if it blindly trusts the webcam feed. If someone uses a virtual camera or injected video, the system might still see a “face” and accept it. We wanted a liveness check that depends on real-world camera physics and real-time user actions.
What we built
TrustVerify is a lightweight KYC liveness flow built around just three checks:
- Fisheye — the user moves close to the camera, and we measure a fisheye-style ratio change
- Squint — the user squints, and we measure a facial ratio change
- Light — we flash a light on/off and detect whether the webcam feed reacts like a real camera scene
The goal is not to be perfect against everything, but to reliably catch the common bypass: injected / virtual camera video.
How we built it
Frontend
- Open the webcam in the browser
- Show short, clear instructions for each step (approach / squint / light on-off)
- Stream frames to the backend in real time
- Display pass/fail per check and a final verdict
Backend
- Receive frames over a WebSocket and run the three checks (a loop like the sketch after this list)
- Keep simple session state so we know which check is running
- Return results to the UI immediately so the demo feels “live”
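To make that concrete, here is a minimal sketch of what such a backend loop can look like, assuming frames arrive as JPEG bytes over a WebSocket. The endpoint path, the 3-second window, and `run_check` are illustrative placeholders, not our exact code:

```python
# Minimal sketch of the backend loop. Assumptions: frames arrive as JPEG bytes
# over a WebSocket; `run_check` stands in for the three detectors sketched
# further down; the step order and 3-second window are illustrative.
import cv2
import numpy as np
from fastapi import FastAPI, WebSocket

app = FastAPI()
STEPS = ["fisheye", "squint", "light"]

def run_check(step: str, frame: np.ndarray) -> bool:
    """Placeholder dispatch to the fisheye / squint / light detectors."""
    return True  # illustrative stub

@app.websocket("/verify")
async def verify(ws: WebSocket):
    await ws.accept()
    results = {}  # simple per-connection session state: which checks passed
    for step in STEPS:
        await ws.send_json({"instruction": step})  # tell the UI what to prompt
        passed = False
        for _ in range(90):  # ~3 s of frames at 30 fps
            data = await ws.receive_bytes()
            frame = cv2.imdecode(np.frombuffer(data, np.uint8), cv2.IMREAD_COLOR)
            if run_check(step, frame):
                passed = True
                break
        results[step] = passed
        await ws.send_json({"step": step, "passed": passed})  # instant feedback
    await ws.send_json({"verdict": all(results.values()), "results": results})
```

Sending a per-step result as soon as it resolves, rather than one verdict at the end, is what makes the demo feel live.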
The three methods (strictly what we used)
1) Fisheye check (approach-the-camera)
What we ask: move closer to the camera.
What we measure: a “fisheye ratio” that should change when a real 3D face moves closer to a real lens.
Simple idea: when a real 3D face gets close to a real lens, perspective and lens distortion exaggerate the features nearest the camera (nose, eyes) relative to the face outline, so facial geometry in the image shifts in a predictable way. Injected video often can’t match that change naturally at the right time.
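A hedged sketch of how such a ratio can be computed with MediaPipe Face Mesh. The landmark indices (outer eye corners 33/263, cheeks 234/454) and the pass margin are illustrative choices, not our tuned values:

```python
# Sketch of a fisheye-style ratio with MediaPipe Face Mesh. The landmark
# indices and the pass criterion are illustrative, not our tuned values.
# `frame_rgb` must be an RGB numpy array (convert from OpenCV's BGR first).
import mediapipe as mp

mp_face = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True)

def fisheye_ratio(frame_rgb):
    res = mp_face.process(frame_rgb)
    if not res.multi_face_landmarks:
        return None  # no face in frame
    lm = res.multi_face_landmarks[0].landmark

    def dist(a, b):
        return ((lm[a].x - lm[b].x) ** 2 + (lm[a].y - lm[b].y) ** 2) ** 0.5

    # Central span (eye corner to eye corner) vs peripheral span (cheek to
    # cheek). Close to a real lens, distortion inflates the central features
    # relative to the face outline, so this ratio drifts upward.
    return dist(33, 263) / dist(234, 454)
```

The pass criterion is then relative: the ratio has to rise by some margin during the “approach” window, which a pre-recorded or injected feed rarely reproduces at the right moment.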
2) Squint check (facial ratio)
What we ask: squint.
What we measure: a facial ratio tied to the eye region changing shape.
If the user squints, that ratio should move noticeably. With deepfake/injected feeds, the motion can be delayed, softened, or inconsistent.
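The writeup above only commits to “a facial ratio tied to the eye region”; one standard candidate is the eye aspect ratio (EAR), sketched here with commonly used MediaPipe Face Mesh indices for the right eye, as an illustration rather than our exact formula:

```python
# Eye aspect ratio (EAR) as one standard eye-region ratio. `lm` is the
# MediaPipe landmark list from the face-mesh sketch above; the indices are
# the commonly used six points around the right eye.
RIGHT_EYE = [33, 160, 158, 133, 153, 144]  # p1..p6 around the eye

def eye_aspect_ratio(lm) -> float:
    def d(a, b):
        return ((lm[a].x - lm[b].x) ** 2 + (lm[a].y - lm[b].y) ** 2) ** 0.5

    p1, p2, p3, p4, p5, p6 = RIGHT_EYE
    # (sum of vertical eyelid gaps) / (2 * horizontal eye width):
    # high while the eye is open, dropping sharply as the user squints.
    return (d(p2, p6) + d(p3, p5)) / (2.0 * d(p1, p4))
```

A natural pass criterion is a clear drop below the user’s own open-eye baseline during the squint window, followed by recovery.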
3) Light check (light on/off reaction)
What we ask: toggle a light on and off (e.g., a phone flashlight).
What we measure: whether the webcam feed reacts like a real scene (brightness shift, contrast change).
A real camera feed will show an immediate lighting response across the face/background. Injected video often won’t respond correctly because the lighting is baked into the video.
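A minimal sketch of the brightness side of this check, assuming we buffer a few frames from the “light off” window and a few from the “light on” window; the delta threshold on the 0–255 scale is illustrative:

```python
# Sketch of the light-reaction check. Assumption: frames are buffered per
# window; the 15-level luminance delta is an illustrative threshold.
import cv2
import numpy as np

def mean_luma(frame_bgr: np.ndarray) -> float:
    return float(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).mean())

def reacts_to_light(off_frames, on_frames, min_delta: float = 15.0) -> bool:
    # A real scene brightens almost immediately when the flashlight comes on;
    # a feed with lighting "baked in" stays flat across both windows.
    off = float(np.mean([mean_luma(f) for f in off_frames]))
    on = float(np.mean([mean_luma(f) for f in on_frames]))
    return (on - off) > min_delta
```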
What we learned
- The strongest liveness tests are the ones that depend on real-world interaction (distance, muscle movement, lighting).
- A simple system can still be convincing if the UI gives clear instructions and instant feedback.
- “Only 3 checks” forced us to make each one easy to understand and demo.
Challenges we faced
- Lighting variability: some rooms are dim, and webcam auto-exposure behaves differently per device, making absolute thresholds tricky (one mitigation is sketched after this list).
- User consistency: if the user doesn’t move close enough, squints weakly, or toggles light too slowly, the signal gets noisy.
- Timing: we had to align the instruction timing with the measurement window so the system evaluates the right moment.
- False positives/negatives: we had to balance sensitivity so real users pass reliably while injected feeds still fail.
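On the lighting and threshold problems specifically, one direction (a sketch of the idea, not what we shipped) is to score relative change against a per-session rolling baseline rather than an absolute value, so per-device auto-exposure differences mostly cancel out:

```python
# Sketch: judge each signal against a per-session rolling baseline instead of
# a fixed absolute threshold. The 30-sample window is an arbitrary choice.
from collections import deque

class RelativeSignal:
    def __init__(self, window: int = 30):
        self.baseline = deque(maxlen=window)  # recent "neutral" samples

    def update_baseline(self, value: float) -> None:
        self.baseline.append(value)

    def relative_change(self, value: float) -> float:
        if not self.baseline:
            return 0.0
        base = max(sum(self.baseline) / len(self.baseline), 1e-6)
        return (value - base) / base  # fractional shift vs this session's norm
```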
What we would improve next
- Make each step more guided (distance meter for fisheye, “squint strength” indicator, better light timing prompts).
- Add device-specific normalization (handle auto-exposure better).
- Collect more test cases to tune thresholds across different webcams.
Built With
- css3
- fastapi
- framer-motion
- mediapipe
- numpy
- opencv
- python
- react
- typescript
- uvicorn
- vite
- websockets