Inspiration

As of 2026, generative video models (Google Veo, Sora, etc.) have reached a point where visual artifacts like warped pixels or unnatural eye movement are virtually non-existent. Traditional deepfake detectors that rely on "looking for glitches" are no longer reliable. We need a deterministic, biological signal that AI cannot yet simulate with physical accuracy.

What it does

PulseVerify is a forensic tool that uses Remote Photoplethysmography (rPPG) to verify liveness. It detects the fluctuations in skin color caused by the human heartbeat. By analyzing these micro-signals across different regions of the face, we can determine if the subject is a living human or a synthetic generation.

If a video lacks a rhythmic pulse, or if that pulse is a uniform "digital shimmer" rather than a staggered biological wave, PulseVerify flags it as synthetic.

This method is based on the 2020 paper: Hernandez-Ortega, J., Fierrez, J., Morales, A., & Erdogmus, T. (2020). DeepFakesON-Phys: DeepFakes Detection based on Heart Rate Estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops.

How we built it

Computer Vision: We use MediaPipe Face Mesh to isolate 468 facial landmarks and track specific Regions of Interest (ROIs) on the forehead and cheeks.
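
A minimal sketch of that ROI step, assuming OpenCV BGR frames; the landmark indices and the forehead_roi helper are illustrative, not necessarily the exact ones used in the app:

```python
# Sketch: crop a forehead ROI from a frame using MediaPipe Face Mesh.
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False, max_num_faces=1)

def forehead_roi(frame_bgr, pad=0.03):
    """Return the forehead patch of a BGR frame, or None if no face is found."""
    h, w = frame_bgr.shape[:2]
    results = face_mesh.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        return None
    lm = results.multi_face_landmarks[0].landmark
    # Indices 10, 109 and 338 sit around the upper forehead in the 468-point mesh (assumed here).
    xs = [lm[i].x for i in (10, 109, 338)]
    ys = [lm[i].y for i in (10, 109, 338)]
    x0, x1 = int((min(xs) - pad) * w), int((max(xs) + pad) * w)
    y0, y1 = int((min(ys) - pad) * h), int((max(ys) + pad) * h)
    return frame_bgr[max(y0, 0):y1, max(x0, 0):x1]
```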

Signal Extraction: The system extracts the mean intensity of the green channel and the CIELAB 'a' channel (red-green axis) in real time.
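
A sketch of that per-frame extraction, assuming an OpenCV BGR ROI; the roi_samples helper is illustrative:

```python
# Sketch: raw pulse samples for one frame (green mean + CIELAB a* mean).
import cv2

def roi_samples(roi_bgr):
    """Return (green_mean, a_mean) for one ROI crop."""
    green_mean = roi_bgr[:, :, 1].mean()              # G channel of a BGR image
    lab = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2LAB)
    a_mean = lab[:, :, 1].mean()                      # a* channel (red-green axis), 8-bit scaled
    return green_mean, a_mean
```

Appending these two values for every frame yields the raw time series that the signal-processing chain below operates on.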

Signal Processing (a sketch of this chain follows the list):

  • Detrending: We remove low-frequency drifts caused by head movement or lighting changes.
  • Filtering: A 2nd-order Butterworth bandpass filter restricts the signal to the human heart-rate range (0.75 Hz to 3.0 Hz).
  • Frequency Analysis: We use a Fast Fourier Transform (FFT) to compute the Power Spectral Density (PSD) and identify the dominant pulse peak.
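
A minimal SciPy sketch of those three steps; pulse_spectrum and its parameters are illustrative, not the exact implementation:

```python
# Sketch: detrend -> 0.75-3.0 Hz Butterworth bandpass -> FFT-based PSD.
import numpy as np
from scipy.signal import butter, detrend, filtfilt, periodogram

def pulse_spectrum(signal, fps, low_hz=0.75, high_hz=3.0):
    """Return (frequencies, power) of the band-limited pulse signal."""
    x = detrend(signal)                                   # remove slow drift from motion/lighting
    b, a = butter(2, [low_hz, high_hz], btype="bandpass", fs=fps)
    x = filtfilt(b, a, x)                                 # zero-phase 2nd-order bandpass
    freqs, psd = periodogram(x, fs=fps)                   # FFT-based power spectral density
    band = (freqs >= low_hz) & (freqs <= high_hz)
    return freqs[band], psd[band]

# Example usage: the dominant peak maps to beats per minute.
# freqs, psd = pulse_spectrum(green_trace, fps=30)
# bpm = 60.0 * freqs[np.argmax(psd)]
```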

Heuristics: We analyze the Spatial Phase Shift. In a real human, blood reaches the chin and forehead at slightly different times. Generative AI models typically apply noise or "shimmer" globally across all pixels simultaneously, resulting in zero phase lag between regions, a telltale sign that a video is not authentic.
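
A minimal sketch of that check, assuming two equal-length, band-passed traces sampled at the video frame rate; phase_lag_seconds is a hypothetical helper:

```python
# Sketch: estimate the temporal offset between forehead and chin pulse traces via cross-correlation.
import numpy as np

def phase_lag_seconds(sig_a, sig_b, fps):
    """Temporal offset between two traces in seconds; ~0.0 suggests a synchronized global shimmer."""
    a = (sig_a - np.mean(sig_a)) / (np.std(sig_a) + 1e-9)
    b = (sig_b - np.mean(sig_b)) / (np.std(sig_b) + 1e-9)
    corr = np.correlate(a, b, mode="full")
    lag = np.argmax(corr) - (len(a) - 1)                  # lag in samples; 0 means no offset
    return lag / fps
```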

After analyzing a video, the app displays the verdict and a confidence score, together with the graphs used in the analysis. The app also offers the option to download a PDF report of the findings.

Accomplishments that we're proud of

Based on our limited testing, our system flags realistic videos generated with Google's Veo 3 model as deepfakes while classifying real user videos as human.

What we learned

We were impressed by the original research paper and by how effective even relatively simple, physiology-based approaches to deepfake detection can be.
