Inspiration

In an era where generative AI can mimic reality with terrifying precision, the concept of "seeing is believing" is under threat. We noticed a dangerous gap: most deepfake detection happens in the cloud, forcing users to sacrifice their privacy to verify the truth. We were inspired to build Sentin-Edge to give users a "Reality Firewall" that lives entirely on their device—protecting their data while exposing synthetic deception in real time.

What it does

Sentin-Edge is an on-device forensic suite optimized for the Samsung S25 Ultra. It employs a four-layer "Forensic Firewall" to analyze images and videos:

Safety Layer: Scans raw file URIs for provenance metadata (C2PA/IPTC) and known AI watermarks.

Visual Ensemble: Uses NPU-accelerated Vision Transformers (ViT) and ResNet50 to detect sub-pixel artifacts and generative noise.

Temporal Engine: Tracks facial landmark stability (HRNet) across 20-frame windows to catch the "jitters" common in real-time deepfake overlays.

Forensic Reasoning: Uses Gemma 4 to translate complex technical scores into clear, human-readable explanations.
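The Temporal Engine's jitter check can be sketched in a few lines of Kotlin. This is a minimal illustration, not the app's HRNet pipeline: the `Landmark` type, the 20-frame window, and the threshold are assumptions made for demonstration.

```kotlin
// Illustrative sketch: landmark jitter over a sliding 20-frame window.
// The Landmark type, window size, and threshold are assumptions.

data class Landmark(val x: Float, val y: Float)

const val WINDOW = 20

/**
 * Mean per-frame displacement of one facial landmark across the most
 * recent frames. Real-time deepfake overlays tend to show high-frequency
 * positional noise ("jitter") that natural head motion lacks.
 */
fun meanDisplacement(track: List<Landmark>): Float {
    require(track.size >= 2) { "need at least two frames" }
    var total = 0f
    for (i in 1 until track.size) {
        val dx = track[i].x - track[i - 1].x
        val dy = track[i].y - track[i - 1].y
        total += kotlin.math.sqrt(dx * dx + dy * dy)
    }
    return total / (track.size - 1)
}

/** Flags a track as suspicious when jitter exceeds a tuned threshold. */
fun isJittery(track: List<Landmark>, threshold: Float = 1.5f): Boolean =
    meanDisplacement(track.takeLast(WINDOW)) > threshold

fun main() {
    // Stable track: landmark drifts smoothly by 0.1 px per frame.
    val stable = List(WINDOW) { Landmark(it * 0.1f, 100f) }
    // Jittery track: landmark jumps 3 px back and forth every frame.
    val jittery = List(WINDOW) { Landmark(if (it % 2 == 0) 100f else 103f, 100f) }
    println(isJittery(stable))   // false
    println(isJittery(jittery))  // true
}
```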

How we built it

The project is built on the Snapdragon 8 Elite platform using the Qualcomm AI Stack. We utilized LiteRT (formerly TensorFlow Lite) to deploy a multi-model pipeline.

We leveraged the CompiledModel API for AOT (Ahead-of-Time) compilation, ensuring our vision models hit the Hexagon NPU with minimal latency.

We integrated LiteRT-LM to run the 2GB Gemma 4 E2B model with NPU acceleration, enabling agentic reasoning without ever leaving the device.

The UI was built with Jetpack Compose, designed to provide a "security-first" dashboard experience.
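To show how the multi-model pipeline's outputs come together into one verdict, here is a hedged Kotlin sketch of weighted score fusion. The layer names echo the Forensic Firewall, but the weights, thresholds, and types are illustrative assumptions, not the app's tuned values.

```kotlin
// Illustrative sketch of fusing per-layer scores into a final verdict.
// Weights and thresholds are assumptions for demonstration.

enum class Verdict { LIKELY_AUTHENTIC, SUSPICIOUS, LIKELY_SYNTHETIC }

data class LayerScore(val layer: String, val score: Float, val weight: Float)

/** Weighted average of per-layer synthetic-probability scores in [0, 1]. */
fun fuse(scores: List<LayerScore>): Float {
    val totalWeight = scores.sumOf { it.weight.toDouble() }
    return (scores.sumOf { (it.score * it.weight).toDouble() } / totalWeight).toFloat()
}

/** Maps the fused score onto a user-facing verdict. */
fun verdictFor(fused: Float): Verdict = when {
    fused < 0.35f -> Verdict.LIKELY_AUTHENTIC
    fused < 0.65f -> Verdict.SUSPICIOUS
    else -> Verdict.LIKELY_SYNTHETIC
}

fun main() {
    val scores = listOf(
        LayerScore("safety",   0.9f, 1.0f),  // watermark / provenance check
        LayerScore("visual",   0.8f, 2.0f),  // ViT + ResNet50 ensemble
        LayerScore("temporal", 0.7f, 1.5f),  // landmark jitter
    )
    println(verdictFor(fuse(scores)))  // LIKELY_SYNTHETIC
}
```

Fusing before explaining keeps the layers independent: the fused score (plus each layer's raw output) is what the reasoning layer would turn into a human-readable explanation.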

Challenges we ran into

The primary challenge was Memory Orchestration. Running a 2GB LLM alongside high-fidelity vision models like HRNet and ViT pushed mobile RAM to its limits. We overcame this with a staggered initialization strategy and 8-bit quantization (w8a8) for all models to maximize NPU efficiency. Another hurdle was preserving metadata during analysis; we solved it with a Direct-URI access system that interrogates the raw file system before standard Android media compression strips the evidence.
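The 8-bit weight quantization (the "w8" in w8a8) boils down to mapping floats to int8 via a per-tensor scale. Below is a minimal sketch assuming simple symmetric per-tensor quantization; in practice the NPU toolchain performs this during model conversion, and the values here are illustrative.

```kotlin
// Illustrative sketch of symmetric int8 weight quantization.
// Real toolchains do this at conversion time; values are examples.

/** Per-tensor scale so the largest weight magnitude maps to 127. */
fun scaleFor(weights: FloatArray): Float =
    weights.maxOf { kotlin.math.abs(it) } / 127f

/** Quantize: round(w / scale), clamped to the int8 range. */
fun quantize(weights: FloatArray, scale: Float): ByteArray =
    ByteArray(weights.size) { i ->
        Math.round(weights[i] / scale).coerceIn(-127, 127).toByte()
    }

/** Dequantize: q * scale recovers an approximation of the weight. */
fun dequantize(q: ByteArray, scale: Float): FloatArray =
    FloatArray(q.size) { i -> q[i] * scale }

fun main() {
    val w = floatArrayOf(-2.54f, -0.1f, 0.0f, 1.26f, 2.54f)
    val s = scaleFor(w)
    val q = quantize(w, s)
    val back = dequantize(q, s)
    // Round-trip error is bounded by scale / 2 per element.
    val maxError = back.zip(w.toList()).maxOf { (a, b) -> kotlin.math.abs(a - b) }
    println(maxError <= s / 2)  // true
}
```

Halving every weight's footprint is what lets a multi-model pipeline and a 2GB LLM coexist in mobile RAM, at the cost of a bounded per-weight rounding error.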

What we learned

We learned the true power of On-Device AI. Moving from CPU-based inference to the NPU didn't just make the app faster—it enabled a new category of "Privacy-Preserving Forensics" that wasn't possible before. We also discovered that "Explainable AI" is just as important as the detection itself; giving a user a reason is far more effective than just giving them a score.

What's next for Sentin-Edge

We plan to expand our temporal engine to include audio-visual sync detection (matching lip movements to vocal frequencies) and integrate a system-wide "Screen Guardian" mode that can protect users during live video calls in any application.

The APK is available in this GitHub release: https://github.com/namrathavpatil10/google_hackthon/releases/tag/v1

Built With

  • android
  • c2pa
  • gemma-4-e2b
  • google-litert-(tensorflow-lite)
  • hexagon-npu
  • hrnet
  • iptc-metadata
  • jetpack-compose
  • kotlin
  • litert
  • litert-lm
  • qnn-delegate
  • qualcomm-ai-hub
  • resnet50
  • snapdragon-8-elite-(sm8750)
  • vision-transformer-(vit)