https://github.com/avleenmehal/guardrock-ai
Inspiration
In the age of AI-generated content, short-form videos have become a powerful vehicle for manipulation. They create false urgency, exploit emotions, and push people toward impulsive decisions, all within 60 seconds. So we decided to tackle this challenge by building GuardRock AI, a real-time system that detects deceptive and manipulative patterns in short videos and warns users before they fall for them.
What it does
It is a web platform where users can upload and browse videos through a content filter. Users choose the risk level they are comfortable seeing (safe, vulnerable, or risky), and the platform only shows content that matches that preference.
We also extended this with a browser extension that monitors your YouTube Shorts feed in real time and overlays a risk assessment directly on the video you're watching. Think of it as a manipulation detector sitting right on your shoulder.
Behind both is a distributed pipeline built on Twelve Labs' multimodal video AI: it processes each video, scores it for manipulation and urgency, and delivers the results back to the user in seconds.
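The scoring-to-tier step can be sketched roughly like this; the thresholds below are illustrative placeholders, not our tuned values:

```python
def classify(manipulation: float, urgency: float) -> str:
    """Map two 0-1 scores to a display tier.

    We take the worst of the two signals, since a video that is only
    urgent or only manipulative can still be harmful.
    """
    worst = max(manipulation, urgency)
    if worst >= 0.7:
        return "risky"
    if worst >= 0.4:
        return "vulnerable"
    return "safe"
```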
How we built it
We started with YouTube Shorts and fed the videos into the Twelve Labs API, which indexes each video and extracts an insightful analysis of its content. That analysis is then passed to an LLM that judges how deceptive the content is. On top of this pipeline, we built an application that lets users filter content according to their safety preferences.
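In outline, the LLM stage builds a structured prompt from the video analysis and parses a JSON verdict back out. A hedged sketch (the prompt wording, JSON schema, and fail-safe behavior are illustrative; the actual Twelve Labs indexing call and LLM request are omitted):

```python
import json

# Illustrative system prompt, not our exact wording.
SYSTEM_PROMPT = (
    "You are a content-safety analyst. Given a video analysis, rate the "
    "video's manipulation and urgency on a 0-1 scale and reply with JSON "
    'only: {"manipulation": <float>, "urgency": <float>, "rationale": <string>}'
)

def build_messages(analysis: str) -> list:
    """Assemble OpenAI-style chat messages from the video analysis."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Video analysis:\n{analysis}"},
    ]

def parse_scores(llm_reply: str) -> dict:
    """Parse the model's JSON reply; fall back to the riskiest scores on bad output."""
    try:
        data = json.loads(llm_reply)
        return {
            "manipulation": float(data["manipulation"]),
            "urgency": float(data["urgency"]),
        }
    except (ValueError, KeyError, TypeError):
        # Unparseable output is treated as maximally risky rather than trusted.
        return {"manipulation": 1.0, "urgency": 1.0}
```

Failing closed here matters: an LLM that ignores the format instructions should never cause a risky video to slip through as "safe".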
Challenges we ran into
- Ensuring real-time streaming of metadata while keeping the systems decoupled.
- Integrating with the Twelve Labs API for indexing and analysis of video content with minimal latency.
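For the first challenge, the analysis workers and the web tier share only a pub/sub channel, so either side can be swapped out independently. A minimal sketch of the publishing side, assuming Valkey is reached over its Redis-compatible protocol (the channel name and payload shape are illustrative):

```python
import json

CHANNEL = "video-scores"  # illustrative channel name

def encode_event(video_id: str, scores: dict) -> str:
    """Serialize one analysis result for the pub/sub channel."""
    return json.dumps({"video_id": video_id, "scores": scores})

def publish_event(client, video_id: str, scores: dict) -> None:
    """Push a result to subscribers (web app, extension backend)."""
    client.publish(CHANNEL, encode_event(video_id, scores))

# Wiring (not executed here): a worker publishes, the web tier subscribes.
#   import redis  # speaks the Redis protocol, which Valkey implements
#   client = redis.Redis(host="localhost", port=6379)
#   publish_event(client, "abc123", {"manipulation": 0.8, "urgency": 0.6})
```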
Accomplishments that we're proud of
- Correctly determining the deceptiveness of AI-generated videos
- A full-stack web application that seamlessly streams uploaded video content
- A robust browser extension backed by a scalable data ingestion pipeline
What we learned
- System design concepts with real-world implications for performance and reliability
- Vision LLMs and effective system prompt engineering
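One concrete prompt-engineering lesson: constrain the model to a machine-readable format, then validate every reply before trusting it. A small sketch of such a validator (the field names and ranges are illustrative):

```python
def is_valid_verdict(data: dict) -> bool:
    """Accept only replies with both scores present and within [0, 1]."""
    try:
        return all(0.0 <= float(data[k]) <= 1.0 for k in ("manipulation", "urgency"))
    except (KeyError, TypeError, ValueError):
        return False
```

Invalid verdicts can then be retried or discarded instead of silently corrupting the feed.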
What's next for Guardrock AI
- Extending content monitoring to more social media platforms.
- Adding a mobile interface for our application.
- Verifying LLM-generated output against trusted sources on the Internet.
Built With
- html
- javascript
- node.js
- openai
- python
- supabase
- valkey