Inspiration
Misinformation is easier than ever to create (think ChatGPT, deepfakes) and harder than ever to spot. We noticed that most people don't have time to fact-check everything they read online. So we set out to create Haven, a tool that empowers users to rapidly assess what they can trust.
What it does
Haven is an extension you can install into your browser. It analyzes text-based claims in real time, compares them against trusted sources, and returns a verdict with a confidence score. Haven's AI Lens also runs forensic analysis to flag manipulated or AI-generated images, so you can fact-check what you read and see right in your browser.
How we built it
Haven is implemented as a Chrome extension written in JavaScript, HTML, and CSS, with a Node.js/Express backend. We integrated ElevenLabs to narrate stories aloud, which makes the extension more accessible. To detect misinformation, we use Ollama to run a local llama3 model that extracts claims and checks them against articles retrieved through the News API. To analyze images, we built a Python pipeline using OpenCV, NumPy, and several other packages to run forensic checks such as frequency, texture, and metadata analysis. Minimal sketches of both steps follow.
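To give a feel for the claim-checking step, here is a minimal Python sketch (our backend is Node/Express, but the logic is the same). The function names, the prompt wording, and the `NEWSAPI_KEY` environment variable are illustrative, not our exact code:

```python
# Sketch: extract check-worthy claims with a local llama3 model via Ollama,
# then pull related coverage from the News API for cross-checking.
import json
import os

import ollama    # pip install ollama; assumes `ollama pull llama3` has been run
import requests

def extract_claims(article_text: str) -> list[str]:
    """Ask llama3 to pull out discrete, checkable claims as a JSON list."""
    response = ollama.chat(
        model="llama3",
        messages=[{
            "role": "user",
            "content": (
                "List the factual, checkable claims in the text below "
                "as a JSON array of strings. Text:\n" + article_text
            ),
        }],
    )
    try:
        return json.loads(response["message"]["content"])
    except json.JSONDecodeError:
        return []  # model didn't return clean JSON; the caller can retry

def related_articles(claim: str) -> list[dict]:
    """Fetch recent coverage of a claim to compare it against."""
    resp = requests.get(
        "https://newsapi.org/v2/everything",
        params={"q": claim, "sortBy": "relevancy", "pageSize": 5},
        headers={"X-Api-Key": os.environ["NEWSAPI_KEY"]},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("articles", [])
```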
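And here is a hedged sketch of one signal from the image pipeline, frequency analysis: AI generators often leave unusual energy distributions in the frequency domain, so measuring how much spectral energy sits outside the low-frequency band gives one rough indicator. The band size is an illustrative choice, not a tuned value:

```python
# One forensic signal: share of spectral energy outside a low-frequency block.
import cv2
import numpy as np

def high_frequency_ratio(image_path: str) -> float:
    """Fraction of spectral energy in high frequencies, in [0, 1]."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray.astype(np.float32))))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    # Mark a central low-frequency block; everything else counts as high frequency.
    low = np.zeros_like(spectrum, dtype=bool)
    low[cy - h // 8 : cy + h // 8, cx - w // 8 : cx + w // 8] = True
    total = spectrum.sum()
    return float(spectrum[~low].sum() / total) if total > 0 else 0.0
```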
Challenges we ran into
Extracting well-defined claims from noisy, unstructured text was difficult, particularly with opinions or incomplete statements. Response time was another challenge: accurate verification requires several processing steps, and each one adds latency. With images, building a robust detection pipeline required combining several signals rather than relying on any single one (see the sketch below).
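As a rough illustration of what combining signals means, here is a sketch that fuses per-signal scores into one verdict with a confidence value. The signal names, weights, and threshold are invented for illustration, not our production values:

```python
# Illustrative only: fuse per-signal scores (each in [0, 1]) into one verdict.
SIGNAL_WEIGHTS = {"frequency": 0.4, "texture": 0.35, "metadata": 0.25}

def fuse_signals(scores: dict[str, float]) -> dict:
    """Weighted combination of forensic signals, plus a confidence estimate."""
    weighted = sum(SIGNAL_WEIGHTS[name] * scores.get(name, 0.0)
                   for name in SIGNAL_WEIGHTS)
    # Agreement between signals raises confidence; disagreement lowers it.
    spread = max(scores.values()) - min(scores.values())
    return {
        "verdict": "likely AI-generated" if weighted > 0.5 else "likely authentic",
        "score": round(weighted, 3),
        "confidence": round(1.0 - spread, 3),
        "per_signal": scores,  # surfaced to the user for transparency
    }
```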
Accomplishments that we're proud of
We're proud to have created a functional prototype that unifies misinformation detection and image forensics into a single pipeline. The pipeline returns actionable results as soon as they are available, and its modular design leaves room for improvement. We also implemented a multi-signal approach that shows users the evidence behind each verdict rather than a black-and-white answer.
What we learned
We learned how complex misinformation and AI-generated content detection really is, especially when dealing with ambiguous claims and varying source reliability. We also gained hands-on experience building AI-powered systems that need to balance speed, accuracy, and usability. Most importantly, we learned how to take a large, abstract problem like misinformation and AI-generated content and turn it into a practical, real-world tool that people can actually use in their everyday lives.
What's next for Haven
Next, we plan to improve Haven's accuracy and speed by refining our models and optimizing how we match claims with reliable sources. We also want to expand AI Lens to detect AI-generated video and improve real-time performance for a smoother user experience. As we grow, we plan to fine-tune our models specifically for misinformation detection while moving to cloud infrastructure and databases that can efficiently handle larger volumes of data and users. In addition, we plan to offer Haven as a product and API that companies can integrate directly into their platforms: social media platforms like Meta and TikTok, search and content platforms like Google, and media organizations like The New York Times could use Haven to detect misinformation and AI-generated content in real time and improve trust on their platforms. Long term, our goal is to build Haven into a scalable system that supports both individual users and large organizations, helping create a more trustworthy digital environment.
Built With
- axios
- c2pa
- css
- elevenlabs
- express.js
- html
- javascript
- llama3
- multer
- newsapi
- node.js
- numpy
- ollama
- opencv
- pillow
- python
- scipy
- supabase