VeriFYP: Your AI Research Assistant for the "For You Page", Built for a More Sustainable Future

Upholding SDG 16: Peace, Justice, and Strong Institutions by Ensuring Public Access to Information

Inspiration

Let's be honest: the TikTok "For You Page" can feel like the Wild West of information. One minute you're learning a new recipe, the next you're watching a video claiming a miracle cure for a serious disease. While some claims are harmless fun, others are genuinely dangerous and threaten global progress. As a team of two, we've both had that moment of scrolling through comments, watching arguments unfold, and thinking, "I wish someone could just... check that."

We realized this wasn't just a social media problem; it was a direct threat to the United Nations Sustainable Development Goals. Progress on Goal 3 (Good Health and Well-being), Goal 5 (Gender Equality), and Goal 13 (Climate Action) is impossible when the public square is flooded with misinformation.

This inspired us to build a tool that directly addresses SDG 16 (Peace, Justice, and Strong Institutions), specifically its target to "ensure public access to information." Our goal wasn’t to build a “truth police,” but rather a co-pilot: an AI agent that makes fact-checking scalable, transparent, and user-friendly, strengthening the very foundation of an informed society. Enter VeriFYP.

What it does

VeriFYP is an agentic fact-checking system designed specifically for TikTok. It operates behind the scenes to analyze video content, identify factual claims, and generate constructive, sourced responses that can be posted directly in the comments. By arming users with verifiable facts, VeriFYP helps create a more accountable and transparent digital ecosystem, a cornerstone of SDG 16.

Here’s how our prototype works:

  1. Submit a Link: The user provides a link to a TikTok video they want to investigate.
  2. Extract the Data: The app gets to work, pulling a full transcript of the video's audio and any on-screen text.
  3. AI Investigation: Our agentic system analyzes the transcript to identify verifiable claims. It then scours a curated list of trusted sources, news outlets, and fact-checking organizations to gather evidence.
  4. Generate a Response: Based on its findings, the agent drafts a neutral, constructive, and well-sourced comment. If a claim is true, it provides sources to back it up. If it's false or misleading, it offers a gentle correction with supporting evidence.
  5. Review and Deploy: The final, polished response is presented to the user. They have the final say and can copy the text to post directly into the TikTok comments, armed with facts.

Our goal is to help users add light, not heat, to online discussions, fostering the kind of informed dialogue necessary to achieve all the SDGs.

How we built it

As a lean, mean, two-person team, our development process was all about collaboration and strategic execution. We started with a brainstorming session, throwing every wild idea onto a virtual whiteboard. This helped us map out the grand vision before we started writing a single line of code.

We adopted a hybrid approach to development. We used pair programming for the most complex part of the project: architecting the AI agent with LangGraph. Bouncing ideas off each other in real time was crucial for untangling the logic of an agentic workflow. For other tasks, we used a "divide and conquer" strategy, with one of us tackling the frontend with React, Tailwind CSS, and Framer Motion, while the other focused on the backend and the agent's core logic, using Flask and diving deep into LangChain and LangGraph.

Challenges we ran into

Our biggest challenge, by far, was our own ambition. We dreamed of building the entire misinformation-fighting Death Star on day one, complete with user-AI collaboration chats and proactive scanning modes. Reality and the clock quickly forced us to be ruthless with our scope. We had to make tough decisions to shelve some of our most exciting features, like the interactive "Refinement & Collaboration" chat, to ensure we could deliver a polished and functional core product.

We also learned that prompt engineering is a delicate art. Getting the AI to be consistently neutral and avoid sounding like a scolding robot took more trial and error than we expected. And, of course, no project is complete without that classic developer moment of spending an hour debugging a critical API connection, only to discover a single, infuriating typo in an environment variable. We've all been there.
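A cheap guard against that particular failure mode is a fail-fast check on required environment variables at startup, so a typo surfaces as one clear error instead of a mysterious API failure an hour later. A minimal sketch (the variable names are hypothetical):

```python
import os


def require_env(names):
    """Fail fast if any required environment variable is missing or blank."""
    missing = [n for n in names if not os.environ.get(n, "").strip()]
    if missing:
        raise RuntimeError(
            "Missing environment variables: " + ", ".join(missing)
        )
    return {n: os.environ[n] for n in names}


# Called once at app startup, before any request is served:
# config = require_env(["OPENAI_API_KEY", "TRANSCRIPT_API_KEY"])
```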

Accomplishments that we're proud of

Our proudest accomplishment is, without a doubt, getting the LangGraph agentic flow to work. We didn't just build a simple chatbot; we created a multi-step process where one agent drafts a response and a second "Red Team" agent tries to critique and find flaws in it. Seeing the AI self-correct its own work to produce a more robust and neutral output was a genuine "it's alive!" moment.
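Framework specifics aside, the draft-then-critique loop can be sketched in plain Python. Every function body below is a simplified stand-in for an LLM call (in our app, LangGraph nodes with real prompts), shown only to illustrate the control flow:

```python
def draft(claim: str, evidence: dict) -> str:
    # Stand-in drafter: the first pass is sourced but overly blunt.
    return (f"That claim is obviously misleading. "
            f"{evidence['source']} reports: {evidence['finding']}")


def critique(comment: str) -> list:
    # Stand-in "Red Team" agent: returns a list of tone problems,
    # empty when the comment passes review.
    loaded = ["obviously", "clearly", "anyone knows"]
    return [w for w in loaded if w in comment.lower()]


def revise(comment: str, issues: list) -> str:
    # Stand-in reviser: strips the loaded phrasing the critic flagged.
    for phrase in issues:
        comment = comment.replace(phrase + " ", "")
        comment = comment.replace(phrase.capitalize() + " ", "")
    return comment


def run_agent_loop(claim: str, evidence: dict, max_rounds: int = 3) -> str:
    # Draft, then loop critique -> revise until the Red Team is
    # satisfied or we hit the round limit.
    comment = draft(claim, evidence)
    for _ in range(max_rounds):
        issues = critique(comment)
        if not issues:
            break
        comment = revise(comment, issues)
    return comment
```

The round limit matters: without it, two disagreeing agents can ping-pong forever, so we always bound the loop and fall back to the latest draft.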

On a practical level, finding a reliable API for pulling TikTok video transcripts felt like striking gold and saved us from a massive engineering headache. While we had to leave some cool features on the cutting room floor, we're incredibly proud that we finished. We successfully built a functional, end-to-end prototype that proves our core concept is not just possible, but powerful.

What we learned

This project was a masterclass in the importance of the Minimum Viable Product (MVP). We learned that tackling a global challenge like the SDGs requires more than good intentions; it demands robust, ethically designed technology. Fact-checking in short-form media isn't just a technical problem; it’s a design and ethics challenge directly impacting Goal 16 (Strong Institutions) and Goal 4 (Quality Education).

In an environment where engagement is king, tone is a feature. A fact-checker that sounds arrogant or combative is worse than no fact-checker at all. We realized that for a young, broad audience, transparency and a constructive tone matter just as much as the truth itself. Modern agentic frameworks like LangGraph are incredibly powerful, but they require a completely different way of thinking about application logic and user interaction.

What's next for VeriFYP

We're just getting started! Our roadmap is focused on refining our current product and expanding our vision to scale our impact on the SDGs.

  • Immediate Priorities: First on the list is resurrecting those features from the cutting room floor, especially the "Refinement & Collaboration" chat to give users more control. Following that is aggressive testing and tuning, tweaking our prompts, and significantly expanding our list of vetted, trustworthy sources to ensure maximum reliability.

  • High-Level Features: Looking ahead, we plan to build out Mode 2 (The Guardian's Queue) and Mode 3 (The Proactive Mandate). We also want to expand our capabilities to other platforms and incorporate multilingual support, a key step to tackle global misinformation and promote Goal 10 (Reduced Inequalities).

Our ultimate goal is to build a suite of tools that empowers digital citizens, making the internet a safer, more reliable space to engage with the critical issues at the heart of the Sustainable Development Goals.

Built With

flask, framer-motion, langchain, langgraph, react, tailwind-css
