Inspiration
In today’s digital landscape, misinformation spreads at an alarming rate, often outpacing efforts to correct it. The shock value and sensational nature of falsehoods make them more engaging and, unfortunately, more believable than the truth. As a result, misinformation often takes hold before accurate information can even surface. With the rise of AI-generated content, misinformation is no longer limited to text, as deepfakes are becoming increasingly sophisticated. This poses a significant threat, particularly to public figures whose reputations and credibility rely on trust. The consequences of such falsehoods extend beyond individual harm, influencing public perception, decision-making, and even societal stability. Recognizing the urgency of this issue, our team has developed a platform designed to help users efficiently debunk misinformation. By consolidating fact-checking tools into a single, accessible space, we aim to empower individuals to discern truth from falsehood with ease. Our goal is a reliable, intuitive solution that fights misinformation at its source, ensuring that accurate information prevails in the digital age.
What it does
inCredible AI is your trusted companion for reliable fact-checking and deepfake detection. Our fact-checking feature, inFact, lets users upload text, or images containing text, that they want verified. Using advanced AI, we analyze the content and provide a falsehood percentage indicating the likelihood of misinformation, along with verified sources that debunk any false claims. To ensure clarity, we also summarize all supporting evidence, making the facts easier to understand. As a visual aid, we generate a storyboard with descriptions that illustrate the possible consequences of the misinformation and the motives behind it. For deepfake detection, our inDetect feature lets users upload images or videos they suspect have been manipulated. If a deepfake is detected in a video, our system captures the exact frame where the alteration occurs and alerts the user, who then receives a prompt with the option to share the flagged image with their contacts, helping to prevent the spread of misleading content. Additionally, visitors to our website can explore the latest debunked news stories; the three most recent are featured on our homepage and updated daily. By clicking the ‘Read Article’ button, users can access the full reports and stay informed about the latest misinformation trends.
How we built it
For the front-end, we built the website with HTML, CSS, and React, ensuring a seamless and engaging user experience. We also designed our own logo to give inCredible AI a distinct identity.

Our fact-checking feature, inFact, leverages Google’s Fact Check API to verify information. The results are then processed by Gemini, which generates a falsehood percentage along with explanations of why the news is likely true or false. To give users additional context, we integrated Serp API to retrieve related debunking sources for further reading. For the storyboard, we used DALL·E 3 to generate a four-panel visual representation of the potential consequences and motives behind the misinformation, and OpenAI’s API to write a concise explanation for each panel, helping users absorb key takeaways in a fun and educational way. If a user uploads a screenshot of a news article, AWS Textract extracts the text, which is then passed to inFact for fact-checking.

Our deepfake detection feature, inDetect, handles the storage and analysis of uploaded images and videos. Files are stored securely in AWS S3 for scalable, cost-effective storage. The Arya API then analyzes the content to determine whether it has been manipulated. If the uploaded file is a video, it is first preprocessed with a YOLO model, which extracts the specific frame containing the deepfaked figure, allowing for precise identification of alterations.

In the debunked myths section, users can explore the latest misinformation that has been exposed. Using Python’s BeautifulSoup library, we scrape reliable fact-checking sources to feature the three most recent debunked news stories. Through these technologies, inCredible AI delivers an intuitive and reliable platform for combating misinformation.
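The scraping step for the debunked myths section can be sketched as below. The HTML structure and the `article.debunked` selector are illustrative stand-ins; the real selectors depend on the fact-checking site being scraped.

```python
# Minimal sketch of the debunked-myths scraper. The markup below is a
# stand-in for a real fact-checking site's listing page; the class names
# are hypothetical.
from bs4 import BeautifulSoup

def latest_debunked(html: str, limit: int = 3) -> list[dict]:
    """Extract the most recent debunked stories from a listing page."""
    soup = BeautifulSoup(html, "html.parser")
    stories = []
    for item in soup.select("article.debunked"):  # hypothetical selector
        title = item.select_one("h2")
        link = item.select_one("a")
        if title and link:
            stories.append({"title": title.get_text(strip=True),
                            "url": link["href"]})
    return stories[:limit]

# Example usage with stand-in markup:
sample = """
<article class="debunked"><h2>Claim A debunked</h2><a href="/a">Read</a></article>
<article class="debunked"><h2>Claim B debunked</h2><a href="/b">Read</a></article>
<article class="debunked"><h2>Claim C debunked</h2><a href="/c">Read</a></article>
<article class="debunked"><h2>Claim D debunked</h2><a href="/d">Read</a></article>
"""
print(latest_debunked(sample))
```

In practice the HTML would come from an HTTP request to the source site rather than a string literal, and the `limit` of three matches the homepage feature.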
We used Flask as the backend framework to handle the core logic of inCredible AI. Flask serves as the bridge between the front-end and back-end, managing data processing, API integrations, and communication between different components of the system. It enables seamless interaction between our fact-checking and deepfake detection features by handling requests, processing inputs, and returning structured responses. When a user submits a request, such as uploading text for fact-checking or an image/video for deepfake detection, the front-end, built with React, sends a POST request to the Flask server. Flask then processes the request, interacts with the relevant APIs, and compiles the results. Once the analysis is complete, Flask sends the processed data back to React, which dynamically updates the user interface with the results.
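The React-to-Flask round trip might look like the sketch below. The route name and response fields are assumptions, and the stub stands in for the real pipeline of Google Fact Check API, Gemini, and Serp API calls.

```python
# Sketch of a Flask endpoint receiving a fact-check request from React.
# run_fact_check is a stub: in the real system it would call the external
# fact-checking APIs and compile their results.
from flask import Flask, jsonify, request

app = Flask(__name__)

def run_fact_check(text: str) -> dict:
    # Stand-in for the real API pipeline; returns the response shape only.
    return {
        "falsehood_percentage": 0,  # would come from Gemini's analysis
        "sources": [],              # would come from Serp API
        "summary": f"Checked {len(text)} characters of input.",
    }

@app.route("/api/factcheck", methods=["POST"])
def factcheck():
    payload = request.get_json(silent=True) or {}
    text = payload.get("text", "")
    if not text:
        return jsonify({"error": "no text provided"}), 400
    return jsonify(run_fact_check(text))
```

On the front-end, React would `fetch("/api/factcheck", {method: "POST", ...})` with a JSON body and render the returned fields into the results view.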
Challenges we ran into
One of the biggest challenges was fact-checking accuracy and reliability. Since misinformation can be nuanced and context-dependent, we had to carefully integrate Google’s Fact Check API while refining how Gemini interprets the results and calculates the falsehood percentage. Ensuring that the sources retrieved by Serp API were credible and relevant required fine-tuning search parameters to avoid misleading or biased information.

For deepfake detection, preprocessing video files proved to be a technical challenge. We had to implement YOLO for frame extraction and integrate the Arya API efficiently, ensuring that detection was both accurate and fast. Storing large media files also posed a problem, which we solved by leveraging AWS S3 for scalable, cost-effective storage.

On the front-end, balancing a visually appealing UI with responsiveness was a key concern. We built the site with React, HTML, and CSS, but optimizing loading times, especially for storyboard generation with DALL·E 3, was challenging. Generating four-part storyboards while keeping the experience smooth required careful API request handling and caching.

On the backend, managing multiple API calls and processing uploaded images, videos, and text efficiently was complex. Using Flask, we had to optimize request handling to avoid slow response times, particularly when fact-checking large text inputs or analyzing deepfake videos. Enabling seamless communication between Flask and React also required careful API structuring and error handling.

Web scraping for the debunked myths section came with its own difficulties. Since misinformation spreads across different platforms, we had to ensure that the data scraped with BeautifulSoup was always up to date and sourced from reputable fact-checking sites. Constantly changing website structures also meant that our scraping scripts needed regular updates.
Despite these challenges, overcoming them strengthened our technical skills and reinforced the importance of scalability, accuracy, and user experience in building a robust misinformation detection platform. Each obstacle helped shape inCredible AI into a more reliable and effective tool, and we continue to refine our system to improve its performance.
Accomplishments that we're proud of
This was our first time completing a hackathon project, and we are incredibly proud of what we achieved. As beginner hackathon participants, we successfully created a product we truly believe in. The time and effort we dedicated to this project, along with the valuable lessons we learned, made the experience truly rewarding. Kudos to the TechFest 2025 Committee for organizing such an amazing and relevant hackathon!
What we learned
We learned that to create something impactful, dedication and perseverance are key—even if it means sacrificing sleep! The experience reinforced the importance of teamwork, problem-solving, and continuously iterating on solutions to build something that can genuinely make a difference.
What's next for Team29_inCredible AI
We’re committed to continuously enhancing our platform to make fact-checking and deepfake detection even more accessible, reliable, and impactful.

Currently, inFact and inDetect primarily support English, but misinformation spreads across all languages and regions. We aim to integrate multi-language fact-checking and deepfake detection, allowing users to verify information in different languages. By leveraging advanced Natural Language Processing (NLP) models and multilingual APIs, we can break language barriers and provide fact-checking support on a global scale.

To enhance transparency and prevent tampering, we plan to store fact-checked results on a blockchain ledger. This decentralized approach ensures that verified information remains immutable and accessible, allowing users to trace the credibility of fact-checks over time. By integrating blockchain technology, inCredible AI can offer a trustless verification system that reinforces the authenticity of fact-checking reports.

Misinformation evolves rapidly, and community input can play a crucial role in identifying new falsehoods. We aim to introduce a crowdsourced fact-checking system where users can submit claims they believe need verification. A voting and credibility scoring system will let the community rate the reliability of claims, empowering users to actively combat misinformation while keeping fact-checks accurate and community-driven.

Fact-checking and deepfake detection should be available anytime, anywhere. To make our platform more accessible, we plan to develop a mobile app version of inCredible AI for both iOS and Android, allowing users to fact-check news articles, detect deepfakes, and access verified information on the go, so misinformation can be addressed in real time.

Finally, inDetect currently requires users to upload images or videos for analysis. In the future, we aim to introduce deepfake detection from URLs, letting users simply paste a YouTube or social media video link to check whether it contains manipulated content. Automating this process would make deepfake detection more seamless and user-friendly, without requiring manual uploads.
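The tamper-evident ledger described above could be prototyped as a simple hash chain: each fact-check record stores the hash of the previous record, so altering any past entry invalidates everything after it. This is a minimal sketch of that invariant, not the planned blockchain implementation, and all names are illustrative.

```python
# Minimal hash-chain sketch of a tamper-evident fact-check ledger. A real
# deployment would use an actual blockchain; this shows only the core
# property that editing history breaks the chain.
import hashlib
import json

def _hash(body: dict) -> str:
    # Canonical JSON so the same record always hashes identically.
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append(ledger: list[dict], claim: str, verdict: str) -> None:
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    record = {"claim": claim, "verdict": verdict, "prev": prev}
    record["hash"] = _hash({k: record[k] for k in ("claim", "verdict", "prev")})
    ledger.append(record)

def verify(ledger: list[dict]) -> bool:
    prev = "0" * 64
    for record in ledger:
        body = {k: record[k] for k in ("claim", "verdict", "prev")}
        if record["prev"] != prev or record["hash"] != _hash(body):
            return False
        prev = record["hash"]
    return True
```

Appending is cheap, and any consumer can re-run `verify` to confirm that no published fact-check has been silently edited.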