Fact or Fake About Me
Analyze the credibility of articles with AI-powered insights.
Who We Are
We are a team of students dedicated to helping our generation navigate the flood of information online. Our goal is to make it easier to understand the reliability of news articles, identify trustworthy sources, and know which information is credible and which is not.
Why We Chose This Solution
With the rise of misinformation, we wanted a solution that could automatically analyze articles for credibility using AI. We chose a combination of natural language processing and evidence scoring because it allows precise, sentence-level evaluation of content.
How It Solves the Problem
This tool addresses the challenge of identifying reliable information by:
- Highlighting key sentences and words that contribute to credibility.
- Providing an overall reliability score for each article.
- Giving readers a quick visual indicator (badge) of trustworthiness.
- Enabling community feedback through upvotes, downvotes, and comments.
Features
- Automatic highlighting of key words and phrases in articles.
- Highlighting of sentences classified as evidence or hearsay.
- Reliability badges that display the credibility rating of each article.
- Community interaction features: upvote/downvote articles and add comments.
How the AI Algorithm Works
Our AI evaluates articles using natural language processing and evidence scoring:
- Splits text into sentences using spaCy.
- Identifies entities such as people, organizations, dates, money, and percentages.
- Classifies sentences into categories using a zero-shot classifier:
  - Verifiable evidence: official reports, expert quotes, or primary sources.
  - Second-hand but attributed information: statements by named sources or organizations.
  - Anonymous or speculative information: rumors or unverified claims.
  - Neutral context: background information without claims.
- Calculates subjectivity and polarity scores for tone analysis.
- Aggregates all metrics into a reliability score for the article.
- Highlights both sentences and key words contributing to the score.
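The pipeline above can be sketched in miniature. This is not our production code: the real system uses spaCy for sentence splitting and a zero-shot model for classification, which are approximated here with a regex splitter and keyword heuristics so the example stays self-contained. The category weights are purely illustrative assumptions.

```python
import re

# Hypothetical per-category weights; the actual model's scoring is learned, not fixed.
CATEGORY_WEIGHTS = {
    "verifiable evidence": 1.0,
    "attributed second-hand": 0.6,
    "anonymous or speculative": 0.1,
    "neutral context": 0.5,
}

def split_sentences(text):
    # Stand-in for spaCy's sentence segmenter: split after ., !, or ? plus whitespace.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def classify_sentence(sentence):
    # Stand-in for the zero-shot classifier: crude keyword cues per category.
    s = sentence.lower()
    if any(k in s for k in ("report", "study", "official data")):
        return "verifiable evidence"
    if any(k in s for k in ("said", "stated", "according to")):
        return "attributed second-hand"
    if any(k in s for k in ("rumor", "allegedly", "sources say")):
        return "anonymous or speculative"
    return "neutral context"

def reliability_score(text):
    # Aggregate per-sentence categories into a single 0..1 article score.
    sentences = split_sentences(text)
    if not sentences:
        return 0.0
    return sum(CATEGORY_WEIGHTS[classify_sentence(s)] for s in sentences) / len(sentences)
```

For example, an article mixing one evidence-backed sentence with one speculative sentence averages the two weights, which is the same shape of aggregation the real scorer performs over richer features.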
Reliability Badges
Articles are assigned one of four reliability ratings, displayed as badges:
- CAP: Predominantly false or misleading information.
- SUS: Somewhat unreliable or questionable content.
- MID: Moderately reliable, mix of verified and unverified information.
- GOATED: Highly reliable, strong evidence and factual reporting.
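Mapping a reliability score to a badge is a simple threshold lookup. The cut-off values below are illustrative assumptions; the project text does not specify the exact boundaries.

```python
def badge_for_score(score):
    """Map a 0..1 reliability score to one of the four badges.

    Thresholds are hypothetical, chosen only to split the range into four bands.
    """
    if score >= 0.75:
        return "GOATED"  # highly reliable
    if score >= 0.5:
        return "MID"     # moderately reliable
    if score >= 0.25:
        return "SUS"     # somewhat unreliable
    return "CAP"         # predominantly false or misleading
```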
What We Would Do In the Future
We have many ideas for what would make a better solution if we had more time, but we can boil them down to four key points:
We would love to improve accuracy by using a larger, more thorough language model. We did not have a computer fast enough to run such a model within a suitable time frame, so we had to scale down.
We would love to refine our model by using the upvote and comment systems to weight it during training for more accurate results. Due to our time constraints, we were only able to incorporate community feedback in a rudimentary manner.
If we were given more time to attack this problem, we would have developed a login/user system for leaving comments, with functional nested replies.
Finally, if we were given more time, we would have improved the search for related articles on our canvas. Because this was a demo, we limited the number of similar articles from other sites to four, but we would be interested in developing an algorithm to measure similarity between articles and surface more of them.
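One simple starting point for that similarity measure, sketched here as an assumption rather than anything we built, is bag-of-words cosine similarity between two article texts:

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    # Bag-of-words cosine similarity: 1.0 for identical word distributions,
    # 0.0 when the texts share no words. A real system would likely use
    # TF-IDF weighting or sentence embeddings instead.
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0
```

Candidate articles could then be ranked by this score and the top few displayed on the canvas.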