NoirMore - AI-Powered Fact Checker

Inspiration

In an era of rampant misinformation and deceptive AI-generated content flooding social media, we recognized an urgent need for accessible fact-checking tools. The challenge of distinguishing truth from fiction in our digital landscape inspired us to create NoirMore—a detective-themed fact verification system that empowers users to investigate claims with confidence. Our ambition to tackle this critical problem in the AI Ethics & Safety track drove us to build a practical solution that anyone can use.

What it does

NoirMore is an AI-powered fact-checking tool that investigates claims and statements found online. Users simply enter any claim they want to verify, and our detective system springs into action:

  • Multi-Source Analysis: Searches across Wikipedia, Google Scholar, Reuters, Associated Press, BBC, NPR, and other trusted sources
  • Deep Content Extraction: Fetches and analyzes full article content, not just snippets
  • Intelligent Pattern Recognition: Uses advanced regex patterns and contextual analysis to identify supporting, contradicting, and neutral evidence
  • Confidence Scoring: Provides a confidence percentage (0-95%) based on weighted analysis of source reliability and evidence strength
  • Visual Reporting: Presents findings in a film noir-themed "case file" interface with color-coded verdicts (green for likely true, red for likely false, yellow for disputed)
  • Source Transparency: Links to all analyzed sources so users can verify findings themselves

The system renders verdicts as LIKELY_TRUE, LIKELY_FALSE, DISPUTED, or INSUFFICIENT_DATA, giving users clear, actionable insights into the veracity of online claims.
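
For illustration, here is a minimal sketch of how a verdict and capped confidence score can be derived from weighted evidence totals. The threshold values and function name below are hypothetical stand-ins, not our exact tuning:

```python
def render_verdict(support: float, contradict: float) -> tuple[str, int]:
    """Map weighted evidence scores to a verdict plus a confidence percentage.

    Hypothetical thresholds for illustration; the shipped tuning differs.
    """
    total = support + contradict
    if total == 0:
        return "INSUFFICIENT_DATA", 0

    # Confidence grows with the margin between support and contradiction,
    # hard-capped at 95% -- the system never claims certainty.
    confidence = min(95, round(abs(support - contradict) / total * 100))

    support_ratio = support / total
    if support_ratio >= 0.65:
        return "LIKELY_TRUE", confidence
    if support_ratio <= 0.35:
        return "LIKELY_FALSE", confidence
    return "DISPUTED", confidence
```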

How we built it

Backend (Python/Flask)

  • Web Scraping: BeautifulSoup and Requests to fetch content from news sites, academic sources, and encyclopedias
  • URL Extraction: Custom DuckDuckGo redirect decoder to recover real article URLs (first sketch after this list)
  • Content Analysis Engine (condensed in the second sketch after this list):
    • Extracts claim keywords using NLP techniques
    • Identifies claim-relevant sentences (minimum 2 keyword matches)
    • Applies regex patterns to detect agreement/disagreement/uncertainty
    • Implements weighted scoring based on source reliability (9/10 for academic/news, 8/10 for Wikipedia, 5/10 for general web)
  • API Design: RESTful endpoints with CORS support for seamless frontend-backend communication
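
DuckDuckGo's HTML results wrap each hit in a redirect of the form //duckduckgo.com/l/?uddg=<percent-encoded URL>. A minimal sketch of the decoding step, simplified from our version:

```python
from urllib.parse import parse_qs, urlparse

def decode_ddg_redirect(href: str) -> str:
    """Recover the real article URL from a DuckDuckGo redirect link.

    parse_qs already percent-decodes query values, so the uddg parameter
    comes back as a ready-to-fetch URL; fall back to the raw href otherwise.
    """
    query = parse_qs(urlparse(href).query)
    return query.get("uddg", [href])[0]

print(decode_ddg_redirect("//duckduckgo.com/l/?uddg=https%3A%2F%2Fexample.com%2Fstory"))
# -> https://example.com/story
```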
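
And a condensed sketch of the analysis pipeline itself: keyword extraction, the two-keyword relevance filter, and reliability weighting. The stop-word list and helper names are illustrative; the real module is larger:

```python
import re

STOPWORDS = {"the", "a", "an", "is", "are", "was", "were", "of", "in", "on", "to", "that"}
RELIABILITY = {"academic": 9, "news": 9, "wikipedia": 8, "general": 5}  # out of 10

def claim_keywords(claim: str) -> set[str]:
    """Lowercased content words from the claim, minus stop words."""
    return {w for w in re.findall(r"[a-z']+", claim.lower()) if w not in STOPWORDS}

def relevant_sentences(article_text: str, keywords: set[str], min_matches: int = 2) -> list[str]:
    """Keep only sentences that mention at least min_matches claim keywords."""
    sentences = re.split(r"(?<=[.!?])\s+", article_text)
    return [s for s in sentences
            if len(keywords & set(re.findall(r"[a-z']+", s.lower()))) >= min_matches]

def weighted_evidence(n_relevant: int, source_type: str) -> float:
    """Scale a raw evidence count by the source's reliability weight."""
    return n_relevant * RELIABILITY.get(source_type, 5) / 10
```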

Frontend (React/Vite)

  • Component Architecture: Modular React components (UrbanNoirBackground, Report, Source)
  • State Management: React hooks (useState) for dynamic report updates and loading states
  • Urban Noir UI: Custom CSS with vintage detective aesthetics
    • Aged paper textures and typewriter fonts (Special Elite, Courier Prime)
    • Color-coded confidence levels and verdicts
    • Animated "new report" notifications with red pulse effect
    • Responsive modal for project information
  • User Experience: Real-time feedback, hover effects, and keyboard shortcuts (Enter to submit)

Integration

  • Vite proxy configuration to route /submit requests to the Flask backend (port 5050)
  • JSON data exchange between frontend and backend (the Flask side of this contract is sketched below)
  • Error handling with user-friendly alerts and console logging
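
A minimal sketch of the Flask side of this contract. The JSON field names are illustrative, and investigate() is a placeholder for the full search-fetch-analyze pipeline:

```python
from flask import Flask, jsonify, request
from flask_cors import CORS

app = Flask(__name__)
CORS(app)  # lets the Vite dev server call us across origins

def investigate(claim: str) -> dict:
    """Placeholder for the real search -> fetch -> analyze pipeline."""
    return {"claim": claim, "verdict": "INSUFFICIENT_DATA", "confidence": 0, "sources": []}

@app.route("/submit", methods=["POST"])
def submit():
    claim = (request.get_json(silent=True) or {}).get("claim", "").strip()
    if not claim:
        return jsonify({"error": "No claim provided"}), 400
    return jsonify(investigate(claim))

if __name__ == "__main__":
    app.run(port=5050)
```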

Challenges we ran into

  1. Scope Refinement: Our original idea was too ambitious for a 24-hour hackathon. We had to pivot to a more focused concept that was both unique and implementable with our current skill level.

  2. False Positives in Analysis: Our initial keyword-based analysis flagged articles as "supporting" when they merely mentioned keywords without actually agreeing with the claim. We solved this by:

    • Implementing context-aware sentence extraction
    • Using regex patterns for grammatical structures (e.g., "is true" vs. just "true"; see the first sketch after this list)
    • Requiring claim-relevant sentences (minimum keyword overlap)

  3. URL Extraction: DuckDuckGo's redirect URLs obscured actual article links. We built a custom URL decoder to extract real URLs from the uddg parameter.

  4. Content Fetching: Many news sites have complex HTML structures. We created a multi-selector fallback system to reliably extract article content across different site architectures (see the second sketch after this list).

  5. Analysis Accuracy: Balancing sensitivity (detecting true claims) with specificity (avoiding false positives) required extensive tuning of:

    • Relevance thresholds
    • Pattern matching weights
    • Verdict confidence calculations

  6. CORS Issues: Frontend-backend communication initially failed due to browser CORS restrictions; we resolved this by enabling CORS in Flask and configuring Vite's dev proxy.
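
To make the grammatical-structure fix from challenge 2 concrete, here is a simplified version of the pattern approach; these lists are far shorter than the ones we actually tuned:

```python
import re

# Require grammatical context, not bare keywords: "is true" signals agreement,
# while the lone word "true" does not.
AGREE_PATTERNS = [
    r"\bis\s+(?:in\s+fact\s+)?true\b",
    r"\b(?:confirmed|verified|proven)\b",
    r"\bevidence\s+(?:shows|supports|suggests)\b",
]
DISAGREE_PATTERNS = [
    r"\bis\s+(?:not\s+true|false)\b",
    r"\b(?:debunked|refuted|disproven)\b",
    r"\bno\s+evidence\b",
]

def classify_sentence(sentence: str) -> str:
    """Label a claim-relevant sentence as supporting, contradicting, or neutral."""
    s = sentence.lower()
    if any(re.search(p, s) for p in DISAGREE_PATTERNS):
        return "contradicting"
    if any(re.search(p, s) for p in AGREE_PATTERNS):
        return "supporting"
    return "neutral"
```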
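
And a sketch of the multi-selector fallback from challenge 4; the selector list and the 200-character cutoff are illustrative stand-ins for our tuned values:

```python
from bs4 import BeautifulSoup

# Try article-specific containers first, then progressively generic fallbacks.
CONTENT_SELECTORS = ["article", "div.article-body", "div.story-body", "main", "body"]

def extract_article_text(html: str) -> str:
    """Return the paragraph text of the first selector that yields real content."""
    soup = BeautifulSoup(html, "html.parser")
    for selector in CONTENT_SELECTORS:
        node = soup.select_one(selector)
        if node:
            text = " ".join(p.get_text(" ", strip=True) for p in node.find_all("p"))
            if len(text) > 200:  # skip near-empty matches and boilerplate shells
                return text
    return ""
```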

Accomplishments that we're proud of

  • Functional Product: Built a complete, working fact-checking system from scratch in under 24 hours
  • Sophisticated Analysis: Implemented rule-based NLP techniques, including regex pattern matching, weighted source scoring, and contextual sentence extraction
  • Polished UI: Designed an immersive film noir interface that makes fact-checking engaging
  • Multi-Source Integration: Successfully scraped and analyzed content from 7+ different sources
  • Accuracy Improvements: Iteratively refined our analysis algorithm to significantly reduce false positives
  • Team Collaboration: Effectively divided frontend and backend work while maintaining seamless integration
  • Participation: Competed in NC State's hackathon and delivered a complete presentation-ready product

What we learned

Technical Skills

  • Web scraping techniques and HTML parsing with BeautifulSoup
  • Regex pattern matching for natural language analysis
  • React state management and component lifecycle
  • RESTful API design and CORS configuration
  • Debugging complex analysis algorithms through iterative testing

Project Management

  • Scope Control: Keep ideas simple, focused, and achievable within time constraints
  • Iterative Development: Start with MVP, then add features incrementally
  • Testing Early: Validate core functionality before building advanced features
  • Pivot Quickly: Don't be afraid to change direction when initial plans prove unfeasible

Soft Skills

  • Ideation is challenging—brainstorming practical, unique solutions takes time
  • Clear communication between frontend and backend developers is critical
  • User experience matters—even technical tools benefit from thoughtful design
  • Debugging requires patience and systematic problem-solving

What's next for NoirMore

Short-term Enhancements

  • Claim History: Save and review previously investigated claims
  • Export Reports: Download case files as PDFs for sharing and archiving
  • Source Filtering: Allow users to select which source types to search (academic, news, encyclopedias)
  • Bias Detection: Identify potential political or ideological bias in sources

Long-term Vision

  • Browser Extension: One-click fact-checking for claims encountered while browsing
  • Social Media Integration: Analyze claims directly from Twitter, Facebook, and Reddit posts
  • API Access: Open API for developers to integrate NoirMore into their applications
  • Machine Learning: Train custom models on verified fact-check datasets to improve accuracy
  • Multi-language Support: Expand beyond English to serve global users
  • Community Verification: Allow users to submit their own analysis and create a collaborative fact-checking network
  • Real-time Alerts: Notify users when new evidence emerges about previously checked claims

Differentiating Features

  • Narrative Explanations: Generate plain-language summaries of why verdicts were reached
  • Visual Evidence Maps: Graph relationships between sources and claims
  • Claim Clustering: Identify related claims and misinformation campaigns
  • Expert Network: Partner with fact-checking organizations like Snopes and PolitiFact for enhanced credibility

NoirMore brings transparency to the digital age—one claim at a time. In a world of shadows and misinformation, we're the detectives shining a light on the truth.

Built With

  • Python, Flask, BeautifulSoup, Requests
  • React, Vite
  • Custom CSS