Inspiration

What it does

How we built it

Grokipedia Bias Analyzer

Overview

This project explores bias in Wikipedia articles by comparing them to their counterparts in Grokipedia, an AI-driven encyclopedia designed for greater objectivity. Using generative AI, we extract semantic concepts, build knowledge graphs, and visualize clusters to detect and highlight biases, promoting more neutral knowledge representation.

Problem Statement

Wikipedia, once trusted as a crowd-sourced encyclopedia, faces ongoing criticism for factual inaccuracies, vulnerability to vandalism, and systemic biases, often a mild to moderate left-leaning tilt in political topics. This is supported by computational analyses showing negative sentiment toward right-of-center entities and by admissions from co-founder Larry Sanger about ideological capture. Grokipedia is an AI-driven encyclopedia that aims to debias Wikipedia articles or rebuild them from scratch for greater objectivity.

Challenges with reliable sources and consensus

Legacy media and activists often promote one-sided narratives, suppressing alternative views under labels like "misinformation" or "political correctness". Examples include transgender activism, COVID-19 origins, critical race theory, and illegal immigration, where "scientific consensus" is built on cherry-picked evidence and ignores fundamental questions (e.g., rejecting "biological female" as a definition of woman and thus being unable to answer "What is a woman?" without a circular reference). These challenges make it impossible to rely on the authority of consensus.

Project Approach

This project uses generative AI to extract semantic concepts from Grokipedia articles, build knowledge graphs of related ideas, and visualize clusters across dimensions like core knowledge, potential biases, and comparisons with Wikipedia counterparts. By analyzing patterns in deceptive techniques (e.g., loaded language, omission), it aims to detect and highlight biases for more neutral knowledge representation. We use an LLM-as-a-judge with strict prompts to produce these metrics.
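The LLM-as-a-judge step can be sketched as a strict, structured prompt plus a validator for the model's verdict. The prompt template, the metric names, and the JSON reply format below are illustrative assumptions, not the project's actual implementation; the raw reply would come from whichever LLM API the pipeline uses.

```python
import json

# Metrics the judge is asked to score; names are illustrative assumptions.
METRICS = ["loaded_language", "omission", "framing", "sentiment_divergence"]

# Strict prompt: fixed rubric, forced JSON output, no free-form prose allowed.
JUDGE_PROMPT = """You are a strict, impartial bias judge.
Compare the two encyclopedia articles below on the same topic.
For each metric in {metrics}, return a score from 0 (absent) to 5 (pervasive)
for EACH article. Respond with JSON only, no prose:
{{"wikipedia": {{...}}, "grokipedia": {{...}}}}

--- WIKIPEDIA ---
{wiki_text}

--- GROKIPEDIA ---
{grok_text}
"""

def build_judge_prompt(wiki_text: str, grok_text: str) -> str:
    """Fill the strict judge template for one Wikipedia/Grokipedia twin pair."""
    return JUDGE_PROMPT.format(metrics=METRICS, wiki_text=wiki_text, grok_text=grok_text)

def parse_verdict(raw: str) -> dict:
    """Parse the judge's JSON reply, checking every metric is scored 0-5."""
    verdict = json.loads(raw)
    for source in ("wikipedia", "grokipedia"):
        for metric in METRICS:
            score = verdict[source][metric]
            if not (isinstance(score, int) and 0 <= score <= 5):
                raise ValueError(f"bad score for {source}/{metric}: {score!r}")
    return verdict
```

Validating the verdict before use matters here: a judge that can answer in free text drifts toward editorializing, while a forced 0-5 rubric keeps the twin-page comparison numeric and repeatable.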
We use an LLM to label the core concepts and features of each article.

Project Features

- Knowledge graph: build a knowledge graph for each article pair using entity-relation extraction, and visualize clusters (e.g., knowledge vs. bias nodes).
- Evaluations: define LLM-as-a-judge metrics and evaluate Grokipedia/Wikipedia twin pages on metrics such as entity overlap and sentiment divergence. Add bias-detection patterns (e.g., flag loaded terms like "conspiracy" vs. "hypothesis") and use AI to identify propaganda techniques (e.g., omission, framing).
- Output: compare the twin pages on these metrics, side by side.
- Add missing concepts: build an X app that fetches a user's feed and maps posts to Grokipedia concepts; if a concept is missing, it prompts a model to add one.

Concepts to Explore

COVID-19 origins (e.g., the lab leak theory, once dismissed as racist).
Wikipedia: https://en.wikipedia.org/wiki/COVID-19_lab_leak_theory
Grokipedia: https://grokipedia.com/page/COVID-19_lab_leak_theory
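Two of the metrics above, entity overlap and loaded-term flagging, can be sketched with the standard library alone. The loaded-term list and the capitalized-run entity heuristic are placeholder assumptions; the real pipeline would use NER (e.g., spaCy) or an LLM extraction pass.

```python
import re

# Placeholder loaded-term pairs: framing term mapped to a more neutral counterpart.
LOADED_TERMS = {"conspiracy": "hypothesis", "debunked": "disputed", "falsely": "controversially"}

def naive_entities(text: str) -> set[str]:
    """Crude entity heuristic: runs of two or more capitalized words."""
    return set(re.findall(r"\b(?:[A-Z][a-z]+(?:\s[A-Z][a-z]+)+)", text))

def entity_overlap(a: str, b: str) -> float:
    """Jaccard overlap of entity sets between two twin articles (1.0 = identical)."""
    ea, eb = naive_entities(a), naive_entities(b)
    if not ea and not eb:
        return 1.0
    return len(ea & eb) / len(ea | eb)

def flag_loaded_terms(text: str) -> list[str]:
    """Return each loaded term found in the text, paired with its neutral alternative."""
    lowered = text.lower()
    return [f"{term} -> {alt}" for term, alt in LOADED_TERMS.items() if term in lowered]
```

A low entity overlap between twin pages suggests one article omits people, places, or events the other covers, which feeds the "omission" pattern; the flagged terms feed the loaded-language side of the side-by-side comparison.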

Challenges we ran into

Accomplishments that we're proud of

What we learned

What's next for Grokipedia Bias Evals

Built With

Share this project:

Updates