Inspiration

The overwhelming flood of information in today's media landscape often traps individuals within echo chambers, reinforcing biases and limiting exposure to diverse perspectives. We were inspired to build a tool that not only detects political bias in news articles but also provides reasoning behind the bias. This empowers readers to engage with media critically and fosters a more informed society.

What it does

Our tool classifies the political bias of news articles and explains the rationale behind each classification. It leverages fine-tuned large language models (LLMs) to detect bias across the ideological spectrum (left, center, right) and generates detailed reasoning for every label it assigns. Users can interact with the system via keyword searches, pasted article text, or submitted news URLs.
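The exact prompt format is not shown in this write-up; as a minimal sketch, an article might be wrapped into a classification request for the fine-tuned model along these lines (the function name, labels, and wording are illustrative, not the actual prompt):

```python
# Hypothetical prompt builder for the bias classifier. The real prompt
# used with the fine-tuned Llama/Gemma models may differ.
LABELS = ["left", "center", "right"]

def build_bias_prompt(article_text: str) -> str:
    """Wrap an article in an instruction asking for a label plus rationale."""
    return (
        "Classify the political bias of the following news article as one of "
        f"{', '.join(LABELS)}, then explain your reasoning.\n\n"
        f"Article:\n{article_text}\n\n"
        "Answer with the label on the first line and the rationale below it."
    )

prompt = build_bias_prompt("The senator's plan drew sharp criticism...")
```

The same template works whether the article text comes from direct input or from a fetched URL, which keeps the three entry points feeding one model interface.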

How we built it

We used the BIGNEWS dataset, which clusters news articles by ideological bias. The system leverages fine-tuned LLMs (Llama and Gemma models) for bias detection and explanation. We applied QLoRA for efficient fine-tuning and integrated a Python Flask backend with a MongoDB database to handle user queries, store data, and process articles. Information retrieval methods such as TF-IDF vectorization power the article search and round out the user experience.
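As a rough illustration of the retrieval side, a TF-IDF search over stored articles can be sketched with scikit-learn as below; the corpus here is a toy stand-in for articles fetched from MongoDB, and the function name is ours, not the project's API:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus standing in for articles stored in MongoDB.
articles = [
    "Senate passes sweeping climate bill after late-night vote",
    "New tax cuts favor small businesses, supporters say",
    "Court blocks state's voting restrictions ahead of election",
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(articles)  # one TF-IDF row per article

def search(query: str, top_k: int = 2) -> list[str]:
    """Return the top_k articles most similar to the query by cosine score."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix).ravel()
    ranked = scores.argsort()[::-1][:top_k]
    return [articles[i] for i in ranked]

results = search("climate legislation vote")
```

In the deployed system the ranked articles would then be passed to the fine-tuned LLM for bias classification, so retrieval quality directly shapes what users see.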

Challenges we ran into

One major challenge was the computational cost of fine-tuning large models. We had to limit our dataset size and carefully manage resources while ensuring that the models still performed well. Additionally, generating reliable explanations for bias required significant manual validation to prevent hallucinations from the LLMs; ensuring the system produced accurate, trustworthy results was critical.

Accomplishments that we're proud of

We successfully developed a system that not only detects political bias but also provides clear explanations, a feature missing from many other bias-detection tools. Despite working with a limited dataset, we achieved strong performance benchmarks and created an interactive, user-friendly interface. We're proud of how we applied advanced techniques like QLoRA to make our solution efficient and scalable.

What we learned

Throughout the project, we learned about the intricacies of fine-tuning LLMs for task-specific outputs and the challenges involved in managing computational resources. We also gained deeper insights into the complexities of media bias and how LLMs can be fine-tuned to improve their reasoning capabilities. Additionally, integrating bias detection with information retrieval enhanced our understanding of combining NLP techniques for practical applications.

What's next for Breaking the Echo Chamber Political Bias Detection LLMs

In the future, we plan to expand our dataset and improve the model's accuracy by fine-tuning it on a larger corpus. We also aim to explore bias detection beyond political bias, such as gender or racial bias. Another goal is to integrate a debiasing feature that neutralizes biased language in articles. As the landscape of LLMs evolves, we will continue improving our system’s performance and exploring real-world applications of bias detection.
