The problem your project solves

What the Fake!? is there to help flatten the misinformation curve. One of the key challenges that has emerged during the Covid-19 crisis is the sheer volume of falsehoods, half-truths and outright bulls**t being presented as fact. Much like the virus, this issue is global and ubiquitous, yet hard to keep track of. But the issue isn't new, and there are several ways of tackling it. One of the most concrete is fact-checking, carried out by news outlets and independent organisations. The challenge they face under Covid-19 is an unprecedented increase in the volume of claims they need to verify.

The solution you bring to the table (including technical details, architecture, tools used)

This is where What the Fake!? comes in.

This prototype pulls together several sources of information to offer a head-start in the fact-checking process.

It does so by providing quick information about the topic being verified: has it been covered before? Is it generating interest among news organisations and the general public? Has it been fact-checked in some countries but not others? How does the topic spread? And what other angles are related to it?

The prototype has been built around three key data sources:

The Poynter CoronaVirusFacts/DatosCoronaVirus Alliance Database

The GDELT Project

Google Trends

It combines these to provide a small report based on a keyword. The prototype is built in R (Tidyverse, Shiny, etc.).
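To give a flavour of how the pieces fit together, here is a minimal sketch of the keyword lookup in R. It is illustrative rather than the actual prototype code: it assumes the Poynter fact-checks have been exported to a local CSV (poynter_factchecks.csv and its claim column are hypothetical names), uses the gtrendsR package for Google Trends, and queries the public GDELT DOC 2.0 API.

```r
library(tidyverse)
library(gtrendsR)   # Google Trends client
library(jsonlite)   # for reading the GDELT JSON response

# Hypothetical local export of the Poynter CoronaVirusFacts Alliance database
poynter <- read_csv("poynter_factchecks.csv")

build_report <- function(keyword) {
  # 1. Existing fact-checks mentioning the keyword: has it been covered before,
  #    and in which countries?
  factchecks <- poynter %>%
    filter(str_detect(str_to_lower(claim), str_to_lower(keyword)))

  # 2. Public interest over the last three months via Google Trends
  trends <- gtrends(keyword, time = "today 3-m")$interest_over_time

  # 3. Volume of news coverage over time via the GDELT DOC 2.0 API
  gdelt_url <- paste0(
    "https://api.gdeltproject.org/api/v2/doc/doc?query=",
    URLencode(keyword), "&mode=timelinevol&format=json"
  )
  coverage <- fromJSON(gdelt_url)

  list(factchecks = factchecks, trends = trends, coverage = coverage)
}
```

In the Shiny front end, a function like this would sit behind a simple text input, with the three elements of the returned list feeding the tables and charts of the report.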

What you have done during the weekend

I developed a pen-and-paper mockup of what a tool like this would need to include. Then I looked at automating the flow of information from my data sources, which I used to prototype and refine the key components that made it into the final version. Finally, I wrestled my messy code into a front end that is held together by sticky tape and willpower.

The solution’s impact to the crisis

I would be over the moon if this saved a fact-checker a few minutes of googling around for information. That may not sound ambitious, but I strongly believe that if we can empower the experts in this crisis we will see a lot more progress.

What is potentially interesting is applying this beyond just fact-checking (as one of my mentors pointed out). With some variations this could be used to help governments, public organisations and policy-makers form a worldview about topics that are causing confusion or doubt (do people think vaccines are dangerous, what kind of cures does the public need to be warned about, do people worry about their pets, etc.).

The necessities in order to continue the project

Phew. Where do I start? So this has been pulled together quite quickly and has a lot of potential issues. For a fully-fledged tool I would probably want to invest time in the following:

Work more closely with Poynter to design a better database and create cleaner data

Fine-tune the concepts behind the key measures in the report

Rebuild the project with different technology, or find a better way of hosting it with R Shiny

Do user research to understand how to develop the prototype further (currently based on hunches rather than speaking to fact-checkers)

Re-frame a lot of the language and design for clearer communication

And I'd really like to boost this with some natural language processing techniques, potentially connect it with machine learning solutions, and consider turning it into an API (again, thank you to my mentors for these suggestions).
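On the API idea, one lightweight route in R would be wrapping the report logic with the plumber package. The sketch below is a possible direction rather than anything already built; build_report() and report.R refer to the hypothetical helper from the earlier sketch.

```r
# plumber.R -- possible sketch of exposing the keyword report as a web API
library(plumber)

source("report.R")  # loads build_report(), the hypothetical helper sketched earlier

#* Return a keyword report combining fact-checks, search interest and news coverage
#* @param keyword The claim or topic to look up
#* @get /report
function(keyword = "") {
  build_report(keyword)
}

# Run locally with: plumber::pr_run(plumber::pr("plumber.R"), port = 8000)
```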

The value of your solution(s) after the crisis

Well... whether we like it or not, I have a hunch fake news is going to stick around for a while. Which means this could be adapted for a life after the crisis. There might also be a few elections that need to happen when all of this is over...
