The problem your project solves

Fake news causes misinformation to spread, posing serious security and health threats to society. Unfortunately, it is not possible to prevent fake news from circulating, but we can aim to minimise its spread. The best way to fight fake news is therefore to encourage people’s critical thinking. How can people become more critical about what they read and avoid sharing news that may harm others? This is not a trivial task: fake news is often shared on social media feeds on impulse, in reaction to a sensational title (i.e. “clickbait”) or after a superficial skim-read of the article’s content. Whether a fake news story spreads can therefore depend on a decision taken in a split second. We need an equally quick solution that can inhibit that “click & share” moment. This is exactly what Sentimentext does. Sentimentext helps you check the emotional content of news and encourages you to read it with an extra pinch of salt.

The solution you bring to the table

Sentimentext is a browser extension that automatically warns users about news articles that linguistically resemble the style of fake news. Sentimentext scans the content of a webpage and analyses how emotionally loaded it is by looking at its narrative style and the patterns of adjectives and syntax used. Sentimentext does not tell you whether a news story is fake or not: we do not want to tell people what to think, but to give them a tool that helps them think.

Fake news has been found to purposefully employ emotional jargon and emotional narratives to hamper logical thinking and foster emotional responses instead. It wants to make you feel something. The emotional load of a text is usually analysed with a machine learning technique called sentiment analysis, which scores text on two values: polarity and subjectivity. Several studies show that fake news is likely to have negative polarity [Kapusta et al. 2020] and high subjectivity [Volkova et al. 2017]. We want to use this distinctive emotional signature of fake news to fight misinformation and provide citizens with a simple tool that alerts them to such suspicious patterns.
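As a rough illustration of the polarity/subjectivity idea (this is a toy sketch, not our actual extension code: the word lists below are made-up miniature lexicons, and a real implementation would rely on an established sentiment library such as TextBlob or VADER):

```python
# Toy polarity/subjectivity scorer. POLARITY maps words to a score in [-1, 1];
# SUBJECTIVE is a set of emotionally loaded words. Both lexicons are
# hypothetical examples, far smaller than anything usable in practice.
POLARITY = {"miracle": 0.8, "cure": 0.5, "deadly": -0.7, "hoax": -0.6, "report": 0.0}
SUBJECTIVE = {"miracle", "deadly", "shocking", "unbelievable"}

def score(text):
    # Normalise: strip punctuation, lowercase.
    words = [w.strip(".,!?:").lower() for w in text.split()]
    # Polarity: average score of the words found in the lexicon.
    hits = [POLARITY[w] for w in words if w in POLARITY]
    polarity = sum(hits) / len(hits) if hits else 0.0
    # Subjectivity: fraction of words flagged as emotionally loaded.
    subjectivity = sum(w in SUBJECTIVE for w in words) / len(words) if words else 0.0
    return polarity, subjectivity

p, s = score("Shocking: deadly hoax spreads!")
# Negative polarity and high subjectivity together are the kind of
# signal Sentimentext treats as suspicious.
```

A headline like the one above scores strongly negative and highly subjective, which is exactly the combination the studies cited above associate with fake news.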

What you have done during the weekend

We started by brainstorming and running background research in psychology, neuroscience and sociolinguistics to understand the correlation between fake news and emotions. We then prioritised our MVP, developing a working browser extension on Saturday. The team worked hard to build a working algorithm based on sentiment analysis and collected a database of example articles (both fake and genuine) addressing the Covid-19 crisis to test it.

The solution’s impact to the crisis

The current pandemic has highlighted the problem of fake news and misinformation. Covid-19 fake news has contributed to the spread of false beliefs about which health and safety measures to follow, sometimes with seriously harmful consequences, such as the numerous homemade remedies to “kill” Covid-19 that have been circulating since the beginning of the pandemic. Covid-19-related fake news not only threatens our health systems but also undermines the overall sense of social security, contributing to the diffusion of panic and fear and working against the efforts of governments and civil society. Sentimentext can have a great impact in blocking the diffusion of fake news by intervening at an early stage, fostering readers’ critical reflection and preventing the impulsive “click & share” behaviour.

The necessities in order to continue the project

In the sentiment analysis literature, studies have examined other characteristics of text in addition to polarity and subjectivity: psycholinguistic and moral-foundation cues [Volkova et al. 2017, Reis et al. 2019], and approaches that identify the relation between particular emotions and fake news [Gilleran 2019]. We think that including these features would be a useful direction for a future version of Sentimentext. Beyond sentiment analysis and similar approaches, there are plenty of other ways to target fake news, several of which are also being explored in this hackathon. Integrating approaches such as news outlet reputation (e.g., https://www.adfontesmedia.com/interactive-media-bias-chart/) or fact checking based on trusted data sources [Karagiannis et al. 2020] could of course further improve our application.

The value of your solution(s) after the crisis

The problem of fake news is evergreen and will continue beyond the Covid-19 pandemic. To help solve it and have an impact, we need to invest in training citizens’ ability to reflect critically and to be wary of “too good/too bad to be true” news. The greatest feature of our solution is its near-zero-cost scalability, with the potential to reach and help millions of people worldwide.

The URL to the prototype [Github, Website,...]

https://github.com/trimalcione/sentimentext

Who might be interested:

- Social media platforms that need to make sure their anti-misinformation strategies are empirically grounded (social media companies have been under tremendous pressure to act against the proliferation of misinformation on their platforms)
- Parents or carers of young social media users (10-18 years old) who want to limit young people’s exposure to potentially dangerous fake news and stimulate their critical reflection
- Schools and educational institutions that want to guarantee a healthy digital environment for their students

References

Gilleran, Brady. "Identifying Fake News using Emotion Analysis." (2019).

Kapusta, Jozef, Ľubomír Benko, and Michal Munk. "Fake News Identification Based on Sentiment and Frequency Analysis." (2020). doi:10.1007/978-3-030-36778-7_44.

Karagiannis, Georgios, et al. "Scrutinizer: A Mixed-Initiative Approach to Large-Scale, Data-Driven Claim Verification." arXiv preprint arXiv:2003.06708 (2020).

Reis, Julio CS, et al. "Supervised learning for fake news detection." IEEE Intelligent Systems 34.2 (2019): 76-81.

Volkova, Svitlana, et al. "Separating facts from fiction: Linguistic models to classify suspicious and trusted news posts on twitter." Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). 2017.
