Which voices can we take to be expert voices?

The multiplicity of media – social media, mainstream media, alternative media, citizen media… – complicates this polyphony of voices even further.

The crisis does not only demand fact-checking. Sometimes facts are real but ambiguously worded. Often, facts and opinions mingle. In different cultures, facts are understood, or even translated, differently. Within the same culture, the same opinion carries different weight depending on the territory or community in which it circulates. In the COVID-19 crisis, the question is not only how to protect oneself from fake news, but also how to understand the formulation of different relative truths.

HECTOR

The Hub for Experts in Crisis and Truths of Random

The keywords of our project: identify and decide.

Identify

What if we were able to identify what kind of expertise is involved in this issue?

Different forms of expertise, based on different kinds of knowledge (theoretical, practical, academic, fame-based…), circulate on social media. Each usually follows its own logic. This creates communities, or clusters of information flows, carrying a number of different truths. Therefore, when one comes across a specific discourse, one does not know how to categorize it.

Decide

We can only truly decide when we understand the picture as a whole.

Our solution is designed for two targets:

  • Citizens: live tracking of confrontations between experts and of the circulation of expertise allows citizens to understand who the experts are, and the relevance and social impact of information (through social engagement: retweets and likes; a toy scoring sketch follows this list).

  • Public decision-makers (government and media): cultural sensibilities shift from one situation to another. In a crisis, public information and decisions are embedded within a flow of diverse expertise. Positioning one's voice in this polyphony is a matter of knowing one's public and engaging with the expertise present in the public space.
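
To make the "social impact" idea concrete, here is a hedged sketch of how engagement counts could be combined into a score; the weighting below is an assumption for illustration, not a validated metric.

```python
# A toy social-impact score from engagement counts.
# The weights are assumptions for illustration, not a validated metric.
def impact_score(retweets: int, likes: int, replies: int = 0) -> float:
    """Weight retweets above likes: a retweet actively re-circulates a
    message, while a like only signals approval."""
    return 2.0 * retweets + 1.0 * likes + 1.5 * replies

# Hypothetical tweets with engagement counts.
tweets = [
    {"text": "Official update on hospital capacity", "rt": 340, "likes": 890},
    {"text": "My personal lockdown routine", "rt": 12, "likes": 150},
]
for t in tweets:
    print(f"{t['text']}: impact = {impact_score(t['rt'], t['likes'])}")
```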

Our project is meant to have an impact on every aspect of our social behavior, once a more holistic approach is adopted and the intentional spread of fake news is brought under control.

Our hackathon work

Quantitative analysis

During the hackathon, the team decided to work only on a Twitter corpus, mainly because of Twitter's popularity in France and Sweden. We worked on a corpus of French tweets collected on 23 April and a corpus of Swedish tweets collected on 24 April. We used NodeXL, a free, open-source tool for data collection and network visualisation. Next, with Hyphe, an application developed by the Médialab at Sciences Po Paris, we analysed hypertext circulation. Finally, NodeXL allowed us to observe the topic network.
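
For illustration, here is a minimal sketch of the network-construction step in Python with networkx; the CSV file name and the columns "author" and "retweeted_user" are assumptions about the corpus export, and during the hackathon this step was actually done with NodeXL and Hyphe.

```python
# A sketch of retweet-network construction with networkx.
# The file name and the columns "author" and "retweeted_user" are
# assumptions about the corpus export format.
import csv

import networkx as nx

G = nx.DiGraph()
with open("tweets_fr_2020-04-23.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        author, source = row["author"], row["retweeted_user"]
        if source:  # one weighted edge per retweet: author -> original poster
            weight = G.get_edge_data(author, source, {"weight": 0})["weight"]
            G.add_edge(author, source, weight=weight + 1)

# Heavily retweeted accounts are candidate "expert" voices.
top = sorted(G.in_degree(weight="weight"), key=lambda x: -x[1])[:20]
for account, retweets in top:
    print(f"{account}: retweeted {retweets} times")

# Export for visual inspection, e.g. in Gephi or NodeXL.
nx.write_gexf(G, "retweet_network.gexf")
```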

Main results

We identified six expert clusters in France:

  • Medical expertise on the virus.
  • Political expertise on the management of the crisis.
  • Diverse scientific and professional expertise in pandemics, mental health, etc.
  • Civil, circumstantial expertise on the virus, the crisis, life during lockdown, personal solutions or problems…
  • Public intellectuals, present in the public space for any kind of debate, and thus also present during the corona crisis.
  • The media apparatus as a source of expert opinion, not only as a mediation channel.

All these categories of expertise are associated with several common attributes: credibility, trust, legitimacy, and the use of media channels and, more recently, of new technologies. This gives them almost equal credibility in the public sphere. That is, these actors are not experts on the virus, but through their discourses they acquire recognised expertise on the lockdown, on the management of the crisis, and on problem-solving. Moreover, there is circulation between the various experts, hierarchies, and oppositions.

Qualitative study

We divided the various communities of experts into clusters and defined "markers" for each type of expertise in order to formalize those clusters. These markers are ready to be used in a computational model based on linguistics and deep learning.
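
As an illustration of how such markers could be operationalised before moving to deep learning, here is a minimal rule-based sketch; the marker words below are invented examples, not our actual lists.

```python
# A rule-based tagger built on expertise markers.
# The marker words are invented examples, not the hackathon's actual lists.
MARKERS = {
    "medical":   {"clinical", "symptom", "patients", "treatment"},
    "political": {"minister", "government", "decree", "measures"},
    "civil":     {"my neighbourhood", "home schooling", "lockdown diary"},
    "media":     {"breaking", "exclusive", "our correspondent"},
}

def tag_expertise(text: str) -> list[str]:
    """Return every expertise cluster whose markers appear in the tweet."""
    lowered = text.lower()
    return [cluster for cluster, words in MARKERS.items()
            if any(w in lowered for w in words)]

print(tag_expertise("The minister announced new clinical trials today."))
# -> ['medical', 'political'] (a tweet can mix several kinds of expertise)
```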

This approach is now ready to be applied to a wider corpus and in other countries.

Our exploratory study allowed us to identify the typical markers of each community of expertise. Ultimately, this will allow us to build an application that follows, in real time, the evolution of, and clashes between, the different forms of opinion.

Accomplishments that we are proud of

In a short time, we mobilised a vibrant network of participants. We represent our EUTOPIA alliance with researchers from France (CY Cergy Paris Université), Sweden (University of Gothenburg), Great Britain (University of Warwick) and Belgium (Vrije Universiteit Brussel). Seven dedicated students worked tirelessly during the whole weekend. We were joined by hackers and mentors from seven countries. In sum, this allowed us to propose a prototype based on broad academic knowledge.

What we learned

A small idea can grow very fast when supported by enthusiasm, solidarity and lots of coffee.

Different experts can work together for a common purpose and become a productive team.

And, of course, IT guys are so important :)

What's next for Experts vs Truth - EUTOPIA discourse team

Future features for the hub:

  • Creation of different levels of reliability for each group and type of message, corresponding to different colors: percentages of reliable, questionable or fake news in each group (e.g. vaccine, medication, hospitalization); a toy sketch of this scoring follows this list.

  • How the news of each cluster influences the others, i.e. how news in the medical cluster influences politics and citizens.

  • How news spreads by location (interactive map).

  • Lists by category and sub-category (scientists, politicians, media, etc.) that have spread fake news.

  • Number and type of fake news.

  • Timeline of that spread and correlation with other groups of news.

  • Active participation by citizens who wish to contribute by flagging fake news.
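
As an illustration of the reliability-level feature from the list above, here is a small sketch with invented counts and a traffic-light color mapping.

```python
# Toy per-group annotation counts (invented numbers) and a traffic-light
# color for whichever reliability label dominates each group.
GROUPS = {
    "vaccine":         {"reliable": 120, "questionable": 45, "fake": 35},
    "medication":      {"reliable": 80,  "questionable": 70, "fake": 50},
    "hospitalization": {"reliable": 150, "questionable": 30, "fake": 20},
}
COLORS = {"reliable": "green", "questionable": "orange", "fake": "red"}

for group, counts in GROUPS.items():
    total = sum(counts.values())
    shares = {label: round(100 * n / total, 1) for label, n in counts.items()}
    dominant = max(shares, key=shares.get)
    print(f"{group}: {shares} -> {COLORS[dominant]}")
```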

Technical possibilities

Since we are working with Twitter data, we have several options for automating our model and limiting the tedious work of annotating tweets, opening HECTOR up to new possibilities for automatically detecting expert communities on COVID-19.

First, we could build a supervised tweet-classification model based on text similarity, which would first analyse how specific each word is to each community, and then check whether the words in a new tweet are typical of one community, linking the tweet to it. On top of this model, we could add the markers identified in our work as additional features for each tweet, to guide the classifier as well as we can. The biggest challenge of this approach is the need for a larger annotated corpus that would have to be updated over time: more data often yields better results with this kind of model, and the constantly changing nature of the discourse on the lockdown and of information about COVID-19 means that new vocabulary will appear over time.
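
Here is a minimal sketch of this first approach using scikit-learn; the training tweets, labels and markers are invented for illustration, and the markers are injected as synthetic tokens rather than separate features to keep the example short.

```python
# A sketch of the supervised approach with scikit-learn.
# Training tweets, labels and markers are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

MARKERS = {"medical": {"clinical", "symptom"},
           "political": {"minister", "decree"}}

def add_markers(text: str) -> str:
    """Append a synthetic token for every marker family found in the tweet,
    so the classifier can exploit the qualitative markers as features."""
    found = [f"__{cluster}__" for cluster, words in MARKERS.items()
             if any(w in text.lower() for w in words)]
    return text + " " + " ".join(found)

# Hypothetical annotated corpus: (tweet text, expertise cluster).
train_texts = ["New clinical data on the virus released today",
               "The minister signed a lockdown decree this morning"]
train_labels = ["medical", "political"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit([add_markers(t) for t in train_texts], train_labels)

print(model.predict([add_markers("Decree on hospital funding announced")]))
```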

Alternatively, we could adopt an approach based on previous work on political communities (cf. Politoscope from the ISC-PIF, 2017): now that we have identified several experts on the lockdown, we could start building a social network from them, looking at whom they follow, who follows them, and whom they interact with by replying to and retweeting tweets from other accounts. The main advantage of this approach is its independence from language, which fits our Europe-wide objectives. However, it does not take into account the actual content of the tweets, which could be very misleading when people share or react to content they disagree with.
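
Here is a minimal sketch of this graph-based approach with networkx; the interaction list is invented, and greedy modularity maximisation stands in for whatever community-detection method would actually be chosen.

```python
# A sketch of language-independent community detection on an interaction
# graph; the (account, account) edge list is invented for illustration.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

interactions = [  # (account, account it replied to or retweeted)
    ("dr_a", "dr_b"), ("dr_b", "dr_a"), ("dr_c", "dr_a"),
    ("pol_x", "pol_y"), ("pol_y", "pol_x"), ("pol_z", "pol_x"),
    ("dr_a", "pol_x"),  # a single bridge between the two communities
]

G = nx.Graph()
G.add_edges_from(interactions)

# Each community is a set of accounts that mostly interact with each
# other, regardless of the language they tweet in.
for i, community in enumerate(greedy_modularity_communities(G)):
    print(f"community {i}: {sorted(community)}")
```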

We would like to develop a partnership with LIRIA (a CNRS unit in France specialised in deep learning and social networks).

We aim to widen our skills and impact by collaborating with non-academic milieux: start-ups and innovative enterprises.

As for the geographical area, we could first extend the hub to the entire EUTOPIA alliance (Universitat Pompeu Fabra Barcelona and the University of Ljubljana have yet to join this project) and benefit from its full internal support. That would be the beginning of European coverage of this problem and of the application of our solution. The COVID-19 crisis is a global problem with national or territorial evolutions. Circulating "truths" depend not (only) on their sources, but also on the expertise engaged.

This project is made by university professors, researchers and students. This provides the project not only with a vast pool of highly skilled experts, but also with access to EU funding, as well as to funding from other public and private institutions willing to finance such an effort.

Timeline:

6 months to deploy basic features for the project for a use on Twitter.

12 months to form a wider European team.

36 months to achieve HECTOR.

HECTOR supports Sustainable Development Goal 17: Strengthen the means of implementation and revitalize the global partnership for sustainable development

Project team (in alphabetical order):

Axel Boursier

Audrey Deloison

Zakarya Desprès

Lisa Gutierez

Elsa Hassaïne

Alexandre Morjon

Luciana Radut-Gaghi

Ugo Ruiz

Géraud Vidalenc

They helped and mentored us during the hackathon (in chronological order):

Jo Angouri

Luk Van Langenhove

Jan Krasni (our official team mentor)

Benno Herzog

Romina Surugiu

Aistė Klimašauskaitė

Panos Agrapidis (our official skills mentor)

Aron James Miszlivetz

Aurite Kouts

Arnnei Speiser

Thank you!

Built With

  • english
  • french
  • hyphe
  • nodexl
  • swedish
  • twitter