How Facebook’s Failure to Remove False Content Allows COVID-19 Misinformation to Spread

2nd November 2021

By Aoife Gallagher

Since the onset of the pandemic in March 2020, the World Health Organisation has warned of an ‘infodemic’: a proliferation of false information that leads people to believe claims that could put their own health, and the health of others, in jeopardy. Major social media platforms updated their policies on medical misinformation in 2020, with many of the most popular sites detailing the kinds of content prohibited in light of the global health emergency.

What has become apparent to researchers studying online misinformation is that platforms are generally failing to enforce these policies and seemingly lack the will, or the ability, to take meaningful action against the spread of information that is leading to illness and death worldwide. Influential figures involved in spreading health and vaccine misinformation before the pandemic had free rein on these platforms for years to grow their networks, and easily pivoted to manipulating people’s fears about COVID-19.

_________________________________________________________________________

At ISD, researchers monitoring COVID-19 misinformation communities online had noted that not only were veteran health misinformation influencers increasing their visibility, but so too were new and emerging players who had used the pandemic to elevate themselves to fame within the COVID-19 sceptic movement. A number of these new influencers were found to be doctors and scientists who were using their credentials to manipulate people’s trust in science and medicine and spread misinformation under the guise of expertise.

Many of these doctors were connecting with other like-minded doctors internationally and forming coalitions. One such group is the World Doctors Alliance, a group of 12 doctors and scientists from seven different countries that had united to oppose measures put in place to stop the spread of COVID-19. A number of members spoke at anti-lockdown protests across Europe throughout 2020 and 2021 and were found to spread false and often contradictory claims about the pandemic. Two members, Dr. Dolores Cahill and Dr. Vernon Coleman, had appeared in previous research conducted by ISD into Arabic language Facebook groups, showing how content can be repurposed across regions in the world.

Studying the World Doctors Alliance on Facebook

Taking the World Doctors Alliance as a case study, ISD set out to determine how the group and its members were using online platforms and how their popularity had increased throughout the pandemic. The research was undertaken with an emphasis on Facebook in order to look into how successful the platform’s fact-checking program was at curbing the spread of false information.

To determine the group’s popularity, researchers first identified accounts under the names of the group and each of its 12 key members across eight different platforms (see Figure 1), finding a collective following of over 1.2 million users. Accounts on platforms like Twitter and Instagram seemed to play minimal roles in the group’s promotion, whereas Facebook and YouTube hosted accounts with hundreds of thousands of followers. The group’s Facebook following was found to have increased more than 100-fold since the start of the pandemic, to a total of more than 550,000, making Facebook the group’s platform of choice (see Figure 2). ISD also found content mentioning the World Doctors Alliance or its members in 46 different languages on the platform, highlighting the cross-border nature of these movements.

Figure 1: Number of followers of the World Doctors Alliance and its members across eight platforms

Figure 2: Growth in followers of Facebook pages associated with the World Doctors Alliance

Given the significant following the group had on Facebook, ISD researchers were interested to know how many times the group or its members had been mentioned in articles from Facebook’s fact-checking partners. We found a total of 189 articles across 24 languages, with some members, such as Dr. Dolores Cahill, fact-checked dozens of times over false or misleading claims made on the platform. This calls into question Facebook’s ability to take decisive action against those found to continuously violate the platform’s policies, even when the hard work has been done by fact-checking organisations.

Facebook’s bias towards moderating English-language content

When the number of Facebook posts in each language is compared with the number of fact-checking articles found in that language (see Table 1), it becomes obvious that there are major gaps in Facebook’s ability to provide robust fact-checking in all the languages in which it operates, and that fact-checking organisations are heavily weighted towards English-language content. For example, over 5,528 posts mentioning the World Doctors Alliance were found in Romanian, Hungarian, Swedish or Italian, yet ISD’s analysis found no fact-checking articles about the organisation in any of these languages (see Table 1).

Language     Number of posts    Number of fact-checking articles
English      57,179             61
Spanish      8,422              26
German       3,911              13
Dutch        3,359              4
French       1,930              5
Romanian     1,789              0
Hungarian    1,339              0
Swedish      1,282              0
Italian      1,118              0
Arabic       1,111              2

Table 1: Comparison of number of Facebook posts to fact-checking articles per language
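The gap described above can be checked with a short sketch (the figures are transcribed from Table 1; the variable names are our own, not part of ISD’s methodology):

```python
# Posts and fact-checking articles per language, transcribed from Table 1.
table1 = {
    "English":   (57179, 61),
    "Spanish":   (8422, 26),
    "German":    (3911, 13),
    "Dutch":     (3359, 4),
    "French":    (1930, 5),
    "Romanian":  (1789, 0),
    "Hungarian": (1339, 0),
    "Swedish":   (1282, 0),
    "Italian":   (1118, 0),
    "Arabic":    (1111, 2),
}

# Languages in which posts circulated but no fact-checking articles were found.
unchecked = [lang for lang, (_, articles) in table1.items() if articles == 0]

# Total posts in those languages (reproducing the 5,528 figure cited above).
unchecked_posts = sum(posts for posts, articles in table1.values() if articles == 0)
print(unchecked, unchecked_posts)
```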


Digging deeper, we also analysed the top 50 most popular posts mentioning the World Doctors Alliance in English, Spanish, German and Arabic, noting whether the posts contained false claims, whether the claims had been previously fact-checked and whether fact-checking labels had been applied. Across all four languages, we found minimal application of fact-checking labels, with lower rates in Spanish, German and Arabic compared to English content.

                                                    English    Spanish     Arabic     German
Problematic posts featuring fact-check labels       5 (13%)    2 (4.5%)    1 (2.4%)   2 (8.3%)
Total engagement on posts with fact-check label     122,091    10,075      517        2,958
Total engagement on posts without fact-check label  567,129    157,536     58,247     63,007

Table 2: Analysis of posts with fact-check labels among the 50 most popular posts
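The disparity in Table 2 can also be expressed as the share of total engagement that landed on labelled posts. A minimal sketch, using the engagement figures transcribed from the table above (the variable names are our own):

```python
# Engagement on labelled vs unlabelled posts, transcribed from Table 2.
table2 = {
    "English": (122091, 567129),
    "Spanish": (10075, 157536),
    "Arabic":  (517, 58247),
    "German":  (2958, 63007),
}

# Fraction of all engagement that fell on posts carrying a fact-check label.
labelled_share = {
    lang: labelled / (labelled + unlabelled)
    for lang, (labelled, unlabelled) in table2.items()
}

for lang, share in labelled_share.items():
    print(f"{lang}: {share:.1%}")
```

By this measure English content fared best, yet even there labelled posts accounted for under a fifth of engagement, with Arabic content the worst served.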

This research found that Facebook’s AI technology, which the platform says can track down misinformation with a ‘very high degree of precision’, is failing to find and label all versions of posts that fact-checkers have deemed false. This was true even of content that had been debunked multiple times across languages but was uploaded natively to Facebook with little or no variation (see Figure 3).

Figure 3: An embedded clip of a Dr Dolores Cahill interview with no fact-checking label, despite being fact-checked multiple times

It also found that Facebook is not using the additional information provided by fact-checkers to add context to claims that are not necessarily false but still feed conspiracy theories about the pandemic. Figure 4, for example, shows claims made by Dr. Scott Jensen about the inaccurate recording of COVID-19 deaths being used to further the idea of a ‘scamdemic’. Despite fact-checkers determining that no such fraud was taking place, this additional context was not provided on Dr. Jensen’s posts.

Figure 4: Jensen’s claims used to bolster the idea of a ‘scamdemic’

This investigation adds further weight to claims that Facebook is failing in its commitment to stop the spread of false information and is misleading the public about its efforts. It has also shown that content in languages other than English is not being given the attention it deserves, even though almost all of Facebook’s new users come from developing countries. Even with access to the world’s best fact-checkers, the company has not taken decisive action against the World Doctors Alliance, and the group has risen to international prominence as a result. Fact-checking one post at a time is untenable when dealing with disinformation purveyors who produce content in such vast quantities. Facebook’s own internal research shows the company is aware that a large percentage of problematic content comes from a small number of users. Despite this, the platform continues to rely on tactics that essentially act as a plaster on a gushing wound. This research gives the impression of a company that does not understand the nature of the problem it is trying to fix, meaning its current approaches will do little to get to the root of the issue.


Aoife Gallagher is an Analyst on ISD’s Digital Research Unit, focusing on the intersection between far-right extremism, disinformation and conspiracy theories, and using a mixture of data analysis, open source intelligence and investigative techniques to understand the online ecosystem where these ideas flourish and spread. Previously, Aoife was a journalist with the online news agency Storyful, and she holds an MA in Journalism from TU Dublin.