We need transparency, not censorship, to address hate speech and other harms on social media

By: Sasha Havlicek, CEO, Institute for Strategic Dialogue

17 April 2023

___________________________________________________________________________

Those on the left and the right agree on one thing: they both feel tech platforms censor – or are biased against – their views. The reality is more complicated, but we’ll never fully understand what’s happening without a complete picture of social media moderation and curation systems.

In a recent BBC interview with Twitter CEO Elon Musk, the interviewer cited research by my team at the Institute for Strategic Dialogue (ISD) when Musk pressed him to substantiate the claim that hate speech was rising on the platform. Our report found that antisemitic hate speech on Twitter has more than doubled since Musk took the reins. In an earlier report, we also identified a surge in Islamic State (ISIS) accounts in the days following Musk’s takeover: a 69% increase in just the first 12 days.

The interview sparked a backlash, with some commentators suggesting my team is part of a grand, partisan conspiracy to censor the internet. This is not the case. On the contrary, we have called for transparency, not censorship, in the face of online harms, and ISD has always monitored hate and extremism across the political spectrum – left-wing extremism, right-wing extremism, and Islamist extremism – as well as information operations by hostile state actors such as Russia, China, and Iran. Exposing and reporting on this type of activity online is not censorship. It gives those on the frontlines of protecting communities and democratic processes the insights they need to prevent attacks, and it gives the public and policymakers a basis for informed responses.

To be clear, “slightly racist or slightly sexist” content, as the BBC reporter described it in his interview with Musk, is not hate speech. It is deeply unpleasant and distasteful, and it speaks volumes about the character of the individual delivering it, but even in countries like the UK and Germany, which have laws criminalising certain types of hate speech, this sort of material would fall far short of the threshold for prosecution. Hate is defined differently in different contexts, but at ISD we define it as activity that seeks to dehumanise, demonise, exclude, harass, threaten, or incite violence against an individual or community based on their race, religion, nationality, sex, sexuality, gender identity or disability.

This sort of speech has real-world impact. At its most egregious, it inspires catastrophic attacks against whatever ‘outgroup’ becomes the target of dehumanization. There are too many tragic cases of such violence from around the world to list here. In the U.S., where the debate about solutions to these problems is increasingly polarized, the Tree of Life synagogue massacre in Pittsburgh, the murders at the Emanuel African Methodist Episcopal Church in South Carolina, and the Club Q shooting in Colorado Springs are but a few terrible examples. No American is immune to such violence: cities like Buffalo, El Paso, and San Bernardino still grieve after senseless attacks whose perpetrators were all immersed in the hateful content and communities that we track online.

Beyond these most extreme cases, online hate damages the mental health of those targeted – often children; silences people through intimidation and harassment – often minorities and female public figures; and is increasingly fueled by authoritarian states seeking to divide our societies and influence electoral processes. All of this feeds the polarization that is damaging our society.

When we allow the debate about solutions to these complex problems to be cast in partisan, binary terms, those who seek to divide and weaken democracy win.

So, what do we do about online hate speech? Americans have the right to express abhorrent views, though even Elon Musk seems to agree that hate speech cannot be allowed to run completely unchecked on social media. Our research on the recent doubling of antisemitism on Twitter showed that the platform was in fact removing a great deal of this content; it simply wasn’t keeping pace with the rate at which it was being produced and posted.

Indeed, few social media companies want their platforms to become hives for hate mongers and terrorists, so they have policies in place about what they do and don’t allow. Our work has examined how consistent and effective their enforcement of these policies has been around the world. We believe it is in the public interest for there to be independent research and analysis of platform moderation practices, including independent review of what platforms have taken down and why, and for users to have the right to a fair appeal.

In addition, we would respond by working to amplify speech that outcompetes the hate in the marketplace of ideas. My organization has worked with civic partners for years to do just that. Increasingly, however, we came to understand that platforms deploy algorithms that boost certain types of content, at times even promoting abusive content to people who are not looking for it.

These algorithms are designed for profit: they serve users the content most likely to keep them on the platform for as long as possible – whatever engages us, or enrages us. There is now a wealth of research suggesting that content recommendation algorithms across social media platforms serve up ever more extreme variants of content. In fact, the most susceptible and vulnerable users are often served the most potentially harmful material. Our own research has shown how platforms have served teenage boys material that encourages hate and violence towards women, suggested hashtags that abuse female political candidates, and algorithmically recommended white nationalist literature.

This is not a free speech environment; it is a curated speech environment, in which the major social media platforms’ technological architecture and products determine who sees what.

Ultimately, the solution to hate speech lies in achieving higher standards of transparency and accountability on these platforms. This is the real crux of the debate about TikTok in Congress, but it is true of all platforms. Data mining processes and the algorithms that promote content are opaque; content moderation decisions are opaque and often inconsistent; and the ability to audit data is severely circumscribed. So long as platforms are secretly curating the material users see, for profit, they are not providing a level playing field for speech.

To protect free speech online we need transparency around the invisible hands that guide our digital experiences, from the content moderation policies and decisions platforms make to the ways in which they boost – or de-boost – and target certain types of content. We need independent audits of what types of speech might be given an unfair advantage, and why. Surely it is in everybody’s interest to have this information.

Currently, both independent researchers and regulators lack access to the data needed to perform such an evaluation comprehensively.

Twitter recently published the source code behind its ‘For You’ page; however, the release lacked the contextual information needed to truly understand why users see certain tweets. Moreover, the platform is changing so rapidly that this snapshot may well be out of date within days. The company has also just made access to its data, which was free until recently, extremely expensive for those wishing to conduct independent research.

Other platforms have provided even less access to data, or have been shutting down that access over recent years, restricting the research community’s ability to illuminate the ways in which their systems impact the safety, security and cohesion of our societies.

There is, however, a glimmer of light at the end of this tunnel. Things look set to change in Europe with the EU’s incoming legislation, the Digital Services Act, which, if properly enforced, could secure data access for independent researchers and regulators, underpinning meaningful transparency and accountability for very large online platforms for the first time. There will be much to learn from its implementation over the coming months and years, but we are not starting from scratch.

Working collaboratively, liberal democracies can and must solve the complex problems posed by an unregulated web to safeguard free speech, human rights, safety and democracy now, and for generations to come.