What happens when platforms give up

By: Anthony DeAngelo

15 February 2023

_________________________________________________________________________________ 

It’s news that isn’t really news to those who spend time on the world’s most popular platforms. Big tech companies are shedding teams once built to address mis- and disinformation. According to the New York Times, the cuts represent a “trend across the industry that threatens to undo many of the safeguards that social media platforms put in place in recent years to ban or tamp down on disinformation.”

ISD’s own research has repeatedly shown that mis- and disinformation, along with hate speech and violence, continue to thrive on major platforms. We saw it during last year’s elections in the US and around the debate immediately following the US Supreme Court’s overturning of Roe v. Wade.

Platforms’ spokespeople insist that addressing these critical issues remains a “top priority,” but given a clear divestment from the in-house experts hired to tackle them and continued inconsistent enforcement of existing policies, what happens when platforms just give up? Who steps up to continue the online battle against mis- and disinformation?

Policymakers

The European Union’s Digital Services Act (DSA) represents the global standard for holding online platforms accountable when their own algorithmic preferences propagate mis- and disinformation. You can read more about the DSA on our own site, as well as check out the other policies we’ve informed and advocated for globally that address online harms.

As the DSA begins to be implemented in the EU, US lawmakers have largely shied away from similar approaches. The lack of a common, bipartisan definition of the problem itself, along with the intense politicization of mis- and disinformation in the public domain, threatens to sink any legislative action on these issues before a vote can even be called. While there might be some avenues for bipartisan action, such as addressing mis- and disinformation spread by foreign actors like China and Russia, most bills introduced on these issues during the last Congress sought simply to prevent the government from addressing them at all. Only one bill, the Educating Against Misinformation and Disinformation Act, introduced by Congressman Don Beyer (D-VA), took a comprehensive approach to tackling the issue; yet it did not even receive a hearing or a vote in committee.

Other Platforms

A slew of studies over the past several years has shown that people simply do not trust what they see on social media. For example, a 2020 Pew study found overwhelming distrust of social media as a source for political and election news. To assume the current landscape of major online platforms will remain static over time is to bet against history; innovation will bring new platforms to market to compete for consumers’ attention. As new platforms come online, they have an opportunity to set new standards for trust and safety. After all, if people trust what they’re seeing online and aren’t being actively attacked or manipulated, they might just stay on that platform.

You, the Reader

There’s been a lot of focus lately on the dangers that online harms pose to children, so much so that the Senate Judiciary Committee held an entire hearing on it. We’ve seen real concerns about foreign mis- and disinformation from countries like China and Russia, questions about mis- and disinformation around health care, and a number of other threats that platforms would have addressed if these issues were truly a “top priority.” Absent action by platforms or policymakers, much of the burden will fall on you, the reader.

Everyday people will have to separate harmful content from safe content to ensure their kids aren’t consuming things that simply aren’t safe. They’ll have to question whether the claims they’re seeing are true without the full help of the platform delivering them. And they’ll have to do this as online platforms deprioritize the kind of accessibility and transparency that would build trust.

So, What Now?

Just because online platforms have moved in the wrong direction on addressing mis- and disinformation doesn’t mean there isn’t hope for future progress. Policymakers in the US might have vast disagreements, but there are avenues for consensus on things like protecting children and combating foreign malign influence. Future platforms might brand themselves as trustworthy alternatives to the status quo, pushing the larger marketplace to prioritize trust and truth over an open sewer of unchecked content. And consumers themselves might not only develop their own ways to combat mis- and disinformation, but also press stakeholders, from big accounts to advertisers, to call for changes that will deliver impact.

Every action by an online platform to gut its ability to address mis- and disinformation is a step backwards, but we should use that perspective to see what’s possible ahead and move boldly forwards.