What China’s Sweeping Algorithm Regulation Means for Digital Governance Globally

31 May 2022

By Sara Bundtzen 

In recent years, governments globally have been paying greater attention to the algorithmic recommendation systems used by social media platforms – in both democratic and authoritarian contexts. This Digital Dispatch looks at China’s new attempt to cement its control of online discourse through regulation.

In doing so, China hijacks the policy reasoning of digital governance: it claims to safeguard civic discourse and combat disinformation while implementing ever more invasive surveillance regimes. In light of China’s approach, this Dispatch discusses challenges and opportunities for algorithmic and platform design interventions that seek to guarantee pluralistic debate and freedom of speech online.  

_________________________________________________________________________________

Digital policy interventions are trying to balance social media platforms’ power over public discourse with accountability and transparency obligations. In the attempt to regulate platforms’ systems rather than user content, researchers and policymakers alike increasingly focus the debate on how algorithms and design features amplify emotional, inflammatory, or false content. Because ‘news feed’ algorithms are trained to keep users engaged, content that exploits ‘negative bias’ – the human tendency to give more weight to negative than to positive content – is frequently amplified.  
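To make the mechanism concrete, the sketch below shows a deliberately simplified, engagement-only ranking function; the data model, field names and scores are hypothetical and do not describe any real platform’s system. The point is only that if inflammatory posts tend to receive higher predicted-engagement scores, they rise to the top without any explicit decision to promote them.

```python
# Illustrative sketch only: a toy engagement-ranked feed, not any platform's
# actual system. The Post class and its fields are hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # model score: expected clicks, comments, shares
    is_inflammatory: bool        # label used here only to illustrate the effect

def rank_feed(posts: list[Post]) -> list[Post]:
    # Pure engagement optimisation: sort by the predicted-engagement score alone.
    # If inflammatory or negative posts tend to earn higher engagement scores,
    # they end up at the top of the feed without any editorial choice being made.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

feed = rank_feed([
    Post("Calm explainer on a policy change", 0.31, False),
    Post("Outrage-bait rumour about the same change", 0.74, True),
])
print([p.text for p in feed])  # the rumour is ranked first
```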

In response to the risks platforms pose to democratic discourse, regulatory steps and court rulings face constant balancing acts, negotiating between a user’s freedom of speech and a company’s freedom to exercise a profession (i.e., the right of companies to enforce their Terms of Service); between enforcing the principle that “what is illegal offline should be illegal online” and not making platforms’ algorithms arbiters of legality; and between addressing legal but harmful content and not incentivising over-blocking or proactive “upload filters”.  

Authoritarian regimes do not make these careful trade-offs. Instead, they use platform regulation to control information flows, censor undesired opinions, limit public discourse, and erode independent media.  

Authoritarian framing of “safeguarding” public discourse  

A build-up of anti-democratic “best practices” and “norms” risks undermining human rights-based international digital policy debates. Unfree or partly free countries with little or no accountability and transparency appropriate the fight against disinformation and hate speech to introduce repressive content removal obligations or enforce government-friendly “digital media ethics rules”. This includes sweeping “fake news” laws in Pakistan, India, the Philippines, Brazil, Ethiopia and Vietnam.  

A global leader in artificial intelligence and big data, China has been moving ahead with the establishment of an “internet civilization” that cultivates “ethics and norms conforming to socialist core values”. Through promoting multipolar global governance, creating its own multilateral bodies, and investing in the global south, Beijing has shown a willingness to shape alternative global governance institutions and norms where they suit China’s interests, especially in the area of human rights. The state-financed “Digital Silk Road” (DSR) notably seeks to increase participation by Chinese companies in global norm-setting, serving as a vehicle to establish an alternative to what China sees as a US-dominated technology world. 

Beyond what it reveals about the Chinese Communist Party’s (CCP) grip on the domestic information environment, China’s enforcement of its new algorithm regulation poses important questions elsewhere. This is especially true while institutions, rules and norms around digital governance are still incompletely developed.  

China’s Algorithmic Recommendation Management Provisions

In January 2022, the Chinese internet regulator, the Cyberspace Administration of China (CAC), published the “Internet Information Service Algorithmic Recommendation Management Provisions”. The provisions swiftly entered into force in March 2022, affecting “internet information services” that rely on the use of “algorithmic recommendation technology”. This includes giant tech companies such as Weibo, Tencent’s WeChat and ByteDance, the latter of which is most notable for owning TikTok, which thrives on its recommendation systems and has been found to host rampant disinformation, hate speech and extremism. The provisions oblige providers of such services to “carry forward the Socialist core value view, safeguard national security and the social and public interest”. 

The new regulation addresses a range of issues that involve algorithms, from “addiction or excessive consumption” and “synthetic false news information”, to the protection of the elderly (including from online fraud) and gig workers. At its core, the regulation emphasises the need to establish and perfect norms in algorithms – requiring providers to uphold “mainstream value orientations” and “vigorously disseminate positive energy” in key segments such as “front pages, hot search terms, selected topics, topic lists, pop-up windows”. Algorithmic content curation must thereby “avoid creating harmful influence on users, and prevent or reduce controversies or disputes”. Specifically, providers must not use unlawful or harmful information as keywords or user tags, which includes “discriminatory or biased user tags” for profiling users or recommending information.   

The provisions also claim to protect user rights. For example, providers must afford users the “choice to not target their individual characteristics” and a “convenient option to switch off algorithmic recommendation services”. However, any user “choices” would remain well within the limits of what the CCP considers “positive energy”, i.e., the party position.  

On 8 April 2022, the CAC published the “Qinglang – 2022 Algorithm Comprehensive Governance” Special Action, which runs until early December 2022 and is intended to guide tech companies in “comprehensively cleaning up the use of algorithms”. Enforcement of algorithmic “norms” will include companies “self-evaluating” and “self-correcting” their algorithms, as well as the CAC carrying out on-site inspections and overseeing the registration of algorithms with government departments.  

Implications for democratic digital governance  

On the surface, China’s declared focus on “corporate responsibility” within the internet industry reflects many of the same concerns held in the West. User control, transparency, external audits, mitigation of societal harm, and safe product design are all areas that current democratic legislation, such as the EU’s Digital Services Act (DSA), aims to address.  

Yet in the context of the CCP’s domestic surveillance, censorship and control of media, far-reaching and centralised oversight of the online environment threatens to limit free speech, dictate public discourse and prevent popular upheaval. Government departments can scrutinise user content, interactions and behaviour based on user tags, keywords, or logs of algorithms. Such oversight powers can be used as tools for monitoring the “mood” of society and managing domestic public opinion. Graham Webster from Stanford University’s Cyber Policy Center notes that the algorithm provisions could be a “powerful lever to advance the general job of propaganda and public opinion guidance, as they call it, into this automated realm”. The regulation, part of China’s efforts to rein in the growing power of its tech giants, ensures that algorithmic recommendations align with the Communist Party’s measure of “values”. 

China’s new grip on algorithms could also directly impact Western democratic discourse. It is worth recalling that Lu Wei, head of the CAC in its formative years, promoted the concept of “discourse power” or “the right to speak” (话语权) as a means to “tell China’s story well”, emphasising that “national discourse power is the influence of a country’s ‘speech’ in the world”. With regard to TikTok’s power over public debate, especially during an election, Ezra Klein fittingly suggested: “If one candidate was friendlier to Chinese interests, might the CCP insist that ByteDance give a nudge to content favoring that candidate? Or if they wanted to weaken America rather than shape the outcome, maybe TikTok begins serving up more and more videos with election conspiracies, sowing chaos at a moment when the country is near fracture”.  

Beyond Beijing’s discourse ambitions, China’s oversight model for platform regulation warns Western policymakers of the inherent risks of regulation that seeks to establish “norms” by controlling which content should or should not be promoted. Daphne Keller from Stanford’s Cyber Policy Center has pointed out that current policy reasoning on regulating algorithmic amplification may have much in common with the underlying debates about content moderation itself. External audits would still require regulators or vetted researchers to draw some line between what is harmful and what is not, in order to decide what to omit or down-rank in algorithmic curation and prioritisation. Ellen Judson from Demos similarly noted that most proactive “systems changes” still aim to “identify where the harmful content is and then act on it; a content measure looking like a systems change”. 

Ultimately, digital policy interventions boil down to the question of what levels of availability, reach, and visibility of certain types of content are acceptable. Democratic governments, together with independent regulators and civil society organisations, should focus on what policy interventions aim to achieve, which problems they are trying to solve, and which trade-offs are necessary. 

Recommendations: How to move forward 

Democratic platform regulation that enables pluralistic discourse requires balancing fundamental rights, a challenge that necessitates more decentralised power, rigorous public consultation, and democratic scrutiny and accountability. Democratically elected governments should set the parameters, while independent regulators, courts and civil society work towards ensuring a safer and more open online environment for users.  

Rather than simply adding further content measures, digital policy interventions need to change the systems through which content measures are inevitably applied. An innovative change proposed by Francis Fukuyama is the use of “middleware”: alternative third-party content-curation services that give users more control over the content they see on platforms. His proposal aims to bring competition into markets shaped by network effects (i.e., where a service becomes more valuable as its user base grows). No single player would exercise power over fine-grained decisions about content, and users would be able to select providers they trust. Daphne Keller further informs the debate by highlighting that a robust interoperability model will need to address curation costs and user privacy – weighing up privacy, competition and speech challenges. A “middleware” option has the potential to mitigate both the negative effects of algorithmic systems and the messy blend of government and private power behind content moderation (i.e., platforms’ own power and government pressure to suppress ‘harmful’ speech).  
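As a rough illustration of how the “middleware” idea separates roles, the sketch below assumes a hypothetical interface in which the platform supplies candidate posts and a user-chosen third-party curator decides their order; none of the class or function names reflect an existing API or Fukuyama’s own specification.

```python
# Minimal illustration of the "middleware" idea: a user-selected third party,
# not the platform, decides how candidate posts are ordered. All interfaces
# here (Post, CurationMiddleware, build_feed) are hypothetical.
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Post:
    author: str
    text: str
    topic: str

class CurationMiddleware(Protocol):
    def rank(self, candidates: list[Post]) -> list[Post]: ...

class LocalNewsFirst:
    """Example third-party curator: prioritises posts tagged as local news."""
    def rank(self, candidates: list[Post]) -> list[Post]:
        return sorted(candidates, key=lambda p: p.topic != "local_news")

def build_feed(candidates: list[Post], curator: CurationMiddleware) -> list[Post]:
    # The platform supplies candidates; the user's chosen middleware ranks them.
    return curator.rank(candidates)
```

In such a model, the hand-off of candidate posts to the curator is exactly where the curation-cost and privacy questions Keller raises would have to be resolved.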

Platform design interventions should also consider “content-agnostic” measures that tackle amplification rooted in one-click, frictionless sharing, for example by removing the reshare button after two levels of sharing. Such a measure avoids value judgements because it affects unobjectionable content just as much as “harmful” content. Once a post is two reshares removed from its original author, users can still copy and paste it if they want to share it further. 
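A minimal sketch of how such a reshare-depth limit could work is given below; the data model, the depth constant and the function names are illustrative assumptions rather than any platform’s implementation.

```python
# Sketch of a content-agnostic friction measure: track how many levels of
# resharing separate a post from the original, and disable one-click resharing
# beyond a fixed depth. Names and the depth limit are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

RESHARE_DEPTH_LIMIT = 2  # after two levels of sharing, no reshare button

@dataclass
class Post:
    post_id: str
    reshare_of: Optional["Post"] = None  # None if this is original content

    @property
    def reshare_depth(self) -> int:
        return 0 if self.reshare_of is None else self.reshare_of.reshare_depth + 1

def can_show_reshare_button(post: Post) -> bool:
    # Applies equally to all content, regardless of what it says.
    return post.reshare_depth < RESHARE_DEPTH_LIMIT

original = Post("a")
first_hop = Post("b", reshare_of=original)
second_hop = Post("c", reshare_of=first_hop)
print(can_show_reshare_button(first_hop))   # True  (depth 1)
print(can_show_reshare_button(second_hop))  # False (depth 2): copy/paste only
```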

Ultimately, users, researchers, and regulators (and even the platforms themselves) have only a limited understanding of exactly how these systems sort and amplify content – and to what extent they drive “harmful” content and behaviour. On platforms built to engage users via their algorithms, it is extremely difficult to differentiate between “authentic” or “neutral” behaviour and “artificial” or “amplifying” behaviour. This information gap obscures policy responses. Research, data access, and public understanding of the systems that shape public discourse are crucial. Policy interventions that aim to protect public discourse should, first and foremost, enable meaningful, privacy-compliant platform transparency. 

Digital governance globally requires democratic governments to set clear baselines for intervening in social media platforms’ systems and design. Only by doing so can they protect the freedoms of users and the functioning of a healthy, pluralistic public discourse.  

 

Sara Bundtzen is a Research and Policy Associate at ISD Germany.