Transparency

By: Helena Schwertheim

Transparency is key to building trust between the public, governments, regulators, the private sector and social media platforms, and it ultimately increases accountability between these stakeholders. In democratic societies, an assumed level of transparency is what allows for accountability. Addressing online terrorism, extremism, hate speech, disinformation and foreign interference, and protecting human rights online, requires substantive evidence to provide oversight and to develop effective and proportionate policies, legislation and regulation. Transparency is crucial to gathering that evidence, informing these debates and driving positive changes to the online information ecosystem. 

Glossary  

Online harm: behaviour online which may hurt a person physically or emotionally. It can take the form of harmful information posted online, or of information sent directly to a person.  

Hate: is understood to relate to beliefs or practices that attack, malign, delegitimise or exclude an entire class of people based on protected characteristics, including their ethnicity, religion, gender, sexual orientation or disability. Hate actors are understood to be individuals, groups or communities that actively and overtly engage in the above activity, as well as those who implicitly attack classes of people through, for example, the use of conspiracy theories and disinformation. Hateful activity is understood to be antithetical to pluralism and the universal application of human rights. 

Terms of Service: are the legal agreements between a service provider and a person who wants to use that service.  

Platform affordances: The concept of affordances was developed in ecological psychology to describe the extent to which the properties of an environment influence the possible actions that can be taken within it. The concept was later transferred to product design to describe the perceived possibilities for action that result from the design features of objects and the abilities of their users. 

Introduction 

Over the past 10 years, we have witnessed an increase in risks emanating from the rapid expansion of the internet, and in particular social media platforms, across all spheres of life. This information ecosystem (where communities and individuals interact online) has changed the way we do business, communicate, engage in political discourse and debate, and express ourselves. ISD’s research has repeatedly demonstrated how states such as Russia or China, and conspiracy, extremist and hate movements around the world, are furthering their individual agendas by exploiting social media platforms, their underlying algorithmic recommender systems and the gaps in enforcement of platforms’ community guidelines or Terms of Service. 

This exponential spread of online disinformation, conspiracies, hate speech and extremism poses an increasing threat to democracy and human rights globally. Even so, tech companies have fallen short in ensuring full transparency of activity on their platforms, including of the decisions they make and their resulting impacts upon their users and broader societies around the world. Transparency in its broadest sense provides a mechanism for improving visibility, understanding and accountability on public policy issues. By increasing transparency of online spaces and platforms, the argument goes, we stand a better chance of detecting, mitigating and responding to this broad spectrum of both illegal and legal online harms. 

Meaningful transparency benefits the protection of human rights in the digital age. Without transparency, any response to any harm – regulatory or not – will not be effective, and its effects cannot be measured and evaluated. Transparency benefits those advocating for the protection of any right or on any issue, from state censorship to child safety online, gendered disinformation and hate. Without transparency, no government, civil society organisation, researcher or other actor can hold platforms to account for what happens or originates on their services. 

By increasing the transparency of online activity, and of the decisions and responses (by governments and platforms alike) affecting online spaces and platforms, there is a better chance that law enforcement, civil society researchers or users can detect, mitigate and respond in a timely manner to a broad spectrum of both illegal and legal online harms. Any response must be proportionate; while illegal content must be removed, legal but harmful content warrants a softer approach as appropriate (e.g. algorithmic demotion or demonetisation). Transparency is a necessary first step in creating accountability and should underpin any regulatory framework for online platforms. 

More broadly, transparency is cited and widely accepted as a key principle for ‘good governance’ of public administration, including by the Council of Europe and the OSCE. It is assumed that, for governance to be fair and efficient, independent oversight (either regulatory or non-regulatory) and avenues for public scrutiny are necessary. In a democratic system, transparent processes need to be in place that ensure public actors can be held accountable. For this reason, transparency by governments, politicians, legislators and law enforcement to their citizens is also important for achieving accountability. 

Transparency is therefore not an end in itself but a prerequisite to establish public trust, accountability, oversight, and a healthy working relationship between tech companies, government, and the public. 

Stakeholders: Transparency for whom, for what? 

Transparency can take a variety of forms in the context of social media platforms and online service providers. Different stakeholders (outlined below) will find different types of transparency useful or important when dealing with these topics. 

First, it’s important to understand the principles of transparency as they apply to the different spheres below. While each area and stakeholder will require different frameworks, there are two broad principles behind digital transparency: 

  1. Transparency must be computational. For an online space to be transparent, it must be possible to observe it computationally. For instance, until February 2023 Twitter’s API allowed for a holistic view of what took place on that platform; without such an API, this ‘public’ platform loses accessibility. No person or organisation can get the whole picture manually, because the scale of the platform overwhelms human capacity.  
  2. Transparency must protect user rights, including data privacy rights, not erode them. A good model for transparency protects individuals’ data privacy while enabling a macro understanding of the nature and scale of technology platforms’ processes and of any potential infringement of rights that stems from the use of the platform. With this principle of transparency, observers can ensure that fundamental rights, such as freedom of expression, the right to equal treatment, rights of the child and access to information, are being protected. (A brief sketch of what this can look like in practice follows this list.) 
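As a purely illustrative example of these two principles working together, the sketch below reduces a handful of made-up public post records, shaped like the output of a hypothetical research API, to aggregate daily counts: the observation is computational rather than manual, and author identifiers are discarded rather than stored. None of the field names refer to any real platform’s interface.

```python
# A minimal sketch of computational, privacy-preserving observation.
# The records and field names below are invented for illustration.
from collections import Counter
from typing import Iterable


def aggregate_daily_volume(posts: Iterable[dict]) -> Counter:
    """Reduce public post records to daily counts, discarding personal data."""
    daily: Counter = Counter()
    for post in posts:
        # Keep only the date component of the timestamp; drop author and text.
        daily[post["created_at"][:10]] += 1
    return daily


if __name__ == "__main__":
    # Illustrative records shaped like the output of a hypothetical research API.
    sample = [
        {"created_at": "2023-05-01T09:15:00Z", "author_id": "u1", "text": "..."},
        {"created_at": "2023-05-01T17:40:00Z", "author_id": "u2", "text": "..."},
        {"created_at": "2023-05-02T08:05:00Z", "author_id": "u3", "text": "..."},
    ]
    for day, count in sorted(aggregate_daily_volume(sample).items()):
        print(day, count)
```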

Once that is understood, we can look at the mutual transparency needs and benefits in each sphere. 

  • Firstly, transparency is vital for the public to understand their rights and responsibilities online, and their two-way relationship with online platforms. These are environments in which we all spend increasing amounts of time, consume information and participate in society and the economy. The public should have transparency in understanding what they are being exposed to, and why. 
  • Governments and political representatives require a transparent evidence base from which to fulfil their obligations. These include enforcing existing laws, drafting new laws, providing democratic oversight, and representing the views of their constituents. In the context of social platforms, democratic governments must equally be transparent in disclosing the basis and reasoning behind their decisions. 
  • Regulators (for example, Ofcom in the UK or the FTC in the US) also require transparency to fulfil their responsibilities. They must have access to company policies, procedures and systems, and an understanding of the underlying technology, to be able to monitor compliance with regulation.  
  • Civil society, academia and the media rely on social media platform transparency and data access to be able to fulfil their mandate of investigating and raising awareness of issues affecting wider society. With transparency and access to data, they can compile comprehensive evidence on the perpetrators, causes and impacts of online harms. Transparency for researchers is also necessary to provide advice to, and independent scrutiny of, the actions of platforms, governments and regulators, as well as to support vulnerable or minority groups. In a similar vein, researchers must also provide transparency on their own methodologies, so that results can be verified and so that the public, government and social platforms can understand them. 

Areas requiring further transparency 

Regarding the future of transparency in the context of social media and online ecosystems, ISD looks at four categories that will benefit from greater transparency standards: content moderation and communication; advertising; complaints and redress; and algorithms and platform architecture.  

Content moderation and communication: Platforms that have become public spaces must make that space as accessible as possible, bringing transparency both to the content available on their platforms and to the decisions to take action against content that is illegal (e.g. illegal hate speech, terrorist content or child sexual abuse material) or that contravenes their own Terms of Service. As public platforms and their users play an increasing role in shaping culture, influencing political decisions and driving societal change, the activities taking place in these spaces should be observable. For illegal content this may mean removal notices for users. For legal but harmful content that breaks platforms’ Terms, this could include labelling of advertisements or potentially misleading material. 

From a researcher’s standpoint, public content should be easily accessible (for example through an API) to allow analysis of live and historical content. Some platforms have provided this access on a voluntary basis (such as Twitter’s API prior to February 2023, when free access was revoked). The European Union’s Digital Services Act (DSA) picks up where voluntary approaches fall short. The DSA obliges all services designated as Very Large Online Platforms (VLOPs) to provide “data publicly accessible in their online interface” to vetted third-party researchers investigating systemic risks on these platforms (DSA Article 40). 
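The sketch below illustrates what such researcher access could look like in practice: a script pages through live and historical public posts returned by a hypothetical research API. The base URL, token, parameters and response fields are assumptions made for illustration; actual access under DSA Article 40 is governed by each platform’s own documented interface and vetting process.

```python
# A sketch of paginated retrieval of public content from a hypothetical
# research API. Endpoint, parameters and response shape are assumptions.
import requests

BASE_URL = "https://platform.example.com/research/v1/public-posts"  # hypothetical
TOKEN = "VETTED_RESEARCHER_TOKEN"                                    # hypothetical


def fetch_public_posts(query: str, start_date: str, end_date: str):
    """Yield public posts matching a query, following pagination cursors."""
    params = {"query": query, "start": start_date, "end": end_date}
    while True:
        resp = requests.get(
            BASE_URL,
            headers={"Authorization": f"Bearer {TOKEN}"},
            params=params,
            timeout=30,
        )
        resp.raise_for_status()
        payload = resp.json()
        yield from payload.get("posts", [])
        cursor = payload.get("next_cursor")
        if not cursor:  # no further historical pages
            break
        params["cursor"] = cursor  # request the next page


if __name__ == "__main__":
    for post in fetch_public_posts("election", "2023-01-01", "2023-06-30"):
        print(post.get("id"), post.get("created_at"))
```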

Additionally, platforms should be transparent about the decision-making processes behind actions taken, or not taken, against content that goes against community guidelines or Terms of Service. The EU’s DSA, and historically Germany’s Network Enforcement Act (NetzDG, which will be replaced by the DSA), require platforms to take faster action on illegal content, but also require greater transparency regarding platform moderation activities. These laws require social media companies to provide regular transparency reports documenting complaints received and decisions taken against manifestly illegal content, such as hate speech, on their platforms. In addition, they require social media companies to publicise requests by law enforcement to block information or content. 

Complaints and appeals: A significant gap exists in the public’s understanding of platforms’ abilities to moderate and respond to abuses of their services. Visibility of complaints made to platforms is essential for ensuring accountability, guaranteeing support for those targeted by online harms and raising awareness of the challenges users face online, as well as for providing evidence in the appeals process. As described above, regular transparency reports, sometimes legally mandated, have sought to fill this gap in public understanding. 

However, transparency reports have often failed to provide meaningful insight into the moderation processes of private companies, thereby limiting the ability of users to appeal or challenge decisions. In fact, the first fine levied under the German NetzDG law targeted Meta (then known as Facebook) for providing incomplete data in its first 2018 transparency report. In response to mounting pressure, Meta created an Oversight Board, an independent body that allows users to appeal Meta’s content decisions by escalating them to board members, whose decisions are binding and shape Meta’s moderation policies going forward. Despite significant weaknesses (such as limited jurisdiction and impact), this novel body can be seen as an experimental approach to enhancing transparency of content moderation and decisions, as well as of complaints and redress.  

Advertising: Advertising (including targeted political advertising) is one of the core products offered by online platforms. These systems have allowed advertisers and campaigners to target chosen audiences with content. It is in the public interest for users to understand how and why they are being targeted by online ads, and for regulators to be able to understand and respond to malpractice. 

Many countries around the world have determined that there should be stronger, more explicit transparency requirements for political advertising than for unpaid or organic public content and communications. Regulatory initiatives, such as those in France, Ireland, Australia, Canada, the US, the UK and the EU, have proposed expanding the existing authorisation requirements for offline political advertising to the online realm. This includes not only requirements for the clear labelling of paid-for content (noting the address of whoever authorised the ad), but also requirements to provide users with information about why they were targeted with the ad.  

To meet these demands, Meta and Twitter have introduced public archives of ads that can be explored and queried by anyone. However, there has been a lack of enforcement across jurisdictions. The shortage of detail provided, and the unreliability of these services during key election phases, have shown the importance of civil society-led initiatives such as the UK-based Who Targets Me or the NYU Ad Observatory in improving online ad transparency. Many of these groups pre-date the official platform ad archives and use browser plug-ins to crowdsource information about where and when ads appear in users’ news feeds. However, the shutting down of the NYU Ad Observatory in 2021, following legal threats from Meta, demonstrates how competing priorities can obscure much-needed transparency, and highlights the problems with voluntary reporting approaches. This is where government-led regulation and enforcement would be helpful. 
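The sketch below illustrates the kind of query a public ad archive can support: searching political ads by keyword and country and returning spend, reach and targeting information. The endpoint and field names are invented and only loosely modelled on existing ad libraries, not on any platform’s actual API.

```python
# A sketch of querying a hypothetical public ad archive. The endpoint,
# parameters and returned fields are illustrative assumptions.
import requests

ARCHIVE_URL = "https://ads.example-platform.com/archive/v1/search"  # hypothetical
API_KEY = "PUBLIC_ARCHIVE_KEY"                                      # hypothetical


def search_political_ads(term: str, country: str, limit: int = 100) -> list[dict]:
    """Return political ads matching a search term, with spend and targeting data."""
    resp = requests.get(
        ARCHIVE_URL,
        params={
            "q": term,
            "country": country,
            "ad_type": "political",
            "fields": "page_name,spend,impressions,targeting,start_date",
            "limit": limit,
            "api_key": API_KEY,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("ads", [])


if __name__ == "__main__":
    for ad in search_political_ads("climate", "GB"):
        print(ad.get("page_name"), ad.get("spend"), ad.get("impressions"))
```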

Algorithmic ranking, choice architecture and affordances (including the design of features, interfaces and rules that facilitate communication and content sharing on a platform): There remain significant concerns that the design of platforms and their algorithmic systems contributes to online risks. These range from evidence of racial and gender bias in search engine results to worries that platforms’ interfaces and design choices incentivise the spread of divisive or misleading content. Central to these concerns is that the algorithms dictating a user’s online experience have led to unintended consequences, and have been challenging to scrutinise or evaluate for those unfamiliar with the internal operations of social media companies. For example, recommendation systems have been criticised for driving users to consume ever more extreme content, potentially facilitating political radicalisation. An ISD policy brief explores this infamous “engagement problem” and other algorithmic issues in more detail, and how they may exacerbate the spread of harmful or ‘borderline’ content while reinforcing biases and discrimination. 

Auditing such algorithms independently can provide transparency when investigating a specific harm or risk, or help ensure compliance with regulatory obligations. Researchers have proposed a range of methods to assess how these systems influence online discourse and user behaviour, each with its own advantages and drawbacks; an analysis of these methodologies can be found in the aforementioned policy brief. 
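As one illustration of what an independent audit can involve, the sketch below simulates a ‘recommendation trail’ study of the kind discussed in the research literature: starting from seed content, it follows successive recommendations and measures how often items labelled ‘borderline’ appear at each step. The recommender here is a random stub, so the output is meaningless; in a real audit it would be replaced by recommendations observed on the platform under study, for example via research accounts or a data-access API.

```python
# A sketch of a recommendation-trail audit with a stub recommender.
# The catalogue, labels and recommender are invented for illustration.
import random

CATALOGUE = (
    [{"id": f"mainstream-{i}", "label": "mainstream"} for i in range(50)]
    + [{"id": f"borderline-{i}", "label": "borderline"} for i in range(10)]
)


def stub_recommend(current_item: dict, k: int = 5) -> list[dict]:
    """Stand-in for a platform recommender: returns k pseudo-random items."""
    return random.sample(CATALOGUE, k)


def audit_recommendation_trails(steps: int = 10, trails: int = 100) -> list[float]:
    """Average share of 'borderline' recommendations at each step of a trail."""
    borderline_share = [0.0] * steps
    for _ in range(trails):
        item = random.choice(CATALOGUE)
        for step in range(steps):
            recs = stub_recommend(item)
            borderline_share[step] += sum(r["label"] == "borderline" for r in recs) / len(recs)
            item = recs[0]  # follow the top recommendation, as a user might
    return [share / trails for share in borderline_share]


if __name__ == "__main__":
    for step, share in enumerate(audit_recommendation_trails(), start=1):
        print(f"step {step}: {share:.1%} borderline recommendations")
```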

Balancing transparency 

Privacy rights  

Transparency must complement rights to data privacy, not erode them, and access to some transparency data may be contentious. Although ISD believes that it is in the public’s interest to err on the side of greater transparency, there still ought to be exceptions to the types of data and access available to the public or researchers. A ‘tiered’ access structure, by which a regulator, or institutions accredited by a regulator or another body, have increased access to transparency data, is therefore advisable, as it takes data protection and privacy expectations into account. However, the starting point should be public access.  

Ultimately, data access for public interest researchers should enhance platform accountability and democratic decision-making, and ensure regulatory interventions are fit for purpose, proportionate and do not set precedents that could threaten fundamental rights of privacy and freedom of expression. For further details on data access and related challenges to social media regulation, see ISD’s explainer. 

Government transparency requirements 

Linked to the protection of privacy rights, governments and regulators themselves should practise transparency in the policymaking process and in the enforcement of regulation, in a way that builds trust and enables accountability. 

Especially in contexts with weak rule of law, social media regulation has the potential to be abused by governments, threaten users’ speech rights, or violate privacy through government surveillance. To avoid this, platform transparency reports should include information about policies, laws, or regulations of the relevant jurisdiction that compel platforms to take down content. To date, this is not always covered in transparency reports. 

Similarly, users who have had content taken down should be informed if the action was due to a government request. Information on government removal or takedown requests (such as their frequency and type) should also be made available to vetted, third-party researchers, so that the reasons for removals can be studied and better understood (for example, to scrutinise and further research government anti-terrorism policies). 

Informal but influential relationships between platforms and governments raise another challenge in this field, as in some contexts governments may use their influence to go beyond formal takedown requests. The Center for Democracy & Technology’s recent report provides a more detailed framework covering practices that affect users’ speech, access to information and privacy from government surveillance, and how transparency can mitigate these risks. 

Transparency legislation: where do we stand now? 

As online terrorism and extremism, foreign information manipulation and interference, and hate speech continue to proliferate, governments around the world are increasingly seeking to effectively address these issues through regulation. However, the current state of transparency in social media regulation varies between regions. While some governments have taken steps to enhance transparency, further efforts are required to foster greater accountability, promote public trust, and mitigate the adverse consequences of online harms.  

The EU has taken significant steps towards transparency in social media regulation through its General Data Protection Regulation (GDPR), and the DSA alongside the voluntary Strengthened Code of Practice on Disinformation (CoPD). The GDPR empowers individuals to control their personal data (for example, through the right to object to personal data processing) and ensures that companies provide transparent information about data processing practices. Overall, the DSA’s transparency provisions aim to bring greater clarity and accountability to social media platforms, ensuring that users have a clearer understanding of how these platforms operate, handle complaints and engage in advertising practices. By enhancing transparency, the DSA aims to foster trust and enable users to make informed choices while mitigating the risks associated with online harms. As of the time of writing in June 2023, the DSA’s implementation and drafting of some delegated acts is still underway. It remains to be seen whether the enforcement of this act will translate to real improvements in the transparency of social media platforms in the region. 

The DSA includes many provisions that seek to enhance transparency and accountability in the operations of these platforms. Some key ones include: 

  1. Transparency reporting obligations (Article 15): mandates that VLOPs provide regular (at least yearly) transparency reports, including information on their content moderation policies and practices, how they address illegal content and disinformation, and their algorithmic processes. These reports should offer insights into platforms’ efforts to combat online harms, allowing for greater public scrutiny and assessment of their performance. 
  2. Notice and action mechanism (Article 16): requires social media platforms to establish user-friendly complaint mechanisms for users to report illegal content. Platforms will be required to acknowledge receipt of complaints within a defined timeframe and provide regular updates on the progress of the complaint. This process increases transparency for the user/citizen by ensuring that users have visibility into how their reports are being handled and the actions taken by the platform (a schematic sketch of this flow follows this list). 
  3. Independent audits (Article 37): VLOPs will be required to undergo annual audits conducted by independent third parties. These audits will assess platforms’ compliance with the DSA’s obligations, including transparency requirements. Independent audits contribute to transparency by providing an external evaluation of platforms’ practices, ensuring accountability and fostering trust among users and stakeholders. 
  4. Regulatory oversight by Digital Services Coordinators (DSCs) (Article 51): The DSA establishes a regulatory framework for DSCs, the national authorities responsible for overseeing compliance with the DSA. DSCs will have the power to request information from platforms, including data on their algorithms and advertising practices. This oversight mechanism enhances transparency by enabling authorities to gain insights into platforms’ operations and hold them accountable for their actions. 
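As a schematic illustration of the notice-and-action flow referred to in point 2 above, the sketch below tracks a user’s notice from acknowledgement to decision. It is not based on the DSA’s actual technical requirements; the class and field names are invented for illustration.

```python
# A schematic sketch of a notice-and-action queue: acknowledge a notice,
# track its status and record the decision. Names are illustrative only.
import uuid
from dataclasses import dataclass, field


@dataclass
class Notice:
    reporter: str
    content_url: str
    reason: str
    reference: str = field(default_factory=lambda: uuid.uuid4().hex[:8])
    status: str = "received"
    decision: str | None = None


class NoticeAndActionQueue:
    def __init__(self) -> None:
        self._notices: dict[str, Notice] = {}

    def submit(self, reporter: str, content_url: str, reason: str) -> str:
        """Acknowledge receipt and return a reference the reporter can track."""
        notice = Notice(reporter, content_url, reason)
        self._notices[notice.reference] = notice
        return notice.reference

    def decide(self, reference: str, decision: str) -> None:
        """Record the platform's decision so the reporter can be informed."""
        notice = self._notices[reference]
        notice.status = "decided"
        notice.decision = decision

    def status(self, reference: str) -> tuple[str, str | None]:
        notice = self._notices[reference]
        return notice.status, notice.decision


if __name__ == "__main__":
    queue = NoticeAndActionQueue()
    ref = queue.submit("user@example.org", "https://platform.example/post/123",
                       "suspected illegal hate speech")
    print("acknowledged:", ref, queue.status(ref))
    queue.decide(ref, "content removed; statement of reasons sent to uploader")
    print("updated:", queue.status(ref))
```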

Given the hesitancy in the US to regulate social media platforms, transparency efforts have mostly come from platforms themselves as a result of public pressure, such as the Meta Ad Library and the Twitter Transparency Centre. Legislative initiatives, such as the Honest Ads Act (introduced in 2017, later incorporated into the For the People Act), have sparked important discussions and raised awareness around the importance of transparency in political advertising. However, as of 2023, it remains merely a proposal.  

Other approaches include the US Federal Trade Commission’s (FTC) Tech Task Force, launched in 2019, which is dedicated to monitoring and investigating anti-competitive practices, including those related to online platforms. While the primary focus of the task force is not transparency, it plays a crucial role in enforcing existing laws and regulations to ensure fair competition and address potential privacy and data protection concerns. These initiatives demonstrate some progress in enhancing transparency in the digital sphere. 

_________________________________________________________________________________

This Explainer was uploaded on 21 July 2023. 
