Plenty of oversight, little execution: Recent Oversight Board rulings reveal that Meta continually fails its users 

By: Clara Martiny and Ellen Jacobs 

21 February 2024 


“We’re sorry you’re having a bad experience on Facebook, and we want to help,” states Meta’s ‘Report inappropriate or abusive things on Facebook’ help article. “If you want to report something that goes against our Community Standards, […] Use the Find Support or Report link to report it to us.” 

Image 1. Screenshot of Meta’s help article for users to report ‘inappropriate or abusive’ content.

In April and May of 2023, 11 users followed Meta’s suggestion and used Facebook’s Report link to flag an abusive post advocating suicide among transgender people, filing a total of 12 reports. Meta closed most of the reports, sending only two for human review, and the post stayed up after a back and forth between the users and the reviewers. It was only after Meta’s Oversight Board selected the case in September 2023 that the platform removed the post and disabled the account of the user who posted it.  

On 16 January 2024, the Oversight Board overturned Meta’s decision to leave up the post, stating it violated Meta’s Hate Speech and Suicide and Self-Injury Community Standards. The Oversight Board also emphasized that “the fundamental issue in this case is not with the policies, but their enforcement.” The Board concluded that “Meta’s repeated failure to take the correct enforcement action, despite multiple signals about the post’s harmful content, leads the Board to conclude the company is not living up to the ideals it has articulated on LGBTQIA+ safety.” 

The Oversight Board’s conclusion is not particularly surprising: while Meta may have policies in place to protect its LGBTQ+ users, ISD has previously documented how these policies are often poorly enforced, exposing LGBTQ+ users to potential offline harms. GLAAD’s 2023 Social Media Safety Index (SMSI) likewise documented persistent gaps in Meta’s policies, disparities that most notably affect transgender, non-binary and gender non-conforming users.  

In ISD’s public comment for the case, we suggested that Meta clarify both its Hate Speech and Suicide and Self-Injury policies and invest in content moderation systems that can catch extremist or hateful ideologies spread through more implicit means, including by training its policy teams to be responsive to emerging trends that target vulnerable groups of people. This case also reflects a problem shared by many major social media platforms, one ISD has extensively highlighted in past research: platforms do not have adequate resources in place for policy enforcement outside of English-language content. In this case, the original post was in Polish, and Board-commissioned linguistic experts concluded the phrase used was a “veiled transphobic slur” – a slur Meta’s human reviewers failed to catch.   

Meta has long been slipping in policy enforcement, as recent Oversight Board rulings confirm. In another ruling, about an Instagram post pushing Holocaust denial (which Meta decided not to remove), the Board revealed that Meta was still applying its COVID-19 automation policies to its enforcement practices. According to Meta, these policies “auto-closed review jobs” to reduce the burden on human reviewers. Although such measures may have utility during emergencies, they cannot replace human review when it comes to detecting nuanced and culturally contextual hate speech, as the case above shows. Meta has continued to rely on these outdated measures knowing that they are failing to prevent and curb hate speech on its platforms. 

By not prioritizing effective and accurate enforcement systems, especially for hate speech, Meta is failing its users. It is concerning that the platform is not taking active steps to keep its policies and enforcement practices efficient and up to date. Most of Meta’s recent responses to Oversight Board rulings indicate the company is still “in progress” on implementing (partially or fully) the Board’s recommendations. Yet in a turbulent political year, with national elections expected in at least 64 countries, prioritizing a safe platform with reliable information is critical.  

Hate speech policy and enforcement is not the only area where Meta is falling behind. Another recent Oversight Board case sought to address Meta’s Manipulated Media policy, which, unlike Meta’s other more comprehensive policies, is limited to one bullet point. The policy advises users not to post videos that have been edited in ways that are “not apparent to an average person,” would “likely mislead” a viewer into believing a subject of the video “said words they did not say,” and are the “product of artificial intelligence (AI).” The case focused on an altered video of US President Joe Biden: a Facebook user posted a clip edited to show Biden seemingly “inappropriately touching his adult granddaughter’s chest,” with a caption describing Biden as a “pedophile.”  

Meta decided to leave the video up and, ultimately, the Board upheld the platform’s decision because the policy “applies only to video created through artificial intelligence (AI) and only to content showing people saying things they did not say.” The Board, however, expressed concern about Meta’s current policies, which it deemed “incoherent, lacking in persuasive justification, and inappropriately focused on how content has been created, rather than on which specific harms it aims to prevent.”  

This is a concern ISD flagged in our own public comment, where we stated that Meta’s Manipulated Media policy does not do enough to address emerging online trends involving not only AI-generated deepfakes but also conventional, deceptively edited videos, which can be just as damaging. Additionally, Meta’s focus on the manipulation of speech, without consideration for the manipulation of actions, allows a lot of content to slip through the cracks of its moderation systems, potentially amplifying harmful narratives. These gaps in Meta’s policy and enforcement do not bode well for 2024, despite Meta’s stated efforts to prepare for a major election year. 

It is no secret that the US is far behind the EU and UK in passing regulation that requires platforms to provide more transparency. The establishment of the Oversight Board has been an interesting experiment in content moderation and independent platform governance. However, even at its establishment in 2020, it was clear that the Oversight Board was insufficiently independent and inherently flawed. An opaque Oversight Board, originally staffed with members selected by Meta, with a very limited purview and no ability to make binding decisions beyond whether an individual post can remain up, was never going to provide an avenue for meaningful changes to Meta’s practices. Much like the company’s announcement last month of over 30 tools aimed at better protecting kids online, conveniently shared ahead of the Senate Judiciary Committee’s hearing, the Oversight Board seems to be another part of Meta’s comprehensive self-marketing playbook.  

However, the Board has provided a practical venue for centralizing comments from civil society organizations and has made good recommendations in its rulings. The Board’s decisions often reflect concerns from submitted comments, showing that it recognizes, and often agrees with, external stakeholders (although under the recently published procedure for expedited decisions, the Board will not review public comments). The Board’s justifications for investigations and subsequent rulings also help illustrate the difficulty and complexity of content moderation on social media platforms. Venues like the Oversight Board bring additional perspectives and expertise to content moderation and policy decisions. This offers a useful model for how companies could inform their efforts to comply with regulation or make difficult content moderation choices in the future – but it also makes the case for why such a body cannot be the sole venue. 

The Oversight Board, despite its flaws, has also been useful in adding to the evidentiary basis for regulation. The cases above plainly show that Meta’s policies are inadequate, and others point to Meta’s inconsistent adherence to its own policies. This, coupled with Meta’s long history of adopting only a limited number of the Board’s recommendations, shows that the company has continually failed to protect its users. It also provides another proof point that social media companies will not reliably work with policymakers, whether appointed by the companies themselves or otherwise, even when they promise to. 

The Oversight Board is one of many examples that show why regulation from lawmakers is needed to successfully protect people online. Social media companies have consistently failed to protect their users from hate speech and other violative content, to invest in adequate content moderation and trust and safety resources and staff, and to update their policies to mitigate harms from emerging technologies. Without transparency and accountability mechanisms to ensure that platforms create and properly enforce comprehensive and effective policies, the harms outlined in the Oversight Board investigations above will only continue. And if so, the toothless and self-appointed Board will remain the sole venue for recourse – and who is that helping? 
