“Get them out, keep them out”: How social media fuelled violent attacks against migrants in Northern Ireland
20 June
By: Aoife Gallagher and Ciarán O’Connor
Recent unrest in Northern Ireland, marked by violence and the targeting of minority communities, was sparked after two Romanian teenagers were charged in connection with an attack on a teenage girl in Northern Ireland. Online platforms failed to curb the spread of hate speech, incitement to violence and doxxing that followed the allegations, likely contravening both platform policies and platforms’ obligations under the UK’s Online Safety Act (OSA). The case also reflects a growing trend, as seen in the riots following the Southport stabbing in the UK in 2024 and those that took place in Dublin in 2023, of coordinated online activity quickly translating into real-world violence. In this Dispatch, ISD analysts outline the online and offline interplay that facilitated this violence and consider the coordinated responses necessary to prevent such episodes in the future.
Introduction
For eight consecutive nights in June, several towns across Northern Ireland were engulfed in protests and violent unrest, with Ballymena in County Antrim being the epicentre. The violence was triggered by an alleged sexual assault on a teenage girl in the town on 7 June.
After two Romanian teenagers were charged in connection with the attack (which they deny), a broad coalition of Northern Irish Loyalists, anti-migrant nationalist agitators from south of the border and international far-right actors mobilised quickly to shape the public narrative and response online. Focusing almost exclusively on the alleged attackers’ ethnicity and nationality, these groups exploited fear and outrage felt over the crime to incite violence and hatred against migrant communities (in particular Roma and Romanians perceived to be Roma).
Hundreds of masked rioters targeted and burnt homes and properties belonging to immigrants, set up blockades, and threw petrol bombs, bricks and fireworks at police. As with both the November 2023 riots in Dublin and the July–August 2024 riots in Southport (which also spread to Belfast), social media was critical to the organisation of this violence. While mis- and disinformation was not as prevalent in the immediate aftermath of recent events in Northern Ireland as in these previous riots, social media platforms again failed to act on egregious examples of incitement to hatred and violence which fuelled the unrest.
Online mobilisation across platforms
On Facebook, a page called ‘Ballymena Reaction Group’, set up in November 2023 in reaction to what it described as a “Roma gang master [sic] establishing themselves in our town”, kicked into action after 18 months of inactivity. The page’s only previous post, from November 2023, shared details of a man it termed a “Romanian pervert” who had allegedly been “rehoused” in the town.
“It’s time we were back on the streets”, the page posted on 9 June to its 6.7k followers. “Call us racist if you want, it won’t deter us.”
Over the subsequent days and nights, the page shared posts celebrating migrant houses being destroyed and families driven from the area. “Well done ballymena [sic] burn the lot of the c*nts out”, said one comment from a follower of the page.

Image 1: Some of the comments posted on Facebook inciting violence against migrant and Roma communities.
Other posts listed the addresses of “local families” who shouldn’t be targeted, while advising people on how to identify homes of members of the Roma community. One comment said, “If yous [sic] check in with a local resident of queen street (name can be supplied) they can give you all house numbers of the romas [sic] living on street.” Another advised locals to mark the doors of Roma families with an X. A post made by the page on 11 June declared “We are at war…with Roma gangs” while advising the “young soldiers protecting our towns” to be “specific” in their attacks so they can “remove this scum from our streets.”
On 9 June, a third man was arrested in connection with the attack and unconditionally released. His home was attacked by rioters, forcing him and his wife to go into hiding while his mother took their daughters back to Romania for safety. Following this, details of his family members’ addresses were shared on Facebook along with a caption reading: “Don’t poke the bear [name withheld by ISD], your [sic] a wannabee gangster that can easily be found. This isn’t a threat, just advice from some old adversaries.”
As the violence grew and families were forced to flee their homes and seek shelter elsewhere, details of their movements were also shared online. On the afternoon of 11 June, Gordon Lyons, DUP Minister for Communities of Northern Ireland, posted on Facebook that “a number of individuals were temporarily moved to Larne Leisure Centre” after the disturbances in Ballymena. This information was also posted elsewhere online, including on Facebook by a local news organisation. That night, Larne Leisure Centre was targeted and set ablaze.

Image 2: Facebook post by DUP Minister Gordon Lyons sharing the location of migrant families fleeing violence.
Another Facebook page set up to organise protests in Larne shared information about the whereabouts of families seeking protection. This included posting photos of unidentified people the page regarded as suspicious, as well as locations where presumed members of the Roma community were sighted. One such post showed a photo of four men and was shared with a caption claiming the page “cant confirm they are romas [sic] but this is to make people aware incase [sic] it is.” On 16 June, it was reported that a man had been arrested and charged with publishing material on this page that was “capable of encouraging or assisting others to commit riot, criminal damage or affray.”
Much of the Facebook content cited in this report violates several of Meta’s Community Standards, including those prohibiting hateful content and content that incites violence or facilitates harm against individuals or groups. Additionally, because these pages published the identities and home addresses of individuals and groups at risk of harm (a practice known as doxxing), they also violated the platform’s policies on coordinating harm and promoting crime, as well as its privacy rules.
On TikTok, livestreams documenting the violence from the front lines of the riots clocked up hundreds of thousands of views; some earned money through the platform’s monetised Gifts feature. One such account, belonging to a local individual who had previously shared racist memes and inciteful content, claimed he would not be livestreaming were it not for the “likes, shares, comments, gifts and follows.” Livestreams of the riots on YouTube, including one broadcast by a former member of the Ulster Volunteer Force, were found to promote hatred against Muslims and migrants. These videos, which clocked up tens of thousands of views, likely violated YouTube’s hate speech policies and, potentially, its policies on violent extremism or criminal organisations.
Footage of the disorder was shared by far-right figures in both the Republic of Ireland and Great Britain. Clips showing houses being targeted, buildings on fire and migrants pleading for clemency were amplified, while the violent actions of locals were justified and endorsed by others who shared their anti-migrant hostility.
Platform failures and regulatory gaps
This cycle of violence fuelled by social media is alarmingly familiar and even predictable for those who monitor online extremist movements. Yet the social media platforms that enable this activity are still failing to take appropriate action. While many of the posts and comments referenced in this report have since been removed from platforms, this only happened days after the violence had taken place. It is unclear whether this was the result of intervention by the platforms or whether the account owners removed the content themselves.
Meta’s rollback of content moderation policies, including ending third-party fact checking and narrowing enforcement to only the most extreme violations, risks creating an online environment where hateful and inciteful content can circulate more freely. Similarly, YouTube’s recent loosening of moderation guidelines also has the potential to allow harmful content to gain traction before being addressed.
By letting such content spread unchecked, platforms are failing to meet their obligations under the UK’s OSA, which mandates that platforms assess and mitigate the risks of illegal content (including incitement to violence) and act swiftly to remove such material. Sections 9 and 26 of the Act impose duties on service providers to carry out a “suitable and sufficient” illegal content risk assessment, considering the likelihood and impact of users encountering content that incites hatred or violence. These duties are complemented by sections 10 and 27, which require providers to take proportionate steps to mitigate these risks and swiftly remove illegal content once it is identified. Despite these legal obligations, there is ongoing evidence that some platforms are not adequately addressing content that incites violence or hatred, as the events described here demonstrate.
The recent escalation in Northern Ireland again underscores the need for robust, locally informed moderation systems capable of rapidly addressing emerging threats and coordinated acts of violence. This is especially acute for livestreamed content, where platforms must evaluate and mitigate the risks associated with real-time broadcasting under the OSA. Given this, platforms should devote additional resources to monitoring and moderation in response to emerging unrest and disorder. In this case, platforms appear to have fallen short in enforcing their safety obligations, failing to act against live broadcasts containing hate and incitement and allowing harmful content to be monetised through platform incentive mechanisms. Such failures raise concerns about compliance with the duties under sections 9 and 10 to understand and act on risks.
Conclusion
The events in Ballymena and elsewhere make it clear that online incitement can rapidly escalate into targeted violence when left unchecked. Social media platforms were used not only to circulate inflammatory narratives and livestream attacks, but to disclose the locations of fleeing families, activity that directly preceded physical violence.
Beyond digital regulation, the violence in Ballymena underscores the urgent need for a coordinated government response bridging online and offline risk assessment and monitoring. In the UK, bodies such as the National Security and Online Information Team within the Department for Science, Innovation and Technology (DSIT) should play a key role in coordinating real-time responses from platforms, law enforcement, regulators and researchers during such incidents, enabling faster disruption of the digital activity driving offline harm.