ISD spoke to the BBC about the urgent need for legislation to keep up with online terrorist threats as generative AI becomes more accessible. This comes after an experiment by a UK independent terrorism legislation reviewer revealed that AI chatbots are mimicking extremist responses and even attempting to recruit individuals online.
The BBC reports that Jonathan Hall KC, the independent terrorism legislation reviewer for the UK, ran an experiment on Character.ai, a platform where users can have conversations with AI-generated chatbots created by others. In his experiment, he came across chatbots designed to mimic the responses of extremists and terrorist groups, including one that attempted to recruit him and claimed to be “a senior leader of Islamic State.” Hall was even able to create his own “Osama bin Laden” chatbot, which was then quickly deleted.
The reviewer explained that no crime was committed under current UK legislation since the messages were not generated by a human. Instead, he says, new legislation should hold chatbot creators and the websites that host them accountable.
The results of this experiment highlight the “clear need for legislation to keep up with the constantly shifting landscape of online terrorist threats,” ISD told the BBC, adding that the UK’s Online Safety Act, although a step in the right direction, “is primarily geared towards managing risks posed by social media platforms” rather than AI.
“If AI companies cannot demonstrate that [they] have invested sufficiently in ensuring that their products are safe, then the government should urgently consider new AI-specific legislation,” ISD said.
While ISD’s monitoring has shown that the use of generative AI by extremist organisations remains relatively limited, it is nonetheless a subject of concern.
The full article is available on the BBC website.