Universiteit Leiden


Innovating terrorism with AI

On 7 and 8 November 2024, the International Centre for Counter-Terrorism (ICCT), the Institute of Security & Global Affairs at Leiden University, and The Netherlands’ National Coordinator for Security and Counter Terrorism (NCTV) partnered to host a ‘Blue-Sky’ Terrorist Exploitation of Artificial Intelligence (AI) Workshop in The Hague.

During this closed-door event, 26 terrorism, national security, counterterrorism, and technology experts from academia, industry, law enforcement, and policy-making, representing eleven countries, came together to discuss and hypothesise how terrorists and violent extremists could adapt to the rise of AI and exploit these technologies. The two-day workshop was facilitated by Dr. Joana Cook (ICCT, Leiden University), Dr. Barbara Molas (ICCT), and Dr. Graig R. Klein (Leiden University).

A unique collaborative approach to emerging threats

The workshop took a distinctive approach that prompted participants to think in a ‘blue-sky’ manner. Blue-sky thinking refers to the process of generating creative, imaginative, and out-of-the-box ideas without any limitations or constraints. In short, it is a brainstorming format in which participants are encouraged to think freely about a topic and to explore, anticipate, and create potential scenarios. This proactive approach aimed to overcome the common critique that (counter)terrorism studies are too reactive by enabling participants to creatively anticipate emerging threats.

In preparation for the event, participants were invited to read an introductory concept note, which provided an overview of how terrorists have already exploited AI and ensured a common understanding of key concepts in the field of AI and terrorism. The participants were then divided into three groups, each consisting of a varied mix of experts from different sub-fields and professions relating to terrorism and AI, along with one workshop organiser to moderate the discussions. Each group then independently participated in a series of five roundtable discussions, focusing on specific aspects of potential terrorist adaptation to and exploitation of AI.

Exploring operational uses in terrorism

Day one started with participants discussing the potential use of AI for operational purposes. They explored the capabilities of AI to enhance and enable weapon production, as well as the risks of decentralised AI providing instructions outside content moderation. This naturally led the experts to discuss current legal frameworks and new opportunities to counter AI-enhanced weapons. The second session focused on the misuse of decentralised and open-source AI. Participants discussed the increasing decentralisation of large language models and its impact on content creation, radicalisation processes, and civilian interactions with extremist chatbots.

Generative technology and state-sponsored support

On the second day, session three explored the lifecycle and trajectory of generative AI extremist online content. Participants compared decentralised, easy-access, and centralised AI to ‘traditional’ online extremist content. Central to this topic were questions on how AI could enable or more easily facilitate cross-platform migration and increase the general staying power of extremist online content. In the fourth session, participants considered how emerging AI technologies could further enable states to provide support to non-state actors, including terrorist and violent extremist groups. Participants also discussed how these groups could adapt their tactics when provided with AI products, weapons, and tools by state sponsors. Key topics during this debate included the ways in which AI could assist target identification, enhance the technical capabilities of groups, and affect communication between state sponsors and non-state groups.

Thinking beyond the limits: Independent AI scenarios

The final round of discussions encouraged participants to push the bounds of their creativity and truly think outside the box by considering scenarios in which AI could become a fully independent entity. The varied debates explored how AI could be used to automate the creation and dissemination of extremist content while circumventing current legal and content moderation frameworks. Moreover, participants were asked to assess the risks of AI becoming independent to the point at which it functions as an uncontrollable weapon.

Key insights and future collaboration

After each session, participants were asked to complete a worksheet in which they identified potential aims, methods, perpetrators, means, contexts, circumstances, and responses for future exploitation of AI by terrorists. Drawing on the five discussions and these worksheets, each workshop moderator presented an overview of their group’s perspective on how terrorists are most likely to adapt to and exploit AI-driven technologies in a final roundtable.

Ultimately, the workshop yielded valuable insights, threat assessments, and both hopeful and dystopian scenarios encompassing multiple aspects of terrorists’ and violent extremists’ exploitation of AI. As a result of its open-minded ‘blue-sky’ approach, participants were able to consider a much broader range of topics and scenarios, providing numerous potential avenues for future investigation and collaboration. The diverse input from experts across the fields related to terrorist exploitation of AI underscored both the need for and interest in this type of collaboration, demonstrating the success of this two-day event.
