Executive Summary
Extremist organisations are rapidly evolving their communications and propaganda strategies, harnessing modern encryption tools and artificial intelligence to enhance recruitment, concealment and influence. Groups such as ISIS and Al-Qaeda have systematically adopted encrypted messaging apps, notably Telegram and Signal, to secure their communications and evade intelligence detection. At the same time, these groups exploit artificial intelligence capabilities, including conversational agents and deepfake technologies, to produce propaganda carefully tailored to targeted audiences and to recruit individuals with greater sophistication and secrecy. The fusion of encryption and artificial intelligence has altered the character of the threat and reshaped the risk environment, reducing the effectiveness of traditional surveillance and monitoring tools.
Investigations indicate that some terrorist operations were coordinated and executed without leaving an actionable intelligence trail, as seen in ISIS-linked attacks in Berlin and Istanbul. The challenge is heightened by the fact that artificial intelligence enables these organisations to produce persuasive audio and video material at low cost. In the Gulf, risks increase as these methods migrate to local cells, especially amid rising technological reliance and an expanding, unsecured cyberspace. Despite ongoing efforts, gaps in legislation and technical capabilities persist and hinder an effective response to this threat.
This paper recommends that Gulf states adopt an integrated package of measures that includes updating legal frameworks to permit controlled, judicially supervised access to encrypted communications, investing in counter-AI tools to detect fabricated content and digital extremism, strengthening digital forensic and cyber intelligence capacities, and building sustained partnerships with technology firms to take proactive measures against terrorist activity online. The paper also emphasises the need to establish and develop Gulf-based centres specialised in cybersecurity and digital intelligence, so that they can track technical developments in closed environments such as the dark web and operate within a coordinated Gulf and international system capable of closing gaps and neutralising threats. Confronting organisations that are becoming more sophisticated and clandestine is no longer only a legal or security task; it has become a technological race against a non-traditional adversary that masters the tools of the future.
Introduction
From Al-Qaeda to the Islamic State, armed groups have come to recognise the digital domain as a new front in their struggle. After the attacks of 11 September 2001, Al-Qaeda turned to more secretive communications tools, a trend that culminated in the launch of the bespoke encryption programme “Mujahideen Secrets” in 2007. With the rapid development of communications and encryption technologies, newer extremist organisations, including ISIS, adopted social media platforms and encrypted applications to broadcast messages, disseminate propaganda and coordinate operations. This shift created a novel threat landscape in which encryption and artificial intelligence are central elements: encryption ensures the confidentiality of communications, while artificial intelligence supplies powerful instruments for manipulating information and extending influence.
This report seeks to understand how armed groups employ these advanced technologies, to map the principal tools and tactics currently in use, and to anticipate future trends in violent extremists' exploitation of technology. The report reviews the main technological instruments that serve militant activity, focusing on encryption and artificial intelligence, and provides concrete examples from ISIS and Al-Qaeda. It then examines the intersections between civilian technology and terrorist use, the challenges confronting monitoring and countermeasures, and the implications of these developments for Gulf security. The report concludes with an overview of regional and international responses and with recommendations for policymakers.
Technology in the Service of Terrorism
Extremist organisations rely on messaging applications that provide end-to-end encryption for their communications. Notable examples are Signal and Telegram. Telegram was among the earliest mainstream platforms to combine encryption with features such as public channels and programmable bots, which made it particularly attractive to jihadists. Signal is valued for the strength of its encryption and the difficulty of intercepting its content. Lesser-known applications, including the Swiss service Threema and the Matrix-based client Element, have also been adopted by some extremist networks.
It has become routine for inquiries into major terrorist incidents to reveal the perpetrators' use of encrypted channels. In the Paris attacks of November 2015, which killed 130 people, investigators found that Telegram had been installed on an attacker's phone only hours before the operation, yet they were unable to extract any content, suggesting the perpetrators successfully used self-deleting messages to conceal their communications. Similarly, German and Turkish intelligence services reported that the perpetrators of the 2016 Berlin attack and the 2017 Reina nightclub attack in Istanbul received instructions from ISIS leadership in Syria via encrypted Telegram messages shortly before carrying out the operations. These examples demonstrate encryption's effectiveness as a means of evading oversight, depriving security services of readable content even when devices are seized.
End-to-end encryption rests on a straightforward but security-critical principle: a message is encrypted on the sender's device and decrypted only on the intended recipient's device. As a consequence, the message traverses networks as unreadable ciphertext, inaccessible to intermediaries such as service providers or government agencies, and decrypting it requires a secret key held only at one of the endpoints. The developers of some messaging platforms have signalled their confidence in the cryptographic strength of their systems by publicly offering bounties to anyone able to break their encryption.
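To make the principle concrete, the short Python sketch below uses the open-source PyNaCl library purely as an illustration; production messengers such as Signal layer additional protections, including forward secrecy, on top of this basic pattern. The message is encrypted on the sender's device using the recipient's public key, crosses the network as opaque ciphertext, and can be decrypted only with the recipient's private key.

    from nacl.public import PrivateKey, Box

    # Each endpoint generates its own key pair; the private keys never leave the devices.
    sender_key = PrivateKey.generate()
    recipient_key = PrivateKey.generate()

    # The sender encrypts with their private key and the recipient's public key.
    sender_box = Box(sender_key, recipient_key.public_key)
    ciphertext = sender_box.encrypt(b"example message")

    # In transit, relays and service providers see only opaque ciphertext.
    print(ciphertext.hex())

    # Only the recipient, holding the matching private key, can recover the plaintext.
    recipient_box = Box(recipient_key, sender_key.public_key)
    print(recipient_box.decrypt(ciphertext))

Any intermediary that captures the ciphertext, whether a platform operator or an intercepting agency, cannot reverse it without one of the private keys, which is precisely what deprives investigators of readable content even when traffic is recorded or devices are monitored in transit.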
Artificial Intelligence: The New Recruitment Tool
Artificial intelligence has introduced an unprecedented level of effectiveness and deception into extremist propaganda and recruitment. In March 2024, ISIS supporters launched what they called the “Harvest News Bulletin,” a weekly propaganda programme that featured anchors who appeared realistic but were synthetic. These presenters wore military attire and narrated ISIS operations as if broadcasting on an ordinary international news channel. Researchers at SITE Intelligence Group described artificial intelligence as a game changer for ISIS because it enables the group to disseminate violent messaging rapidly and broadly.
There are growing concerns that AI-generated videos could be used to impersonate political or religious figures, thereby manipulating audiences, steering behaviour or inciting violence.
Chatbots. There are indications that extremist groups are interested in exploiting conversational AI models for recruitment and engagement with sympathisers. In early 2023, a media outlet affiliated with Al-Qaeda organised online workshops on artificial intelligence and published a 50-page guide titled “Amazing Ways to Use Chatbots,” which included a simplified explanation of ChatGPT translated into Arabic, along with examples of how the tool could be exploited. In principle, an advanced chatbot can sustain extended, personalised interactions with thousands of young people simultaneously, offering scriptural citations and tailored arguments that gradually harden convictions without direct human intervention. Chatbots could also be used for internal vetting and security functions within an organisation. Concrete evidence of such use in the field remains limited, and much of the threat is still theoretical, but the interest displayed by groups such as ISIS and Al-Qaeda suggests these tools may become part of their operational toolkits in the near term.
Big data analytics. Artificial intelligence provides powerful analytical capabilities that extremist organisations could harness for operational planning. With vast quantities of open data available on social networks, online mapping services and leaked databases, machine-learning algorithms can extract patterns that are useful to terrorists. Analytical software can track the habits and movements of individuals or target facilities, aiding the selection of optimal targets and the timing of attacks. Organisations could use predictive analytics to synchronise propaganda releases or military operations with periods of peak audience attention. Experts note that AI is already being used to personalise extremist messaging for specific demographic segments, with algorithms sifting through data to determine the most effective messages and methods for each group. These capabilities also have military utility: advanced programmes can analyse satellite imagery and traffic data to identify crowded moments or security gaps suitable for an attack. Although public documentation of specific cases in which terrorist groups have applied big data analytics is scarce, security professionals warn that the potential for AI exploitation by these groups remains significant.
