Situation, Facts and Events
25.08.2024
American experts on the Islamic State's use of modern information technologies to attract and radicalize new supporters
With the recent news that jihadists inspired by Islamic State (ISIS) were planning to conduct a terrorist attack against a Taylor Swift concert in Vienna, Austria, it has also come to light that some of the perpetrators involved in the plot radicalized online.
While it remains unknown whether the radicalization occurred via TikTok or on another platform, what is clear is that Islamic State and its global network of affiliates continue to radicalize, recruit, and incite followers and supporters to launch terror attacks in the West.
Islamic State Khorasan (ISKP) has led a relentless push to expand and professionalize its approach to propaganda and media operations. ISKP has relied on different apps and messaging platforms to connect with various diasporas in the West — from the Balkans, Caucasus, and Central Asia — and convince its supporters to attack a range of soft targets, including concert venues, amusement parks, and houses of worship.
In the wake of Hamas' October 7 attacks and Israel's subsequent military actions in Gaza, Europe has seen a troubling rise in terror-related arrests, with nearly two-thirds of those apprehended between October 2023 and June 2024 being teenagers. This alarming demographic shift, coupled with warnings from criminologists like Alain Bauer, who note that many minors arrested in France for plotting terrorist attacks were previously unknown to security services, underscores a significant change in the speed and manner in which young individuals are radicalized into jihadist ideology and subsequently move to plotting violent attacks.
Although social media platforms like TikTok and messaging apps like Telegram are not new or emerging technologies, their increasingly sophisticated exploitation by jihadist recruiters and organizers suggests that the potential for misuse is still evolving and that young people may be particularly vulnerable. It also implies that although the public and private sectors have tried to crack down on the use of these platforms to disseminate disinformation and extremist content, they remain the primary channels for jihadists, who adapt and leverage new capabilities such as generative AI to recruit efficiently and spread their message.
In early 2023, a thwarted plot against an LGBTQ+ pride parade in Austria highlighted how two teenagers and a 20-year-old Austrian of Chechen and Bosnian origin were inspired and radicalized through TikTok. While TikTok has a policy in place to remove extremist content, studies show the extent of violent propaganda still available on the video-based platform. Young Austrians were exposed to videos created by Islamist influencers on TikTok glorifying jihad, and their engagement led the platform to surface more of the same content. Subsequently, the youngest, only 14 years old, organized a Telegram channel with jihadists from around Europe to exchange information on the plotting of attacks and fundraising for weapons. Now, with the accessibility of AI, recruiting efforts on these platforms can be supercharged by generating more content artificially, testing what content performs well online, and efficiently iterating on messaging.
One significant challenge associated with cracking down on extremist content that radicalizes people into violent action is that removal does not eliminate the underlying phenomenon or reduce demand. Instead, it can lead to the displacement of extremists to other, less regulated platforms or those harder to monitor. This phenomenon, known as "platform migration," has been observed across various online extremist communities. A notable example is the incel movement. After being banned from Reddit in 2017 as part of the platform's efforts to curb hate speech and violent ideologies, members of the incel community dispersed to more obscure and less regulated online spaces, including niche forums, encrypted messaging applications, imageboards such as 4chan and 8kun, and various incel-specific websites. This pattern of migration underscores a broader issue in combating online extremism: while de-platforming can disrupt the visibility and reach of extremist groups, it may inadvertently lead to their consolidation in more insular and extreme spaces. Further, as these movements remain technologically agile and become more insular, monitoring and disruption efforts become more challenging for practitioners and law enforcement.
The phenomenon of platform migration has also occurred in the global jihadi movement. ISIS was highly active on Twitter in the early 2010s, using the platform to recruit, disseminate propaganda, and coordinate attacks. However, as Twitter began cracking down on ISIS-related accounts by suspending thousands of them, members of ISIS migrated to other platforms. One such platform was Telegram, an encrypted messaging service that offered more privacy and was, at the time, less heavily regulated. On Telegram, ISIS supporters could create channels and groups where they continued to share propaganda, communicate securely, and organize their attacks.
The use of social media to recruit and radicalize fits a long-standing trend: jihadi terrorists have historically adapted and leveraged new and emerging technologies for recruitment and operational ends – from Islamists’ early use of blogs to the leveraging of drones for propaganda videos.
While analysts have debated the extent to which malicious actors will use AI to spread extremist content, ISIS has already used generative AI applications for content creation and for broadcasting claims of responsibility for attacks. In 2024, ISIS supporters launched an AI-generated news program called News Harvest to disseminate propaganda videos.
Additionally, a study published in the CTC Sentinel in January 2024 testing the robustness of AI platforms against misuse observed a high success rate in AI platforms responding to prompts related to extremist activities, even in the absence of jailbreaking — i.e., the process of bypassing the safety features embedded in an AI chatbot. The authors also noted that some platforms were more vulnerable to providing harmful information than others, and that indirect or hypothetical prompts proved more effective in eliciting harmful responses from AI models. Such workarounds are bound to be exploited.
Main Conclusions
With the recent news that jihadists inspired by Islamic State (ISIS) were planning to conduct a terrorist attack against a Taylor Swift concert in Vienna, Austria, it has also come to light that some of the perpetrators involved in the plot radicalized online.
In the wake of Hamas' October 7 attacks and Israel's subsequent military actions in Gaza, Europe has seen a troubling rise in terror-related arrests, with nearly two-thirds of those apprehended between October 2023 and June 2024 being teenagers.
The use of social media platforms, encrypted messaging apps and video-based platforms remains a critical component of jihadist recruitment and planning efforts around the world, and they can be particularly attractive to young people.
While analysts have debated the extent to which malicious actors will use AI to spread extremist content, ISIS has already used generative AI applications for content creation and for broadcasting claims of responsibility for attacks. In 2024, ISIS supporters launched an AI-generated news program called News Harvest to disseminate propaganda videos.
Source: Институт Ближнего Востока