Situation, Facts and Events
12.10.2024

American experts speak about new trends influencing the situation with terrorist threats

As noted by American security experts, there are now clear trends that, when combined, could significantly impact the terrorist threat landscape in both the short and long term.

The first trend is that the barriers to entry have been lowered for individuals and small groups seeking to engage with a range of emerging technologies, including artificial intelligence (AI). Not only are many AI services, including Large Language Models (LLMs), free or open-source, but the intuitive user interfaces of many of these generative AI tools have also made certain AI/ML applications extremely accessible.

There are myriad ways that terrorists and other extremists have used, and will continue to use, AI for organizational and operational purposes, including: propaganda; interactive recruitment; automated attacks (e.g. via unmanned aerial systems, or UAS); social media exploitation; and cyber attacks, among many others.

At the same time, terrorist groups like Islamic State Khorasan Province (ISKP) have intensified their commitment to media jihad and vastly expanded their propaganda capabilities, while new and emerging technologies like artificial intelligence are becoming more widely available and easier to use.

Multiple domestic and international terrorist organizations and extremist groups across the ideological spectrum have explicitly issued guidance on how to securely and/or effectively leverage generative AI for content creation. In 2023, Islamic State pushed out a guide on how to securely use generative AI. 

Meanwhile, Tech Against Terrorism identified a post on a far-right message board that included a guide to memetic warfare and the use of AI to create propaganda memes.

In April 2024, it became apparent that Islamic State supporters were interested in further using AI to boost the scale and scope of the group’s public content. The group experimented with a “news bulletin” in the form of a video in which an AI-generated avatar read aloud Islamic State claims. 

In the aftermath of the ISKP attack on Crocus City Hall in Moscow in late March, an IS supporter used Rocket Chat, an encrypted platform, to disseminate a video news bulletin as part of a broader propaganda campaign against Russia. The bulletin used AI-generated characters designed to emulate news broadcasters, and the aesthetics of the video were designed to mimic the broadcasting style of mainstream media outlets. The group’s so-called “News Harvest” program is also an AI-generated video news broadcast, which the group has used to describe its operations in various regions of the world. This propaganda was produced using text-to-speech AI to convert written information into audio with a plausible human voice, while video generators helped produce other realistic effects. 
 
It is not only the media and propaganda component of AI that can be used by terrorist organizations. Operational strategy and tactics will inevitably shift as terrorists seek to maximize the impact of their attacks and minimize chances of being identified and arrested. 

On the interactive recruitment front, it is known that AI-powered chatbots have been highly effective in radicalizing individuals. In 2021, a 19-year-old British man of Indian descent attempted to assassinate Queen Elizabeth II at Windsor Castle, motivated by revenge for the 1919 Jallianwala Bagh massacre, after he had interacted online with his “girlfriend,” who was, in fact, an AI-powered chatbot he had created.
 
Virtual recruiters and virtual planning advice, important to various extremist groups and especially to those that rely on lone-wolf or inspired actors, are bound to become more efficient with the use of trained generative AI systems that can speak to a recruit’s hesitations or logistical needs. 

There are also obvious vulnerabilities in AI-powered systems that terrorists may seek to exploit. Many AI technologies are already embedded in modern warfare, particularly UAS. According to Professor Sarah Lohmann, terrorists are already exploring the use of automated vehicles to commit attacks; traffic-guidance systems could be hacked and manipulated to cause mass loss of life.
 
The second trend is that, while terrorist organizations are using advanced technologies such as AI to facilitate their operational tasks, they are also turning to low-tech solutions to evade surveillance by intelligence agencies. 

For instance, the recent pager explosions in Lebanon occurred not long after Hezbollah Secretary General Hassan Nasrallah explicitly encouraged Hezbollah members to enhance operational security by switching from mobile devices to pagers and walkie-talkies in light of October 7. Nasrallah claimed mobile devices serve as “Israeli collaborators” as they are relatively easy to intercept. 

October 7 was another case in point on the use of low-tech tactics: Hamas launched a coordinated assault on Israel, utilizing low-cost drones to disrupt advanced surveillance systems along the border. This strategy compromised Israel's $1 billion border defense system, considered one of the most secure in the world, allowing thousands of militants to infiltrate Israeli territory. That same day, Hamas utilized both twin-occupant and single-occupant powered paragliders to infiltrate Israeli territory, showcasing a creative use of low-tech innovation in conflict situations.
 
The issue of AI, radicalization, and terrorism is complex and multi-faceted. Terrorist groups’ simultaneous awareness of the potential benefits of low-tech communications infrastructure and tactics makes the picture even more nuanced. 

While it is clear that AI capabilities can be used for counter-radicalization and prevention programs, it remains important to consider the extent to which AI expands the attack surface and what this means for counterterrorism. 

Further, practitioners must also consider the human rights implications of AI’s utilization, including the potential biases embedded in the technology and how those biases may affect its usage as a prevention tool, taking steps to mitigate such harms. Alarmism will not prepare us for terrorists’ and extremists’ shift in tactics, techniques, and procedures (TTPs), but mitigating “a failure of imagination” through serious red teaming, while not succumbing to groupthink, may at least prepare us to deal with what is to come. 
 
Across the board, the use of AI in conflict is becoming more common. While terrorist groups will continue to gravitate toward emerging technologies as a force multiplier, it is still nation-states that have the upper hand in using autonomous systems. 

Still, this remains controversial, as evidenced by Israel’s use of an AI-powered database, known as Lavender, to identify potential targets in Gaza, which, as The Guardian reported, produced a list of as many as 37,000 potential targets. 

Given the high volume of civilian casualties and collateral damage, a host of legal, moral, and ethical issues have emerged and are unlikely to be solved any time soon, even as more states and non-state actors look to engage with technologies like AI to enhance their warfighting capabilities.

Main conclusions

Terrorist groups like ISKP have strengthened their commitment to media jihad and greatly expanded their propaganda capabilities, while new technologies like artificial intelligence are becoming increasingly accessible and easy to use.
 
In 2023, the Islamic State published guidelines for the safe use of generative AI, and just six months ago it became clear that IS supporters were interested in further using AI to increase the scale and coverage of their content.

While some groups are looking to embrace new technologies, other terrorist organizations are pivoting to low-tech solutions, from pagers to paragliders, to evade sophisticated surveillance systems and national defenses.


Source: Институт Ближнего Востока