
Cyber-Social Threats: How AI Fuels Modern Radicalisation

Radicalisation is evolving in today’s cyber-social ecosystem. AI-driven disinformation, meme warfare, and digital pseudo-cults are reshaping extremist threats. Strategic and ethical communication, grounded in cognitive and technological awareness, is essential to building resilience and countering polarising narratives in an era of post-truth and hybrid threats.

Date: 29/04/2025

Reading time: 6 min

We live immersed in a cyber-social ecosystem that shapes our perceptions, interactions, and social structures: a hyperconnected, decentralised, and multidimensional environment where cognitive, emotional, and social spheres increasingly intersect. The cyber-social security paradigm is therefore a fundamental pillar of the contemporary security framework.

Twenty years after the "mediamorphosis" of terrorism and violent extremism, these phenomena have evolved through hyper-personalised and pervasive narratives, now disseminated through cyber-social propulsion rather than traditional propaganda.

The Malicious Use of AI (MUAI) stands as a key driver of transformation in the landscape of emerging threats. GenAI, deepfakes, AI bots, and narrative automation are being exploited by both state and non-state actors to enhance the effectiveness of disinformation, execute large-scale social engineering and emotional manipulation operations, and exploit cognitive biases and socio-psychological vulnerabilities. MUAI acts as a radicalisation and weaponisation accelerator, leveraging AI's capacity to personalise content, simulate human interactions, and maximise emotional resonance with the aim of redefining and dominating perceived reality. Information saturation, polarisation, the mainstreaming of extremism, and the normalisation of online hate provide fertile ground for Foreign Information Manipulation and Interference (FIMI).

In such a context, the logic of post-truth becomes dominant. Echo chambers and filter bubbles reinforce pre-existing beliefs, isolate users from dissenting perspectives, and consolidate antagonistic and polarised worldviews. This process erodes individual/collective critical capacities, making the public more susceptible to manipulation and more inclined to embrace radical ideologies. 

Within the violent-extremist/terrorist infospheres, the construction of a "connective" identity is rooted in a dichotomous opposition of Us-vs-Them, fuelled by self-victimisation, heroic narratives, and ideological (self-)justifications for violence. The constant sharing of video content, slogans, and ideological symbols transforms the sense of belonging into a powerful motivational and operational "connective glue".

In such a scenario, new forms of ambiguous communication, such as memes and dark irony, are increasingly prominent as effective vectors of ideological and polarising content, capable of embedding extremist messages within an ostensibly playful and shareable language, particularly among digitally literate teenage audiences. Their semantic ambiguity allows them to appeal to broad and diverse audiences, elicit emotional engagement, and facilitate the normalisation of hateful and socially divisive narratives.

Algorithmic personalisation, the aestheticisation of violence through visual AI-storytelling, and the gamification of extremist ideologies are powerful tools in the hands of malicious actors to inspire and mobilise connective groups, new hybrid actors, and lone actors, often particularly vulnerable and increasingly triggerable due to mental health issues. These tools also nourish the collective imagination with dystopian representations of both present and future realities. 

Conspiracy theories proliferate and hybridise with other ideologies in the online space. In parallel, the emerging phenomenon of MUAI-as-a-Service (MaaS) fosters the building of bridges between different criminal entities, enhancing synergies and broadening their complexity, scope, and operational capacities, as illustrated by High-Risk Criminal Networks (HRCNs), while expanding the pool of potential aware and unaware FIMI proxies.

AI can play a crucial role in P/CVE and in countering disinformation, provided it is grounded in ethical and transparent frameworks. The adoption of AI-based P/CVE strategies and technologies must include: the development of algorithms for monitoring extremist networks and detecting radicalising content, guidelines that prevent discrimination and the misuse of technology, and AI-literacy programmes to foster individual resilience. Moreover, public-private collaboration is crucial to strengthening operational synergies among institutions, tech companies, NGOs, and civil society.

We are not merely threatened by hybrid threats, but rather immersed in a hybrid world characterised by disinformation in a post-truth era, where the cyber-social ecosystem has become the primary battleground for hearts and minds in a scenario of converging threats to public, national, European, and global security. Geopolitical conflicts and crises cannot be fully understood without accounting for the cyber-social dimension, which has become central and whose vulnerabilities can undermine democracies from within. 

The future of security lies in the ability to anticipate, not merely react to, the evolution of threats, through technological, cultural, educational, and regulatory means. Anticipation must be based on constant training and knowledge exchange that integrates practice and research, through dynamic ad-hoc scenarios. Strategic communication plays a central role, not merely as a response tool, but as a lever for orientation, prevention, and resilience. In such a hybrid, hypercomplex environment, the ability to craft and disseminate coherent, timely, and culturally sensitive messages is fundamental to countering disinformation, strengthening public trust, and promoting credible and mobilising narratives.

Only strategic communications aligned with cognitive processes and cyber-social dynamics can counteract polarising narratives and reestablish truth and social cohesion at the core of public discourse. 


Author: Dr. Arije Antinori, Professor of Criminology, Sociology of Deviance, Terrorism, and HRCNs at SAPIENZA University of Rome, EU Senior Expert on Terrorism, and Stratcomms Senior Leading Expert at the EU Knowledge Hub on Prevention of Radicalisation


Relevant links:

Antinori, A. (2022). The Kremlin's troll network never sleeps: Inauthentic pro-Kremlin online behaviour on Facebook in Germany, Italy, Romania and Hungary. Political Capital.

Antinori, A. (2022). P/CVE Stratcom potential challenges linked to disinformation and violent narratives in the (cyber-)social ecosystem. In RAN Spotlight – Ukraine. European Commission.

Antinori, A. (2020). Terrorism in the early Onlife Age: From propaganda to 'propulsion'. In Terrorism and advanced technologies in psychological warfare: New risks, new opportunities to counter the terrorist threat. Nova Science Publishers.

Antinori, A. (2019). Terrorism and deepfake: From hybrid-warfare to post-truth warfare in a hybrid world. In ECIAIR 2019 – Proceedings of the European Conference on the Impact of Artificial Intelligence and Robotics. Academic Conferences and Publishing International Ltd.