Overview

This guide sets out the evidence base for ‘online radicalisation’, examining how individual use of the Internet, in conjunction with offline influences, can facilitate radicalisation processes. The UK is the main context of concern; however, comparable evidence is found in studies with samples from the USA, Canada, Belgium, Germany, Austria, and Israel.

Radicalisation remains a contentious concept and few studies explicitly define ‘online radicalisation’. For the purposes of this guide, ‘radicalisation’ is understood as leading to cognitive outcomes reflected in changes in beliefs and ideas, and/or behavioural outcomes which manifest in changes in behaviour.

Methodology

Two systematic literature reviews (Hassan et al., 2018; Carthy et al., 2020) directed initial searches for relevant research. Further literature was identified through forward and backward citation searching, and narrower keyword searches conducted in Google Scholar. Literature searches were completed between June and August 2022. The guide primarily examines literature published between January 2017 and July 2022. Although the evidence base remains modest in size, the research underpinning this guide is assessed to be good quality. There is a growing body of evidence that uses qualitative and quantitative methods to examine a range of factors which are relevant to online radicalisation.

Key Findings

  • Online and offline activities and domains interact, challenging the ‘online/offline dichotomy’ popular in early research into online radicalisation. Radicalisation processes rarely take place in either the online domain or the offline sphere exclusively, but instead are characterised by complex and dynamic interactions between the two.
  • Research that sought to distinguish between online and offline processes may have over-estimated the extent to which the Internet contributes to radicalisation processes. This tendency to focus on the role of the Internet may have come at the expense of recognising the role of offline factors and the importance of the interaction between online and offline contexts.
  • The Internet in isolation does not cause radicalisation and is better understood as facilitating the process: it can contribute to an individual’s radicalisation, but it cannot drive the process on its own.

Behavioural radicalisation

  • Use of the Internet can enable behavioural outcomes including event planning and preparatory activities, communication and networking behaviours (including arranging offline activities) and ideology-seeking actions.
  • Pathways into violent extremism have been characterised as primarily offline, primarily online, or hybrid. Hybrid pathways seem to be the most common.
  • There is no single profile of, or standard trajectory taken by, individuals whose use of the Internet influenced their radicalisation. However, different pathways seem to be associated with differing levels of intent, capability, and engagement. Hybrid pathways demonstrate the greatest engagement and intent; offline pathways, the greatest capability; and online pathways, the lowest levels of engagement, intent and capability.

Cognitive radicalisation

  • Empirical research analysing the influence of online interactions and exposure to extremist content on violent extremist behaviour remains limited.
  • Video-sharing platforms and social networking sites are spaces where individuals are most likely to encounter extremist content online.
  • The individual is an active rather than a passive actor in the radicalisation process: it is their behaviour, and how they use the Internet, that determines its relevance to radicalisation.
  • There is little robust evidence about whether and how recruiters try to identify or engage with those seeking out online extremist material.
  • Individuals who actively seek out violent extremist material online seem to be at greater risk of radicalising and engaging in violence, compared to passive consumers. 
  • Research on the role that exposure to violent extremist content online plays in cognitive radicalisation suggests that initial exposure has the potential to trigger an interest in extreme ideologies, and that exposure to content across both online and offline spheres may be more influential than exposure via one or the other.
  • The amount of time spent online and willingness to express political views on the Internet seem to be associated with greater exposure to extremist material.
  • A study that looked at personality traits, specifically the role of empathy, hostility, and aggression, found that aggression may be more influential than exposure to extremist propaganda in influencing extremist cognitions. However, research on the dynamics of these processes remains limited.

Online indicators of behavioural radicalisation

  • There is comparatively little robust empirical evidence on how online activities might be used to identify individuals at risk of behavioural radicalisation.
  • There is some evidence that exposure to extremist content online has a stronger link to radicalisation than other media-related risk factors, such as the platform or medium used (e.g. Internet, newspaper), the type of content, and associated activities and attitudes.
  • Recruiters may use different kinds of online extremist material to first nurture cognitive radicalisation and then try to move people towards violence.
  • Some research suggests that posting patterns on social media may be able to differentiate between violent and non-violent extremists, and between behavioural and cognitive outcomes, but further research is needed to fully understand these processes (a simplified sketch of this kind of analysis follows this list).
  • Future research is likely to benefit from combining computational and social science methods, and developing robust, publicly available standardised datasets which are free from bias.
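
As an illustration of the kind of computational analysis referred to above, the sketch below fits a simple classifier to synthetic posting-pattern features. It is not the method of any study cited in this guide: the features, labels and data are invented placeholders, and real research of this kind depends on careful data collection, validation, and ethical review.

```python
# Minimal, illustrative sketch only (assumed workflow, synthetic data): a simple
# classifier over invented posting-pattern features, of the kind computational
# research might use to ask whether online behaviour distinguishes groups of users.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200

# Hypothetical per-user features: posts per week, proportion of posts that are
# replies, and proportion of posts containing out-links. Purely synthetic.
X = np.column_stack([
    rng.poisson(12, n),       # posts per week
    rng.uniform(0, 1, n),     # proportion of replies
    rng.uniform(0, 1, n),     # proportion of posts with links
])
y = rng.integers(0, 2, n)     # placeholder binary outcome label

# With random labels, cross-validated accuracy should hover around chance (~0.5);
# the point is the workflow, not the result.
model = LogisticRegression(max_iter=1000)
print(cross_val_score(model, X, y, cv=5).mean())
```

In practice, whether such a model generalises depends heavily on how the dataset was constructed and labelled, which is why the call above for robust, publicly available, standardised datasets matters.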

Intervention strategies

  • The effectiveness of counter-narratives varies according to the intervention technique used and the type of outcome targeted.
  • There is insufficient evidence to determine whether counter-narratives can prevent violence; however, they may be able to address some of the risk factors associated with radicalisation.
  • Inoculation theory may provide a foundation for developing deterrence strategies. This approach introduces individuals to weakened versions of an argument whilst providing evidence to refute it. Preliminary experiments indicate that ‘active’ inoculation methods (where the individual actively engages in a task such as a computer game) can improve critical thinking skills and reduce vulnerability to radicalisation. This research is at an early stage and will benefit from more attention before the potential risks, implications and scalability of this approach are understood.
  • Although the evidence base is very limited, interventions may benefit from adopting a fine-grained approach that is tailored to specific audiences and online contexts, including audience segmentation and micro-targeting.
  • Interventions have the potential to produce unintended outcomes, including further entrenching extremist views, for example where activists initiate arguments in response to extremist positions.
  • There is some limited evidence to suggest that highlighting the personal impact of involvement in extremism may be more effective than challenging extremist ideas or arguments, and that online interventions may be less effective with those whose views are more entrenched.
  • Intervention providers working online will benefit from training and support to mitigate the risks associated with this work, and to ensure their approach is evidence-informed.

Challenges to understanding online radicalisation

  • The difficulty of accessing and gathering valid empirical data is one of the main barriers to producing robust research able to evidence whether, and to what extent, online activity influences violent offline behaviour. Similar difficulties arise in efforts to assess which factors influence attitudinal change.
  • It can be difficult to generalise findings from research drawn from small-n samples collected using qualitative methods, or which focuses on a specific ideology or geographical context. Extending conclusions to groups or settings beyond the data sample should be undertaken with caution.
  • Large-n computational methods have the potential to identify broader trends in the data but can be at risk of over-simplifying radicalisation processes.
  • Efforts to understand the impact of online interventions face similar challenges to evaluations of offline P/CVE programmes. These include the difficulty of understanding an intervention’s impact; accessing appropriate data; ethical and security risks; and identifying and evidencing the causal factors that shape outcomes.
  • Methodological differences in how data are collected, used and analysed can be difficult to translate across disciplines.
  • Ambiguous and/or contested definitions of ‘online radicalisation’ can make it challenging to draw comparisons across studies which may be focused on different phenomena.

Recommendations for Policy and Practice

  • P/CVE interventions are likely to benefit from taking account of the hybrid nature of radicalisation processes and developing ways of targeting online and offline domains simultaneously, rather than separately; for example, by working in offline contexts to help develop digital literacy skills if the online space seems to be an important source of information for those engaged in primary or secondary interventions.
  • Intervention strategies which provide an alternative source of meaning and association to replace the relational networks offered by extremist groups, both online and offline, appear promising.
  • There is some evidence to suggest it may be beneficial to prioritise interventions which focus on those who actively seek extremist content online, as they may be at greater risk of radicalisation to violence.
  • The gamification of interventions (the use of game mechanics in non-game settings) has the potential to appeal to active seekers. These types of intervention can encourage the development of critical thinking skills and may provide an element of interaction that active seekers are looking for.
  • Interventions targeting video-sharing platforms and social networking sites may have a greater impact than targeting other areas online. However, there are risks to this approach. Counter-messaging videos and extremist content can share key words, which means that the algorithms driving automated recommendation systems may direct users to extremist content, rather than to counter-messaging videos (a simplified illustration of this keyword-overlap problem follows this list).
  • Counter-narratives will benefit from careful targeting, taking account of the specific audience; the extent to which they may already be persuaded by extremist ideas; the risk factors the intervention is seeking to influence; and the mechanisms by which positive outcomes might be enabled.
  • Evidence regarding the impact of removing extremist content is limited. Taking down material may help to reduce its accessibility. However, there is some limited evidence that where material is removed from non-encrypted, more accessible online spaces, this has the potential to encourage users to move to encrypted platforms which are more difficult to monitor and moderate.
  • Interventions should take account of unintended outcomes, including the potential to further entrench extremist views; generate risks to freedom of speech; and create incentives for tech companies to ‘over-censor’ content to avoid sanction.
  • Intervention providers working online should receive appropriate training, professional development opportunities, and support.
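
To make the keyword-overlap risk noted above concrete, the sketch below compares bag-of-words similarity between three invented snippets of video-description text. It is an assumed simplification: real recommendation systems use far richer signals than shared vocabulary, but the example shows why counter-messaging and extremist material discussing the same topics can look ‘similar’ to naive text matching.

```python
# Illustrative sketch only (invented placeholder text, simplified similarity measure):
# shared vocabulary can make a counter-message appear more 'similar' to extremist
# content than to unrelated content under naive bag-of-words matching.
from collections import Counter
import math

def cosine(a: str, b: str) -> float:
    """Cosine similarity between simple bag-of-words representations."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in set(ca) & set(cb))
    norm = math.sqrt(sum(v * v for v in ca.values())) * math.sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

counter_message = "debunking the extremist narrative about the movement and its ideology"
extremist_clip  = "the truth about the movement and its ideology the narrative they hide"
unrelated_clip  = "weekly cooking show pasta recipe with fresh tomatoes and basil"

print(cosine(counter_message, extremist_clip))   # relatively high: shared key words
print(cosine(counter_message, unrelated_clip))   # near zero: no shared vocabulary
```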

Directions for Future Research

Key areas of future research include:

  • Further work to understand the role of the Internet in pathways into extremism, including research able to interpret how online and offline dynamics interact.
  • Research that draws on first-hand accounts of how the Internet shaped an individual’s thinking and behaviour has the potential to elucidate the experiential aspects of radicalisation processes.
  • Studies examining the impact of the COVID-19 pandemic on online radicalisation could try to assess the impact of lockdowns and whether associated feelings of isolation and the increased use of technology as a substitute for physical, face-to-face interactions led to greater exposure to, or engagement with, extremist content.
  • Research that bridges computational approaches, which analyse large amounts of data, with social science methods able to interpret the subjective experiences of online users may provide greater insights and overcome the disjuncture between disciplines.
  • Studies that focused on a specific ideology could be replicated with data on a different ideology. This would help to determine whether findings can be generalised or are ideologically specific, and whether targeted interventions would benefit from being tailored to specific ideologies.
  • Further research into the role of individual personality traits, pre-existing beliefs and other psychological factors that may shape responses to extremist content and radicalisation. This would help tailor and target interventions in ways which are appropriate for particular groups or individuals, and avoid unintended or negative outcomes.
  • Areas where results are limited, mixed or inconclusive would benefit from further research. These include:
    • The relationship between exposure to extremist content online and cognitive radicalisation.
    • Approaches able to interpret whether patterns of online engagement have the potential to identify individuals at risk of cognitive or behavioural radicalisation.
  • Further work to understand the impact of interventions is important, assessing:
    • What effect the removal of online extremist content has, and what risks this strategy carries.
    • The potential of realist evaluation to develop a better understanding of which counter-narrative interventions work, for whom, under what circumstances, and why.
    • The unintended consequences of different kinds of intervention strategy, including direct engagement online; efforts to direct people to counter messages; and counter-narrative material.  

Read more

Ajala, I., Feroze, S., El Barachi, M., Oroumchian, F., Mathew, S., Yasin, R., & Lutfi, S. (2022). Combining artificial intelligence and expert content analysis to explore radical views on twitter: Case study on far-right discourse. Journal of Cleaner Production, 362, 132263, 1-17. https://doi.org/10.1016/j.jclepro.2022.132263

Aly, A., & Lucas, K. (2015). ‘Countering Online Violent Extremism in Australia: Research and Preliminary Findings’. In S. Zeiger & A. Aly (eds.) Countering Violent Extremism: Developing an Evidence-Base for Policy and Practice. Perth, WA: Hedayah and Curtin University, pp. 81-89.

Araque, O. & Iglesias, C. A. (2022). An Ensemble Method for Radicalization and Hate Speech Detection Online Empowered by Sentic Computing. Cognitive Computation, 14, 48-61. https://doi.org/10.1007/s12559-021-09845-6

Badawy, A. & Ferrara, E. (2018). The rise of Jihadist propaganda on social networks. Journal of Computational Social Science, 1, 453-470. https://doi.org/10.1007/s42001-018-0015-z

Bastug, M. F., Douai, A., & Akca, D. (2020). Exploring the “Demand Side” of Online Radicalization: Evidence from the Canadian Context. Studies in Conflict and Terrorism, 43(7), 616–637. https://doi.org/10.1080/1057610X.2018.1494409

Baugut, P., & Neumann, K. (2020). Online propaganda use during Islamist radicalization. Information Communication and Society, 23(11), 1570–1592. https://doi.org/10.1080/1369118X.2019.1594333

Buerger, C. & Wright, L. (2019). ‘Counterspeech: A literature review’. Dangerous Speech. Available at: https://dangerousspeech.org/wp-content/uploads/2019/11/Counterspeech-lit-review_complete-11.20.19-2.pdf

Bilazarian, T. (2020). Countering Violent Extremist Narratives Online: Lessons From Offline Countering Violent Extremism. Policy and Internet, 12(1), 46–65. https://doi.org/10.1002/poi3.204

Braddock, K. (2022). Vaccinating Against Hate: Using Attitudinal Inoculation to Confer Resistance to Persuasion by Extremist Propaganda. Terrorism and Political Violence, 34(2), 240–262. https://doi.org/10.1080/09546553.2019.1693370

Braddock, K. (2020). Weaponized words: The strategic role of persuasion in violent radicalization and counter-radicalization. Cambridge, Cambridge University Press. https://doi.org/10.1017/9781108584517 

Braddock, K., & Horgan, J. (2016). Towards a guide for constructing and disseminating counternarratives to reduce support for terrorism. Studies in Conflict & Terrorism, 39(5), 381-404. https://doi.org/10.1080/1057610X.2015.1116277

Braddock, K., & Morrison, J. F. (2020). Cultivating Trust and Perceptions of Source Credibility in Online Counternarratives Intended to Reduce Support for Terrorism. Studies in Conflict & Terrorism, 43(6), 468-492. https://doi.org/10.1080/1057610X.2018.1452728

Brice, P. (2019). ‘Challenging the Far-Right in Australia’. In M. Peucker & D. Smith (eds.), The Far-Right in Contemporary Australia. Singapore: Palgrave Macmillan, pp.199-214.  https://doi.org/10.1007/978-981-13-8351-9

Carthy, S. L., Doody, C. B., Cox, K., O’Hora, D., & Sarma, K. M. (2020). Counter-narratives for the prevention of violent radicalisation: A systematic review of targeted interventions. Campbell Systematic Reviews, 16(3), 1-37. https://doi.org/10.1002/cl2.1106

Common, M. F. (2020). Fear the Reaper: how content moderation rules are enforced on social media. International Review of Law, Computers & Technology, 34(2), 126-152. https://doi.org/10.1080/13600869.2020.1733762

Conway, M. (2017). Determining the role of the internet in violent extremism and terrorism: Six suggestions for progressing research. Studies in Conflict and Terrorism, 40(1), 77–98. https://doi.org/10.1080/1057610X.2016.1157408

Costello, M., Barrett-Fox, R., Bernatzky, C., Hawdon, J. & Mendes, K. (2020). Predictors of Viewing Online Extremism Among America's Youth. Youth & Society, 52(2), 710-727. https://doi.org/10.1177/0044118X18768115

Davey, J., Tuck, H. and Amarasingam, A. (2019). An imprecise science: Assessing interventions for the prevention, disengagement and de-radicalisation of left and right-wing extremists. Institute for Strategic Dialogue. Available at: https://www.isdglobal.org/isd-publications/an-imprecise-science-assessing-interventions-for-the-prevention-disengagement-and-de-radicalisation-of-left-and-right-wing-extremists/

Douek, E. (2020). Australia's 'Abhorrent Violent Material' Law: Shouting 'Nerd Harder' and Drowning Out Speech. Australian Law Journal, 94, 41-60. Available at: https://ssrn.com/abstract=3443220

Douek, E. (2021). The Limits of International Law in Content Moderation. University of California, Irvine (UCI) Journal of International, Transnational, and Comparative Law 6(1), 37-76. Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3709566

Ebner, J. (2020). Going Dark: The Secret Social Lives of Extremists. London; Dublin, Bloomsbury Publishing.

El Barachi, M., Mathew, S. S., Oroumchian, F., Ajala, I., Lutfi, S., & Yasin, R. (2022). Leveraging Natural Language Processing to Analyse the Temporal Behavior of Extremists on Social Media. Journal of Communications Software and Systems, 18(2), 195-207. https://doi.org/10.24138/jcomss-2022-0031

Fernandez, M., Gonzalez-Pardo, A., & Alani, H. (2019). Radicalisation Influence in Social Media. Journal of Web Science, 6, 1–15. https://doi.org/10.1561/106

Fernandez, M., & Harith, A. (2021). Artificial Intelligence and Online Extremism: Challenges and Opportunities. In J. McDaniel & K. Pease (eds.), Predictive Policing and Artificial Intelligence. Abingdon; New York: Routledge, pp. 132-162. https://doi.org/10.4324/9780429265365-7

Frissen, T. (2021). Internet, the great radicalizer? Exploring relationships between seeking for online extremist materials and cognitive radicalization in young adults. Computers in Human Behavior, 114(August 2020), 106549. https://doi.org/10.1016/j.chb.2020.106549

Frissen, T., & d’Haenens, L. (2017). Legitimizing the caliphate and its politics: Moral disengagement rhetoric in Dabiq. In S.F. Krishna-Hensel (ed.), Authoritarian and Populist Influences in the New Media. Abingdon: Routledge. https://doi.org/10.4324/9781315162744

Frissen, T., Toguslu, E., Van Ostaeyen, P., & d’Haenens, L. (2018). Capitalizing on the koran to fuel online violent radicalization: A taxonomy of Koranic references in ISIS’s Dabiq. Telematics and Informatics, 35(2), 491–503. https://doi.org/10.1016/j.tele.2018.01.008

Gaikwad, M., Ahirrao, S., Phansalkar, S., & Kotecha, K. (2021). Online Extremism Detection: A Systematic Literature Review with Emphasis on Datasets, Classification Techniques, Validation Methods, and Tools. IEEE Access, 9, 48364–48404. https://doi.org/10.1109/ACCESS.2021.3068313

Ganesh, B. (2019). Evaluating the Promise of Formal Counter-Narratives. In B. Ganesh & J. Bright (eds.), Extreme digital speech: Contexts, responses and solutions. VOX-Pol, pp. 89-98. Available at: https://www.voxpol.eu/download/vox-pol_publication/DCUJ770-VOX-Extreme-Digital-Speech.pdf#page=91

Gaudette, T., Scrivens, R., & Venkatesh, V. (2020). The Role of the Internet in Facilitating Violent Extremism: Insights from Former Right-Wing Extremists. Terrorism and Political Violence, 00(00), 1–18. https://doi.org/10.1080/09546553.2020.1784147

Gill, P., Corner, E., Conway, M., Thornton, A., Bloom, M., & Horgan, J. (2017). Terrorist Use of the Internet by the Numbers: Quantifying Behaviors, Patterns, and Processes. Criminology and Public Policy, 16(1), 99–117. https://doi.org/10.1111/1745-9133.12249

Hassan, G., Brouillette-Alarie, S., Alava, S., Frau-Meigs, D., Lavoie, L., Fetiu, A., Varela, W., Borokhovski, E., Venkatesh, V., Rousseau, C., & Sieckelinck, S. (2018). Exposure to Extremist Online Content Could Lead to Violent Radicalization: A Systematic Review of Empirical Evidence. International Journal of Developmental Sciences, 12(1–2), 71–88. https://doi.org/10.3233/DEV-170233

Hawdon, J., Bernatzky, C. & Costello, M. (2019). Exposure to Violent Extremist Material Online: Cyber-Routines, Political Attitudes, and Exposure to Violence-Advocating Online Extremism. Social Forces, 98(1), 329-354. https://doi.org/10.1093/sf/soy115

Helmus, T. C. & Klein, K. (2018). Assessing Outcomes of Online Campaigns Countering Violent Extremism: A Case Study of the Redirect Method. Santa Monica, CA: RAND Corporation. Available at: https://www.rand.org/pubs/research_reports/RR2813.html.

Herath, C., & Whittaker, J. (2021). Online Radicalisation: Moving beyond a Simple Dichotomy. Terrorism and Political Violence, 00(00), 1–22. https://doi.org/10.1080/09546553.2021.1998008

HM Government (2012). Channel: Vulnerability assessment framework. October. London: United Kingdom. Available at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/118187/vul-assessment.pdf

Howard, T., Poston, B., & Lopez, A. (2022). Extremist Radicalization in the Virtual Era: Analyzing the Neurocognitive Process of Online Radicalization. Studies in Conflict and Terrorism, 0(0), 1–26. https://doi.org/10.1080/1057610X.2021.2016558

Institute for Economics & Peace (2022). Global Terrorism Index 2022: Measuring the Impact of Terrorism. March. Sydney: Australia. Available at https://www.visionofhumanity.org/resources/.

Jensen, M. A., Atwell Seate, A., & James, P. A. (2020). Radicalization to violence: A pathway approach to studying extremism. Terrorism and Political Violence, 32(5), 1067–1090. https://doi.org/10.1080/09546553.2018.1442330

Kenyon, J., Binder, J. F., & Baker-Beall, C. (2022). Online radicalization: Profile and risk analysis of individuals convicted of extremist offences. Legal and Criminological Psychology, April, 1–17. https://doi.org/10.1111/lcrp.12218

Koehler, D. (2014). The radical online: Individual radicalization processes and the role of the Internet. Journal for Deradicalization, 1 (Winter), 116–134. Available at: https://journals.sfu.ca/jd/index.php/jd/article/view/8

Lara-Cabrera, R., Gonzalez-Perez, A., Benouaret, K., Faci, N., Benslimane, D., & Camacho, D. (2017). Measuring the Radicalisation Risk in Social Networks. IEEEAccess, 5, 10892-10900. https://doi.org/10.1109/ACCESS.2017.2706018

Lara-Cabrera, R., Gonzalez-Perez, A., & Camacho, D. (2019). Statistical analysis of risk assessment factors and metrics to evaluate radicalisation in Twitter. Future Generation Computer Systems, 93, 971-978. https://doi.org/10.1016/j.future.2017.10.046

Lewandowsky, S., & Yesilada, M. (2021). Inoculating against the spread of Islamophobic and radical-Islamist disinformation. Cognitive Research: Principles and Implications, 6(1). https://doi.org/10.1186/s41235-021-00323-z

Lewis, J., Marsden, S. & Copeland, S. (2020). Evaluating Programmes To Prevent And Counter Extremism. Lancaster University, Lancaster: Centre for Research and Evidence on Security Threats (CREST). Available at: https://crestresearch.ac.uk/resources/evaluating-programmes-to-prevent-and-counter-extremism

Lewis, J. & Marsden, S. V. (2021). Countering Violent Extremism Interventions: Contemporary Research. Lancaster: Centre for Research and Evidence on Security Threats (CREST). Available at: https://crestresearch.ac.uk/resources/countering-violent-extremism-interventions/

Lowe, D. (2022). Far-Right Extremism: Is it Legitimate Freedom of Expression, Hate Crime, or Terrorism?. Terrorism and Political Violence, 34(7), 1433-1453. https://doi.org/10.1080/09546553.2020.1789111

Macdonald, S., & Whittaker, J. (2019). ‘Online Radicalization: Contested Terms and Conceptual Clarity’. In J.R. Vacca (ed.), Online Terrorist Propaganda, Recruitment, and Radicalisation. Boca Raton: CRC Press, pp. 33-46. https://doi.org/10.1201/9781315170251

Meleagrou-Hitchens, A., Alexander, A., & Kaderbhai, N. (2017). The impact of digital communications technology on radicalization and recruitment. International Affairs, 93(5), 1233–1249. https://doi.org/10.1093/ia/iix103

Mills, C. E., Freilich, J. D., Chermak, S. M., Holt, T. J., & LaFree, G. (2021). Social Learning and Social Control in the Off- and Online Pathways to Hate Crime and Terrorist Violence. Studies in Conflict and Terrorism, 44(9), 701–729. https://doi.org/10.1080/1057610X.2019.1585628

Nagy, Z. (2018). Artificial Intelligence and Machine Learning Fundamentals: Develop Real-World Applications Powered by the Latest AI Advances, Birmingham, Packt Publishing, Limited.

Nienierza, A., Reinemann, C., Fawzi, N., Riesmeyer, C. & Neumann, K. (2021). Too dark to see? Explaining adolescents’ contact with online extremism and their ability to recognize it. Information, Communication & Society, 24(9), 1229-1246. https://doi.org/10.1080/1369118X.2019.1697339

Odağ, Ö., Leiser, A., & Boehnke, K. (2019). Reviewing the role of the internet in radicalization processes. Journal for Deradicalization, 21 (Winter), 261–300. Available at: https://journals.sfu.ca/jd/index.php/jd/article/view/289

Panday, P. (2020). ‘One year since the Christchurch Call to Action: A Review’. ORF Issue Brief No. 389, August 2020. New Delhi, India: Observer Research Foundation. Available at: https://www.orfonline.org/research/one-year-since-the-christchurch-call-to-action-a-review/

Pauwels, L., & Schils, N. (2016). Differential online exposure to extremist content and political violence: Testing the relative strength of social learning and competing perspectives. Terrorism and Political Violence, 28, 1-29. https://doi.org/10.1080/09546553.2013.876414

Reeve, Z. (2021). Engaging with Online Extremist Material: Experimental Evidence. Terrorism and Political Violence, 33(8), 1595–1620. https://doi.org/10.1080/09546553.2019.1634559

Roiger, R. (2017). Data Mining: A Tutorial Based Primer. 2nd edition. Boca Raton: CRC Press. https://doi.org/10.1201/9781315382586

Saleh, N. F., Roozenbeek, J., Makki, F. A., McClanahan, W. P., & van der Linden, S. (2021). Active inoculation boosts attitudinal resistance against extremist persuasion techniques: a novel approach towards the prevention of violent extremism. Behavioural Public Policy, 1–24. https://doi.org/10.1017/bpp.2020.60

Schmitt, J. B., Rieger, D., Rutkowski, O., & Ernst, J. (2018). Counter-messages as prevention or promotion of extremism?! the potential role of YouTube. Journal of Communication, 68(4), 758–779. https://doi.org/10.1093/joc/jqy029

Scrivens, R., Gill, P., & Conway, M. (2020). The Role of the Internet in Facilitating Violent Extremism and Terrorism: Suggestions for Progressing Research. In T. Holt & A. Bossler (eds.) The Palgrave Handbook of International Cybercrime and Cyberdeviance. Cham: Palgrave Macmillan, pp. 1417–1435. https://doi.org/10.1007/978-3-319-78440-3_61

Scrivens, R., Wojciechowski, T. W., Freilich, J. D., Chermak, S. M., & Frank, R. (2021). Comparing the Online Posting Behaviors of Violent and Non-Violent Right-Wing Extremists. Terrorism and Political Violence, 00(00), 1–18. https://doi.org/10.1080/09546553.2021.1891893

Siegel, A. A. (2020). ‘Online hate speech’. Social Media and Democracy: The State of the Field, Prospects for Reform. Cambridge: Cambridge University Press, pp. 56-88. https://doi.org/10.1017/9781108890960

Shortland, N., & McGarry, P. (2022). Supplemental Material for The Personality and Propaganda Puzzle: Exploring the Effect of Personality on Exposure to Extremist Content Online. Psychology of Violence, 12(1), 1–10. https://doi.org/10.1037/vio0000396.supp

Shortland, N., Nader, E., Thompson, L. & Palasinski, M. (2022). Is Extreme in the Eye of the Beholder? An Experimental Assessment of Extremist Cognitions. Journal of Interpersonal Violence, 37(7-8), NP4865-NP4888. https://doi.org/10.1177/0886260520958645

Smith, L. G. E., Blackwood, L., & Thomas, E. F. (2020). The Need to Refocus on the Group as the Site of Radicalization. Perspectives on Psychological Science, 15(2), 327–352. https://doi.org/10.1177/1745691619885870

Tharwat, A. (2021). Classification assessment methods. Applied Computing and Informatics, 17(1), 168-192. https://doi.org/10.1016/j.aci.2018.08.003

Theodosiadou, O., Pantelidou, K., Bastas, N., Chatzakou, D., Tsikrika, T., Vrochidis, S., & Kompatsiaris, I. (2021). Change point detection in terrorism-related online content using deep learning derived indicators. Information (Switzerland), 12(7). https://doi.org/10.3390/info12070274

Tworek, H. & Leerssen, P. (2019). An analysis of Germany’s NetzDG law. Transatlantic High Level Working Group on Content Moderation Online and Freedom of Expression. Available at: https://www.ivir.nl/publicaties/download/NetzDG_Tworek_Leerssen_April_2019.pdf

Valentini, D., Lorusso, A. M., & Stephan, A. (2020). Onlife Extremism: Dynamic Integration of Digital and Physical Spaces in Radicalization. Frontiers in Psychology, 11(March). https://doi.org/10.3389/fpsyg.2020.00524

von Behr, I., Reding, A., Edwards, C., & Gribbon, L.  (2013). Radicalisation in the digital era: The use of the internet in 15 cases of terrorism and extremism. Santa Monica, CA: RAND Corporation. https://doi.org/10.7249/RR453

Voogt, S. (2017). Countering far-right recruitment online: CAPE’s practitioner experience. Journal of Policing, Intelligence and Counter Terrorism, 12(1), 34-46. https://doi.org/10.1080/18335330.2016.1215510

Wendelberg, L. (2021). An Ontological Framework to Facilitate Early Detection of ‘Radicalization’ (OFEDR)—A Three World Perspective. Journal of Imaging, 7(60), 1-27. https://doi.org/10.3390/jimaging7030060

Winter, C., Neumann, P., Meleagrou-Hitchens, A., Ranstorp, M., Vidino, L., & Fürst, J. (2020). Online extremism: Research trends in internet activism, radicalization, and counter-strategies. International Journal of Conflict and Violence, 14(2), 1–20. https://doi.org/10.4119/ijcv-3809

Wolfowicz, M., Perry, S., Hasisi, B., & Weisburd, D. (2021). Faces of radicalism: Differentiating between violent and non-violent radicals by their social media profiles. Computers in Human Behavior, 116 (November 2020), 106646, 1-10. https://doi.org/10.1016/j.chb.2020.106646

Wolfowicz, M., Hasisi, B. & Weisburd, D. (2022). What are the effects of different elements of media on radicalization outcomes? A systematic review. Campbell Systematic Reviews, 18, e1244, 1-50. https://doi.org/10.1002/cl2.1244

Youngblood, M. (2020). Extremist ideology as a complex contagion: the spread of far-right radicalization in the United States between 2005 and 2017. Humanities & Social Sciences Communications, 7(49), 1-10. https://doi.org/10.1057/s41599-020-00546-3