An overview from the CREST report (coming soon) on how to mitigate the impact of terrorism on venues and public spaces.

Introduction

  1. In recent years, governments across the world have introduced legislation and published guidance material on how to mitigate the impact of terrorism on venues and public spaces. Known as protective security, this pillar of counter terrorism comprises physical security, personnel security, and cyber security.
  2. In 2023, the UK brought forward a draft Terrorism (Protection of Premises) Bill, which would place a legal responsibility on owners and operators of venues and public spaces to take steps to reduce the threat to the public from terrorist attacks. As a result, there is a growing need to understand how protective security can be co-created effectively and efficiently and, more fundamentally, what data should be collected to gain a better picture of whether and how the intervention measures have ‘worked’ in terms of both delivery and uptake.
  3. This summary paper is part of a project that will develop a new methodological and evaluative framework for understanding the effectiveness and effects of policies, methods, and approaches designed to protect venues and public spaces, including its application to the Protect Duty.

Evaluation challenges in the field of terrorism, including for protective security

  1. The lack of a common, globally accepted definition of terrorism and violent extremism presents a clear limitation in the design and targeting of interventions. Given the diversity of focus areas, confused or contested definitions of terrorism, and correspondingly vague policy objectives, those involved in counter terrorism initiatives often find it hard to formulate indicators of success that relate concrete measures to impact on beneficiaries. Whilst output level indicators are relatively easy to identify, outcome and impact level indicators are much harder to pin down. The resulting tendency to rely on output level indicators alone is insufficient if we are trying to understand effectiveness.
  2. Terrorist decision-making when carrying out or preparing attacks is poorly understood, as is the effectiveness of deterrence by denial, an approach that seeks to demonstrate the low likelihood that an attack will succeed. Some authors argue that it is misleading to judge the effectiveness of CT interventions in terms of the lowered frequency or number of terrorist attacks, the number of plots disrupted, or the degree of lethality caused by attacks, because a terrorist adversary might view success differently. Certain terrorist groups may benefit even if an attack fails: foiled attempts attract publicity, cause fear within target audiences, and demonstrate that groups are active.
  3. ‘Measuring a negative’ is one of the key challenges in protective security, as in the wider CT field. The need is to evaluate an intervention’s impact in terms of what it prevented (i.e., estimating what would have happened in the absence of the intervention).
  4. Attributing changes to a particular intervention is challenging in this field because projects are often part of multi-faceted initiatives containing a variety of measures. Given that the intended outcome of any CT intervention tends to be that nothing happens, proving that this was the case because of the intervention itself, and that the outcome would have been different in its absence, is difficult in practice. Given the infrequency of terrorist attacks, it is important that counter terrorism evaluations are not restricted to demonstrating this kind of strict attribution.
  5. The evidence base for protective security programmes is shallow, and efforts to evaluate the effectiveness and impact of these activities are limited. In the UK, a number of guidance documents focused on crowded places do not include steps on monitoring effectiveness or evaluating impact. Flagship protective security projects, such as Project Griffin, Project Argus, and Project Servator, are insufficiently evaluated.
  6. Little is known about the true cost of counter terrorism or the potential return on investment. It is therefore almost impossible to ascertain whether the measures adopted are ‘performing well’ or are ‘effective’ in countering and mitigating the threat, risk, and harm of terrorism, unless an attack occurs. In terms of impact, the literature highlights the importance of the concept of proportionality, although the empirical evidence tends to be limited and contradictory. To fully understand whether a measure is proportionate requires an understanding of both the actual risk and the perception of it. More broadly, it is important to plan for unintended consequences of protective security measures, such as the over-securitisation of spaces, visible measures that do not blend into the environment, and unintended vulnerability where protective security has not been considered holistically. These consequences may increase the threat of terrorism rather than manage, mitigate, or reduce it.

Potential Ways to Mitigate these Challenges

An effective evaluation process must account for what is considered success, as well as the difference between measuring the effectiveness of the programme itself and measuring its impact on levels of security. Evaluation needs to take place at different levels.

  1. Success: The review indicates the importance of having a comprehensive theory of change in place for the Protect Duty itself. It is important to be clear about what the Duty intends to do and why, and to outline the intended outputs, outcomes, and impact level change that is anticipated. From our (albeit rudimentary) understanding of what the Protect Duty will cover, we assess that the primary focus will be on two areas:
    • Building, through training and awareness raising, the understanding, knowledge, and capacity of staff employed in public venues (including sports and entertainment venues, tourist attractions, and shopping centres), in large organisations that operate in venues and public spaces, and in public spaces such as parks, bridges, or beaches.
    • The implementation of basic physical measures to ‘strengthen’ physical assets.

On this reading, the Duty would exclude issues such as public awareness campaigns around threat identification and reporting, as well as activities and measures to protect information security.

  2. Effectiveness: The review indicates that there are multiple approaches to measuring the effectiveness of programming. Measurement of effectiveness is closely linked to the efficacy of monitoring processes, which in turn depends on clearly defined indicators of success or positive change and on effective data-gathering tools for the different contexts of protective security.

Measuring the number or scale of terrorist incidents is unlikely to be a helpful indicator of effectiveness in this project. Measures of success may be adapted to include proxy measures, such as reported concerns about a potential attack, reports of suspicious behaviour, or thwarted attempts, instead of actual terrorist incidents.

Indicators related to the project goals could also be used, for example indicators based around risk and vulnerability frameworks. A risk management approach could draw upon existing risk or vulnerability assessments, which are more common in this field. Indicators included in the risk assessment could form a checklist, measured at the outset to establish a baseline and then monitored annually to track future progress. If implemented sufficiently, a process evaluation could take place and a certain level of effectiveness could be pragmatically assumed. A more comprehensive option is a risk-based cost-benefit approach, which assesses overall risk as a function of the degree of threat, the vulnerability of a target (including the expected cost of protecting it), and the consequences, in terms of loss, of an attack on the target. A risk management approach could also distinguish between an attack’s primary impact, in terms of fatalities and injuries, property damage, and economic disruption, and its secondary losses, in terms of political, social, economic, and legal costs.
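To make the risk-based cost-benefit framing above concrete, the sketch below scores a single hypothetical asset using the common risk = threat × vulnerability × consequence convention and compares the expected reduction in annual loss against the cost of a protective measure. All names, scores, and costs are illustrative assumptions, not figures drawn from the Protect Duty or the review.

```python
# Illustrative sketch only: hypothetical asset, scores, and costs.
# Risk is expressed with the common convention: risk = threat x vulnerability x consequence.
from dataclasses import dataclass


@dataclass
class Asset:
    name: str
    threat: float                  # estimated annual likelihood of an attack attempt (0-1)
    vulnerability: float           # probability an attempt succeeds under current measures (0-1)
    consequence: float             # expected primary loss if an attack succeeds (GBP)
    mitigation_cost: float         # annual cost of the proposed protective measure (GBP)
    residual_vulnerability: float  # vulnerability after the measure is in place (0-1)


def annual_risk(threat: float, vulnerability: float, consequence: float) -> float:
    """Expected annual loss under the threat x vulnerability x consequence convention."""
    return threat * vulnerability * consequence


def benefit_cost_ratio(asset: Asset) -> float:
    """Annual risk reduction delivered by the measure, divided by its annual cost."""
    baseline = annual_risk(asset.threat, asset.vulnerability, asset.consequence)
    residual = annual_risk(asset.threat, asset.residual_vulnerability, asset.consequence)
    return (baseline - residual) / asset.mitigation_cost


venue = Asset(
    name="entertainment venue",
    threat=0.02,
    vulnerability=0.5,
    consequence=5_000_000,
    mitigation_cost=20_000,
    residual_vulnerability=0.2,
)
print(f"{venue.name}: benefit-cost ratio = {benefit_cost_ratio(venue):.2f}")
```

A ratio above 1 would suggest the measure reduces expected loss by more than it costs; a fuller analysis along the lines discussed above would also fold in secondary political, social, economic, and legal costs.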

Measures of effectiveness could also include indicators of attitudinal or behavioural change resulting from the activities implemented, for example the uptake of training and learning (such as staff understanding of security procedures), the dissemination of that learning, and staff attitudes towards security and safety matters.

  3. Impact: In the P/CVE field, a ‘contribution analysis’ is often used to demonstrate impact. This sets out a narrative about why it is reasonable to infer that the intervention(s) contributed to the observed results. In inferring the contribution, attention is paid to critically assessing whether the programme logic is strong or weak, and whether the observed change was more likely to have been caused by the intervention, by an external factor, or by a combination of the two. Emphasis is placed on demonstrating how the project has contributed to the outcomes, rather than on trying to attribute outcomes to individual task areas or activities. The feasibility of this approach in the protective security field should be discussed during the consultation phase of this project.

Ultimately, to demonstrate impact there are three levels of engagement to consider:

  1. those who are going to be implementing the programme,
  2. those who are going to be impacted by the programme, and
  3. those who are responsible for the success of the Duty.

Opportunities to evaluate the impact of the forthcoming Protect Duty at the national level should be considered during consultations. The same legal framework will be rolled out nationally and can therefore generate national-level data (e.g., social media analytics, content analysis, suspicious activity reports) and monitoring over time, which allows for an impact evaluation. The responsibility for gathering data and conducting impact-level evaluations would rest with national-level authorities, such as the Home Office or the NPST.

An intervention time series analysis (ITSA) methodology could also be used to demonstrate the impact of protective CT policy change over time. Since terrorist attacks are relatively rare, impact could be measured through a proxy indicator, such as crime figures in selected areas before and after the implementation of the Duty. This methodology has previously been applied by the City of London Police and has been proposed in relation to Project Servator, an aspect that could be explored in the consultation.
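As a rough illustration of how an ITSA might be applied here, the sketch below fits a simple segmented regression to a proxy indicator (monthly recorded crime in selected areas) with terms for the pre-existing trend, the immediate change in level when the Duty takes effect, and the change in trend afterwards. The data are simulated and the month of implementation is an assumption made for illustration; this is not the City of London Police analysis itself.

```python
# Hypothetical segmented-regression sketch of an intervention time series analysis (ITSA).
# Simulated monthly crime counts stand in for a real proxy indicator collected before and
# after the duty comes into force.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(seed=0)
n_months, intervention = 60, 36                 # assume the duty takes effect at month 36
time = np.arange(n_months)
post = (time >= intervention).astype(int)       # 1 for months after implementation
time_since = np.where(post == 1, time - intervention, 0)  # months since implementation

# Simulated proxy indicator: a baseline trend plus a drop in level and trend post-duty.
crime = 200 + 0.5 * time - 15 * post - 0.8 * time_since + rng.normal(0, 5, n_months)
df = pd.DataFrame({"crime": crime, "time": time, "post": post, "time_since": time_since})

# Segmented regression: 'post' estimates the immediate level change at implementation,
# 'time_since' estimates the change in trend over the post-implementation period.
model = smf.ols("crime ~ time + post + time_since", data=df).fit()
print(model.summary().tables[1])
```

In a real evaluation the simulated series would be replaced by observed counts, and checks for seasonality, autocorrelation, and suitable comparison areas would be needed before attributing any change to the Duty.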

The Full Report is coming soon
Read more

  1. Paul Martin, The Rules of Security: Staying Safe in a Risky World, (Oxford: Oxford University Press, 2019).
  2. See: Evaluating Security Interventions in Public Locations: Developing and Testing a Co-created Framework for Protective Security (crestresearch.ac.uk)
  3. Joshua Sinai, Jeffrey Fuller and Tiffany Seal, “Research Note: Effectiveness in Counter-Terrorism and Countering Violent Extremism: A Literature Review,” Perspectives on Terrorism 13, no. 6 (December 2019): 90-108, https://www.jstor.org/stable/26853743.
  4. Ibid.
  5. Sinai, Fuller, and Seal, “Research Note: Effectiveness in Counter-Terrorism.”
  6. Simon Copeland and Sarah Marsden, “Behavioural-Focused Protective Security Programmes – Full Report,” Centre for Research and Evidence on Security Threats, 4 November 2020, https://crestresearch.ac.uk/resources/behavioural-focused-protective-security-programmes/.
  7. Steven Heydemann, “Countering Violent Extremism as a Field of Practice,” United States Institute of Peace, Insights, no. 1 (Spring 2014): 11; Berger, “Making CVE Work.”
  8. Cynthia Lum, Leslie W. Kennedy, and Alison Sherley. “Are counter-terrorism strategies effective? The results of the Campbell systematic review on counter-terrorism evaluation research,” Journal of Experimental Criminology 2, no. 4 (2006): 489–516, https://doi.org/10.1007/s11292-006-9020-y; Michael Jones and Emily Winterbotham, “Research Methodology: The Prevention Project,” RUSI, 2020.
  9. Home Office, “Protecting Crowded Places: Design and Technical Issues.”
  10. McIlhatton et al., “Protecting Commercial Real Estate and Crowded Places from Terrorism.”
  11. Monaghan and McIlhatton, “Chapter 23: Prevention of Bomb Attacks by Terrorists in Urban Settings.”
  12. Sinai, Fuller, and Seal, “Research Note: Effectiveness in Counter-Terrorism.”
  13. John Mayne, “Addressing Attribution through Contribution Analysis: Using Performance Measures Sensibly,” The Canadian Journal of Program Evaluation 16, no. 1 (2001): 1-23.
  14. European Commission, Directorate-General for Regional and Urban Policy, “Evalsed Sourcebook: Method and Techniques” (2013), https://ec.europa.eu/regional_policy/sources/docgener/evaluation/guide/evaluation_sourc.