This CREST guide looks at how to improve decision-making, communication, leadership, and interoperability in areas ranging from urban search and rescue and mass decontamination to hostage negotiation and counter-terrorism.

Although natural and man-made disasters, terrorist attacks, and other major incidents are relatively rare occurrences, these high-impact events pose significant threats to public safety, finances, and organisational reputation (e.g. the 7/7 bombings resulted in 52 deaths and injuries to more than 720 people; financial costs to the economy were estimated to be in excess of £2 billion).

While each event is unique, such incidents share a number of common challenges, including the need to make decisions under severe time constraints. Risk and uncertainty are high, goals may be ill-defined, shifting, or competing, and information is scarce, unreliable, conflicting, or simply too plentiful to make sense of quickly. In response to these difficulties, emergency services are increasingly collaborating with academic institutions to identify an evidence base for ‘what works’ in improving emergency response, recovery, and resilience.

This guide is based on work with emergency responders to develop and evaluate classroom-based, table-top, simulated, and large-scale live exercises. It has been designed to improve decision-making, communication, leadership, and interoperability in areas ranging from urban search and rescue and mass decontamination to hostage negotiation and counter-terrorism.

The Five Fs

The five Fs are the key areas to consider when developing this kind of training:

  1. Form
  2. Focus
  3. Frequency
  4. Feedback
  5. Findings

Form

What form should the training take?
Training interventions come in a variety of shapes and sizes, from short mental rehearsal tasks that can be completed in isolation through to large-scale live multi-agency training exercises. It is important to consider what form is appropriate for the aims of the training.

For example, short but regular and targeted training interventions and monitoring are cheaper, more effective at increasing individual performance, and produce more sustainable change than complex, lengthy, and infrequent interventions.

However, despite being costlier and more time-consuming to run, live training exercises are valuable for testing intra- and inter-agency interactions (e.g. how effectively information is communicated and made sense of, both within organisations across command levels and also between organisations, in order to assess risk and implement actions).

Focus

What is the focus of training in terms of both domain and specificity?
Before designing any form of training, it is important to know what specific skills will be developed, why these skills are necessary, and how they will be tested.

For example, if the purpose of training is to test the ability of multiple agencies to develop a shared appreciation of risk, then it is essential to embed specific features and tasks within the training that will test this particular ability.

Additionally, if people are required to learn new and complex skills, it is better to focus on each one in isolation first, providing the opportunity to practise and receive feedback, rather than immediately focusing on several skills at once. Later shifts in focus, once individual skills have been developed, can emphasise the need to understand the relationship these have in more complex environments.

It is better to focus on each new skill in isolation first ... rather than immediately focusing on several skills at once.

Frequency

How often do we need to commit to training in order to improve performance?
This issue is also known as the ‘dosage effect’. Different skills may take longer to learn or require more opportunities to practise and receive feedback than others, depending on their level of complexity. Skills can also degrade over time once training has been completed, so it may be necessary to provide shorter ‘top-up’ training sessions to prevent them from being lost.

As this can be costly to implement, it is important to know how long skills will be maintained after the initial training before a ‘top-up’ is needed. How intensive the original training and subsequent ‘top-up’ sessions need to be is also an important consideration when balancing skill sustainability against cost (e.g. short reflections lasting minutes that can be done daily vs. events lasting days or weeks).

Feedback

How will feedback operate? Who gives it and how structured must it be?
In order to maximise learning, it is important for people to receive regular and timely feedback and to have the opportunity to repeat tasks so that they can practise putting this feedback into action. Accordingly, it is essential for any learning environment to build in time slots where such feedback can be provided, so that lessons are made explicit throughout the process.

Having the opportunity to reflect on personal performance and receive external feedback from subject-matter experts is beneficial for identifying strengths and gaps in performance as well as how learning can be embedded into operational procedures.

The personal reflections and feedback provided should be validated and mapped onto clear learning plans with distinct thresholds and stages against which individuals can monitor their progress and identify areas for further development.

Findings

How will change be identified and measured? What are the criteria for success?
A consistent difficulty for some public service training is a lack of measures to establish the effectiveness of a training intervention. Knowing how to measure whether a particular skill has been acquired is important both for improving performance and for demonstrating the cost-effectiveness of training.

This also requires an up-to-date and dynamic operating picture of the available human resources within agencies, in order to identify gaps and target training and development plans accordingly. The more complex the environment is, the more difficult it will be to gain a clear picture of what constitutes best practice and the areas in which performance could be improved.

It is therefore important to develop and implement frameworks for evaluating and monitoring performance. These provide a clear set of criteria against which changes in performance and capabilities can be measured, and help to anticipate issues that could affect organisational resilience and continuity.

A consistent difficulty for some public service training is a lack of measures to establish the effectiveness of a training intervention.

Academic institutions are well qualified to assist with this, making it a particularly fruitful area for training hubs to develop links with local academic partners. However, it is important for practitioners to maintain a close association with the institution to prevent the academic focus drifting from the core principles and requirements of end-users and beneficiaries.

Research shows each of these considerations to be important for measuring the effectiveness of training in terms of impact on performance, sustainability of skills, and cost-effectiveness.

Read more
  • Alison, L., van den Heuvel, C., Waring, S., Power, N., Long, A., Crego, J., & O’Hara, T. (2012). Using immersive simulated learning environments (ISLEs) for researching critical incidents: A knowledge synthesis of the literature and experiences of studying high risk strategic and tactical decision-making. Journal of Cognitive Engineering & Decision Making, 20.
  • Alison, L., & Waring, S. (2011). The value of simulation based training in reducing decision avoidance. Home Team Journal, 3, 102–112.