Disaster Preparedness Simulations

Indicator Level

Output

Indicator Wording

Number of disaster simulations conducted

Indicator Purpose

This indicator measures how many disaster simulations were organised during the reporting period. Simulations test how well disaster management committees, institutions, and communities can implement their preparedness and response procedures. They help identify gaps, strengthen coordination, and improve readiness for real emergencies.

How to Collect and Analyse the Required Data

Determine the indicator’s value by using the following methodology:

1) Define the period for which you will count simulations (for example, the last 12 months) and use this period consistently.

2) Define what counts as a simulation. A (disaster) simulation exercise is a planned, organised event that imitates an emergency situation in order to test response capacities. It follows a predefined scenario, involves relevant participants, and has clear learning or testing objectives. Examples of simulations include tabletop exercises (TTX), evacuation drills, functional exercises, and full-scale field simulations. Do not count ordinary training sessions, awareness meetings, or workshops without an emergency scenario as simulations. If a single simulation exercise tests the response to multiple hazards under one integrated scenario (for example, flooding followed by a disease outbreak), count it as one simulation. If clearly separate simulations are conducted for different hazards, count each separately.

3) Identify all simulations carried out during the reporting period defined in Step 1. Use a combination of information sources, such as:

  • Interviews with key informants (e.g. government officials, local DRM committees, partner organisations); and

  • Review of relevant documents (e.g. activity reports, partner records, minutes from coordination meetings).

4) For each identified simulation, record the date, location, type of hazard(s) simulated, organising institution, and main participant groups involved.

5) To determine the indicator’s value, count the total number of disaster simulations conducted during the reporting period, in line with the definitions you made in Step 2 (see the illustrative sketch below).
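If simulation records are kept electronically, Steps 4 and 5 can be tallied automatically. The following is a minimal, illustrative sketch in Python; the record format, field names, dates, and values are hypothetical examples, not a prescribed data standard.

```python
from datetime import date

# Hypothetical records in the format suggested in Step 4; all values
# are illustrative examples only.
simulations = [
    {"date": date(2024, 3, 14), "location": "District A",
     "hazards": ["flood", "disease outbreak"],  # one integrated scenario = one simulation
     "type": "tabletop exercise", "organiser": "local DRM committee"},
    {"date": date(2024, 9, 2), "location": "District B",
     "hazards": ["earthquake"], "type": "evacuation drill",
     "organiser": "school administration"},
]

# Reporting period defined in Step 1 (example: the last 12 months,
# here taken as the 2024 calendar year).
period_start, period_end = date(2024, 1, 1), date(2024, 12, 31)

# Indicator value (Step 5): total simulations within the period.
indicator_value = sum(
    1 for s in simulations if period_start <= s["date"] <= period_end
)
print(indicator_value)  # 2
```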

Disaggregate by

Disaggregate by type of simulation conducted and by participant group (local authorities, community members, etc.). Where feasible, note the extent to which marginalised or at-risk groups, such as people with disabilities, older persons, or remote communities, participated in the simulations.
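Where records are digital, this disaggregation can be produced with simple tallies. The sketch below assumes the same hypothetical record format as above; the participant group labels are examples only.

```python
from collections import Counter

# Hypothetical records; "participants" lists the main groups involved.
simulations = [
    {"type": "tabletop exercise",
     "participants": ["local authorities", "community members"]},
    {"type": "evacuation drill",
     "participants": ["community members", "people with disabilities"]},
]

by_type = Counter(s["type"] for s in simulations)
by_group = Counter(g for s in simulations for g in s["participants"])

print(by_type)   # Counter({'tabletop exercise': 1, 'evacuation drill': 1})
print(by_group)  # 'community members' appears in both simulations -> 2
```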

Important Comments

1) Keep evidence for each simulation (attendance list, scenario description, photos, or reports) to verify results.

2) Use the same data collection format for all simulations so results can be compared across areas and time periods.

3) Count multi-day or multi-phase simulations as one event when they are part of a single exercise.

4) A full technical quality assessment is not required. Where capacity allows, a simple checklist may be used to confirm minimum quality elements, such as a clear scenario, participation of relevant actors, and a short debrief or evaluation session.

5) Where participant numbers are available from reports or key informants, you may also record how many people took part in each simulation, disaggregated by gender and role (e.g. community members, local authorities, volunteers). These data can complement this indicator or be used with a separate indicator such as “Number of people who attended disaster simulation exercises” (see the illustrative sketch after these comments).

6) For further examples of simulations, see resources below.
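As noted in Comment 5, participant numbers can be disaggregated with the same simple tallies. The sketch below assumes hypothetical attendance records for a single simulation; the gender and role categories are illustrative examples only.

```python
from collections import Counter

# Hypothetical attendance records for one simulation, compiled from
# reports or key informants; categories are illustrative only.
attendance = [
    {"gender": "female", "role": "community member"},
    {"gender": "male",   "role": "local authority"},
    {"gender": "female", "role": "volunteer"},
]

print(len(attendance))                           # total participants: 3
print(Counter(p["gender"] for p in attendance))  # by gender
print(Counter(p["role"] for p in attendance))    # by role
```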

This guidance was prepared by People in Need ©