Perceived Improved Capacity

Indicator Level

Outcome

Indicator Wording

% of [target groups] reporting increased ability to implement disaster preparedness and response activities

Indicator Purpose

This indicator measures the extent to which the target groups, such as local authorities, disaster management committees, or civil society organisations, feel that their ability to plan and carry out disaster preparedness and response activities has improved compared to an earlier point in time (for example, before receiving project support). It helps you assess whether capacity-building or institutional strengthening activities have left key stakeholders feeling more capable of implementing such activities.

How to Collect and Analyse the Required Data

Determine the indicator’s value by using the following methodology:

1) Define what “ability to implement preparedness and response activities” means in your context, for example, the capacity to conduct simulation exercises, update plans, mobilise resources, or coordinate with other actors.


2) Develop a single, clear survey question using a confidence scale. Ask respondents to assess how their ability has changed compared to before receiving project support. Avoid yes/no questions. Encourage respondents to answer anonymously (self-administered paper or online survey) where possible. Clearly explain that responses will not affect future support or assessments.

RECOMMENDED SURVEY QUESTION (Q) AND POSSIBLE ANSWERS (A)

Q1: Since [specify project start/support received], how has your ability to plan and implement disaster preparedness and response activities changed, if at all?

A1: Decreased a lot / Decreased somewhat / No change / Improved somewhat / Improved a lot

3) Collect the data from a representative sample of the target group members. Include only people who actually participated in relevant capacity-building activities (e.g. training, mentoring, technical support), as others cannot assess change. Choose the most feasible data collection method (e.g. online / paper-based anonymous survey) based on your context and ability to ensure respondent comfort and honesty.


4) Count the number of respondents who selected “Improved a lot” or “Improved somewhat”, i.e. those who reported increased ability.

5) To determine the indicator’s value, divide the number of respondents who reported increased ability by the total number of respondents. Multiply the result by 100 to convert it to a percentage.
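
If you record the answers digitally, the calculation in steps 4 and 5 is straightforward to script. The Python sketch below is only an illustration: the sample answers and variable names are hypothetical, and only the answer scale comes from Q1 above.

```python
# A minimal sketch of steps 4 and 5, assuming each answer is stored as one
# of the A1 labels. The sample data below is hypothetical.

responses = [
    "Improved a lot",
    "Improved somewhat",
    "No change",
    "Improved somewhat",
    "Decreased somewhat",
]

# Step 4: answers that count as "reported increased ability"
improved_answers = {"Improved a lot", "Improved somewhat"}
improved_count = sum(1 for answer in responses if answer in improved_answers)

# Step 5: divide by the total number of respondents and convert to %
indicator_value = improved_count / len(responses) * 100
print(f"{indicator_value:.1f}% reported increased ability")  # 60.0%
```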

Disaggregate by

Disaggregate by target group (e.g. CSOs, local authorities, DRM committees), role or level of responsibility of the respondent (e.g. senior management, technical staff, frontline staff), gender of respondent, type of support received, and other relevant criteria.
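
Where responses are held in a spreadsheet or database, the disaggregated values can be computed with a simple group-by. The sketch below uses pandas; the column names and records are hypothetical and shown only to illustrate the approach.

```python
import pandas as pd

# Hypothetical survey records; the columns mirror the disaggregation criteria.
df = pd.DataFrame({
    "target_group": ["CSO", "CSO", "Local authority", "DRM committee", "DRM committee"],
    "gender": ["F", "M", "F", "M", "F"],
    "answer": ["Improved a lot", "No change", "Improved somewhat",
               "Improved a lot", "Decreased somewhat"],
})

# Flag respondents who reported increased ability (see step 4 above)
df["improved"] = df["answer"].isin(["Improved a lot", "Improved somewhat"])

# Indicator value per target group; repeat with "gender", role, etc.
print(df.groupby("target_group")["improved"].mean().mul(100).round(1))
```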

Important Comments

1) Perceived improvement may be influenced by respondent position or social desirability bias. Senior staff or decision-makers may report greater improvement than frontline or operational staff, and some respondents may overstate improvement to please project staff. Where feasible, mitigate this by collecting responses from different roles within the same organisation or group, encouraging honest responses by emphasising that there are no “right” or “wrong” answers, and triangulating perceived improvements with light verification (see point 2 below).

2) When feasible, ask respondents for examples or evidence that demonstrate their improved ability, such as completed simulation exercises, updated plans, coordinated actions, or access to emergency resources. You can also complement the self-reported changes with a light review of other evidence that can indicate improved capacity, for example the existence and quality of preparedness/response plans, records of activities implemented, participation/attendance lists, completed simulation or drill outputs, or brief follow-up conversations. This strengthens credibility through triangulation.


3) Use a confidence scale rather than a yes / no format to get more nuanced results and to track gradual improvement over time.


4) Make sure respondents understand that the question refers to their ability to carry out preparedness and response actions, not only their knowledge of them.


5) Involve different levels of staff or members within each target group to capture both management and operational perspectives.


6) To compare results with baseline or previous measurement rounds, make sure you are using the same scale and question.

This guidance was prepared by People in Need ©