Indicator Level
Indicator Wording
Indicator Purpose
How to Collect and Analyse the Required Data
Determine the indicator's value by using the following methodology:
1) Develop a short, practical knowledge assessment (5-10 questions) that reflects local hazards and expected protective actions. The assessment should focus on what people should do in real situations, not on technical definitions or theory. It should normally be conducted orally, using simple language, scenarios, or visual prompts where helpful, to avoid excluding people with lower literacy levels, learning difficulties, or language barriers; written formats should be used only where appropriate.
Examples of questions:
What should you do first if a flood warning is issued?
Where is the nearest safe evacuation point?
What actions help prevent injuries during strong winds or storms?
Who should you contact in case of an emergency?
2) Agree on the minimum score required to “pass” - for example, at least 60% of answers correct. Verify the feasibility of this threshold when pre-testing the questionnaire. Fix the threshold at baseline and keep it unchanged for all survey rounds to ensure comparability.
3) Collect the required information through individual interviews with a representative sample of the target group members, administering the knowledge assessment in a supportive, non-evaluative way. Enumerators should clearly explain that the purpose is to understand learning and preparedness, not to grade or judge respondents, and that responses will not affect assistance or participation. Adjust delivery to respondents’ language, age, disability, and comfort level.
4) To calculate the indicator value, divide the number of respondents who achieved at least the minimum passing score by the total number of respondents. Multiply the result by 100 to convert it to a percentage.
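The scoring and calculation in steps 2 and 4 can be sketched as follows. This is a minimal illustration only; the number of questions, the 60% threshold, and the respondent scores are hypothetical placeholders, not prescribed values.

```python
# Hypothetical data: each respondent's number of correct answers out of 10 questions.
scores = [7, 4, 9, 6, 5, 8, 3, 10, 6, 7]

TOTAL_QUESTIONS = 10
PASS_THRESHOLD = 0.60  # example threshold fixed at baseline (step 2)

# A respondent "passes" if they reach at least the minimum passing score.
passed = sum(1 for s in scores if s / TOTAL_QUESTIONS >= PASS_THRESHOLD)

# Indicator value: respondents with at least the minimum passing score,
# divided by all respondents, expressed as a percentage (step 4).
indicator_value = passed / len(scores) * 100
print(f"{indicator_value:.1f}% of respondents achieved the minimum passing score")
```

Because the threshold is applied per respondent before aggregation, changing it between survey rounds would change the indicator value even if knowledge stayed the same - which is why step 2 fixes it at baseline.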
Disaggregate by
Disaggregate the data by gender, age group, location, and other criteria relevant to the context and focus of your intervention.
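Disaggregation means computing the same pass rate separately for each subgroup, not averaging subgroup rates. A minimal sketch, using hypothetical group labels and pass/fail results:

```python
from collections import defaultdict

# Hypothetical respondent records: (disaggregation group, passed the assessment?)
respondents = [
    ("women", True), ("women", False), ("women", True),
    ("men", True), ("men", False),
]

totals = defaultdict(int)   # respondents interviewed per group
passes = defaultdict(int)   # respondents passing per group

for group, passed in respondents:
    totals[group] += 1
    if passed:
        passes[group] += 1

# Report the indicator value separately for each group.
for group in totals:
    rate = passes[group] / totals[group] * 100
    print(f"{group}: {rate:.0f}% reached the minimum passing score")
```

The same pattern extends to age group, location, or any other criterion: replace the group label with the relevant characteristic recorded during the interview.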
Important Comments
1) Keep the assessment short and focused on practical knowledge that can be applied in real situations, rather than on the recall of slogans or training content.
2) Pre-test the questions to ensure they are clear, locally relevant, and understandable across different groups within the community.
3) Use the same questions and scoring rules in all survey rounds to ensure comparability over time.
4) When feasible, complement the assessment with simple checks of applied knowledge, such as observing behaviour during a simulation exercise.
5) Interpret results carefully: lower scores among certain groups may reflect unequal access to information or training, language barriers, or exclusion from activities, rather than lack of capacity. Where possible, document such limitations and use findings to improve the inclusiveness of preparedness efforts.