Indicator Level
Indicator Wording
Indicator Purpose
How to Collect and Analyse the Required Data
Determine the indicator's value by using the following methodology:
1) Define a limited number of the most important knowledge and/or skills that the project participants should gain as a result of the provided support.
2) Prepare simple tests assessing whether the targeted project participants have the pre-defined, most important knowledge and/or skills. Avoid setting unrealistically high or unnecessarily low requirements by verifying the test's difficulty through pre-testing with at least 10 people. The testing can consist of, for example:
> in the case of literate persons, a written test; in the case of nonliterate persons, an interview where the data collector asks knowledge-related questions and records whether the participant provided correct answers
> observations where the participants are asked to perform the tested skills and the data collector records whether they were performed correctly
3) Decide the minimum result a person needs to reach in order to pass the test (for example, answering at least 7 out of 10 knowledge-related questions correctly and performing at least 3 out of 5 tested skills correctly).
4) Administer the test to a representative sample of your target group members. Ensure that the sample adequately represents the target population. For example, if the population includes women and men, younger and older people, and other relevant subgroups, make sure the subgroups are adequately represented in the sample (for instance, avoid interviewing only heads of households).
5) Calculate how many participants reached the minimum required result.
6) To calculate the indicator’s value, divide the number of participants who have the minimum required knowledge/skills by the total number of tested participants. Multiply the result by 100 to convert it to a percentage.
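Steps 5 and 6 above can be sketched as a short calculation. The following is an illustrative example only, using hypothetical scores and the pass thresholds from step 3 (at least 7 of 10 knowledge questions and at least 3 of 5 tested skills):

```python
# Hypothetical test records: each participant's knowledge score (out of 10)
# and skills score (out of 5). Thresholds follow the example in step 3.
KNOWLEDGE_PASS = 7
SKILLS_PASS = 3

participants = [
    {"knowledge": 8, "skills": 4},
    {"knowledge": 6, "skills": 5},
    {"knowledge": 9, "skills": 3},
    {"knowledge": 7, "skills": 2},
]

# Step 5: count participants who reached the minimum required result
passed = sum(
    1 for p in participants
    if p["knowledge"] >= KNOWLEDGE_PASS and p["skills"] >= SKILLS_PASS
)

# Step 6: divide by the total number tested and convert to a percentage
indicator_value = passed / len(participants) * 100
print(f"{indicator_value:.1f}% of tested participants passed")  # 50.0%
```

In practice the same calculation is typically done in a spreadsheet or survey platform; the thresholds should match whatever minimum result was agreed in step 3.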
Disaggregate by
Disaggregate results by gender and other relevant criteria (for example, age group, disability status, location, displacement status, and other context-relevant vulnerability markers), provided sample size allows meaningful comparison.
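As a minimal sketch of such disaggregation, assuming hypothetical records where each tested participant has a gender field and a pass/fail result, the pass rate per subgroup can be computed like this:

```python
from collections import defaultdict

# Hypothetical tested-participant records (illustrative only)
records = [
    {"gender": "female", "passed": True},
    {"gender": "female", "passed": False},
    {"gender": "male", "passed": True},
    {"gender": "male", "passed": True},
]

# Tally passes and totals per subgroup
totals = defaultdict(lambda: {"passed": 0, "tested": 0})
for r in records:
    totals[r["gender"]]["tested"] += 1
    if r["passed"]:
        totals[r["gender"]]["passed"] += 1

# Report the pass rate for each subgroup
for gender, t in sorted(totals.items()):
    rate = t["passed"] / t["tested"] * 100
    print(f"{gender}: {rate:.1f}% passed")
```

The same grouping can be applied to age group, disability status, or any other criterion, provided each subgroup's sample is large enough for a meaningful comparison.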
Important Comments
1) Always conduct both a “pre-test” and a “post-test”; otherwise you will not know the extent to which the respondents improved their knowledge and skills (or not).
2) Decide whether to measure the direct effect of a one-off activity (e.g. a demonstration) or the effect of a longer learning process (e.g. a series of trainings over a period of time).
3) If you use this indicator to evaluate the effectiveness of trainings (or similar events), prepare standardized tests that relevant staff can use across all the trainings.
4) If possible, conduct the “post-test” twice – once immediately after the “capacity building” activity is completed (showing you the immediate learning) and then 1–2 months later (showing you the knowledge and/or skills that people actually remember and might use). However, the tests do not need to relate to a single activity only (e.g. a training) – they can also be administered during the baseline and endline surveys, assessing the overall change in the target population’s specific knowledge and/or skills.
5) If relevant, consider asking respondents where they acquired the knowledge/skills. It might help you understand the contribution of your intervention.
6) Ensure the assessment method is accessible and does not disadvantage specific groups (for example, by using oral or pictorial tools where literacy is low or conducting the test in the participant’s preferred language where feasible).
7) Analyse whether different subgroups achieved the minimum knowledge/skills threshold at similar rates. Where gaps persist, assess likely barriers (for example, language, literacy, time burden, mobility and safety constraints, unequal access to learning opportunities) and adapt training content and delivery accordingly.
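The pre-/post-test comparison in comment 1 amounts to calculating the indicator twice and looking at the change. A minimal sketch, assuming hypothetical paired results keyed by participant ID:

```python
# Hypothetical pre- and post-test pass/fail results for the same participants
pre_results = {"P01": False, "P02": False, "P03": True, "P04": False}
post_results = {"P01": True, "P02": True, "P03": True, "P04": False}

def pass_rate(results):
    """Share of participants who passed, as a percentage."""
    return sum(results.values()) / len(results) * 100

change = pass_rate(post_results) - pass_rate(pre_results)
print(f"Pre-test:  {pass_rate(pre_results):.0f}%")   # 25%
print(f"Post-test: {pass_rate(post_results):.0f}%")  # 75%
print(f"Change:    {change:+.0f} percentage points")  # +50
```

Keeping results keyed by participant ID also makes it possible to check how many individuals moved from fail to pass, rather than only comparing aggregate rates.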