Indicator Level
Indicator Wording
Indicator Purpose
How to Collect and Analyse the Required Data
Determine the indicator's value by using the following methodology:
1) Define the key knowledge and/or skills areas that the project intends to strengthen. These should correspond to the learning objectives of the intervention and be clearly defined before data collection. Ensure the team consistently applies the distinction between knowledge (improved understanding or awareness of concepts, policies, or procedures) and skills (the ability to apply this knowledge effectively).
2) Design an appropriate measurement approach considering your target audience to assess change in knowledge and/or skills. This usually requires measuring the difference between the baseline (the situation before support) and the post-intervention status (the situation after the support). The most common options include:
Standardised knowledge and/or skills test: Administer the same or equivalent test before and after the intervention and compare scores of individual participants.
Self-assessment: Ask participants to rate their knowledge and/or skill level related to each of the assessed areas before and after the intervention using a standard scale (e.g., 1 = very low, 5 = very high).
Observation or practical performance assessment: Directly observe participants performing relevant tasks or demonstrating skills in real or simulated settings and record how well they performed using a structured observation form.
If the indicator includes skills, at least one method that captures application in practice must be used. This can be direct observation, practical tasks, or evidence-based proxies that assess performed work (e.g. assessment of work outputs using pre-defined quality criteria, structured assessment of task performance by a trainer or mentor based on observed application, or task-based simulations scored against clear standards).
3) Set a minimum threshold for improvement that defines “increased knowledge and/or skill.” Examples:
A minimum percentage improvement between pre- and post-test scores (e.g., at least 20% increase).
A shift of at least one point upward on a 5-point self-assessment scale.
Verified improvement in task performance according to established criteria.
Avoid setting unrealistically high or unnecessarily low requirements: verify the assessment method's difficulty by pre-testing it with at least 10 people.
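For teams that process assessment data in a script or spreadsheet, the sketch below expresses the first two example thresholds as simple Python checks. The function names, and the 20% and one-point defaults, are illustrative assumptions taken from the examples above, not prescribed values.

    def meets_pct_threshold(pre_score, post_score, min_increase=0.20):
        """True if the post-test score is at least `min_increase`
        (e.g. 20%) higher than the pre-test score."""
        if pre_score <= 0:
            # Avoid division by zero; treat any gain from a zero baseline as improvement.
            return post_score > 0
        return (post_score - pre_score) / pre_score >= min_increase

    def meets_scale_threshold(pre_rating, post_rating, min_shift=1):
        """True if a self-assessment rating on a 5-point scale moved
        up by at least `min_shift` point(s)."""
        return post_rating - pre_rating >= min_shift

    print(meets_pct_threshold(40, 50))   # 25% increase -> True
    print(meets_scale_threshold(3, 3))   # no upward shift -> False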
4) Collect the pre-intervention data from either the entire target population or its representative sample, as appropriate.
5) Collect post-intervention data, ideally immediately after the intervention and, if possible, again at a later stage to assess retention. Ensure that respondents have had sufficient time to internalise or apply what they learned. The same individuals must be assessed at all measurement points to allow for individual-level comparison between pre- and post-intervention results.
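One practical way to meet the same-individuals requirement is to key every record with a participant ID and keep only the participants assessed at all measurement points. A minimal sketch, assuming tabular data in pandas with hypothetical column names:

    import pandas as pd

    # Hypothetical baseline and endline records keyed by a participant ID.
    pre = pd.DataFrame({"participant_id": [1, 2, 3, 4],
                        "pre_score": [40, 55, 60, 35]})
    post = pd.DataFrame({"participant_id": [1, 2, 4],
                         "post_score": [58, 57, 50]})

    # An inner join keeps only participants assessed at both points, so all
    # comparisons stay at the individual level; participant 3 (lost to
    # attrition) is excluded from the analysis.
    matched = pre.merge(post, on="participant_id", how="inner")
    print(matched)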
6) Assess results at participant level. Calculate how many participants reached the minimum required threshold for increased knowledge and/or skills defined in step 3.
7) To calculate the indicator’s value, divide the number of participants who meet the defined threshold of increased knowledge/skill by the total number of assessed participants. Multiply the result by 100 to convert it to a percentage.
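Steps 6 and 7 reduce to a count and a division. A minimal sketch, assuming matched pre/post test scores and the illustrative 20% threshold from step 3:

    # Hypothetical matched pre/post test scores for three participants.
    participants = [
        {"id": 1, "pre": 40, "post": 58},   # 45% gain -> meets threshold
        {"id": 2, "pre": 55, "post": 57},   # ~4% gain -> does not
        {"id": 3, "pre": 35, "post": 50},   # ~43% gain -> meets threshold
    ]

    def improved(pre, post, min_increase=0.20):
        return pre > 0 and (post - pre) / pre >= min_increase

    n_improved = sum(improved(p["pre"], p["post"]) for p in participants)
    indicator_value = n_improved / len(participants) * 100
    print(f"{indicator_value:.1f}%")   # -> 66.7%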
Disaggregate by
The data can be disaggregated by gender, age group, geographic area, and/or other relevant characteristics (e.g., type of training or topic, organisations targeted), depending on your project’s context and focus.
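If participant-level results are stored in a table, the disaggregated indicator values are the group-wise shares of participants who met the threshold. A sketch assuming pandas, with hypothetical column names:

    import pandas as pd

    # Hypothetical participant-level results with disaggregation variables;
    # "improved" records whether the participant met the threshold from step 3.
    df = pd.DataFrame({
        "gender": ["female", "male", "female", "male", "female"],
        "age_group": ["15-19", "15-19", "20-24", "20-24", "20-24"],
        "improved": [True, False, True, True, False],
    })

    # Indicator value per group: the share of participants who improved.
    print((df.groupby("gender")["improved"].mean() * 100).round(1))
    print((df.groupby("age_group")["improved"].mean() * 100).round(1))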
Important Comments
1) When surveying a sample rather than the full target population, ensure it is sufficiently large to account for attrition at endline, as not all baseline respondents may be available at that stage. This helps ensure that the post-intervention assessment remains representative (your sample size remains large enough) and allows data to be collected from the same respondents at both baseline and endline measurements.
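A common rule of thumb (an assumption here, not a requirement of this indicator) is to inflate the baseline sample by the attrition rate you expect, so that enough of the same respondents remain reachable at endline:

    import math

    required_at_endline = 200   # sample size needed for the endline analysis
    expected_attrition = 0.15   # assumed share of respondents lost by endline

    # Inflate the baseline sample so the expected endline sample stays large enough.
    baseline_sample = math.ceil(required_at_endline / (1 - expected_attrition))
    print(baseline_sample)      # -> 236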
2) If you intend to measure both knowledge and skills, relying solely on pre- and post-tests may be insufficient. Additional methods, such as observation or practical performance assessments, may be necessary to adequately measure the increase in skills.
3) The methodological steps above recommend collecting both pre-intervention and post-intervention data to capture the extent to which the respondents improved (or not) their knowledge and/or skills. However, where baseline data collection is not feasible or pre- and post-testing is inappropriate for the target group, a recall-based evaluation form may be used as an alternative. Such forms may include counterfactual or baseline questions, for example:
To what extent would you say that support provided by [specify organisation / actor] influenced your knowledge about [specify the topic]?
Did you gain any skills and knowledge that you did not have before?
A composite score from several such questions may be used to define “increased knowledge and/or skill” (a simple scoring approach is sketched at the end of this comment). Such an evaluation form is also useful for collecting feedback that can inform intervention improvements.
Recall-based approaches should be treated as a second-best option and used only when baseline data cannot be collected, as recall bias and social desirability bias can inflate results, particularly in donor-funded training contexts. Findings from recall-based methods should therefore be interpreted as perceived change, not measured change.
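One way such a composite score might be operationalised is sketched below; the recoding of answers to a 1-5 scale, the unweighted average, and the cut-off of 4.0 are all illustrative assumptions:

    # One respondent's answers to recall-based questions, recoded to a 1-5 scale.
    responses = {
        "support_influenced_knowledge": 4,   # extent the support influenced knowledge
        "gained_new_skills_knowledge": 5,    # gained skills/knowledge not held before
    }

    # Simple unweighted average as the composite score; 4.0 is an illustrative cut-off.
    composite = sum(responses.values()) / len(responses)
    counted_as_increased = composite >= 4.0
    print(f"composite: {composite:.1f}, counted as increased: {counted_as_increased}")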
4) If your project aims to measure whether the target population has a specific level of knowledge and/or skills - rather than whether they have improved over time - use the indicator % of [specify the target group] with the desired knowledge/skills of [specify the topic] instead. The advantage of this approach is that you do not need to collect baseline and follow-up data from the same respondents.
5) Decide whether to measure the direct effect of a one-off activity (e.g. a demonstration) or the effect of a longer learning process (e.g. series of several training sessions over a period of time).
6) If using self-assessments, validate them with an external check to reduce bias. Ask trainers or peers to review whether participants can demonstrate the knowledge or skills they rated themselves on. This verification step ensures the results reflect actual learning rather than subjective perceptions.
7) If possible, conduct the “post-assessment” twice: once immediately after the “capacity strengthening” activity is completed (showing you the immediate learning); and then, 1–2 months later (showing you the knowledge and/or skills that people actually remember and might use). However, the assessments do not need to relate to a single activity only (e.g. training) - they can be provided during the baseline and endline surveys, assessing the overall change in the target population’s specific knowledge and/or skills.
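Where both post-assessments are collected, retention can be summarised as the share of the immediate post-assessment score that each participant keeps. A sketch with hypothetical scores:

    # Hypothetical scores for the same participants at both post-assessment points.
    immediate = {"p1": 58, "p2": 57, "p3": 50}   # right after the activity
    delayed = {"p1": 55, "p2": 40, "p3": 49}     # 1-2 months later

    for pid, score in immediate.items():
        retention = delayed[pid] / score * 100
        print(f"{pid}: retained {retention:.0f}% of the immediate score")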
8) Drawing on your accessibility and gender analysis, assess whether additional measures are needed to ensure fair and accurate measurement for all respondents and, where required, enact them. These may include offering multiple data collection formats (written, oral, or digital) to accommodate different abilities; ensuring assessment venues are barrier-free; allowing flexible timing for participants who need it; permitting the use of assistive applications or programmes; and providing support materials such as large-print text or visual aids. Use gender-neutral language and examples in all questions and disaggregate and interpret results by gender to identify whether girls/young women, boys/young men, or non-binary youth experience different learning outcomes or barriers.
9) If relevant, consider asking respondents where they acquired the knowledge/skills. It might help you understand the contribution of your intervention.
10) If possible, add questions to collect feedback on the usefulness and relevance of the capacity strengthening. Use the results for intervention improvements.
11) To strengthen learning and accountability, share assessment results with participants where feasible and appropriate and use them to refine future capacity development initiatives.