
Youth Knowledge of Local Governance

Indicator Level

Output

Indicator Wording

% of targeted youth with increased knowledge of local governance structures and decision-making processes

Indicator Purpose

This indicator measures the proportion of youth who show a positive change in their knowledge of how local governance works—including structures, roles, and opportunities for participation. It is used to assess whether civic education, leadership training, or engagement initiatives have strengthened young people’s understanding of who makes decisions at local levels, how decisions are made, and how youth can participate. Tracking this indicator helps demonstrate the effectiveness of interventions designed to build youth civic literacy and engagement readiness.

How to Collect and Analyse the Required Data

Determine the indicator's value by using the following methodology:

 

1) Set clear indicator definitions.

  • Define “youth” according to project or national criteria.

  • Define the target group (e.g. youth participating in civic education, training, mentoring, or community dialogue initiatives).

  • Define the key “knowledge” that the project intends to strengthen, e.g. structures and roles, processes, participation opportunities, and accountability mechanisms. These should correspond to the learning objectives of the intervention and be clearly defined before data collection.

2) Design an appropriate measurement approach to assess change in knowledge. This requires measuring the difference between the baseline (situation before the knowledge strengthening activity) and the post-intervention status (situation after the activity). The most common options include:

  • Standardised knowledge test: Administer the same or equivalent test before and after the intervention and compare scores.

  • Self-assessment: Ask participants to rate their knowledge level before and after the intervention using a standard scale (e.g. 1 = very low, 5 = very high).

  • Supervisor or trainer assessment: In some cases, a trainer or supervisor can rate each participant’s knowledge at baseline and at follow-up to determine improvement.

Where possible, use objective pre/post knowledge assessments. Self- or trainer-assessments are acceptable alternatives in contexts where testing is not feasible, but they should be triangulated through peer verification or qualitative checks.
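
For teams that record results digitally, a minimal sketch of a paired pre/post record, written in Python, is shown below. The structure and field names (participant_id, method, pre_score, post_score) are illustrative assumptions, not part of this guidance; the point is simply that each participant’s baseline and post-intervention scores are stored together under a stable identifier so they can be compared at the individual level.

    from dataclasses import dataclass

    @dataclass
    class KnowledgeAssessment:
        # One participant's paired pre/post knowledge scores.
        # Field names are illustrative, not prescribed by this guidance.
        participant_id: str   # stable ID so pre and post results can be matched
        method: str           # e.g. "test", "self_assessment", "trainer_rating"
        pre_score: float      # baseline score (same instrument and scale as post)
        post_score: float     # post-intervention score

    # Hypothetical example: 55% on the pre-test, 78% on the post-test.
    record = KnowledgeAssessment("Y-014", "test", 55.0, 78.0)
    print(record.post_score - record.pre_score)  # raw change: 23.0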

3) Set a minimum threshold for improvement that defines “increased knowledge.” Examples (a calculation sketch follows this list):

  • A minimum required level of knowledge or a minimum percentage improvement between pre- and post-test scores (e.g. at least a 20% increase), set with the baseline level in mind.

  • A shift of at least one point upward on a 5-point self-assessment scale.

  • Avoid setting unrealistically high or unnecessarily low requirements: verify the test’s difficulty by pre-testing it with at least 10 people.
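
For illustration, the two example thresholds above could be expressed as follows (a minimal sketch, assuming test scores on a 0–100 scale and self-assessments on a 1–5 scale; the 20% cut-off is read here as a relative increase over the pre-test score, and all values are examples rather than fixed requirements):

    def increased_knowledge_test(pre: float, post: float,
                                 min_increase: float = 0.20) -> bool:
        # True if the post-test score is at least 20% above the pre-test score.
        # Read here as a relative increase; a percentage-point rule would
        # instead check post - pre >= 20 on a 0-100 scale.
        if pre == 0:
            return post > 0  # avoid division by zero at a zero baseline
        return (post - pre) / pre >= min_increase

    def increased_knowledge_scale(pre: int, post: int, min_shift: int = 1) -> bool:
        # True if a 1-5 self-assessment rating moved up by at least one point.
        return post - pre >= min_shift

    print(increased_knowledge_test(50, 65))   # True: 30% relative increase
    print(increased_knowledge_scale(3, 3))    # False: no upward shift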

4) Collect the pre-intervention data from either the entire target population or its representative sample, as appropriate.

5) Collect post-intervention data, ideally immediately after the intervention and, if possible, again at a later stage to assess retention. Ensure that respondents have had sufficient time to internalise or apply what they learned. The same individuals must be assessed at all measurement points to allow for individual-level comparison of pre- and post-intervention results.

6) Assess results at participant level. Calculate how many youth participants reached the minimum required threshold for increased knowledge defined in step 3.

7) To calculate the indicator’s value, divide the number of youth who meet the defined threshold of increased knowledge by the total number of assessed youth. Multiply the result by 100 to convert it to a percentage.
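
Putting steps 6 and 7 together, a minimal calculation sketch, using hypothetical paired scores and the example 20% relative-increase rule from step 3:

    # (participant_id, pre_score, post_score) pairs - hypothetical data
    results = [
        ("Y-001", 40, 55),
        ("Y-002", 60, 62),
        ("Y-003", 35, 50),
        ("Y-004", 70, 68),
    ]

    def meets_threshold(pre: float, post: float) -> bool:
        # Example rule from step 3: at least a 20% relative increase.
        return pre > 0 and (post - pre) / pre >= 0.20

    improved = sum(1 for _, pre, post in results if meets_threshold(pre, post))
    indicator_value = improved / len(results) * 100  # % with increased knowledge
    print(f"{indicator_value:.1f}% of assessed youth")  # 50.0% in this example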

Disaggregate by

The data can be disaggregated by gender, age group, geographic area, and/or other relevant socio-economic characteristics, depending on your project’s context, focus and resources.
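
As a sketch of how disaggregated values can be derived from the same data, the example below groups hypothetical participant results by gender; any other recorded characteristic (age group, geographic area) can be substituted:

    from collections import defaultdict

    # (gender, met_threshold) per assessed participant - hypothetical data
    assessed = [("female", True), ("female", False), ("male", True),
                ("male", True), ("non-binary", False)]

    totals = defaultdict(lambda: [0, 0])  # group -> [met threshold, assessed]
    for gender, met in assessed:
        totals[gender][1] += 1
        if met:
            totals[gender][0] += 1

    for group, (met, n) in totals.items():
        print(f"{group}: {met / n * 100:.0f}% ({met}/{n})")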

Important Comments

1) When surveying a sample rather than the full target population, ensure it is large enough to account for attrition at the endline, as not all baseline respondents may be available at that stage. This helps keep the post-intervention assessment representative and allows data to be collected from the same respondents at both baseline and endline.
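
One common way to build this buffer into the baseline sample is to inflate the required endline sample by the expected drop-out rate; the sketch below is illustrative, and the 15% attrition figure is purely an example:

    import math

    def baseline_sample_size(required_endline_n: int,
                             expected_attrition: float) -> int:
        # Inflate the endline sample so enough respondents remain after
        # attrition. expected_attrition is the share of baseline respondents
        # expected to be unreachable at endline (e.g. 0.15 for 15%).
        return math.ceil(required_endline_n / (1 - expected_attrition))

    print(baseline_sample_size(200, 0.15))  # 236 respondents at baseline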

2) Always collect both pre- and post-intervention results; otherwise you will not know the extent to which respondents’ knowledge improved. Keep the assessment tools aligned across measurement points to maintain consistency in tracking progress.

3) If your project aims to measure whether the target population has reached a specific level of knowledge and/or skills, rather than whether they have improved over time, use the indicator % youth with the desired knowledge of local governance structures and decision-making processes instead. The advantage of this approach is that you do not need to collect baseline and follow-up data from the same respondents.

4) Ensure the assessment tools are context-appropriate, easily understood by youth, and pre-tested for reliability.

5) If using self-assessments, validate them with an external check to reduce bias. Ask trainers or peers to review whether participants can demonstrate the knowledge they rated themselves on. This verification step ensures the results reflect actual learning rather than subjective perceptions.

6) Decide whether the indicator will measure the immediate effect of a single learning activity (for example, one training session or awareness event) or the cumulative effect of a longer learning process, such as a series of trainings, mentoring, or ongoing civic education activities. This will determine the timing of data collection and how changes in youth knowledge are interpreted. Where feasible, conduct the post-assessment twice: immediately after the learning activity to capture short-term learning, and again 1–2 months later to assess knowledge retention and potential use. Assessments may also be integrated into baseline and endline surveys to measure overall change rather than the effects of a single activity.

7) Drawing on your accessibility and gender analysis, assess whether additional measures are needed to ensure fair and accurate measurement for all youth respondents and, where required, put them in place. These may include offering multiple data collection formats (written, oral, or digital) to accommodate different abilities; ensuring assessment venues are barrier-free; allowing flexible timing for youth who need it; permitting the use of assistive applications or programs; and providing support materials such as large-print text or visual aids. Use gender-neutral language and examples in all questions, and disaggregate and interpret results by gender to identify whether girls/young women, boys/young men, or non-binary youth experience different learning outcomes or barriers.

8) If relevant, consider asking respondents where they acquired the knowledge/skills; this can help you understand the contribution of your intervention.

9) If possible, add questions to collect feedback on the usefulness and relevance of the capacity strengthening. Use the results for intervention improvements.

10) To strengthen learning and accountability, share assessment results with youth participants where feasible and appropriate and use them to refine future capacity development initiatives.

This guidance was prepared by People in Need (PIN) ©