Indicator Level
Indicator Wording
Indicator Purpose
How to Collect and Analyse the Required Data
Determine the indicator’s value by using the following methodology:
1) With key project partners and other relevant stakeholders, discuss and agree on clear indicator criteria and definitions. Discuss and define what counts as “increased opportunities” by clearly describing the types of actions or measures that increase opportunities for citizen engagement. These might include but are not limited to:
Establishing new consultation mechanisms (e.g. fora, committees, digital feedback tools).
Expanding the scope, frequency, or inclusiveness of existing participation processes.
Adopting or revising legal or policy frameworks that formalise citizen participation.
Publicly committing to open, participatory approaches.
Allocating resources to sustain citizen engagement mechanisms.
2) Set the reference period for which you will collect evidence of increased opportunities. Typically, this should cover changes within the current reporting year or project period (e.g. the past 12 months).
3) Develop a tool to record evidence of increased opportunities. Prepare a simple tool (table or database) to document the types of changes and evidence of public authorities increasing opportunities for citizen engagement according to the predefined criteria (step 1). The tool may record, for example, the following information for each observed case of increased opportunities:
authority / level
type of change (new / strengthened opportunity)
type of stakeholder(s) who contributed to change
description of new or improved opportunity
significance of change
period / date
source of verification/evidence
project contribution
external contribution
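The recording tool described in step 3 can be set up in any spreadsheet; for projects that prefer a scripted log, a minimal sketch is shown below. The field names simply mirror the list above and are illustrative assumptions, to be adapted to your project.

```python
import csv

# Illustrative column set mirroring the suggested fields in step 3;
# rename or add columns as your project-specific criteria require.
FIELDS = [
    "authority_level",           # authority / level
    "type_of_change",            # new / strengthened opportunity
    "contributing_stakeholders",
    "description",
    "significance",
    "period_date",
    "source_of_verification",
    "project_contribution",
    "external_contribution",
]

def new_evidence_log(path):
    """Create an empty evidence log (CSV) with the suggested columns."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        csv.DictWriter(f, fieldnames=FIELDS).writeheader()

def record_case(path, case):
    """Append one verified case of an increased opportunity.
    Fields not supplied in `case` are left blank."""
    with open(path, "a", newline="", encoding="utf-8") as f:
        csv.DictWriter(f, fieldnames=FIELDS).writerow(case)
```

Keeping one row per verified case makes step 5 (recording) and step 7 (assigning rubric levels) straightforward, and the file can be shared with partners for participatory validation.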
4) Collect evidence through two or more of the suggested methods:
Document and policy review: Analyse government documents, regulations, decrees, meeting minutes, and official announcements for references to new or expanded participation mechanisms.
Key informant interviews: Engage public officials, CSO representatives, community leaders, and media to confirm if and how participation opportunities have increased.
Direct observation: Attend participatory meetings or review online engagement platforms to confirm that mechanisms are operational and accessible.
Survey: Conduct a short survey with citizens participating in decision-making processes to capture perceptions of whether opportunities for engagement have increased. This can include awareness of new or expanded participation mechanisms, ease of access, and perceived openness of authorities to citizen input.
Other sources: Cross-check official information with other evidence such as citizen feedback, media reports, watchdog reports, or CSO monitoring.
5) Record the collected and verified information into the developed tool/database (step 3).
6) Develop a set of clear criteria and standards for assessing the extent of increased opportunities. One option is to use a rubric (more guidance in resources below)—a structured assessment tool that uses descriptive levels to judge the achieved level of performance. Rubrics provide clear narrative criteria for each level, allowing users to classify progress in a consistent, systematic, and comparable way.
The rubric in this case should describe the increase in opportunities for citizen engagement in decision-making processes. Users should always formulate their own project-specific rubrics at inception (e.g. during a joint workshop with project partners), informed by baseline findings. An illustrative example of a simple rubric scale and the description of each level could be:
None (=1) – no opportunities for citizen participation have been introduced.
Emerging (=2) – authorities demonstrate early openness or willingness to involve citizens, often in ad hoc or informal ways (e.g. ad hoc consultations or pilot initiatives).
Moderate (=3) – participation opportunities are becoming more regular and inclusive, with clearer structure and follow-up.
Significant (=4) – participation mechanisms are well-established, inclusive, structured, and followed up on.
Institutionalised (=5) – participation is embedded within governance systems (e.g. policies, regulations, or budget processes), with allocated resources, and sustained by the authorities themselves.
When formulating rubric levels, you may draw upon progress marker language (more guidance in resources below) such as Expect to see, Like to see, and Love to see. You could also formulate a Wouldn’t like to see progress marker to capture negative change, as well as a Need to see progress marker to report on the outputs necessary for the outcome to happen.
7) Assess the indicator’s achievement. Use the information recorded in the developed tool/database (step 3) to assign a rubric level (e.g. none, emerging, moderate, significant, institutionalised) to each assessed opportunity for citizen engagement. To determine the appropriate level, engage expert(s) or, if you want to promote participation and strengthen ownership, organise participatory workshop(s) with representatives of citizens, local authorities, and other relevant stakeholders.
Using a numerical score (e.g. 1–5, as outlined in step 6) can make comparisons and aggregation easier.
If desired, aggregate results to show how many communities, authorities, or processes are at each level of progress. Outcome Harvesting methodology (see resources below) can help explain shifts between rubric levels, collect concrete examples, and provide deeper understanding of how and why opportunities for citizen engagement have evolved.
Consider engaging an external expert or evaluator to substantiate/validate your results during the project’s evaluation.
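If a numerical 1–5 scale is used (step 6), the aggregation described in step 7 can be sketched as below. The level labels follow the illustrative rubric in step 6; the function names are assumptions for illustration only, and any spreadsheet pivot table would do the same job.

```python
from collections import Counter

# Illustrative rubric labels from step 6; replace with your
# project-specific rubric levels.
RUBRIC = {1: "None", 2: "Emerging", 3: "Moderate",
          4: "Significant", 5: "Institutionalised"}

def aggregate_levels(scores):
    """Count how many assessed opportunities fall at each rubric level."""
    counts = Counter(scores)
    return {label: counts.get(level, 0) for level, label in RUBRIC.items()}

def shift_summary(baseline, endline):
    """Compare baseline and endline scores for the same units,
    in the same order; positive values indicate progress."""
    return [e - b for b, e in zip(baseline, endline)]
```

For example, `aggregate_levels([2, 2, 3, 4])` reports two cases at the Emerging level and one each at Moderate and Significant, which can then be interpreted qualitatively in the narrative report (step 8).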
8) Report on the indicator. Provide a narrative summary of the indicator’s achievement using the collected evidence and the assigned rubric levels (step 7), along with any documentation from participatory scoring, validation, or reflection sessions—if available. Describe how and in what ways government authorities increased opportunities for citizen engagement, highlighting the type, scope, accessibility, inclusiveness, and significance of the new or strengthened opportunities. Combine any available quantitative information—number of communities, authorities or processes showing increased opportunities—with qualitative interpretation that explains the nature, depth, and significance of the changes observed. Use the rubric results to summarise overall patterns or shifts in the extent to which engagement opportunities expanded during the reporting period.
Disaggregate by
Report and interpret findings with reference to relevant contextual factors such as types of authorities involved, types of opportunities, and geographic location, as feasible and appropriate.
Important Comments
1) Use this indicator if you want to assess the quality of improved opportunities, not just the number of meetings or participants. It allows you to assess the quality and significance of how authorities expand opportunities for citizen participation, rather than relying on numerical activity counts. Quantitative measures alone may misrepresent progress, while a qualitative approach helps determine whether these opportunities have truly become more meaningful, inclusive, and accessible.
2) Consider using Outcome Harvesting methodology to document and verify changes in participation opportunities. Outcome Harvesting is well suited to this indicator, as it helps systematically identify and verify concrete instances where government authorities have changed policies, practices, or behaviours to expand citizen engagement. Take advantage of the guidance on Outcome Harvesting methodology provided in the documents below. For each “harvested outcome,” record:
What changed? (e.g. new consultation process introduced);
Who changed? (which authority or institution);
When and where did the change occur?;
How significant is the change?; and
How did the project contribute to it? (e.g. through facilitation, advocacy, technical input, or networking).
3) To track progress over time, apply the rubric at baseline and at planned reporting points (e.g. annually and/or at endline) to assess whether the level of opportunities for citizen participation shifts over time—e.g. from “emerging” to “significant” opportunities.
4) Define “increased opportunities” jointly. Agree on criteria with project partners and authorities early in implementation. This will ensure consistent interpretation across locations.
5) Interpret results and any scoring within the institutional, political, and governance context and at the appropriate level of authority. Compare observed changes to the pre-intervention situation and recognise that progress may take different forms across levels—for example, a township holding its first community consultation may represent as meaningful a step as a national ministry expanding a digital participation platform. When assessing progress, encourage partners to reflect on what steps are needed to further expand engagement opportunities in the next period. Identify enabling conditions and barriers that shape authorities’ willingness and ability to open participatory spaces. This helps avoid inappropriate comparisons across contexts and provides insight into the broader environment influencing change.
6) Given that many actors operate in civic space, examine how your activities may have influenced the observed shifts in participation opportunities in order to understand your contribution in more depth, while recognising that government actions are shaped by multiple factors. Use qualitative methods (interviews, reflection sessions, document review) to explore whether:
The project directly supported authorities (e.g. training, facilitation, or funding);
Project-supported CSOs or platforms advocated for these changes; or
Project evidence, research, or communication informed government decisions.
7) If resources allow, consider also alternative or external factors contributing to change. These can be assessed by asking questions such as:
How has the political context influenced this change/outcome, either positively or negatively?
How did cooperation with other actors affect the achievement of this change/outcome? Which actors were involved, and in what ways did their involvement help or hinder progress?
8) Where resources allow, consider documenting missed or unrealised opportunities for change to better contextualise results and avoid over-interpreting progress. This can include participation spaces that were planned but not opened, or proposals that were discussed but not taken forward.
9) If your project aims to strengthen participation and ownership of the key stakeholders, engage them in the indicator methodology design and/or indicator results validation. Involve community members, civil society actors, local authorities, and other partners in defining rubric criteria, reviewing assigned levels or scores, and developing a shared understanding of what constitutes meaningful expansion of citizen engagement opportunities.
10) Verify accessibility and inclusivity with citizens and CSOs. Collect feedback from those invited to participate—including women, youth, and those from minority or marginalised groups—to confirm that new or strengthened opportunities are genuinely open, inclusive, and used in practice.
11) If using rubrics is too resource-intensive or if sufficient data cannot be collected to reliably justify scores (e.g. due to project scale or partner coordination constraints), consider simplifying the approach. You may use Outcome Harvesting independently without rubric scoring, while still maintaining a results-based and evidence-driven approach. You can also opt for a simpler quantitative indicator such as the Number of increased opportunities for citizen engagement in decision making processes provided by authorities, which focuses on counting verified instances of opportunities rather than assessing the extent of increase. These alternatives reduce the measurement burden while capturing meaningful changes in participation opportunities.
12) For EU-funded projects, consider the following OPSYS indicators instead (more options can be found on Predefined indicators for design and monitoring of EU-funded interventions website):
Level of participation of women, youth, persons with disabilities, the elderly, indigenous people and other populations in development of urban policies supported by the EU-funded intervention
Score (1-10) based on the degree to which the political leadership recognises the value of, and enables, participation by civil society in policymaking processes
Score (1-10) based on the degree to which the political leadership recognises the value of, and enables, participation by civil society in legislative making processes
Access Additional Guidance
- Outcome Harvesting - Better Evaluation
- Outcome Harvesting
- INTRAC (2017) Outcome Harvesting (.pdf)
- Use of Rubrics
- INTRAC (2024) Outcome Mapping (.pdf)