How to Collect and Analyse the Required Data
Determine the indicator’s value by using the following methodology:
1) With key project partners and other relevant stakeholders, discuss and agree on clear indicator criteria and definitions.
Define what counts as “other partners” according to the project context. These may include project-supported non-formal groups, networks, CBOs, private sector actors, research organisations, and community groups.
Specify the target topic(s) or issue(s) of CSOs’ and partners’ advocacy efforts.
Clarify which stages of the policy cycle will be assessed, such as agenda-setting, drafting, consultation, adoption, or implementation.
Define what “contribution” means in your context. Consider the following suggestions:
Providing research, data, or technical input used in policy development
Organising or facilitating consultations or dialogues that informed policy direction
Advocating for inclusion of specific issues, groups, or approaches
Building coalitions or networks that influenced policymaking
Supporting implementation or monitoring of new or revised policies
2) Set the reference period for which you will collect evidence of CSOs’ and partners’ contributions to policy change. Typically, assess developments within the current reporting year or project period (e.g. the past 12 months).
3) Develop a tool to record evidence of CSO and partner contributions to policy change. Prepare a simple tool (table or checklist) to document each policy process where CSOs/partners were involved as per the predefined criteria (step 1). The tool may record, for example, the following information for each observed case of CSOs’ and partners’ contribution to policy change:
location
period / date
CSO / partner
type of policy / issue
stage of policy process (agenda-setting, drafting, consultation, adoption, implementation)
type of contribution (e.g. advocacy, technical input, dialogue facilitation etc.)
description of CSO / partner role and extent of their contribution
significance of change
source of verification / evidence
project contribution
external contribution
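The recording tool in step 3 can be as simple as a spreadsheet, but if the project keeps its records digitally, the fields above can be sketched as a small data structure. This is an illustrative sketch only: the field names mirror the list above and are assumptions, not a prescribed schema.

```python
import csv
from dataclasses import dataclass, asdict, fields


@dataclass
class ContributionRecord:
    """One observed case of CSO/partner contribution to policy change.

    Field names follow the illustrative list in step 3; adapt them to
    your project's own tool.
    """
    location: str
    period: str                 # period / date of the observed contribution
    cso_partner: str
    policy_issue: str           # type of policy / issue
    policy_stage: str           # agenda-setting, drafting, consultation, adoption, implementation
    contribution_type: str      # e.g. advocacy, technical input, dialogue facilitation
    role_description: str       # CSO/partner role and extent of contribution
    significance: str
    evidence_source: str        # source of verification / evidence
    project_contribution: str
    external_contribution: str


def save_records(records: list[ContributionRecord], path: str) -> None:
    """Write all records to a CSV file that can serve as the simple database."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(
            f, fieldnames=[col.name for col in fields(ContributionRecord)]
        )
        writer.writeheader()
        for record in records:
            writer.writerow(asdict(record))
```

Keeping the tool as a flat table like this makes it easy to share with partners and to aggregate later (steps 6-7).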
4) Collect evidence through two or more of the suggested methods:
Document and media review: Review policy drafts, consultation reports, position papers, media articles, and official statements referencing CSO / partner input.
Key Informant Interviews: Interview policymakers, CSO leaders, and partners to understand each actor’s contribution and influence.
Focus Group Discussions or reflection workshops: Facilitate participatory discussions among CSOs and partners to rate and validate the extent of their collective contribution.
5) Record the collected and verified information into the developed tool / database (step 3).
6) Develop a set of clear criteria and standards for assessing the extent of CSO/partner contribution. One option is to use rubrics (more guidance in resources below) - a structured assessment tool that uses descriptive levels to judge the achieved level of performance. Rubrics provide clear narrative criteria for each level, allowing users to classify progress in a consistent, systematic, and comparable way. In this case, the rubric should describe how far and how substantively CSOs/partners influenced policy outcomes at each stage of the policy cycle. Users should always formulate their own project-specific rubrics at inception, in line with baseline findings. Ideally, formulation should take place during a joint workshop with project partners. An illustrative example of a simple rubric scale and the description of each level can be:
None (=0) – CSOs/partners did not participate in policy processes.
Minimal (=1) – CSOs/partners were aware of or invited to policy processes but played a limited or observer role.
Emerging (=2) – CSOs/partners provided input or evidence, but their contribution had limited visible influence on the policy outcome.
Moderate (=3) – CSOs/partners made meaningful contributions that shaped sections of policy or process design.
Significant (=4) – CSOs/partners had substantial influence on policy content, framing, or implementation plans.
Transformative (=5) – CSOs/partners co-created policy or institutionalised participation mechanisms for ongoing policy development.
When formulating rubric levels, you may also draw on progress marker language (more guidance in resources below), such as Expect to see, Like to see, and Love to see. You could also formulate a Wouldn’t like to see progress marker to capture negative change, as well as a Need to see progress marker to report on the outputs necessary for the outcome to happen.
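If records are kept digitally, the illustrative 0-5 scale above can be encoded as a simple lookup so that level names and numerical scores stay consistent across assessors. A minimal sketch, assuming the example levels from step 6 (projects should substitute their own project-specific rubric):

```python
# Illustrative 0-5 rubric scale from step 6; replace the levels with
# the project-specific rubric agreed with partners.
RUBRIC_LEVELS = {
    "none": 0,
    "minimal": 1,
    "emerging": 2,
    "moderate": 3,
    "significant": 4,
    "transformative": 5,
}


def rubric_score(level_name: str) -> int:
    """Convert a rubric level name to its numerical score (case-insensitive)."""
    key = level_name.strip().lower()
    if key not in RUBRIC_LEVELS:
        raise ValueError(f"Unknown rubric level: {level_name!r}")
    return RUBRIC_LEVELS[key]
```

Rejecting unknown level names helps catch data-entry inconsistencies (e.g. "substantial" instead of "significant") before scores are aggregated.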
7) Assess the indicator’s achievement: Use the collected information recorded in the developed tool/database (step 3) to assign a level - none, minimal, emerging, moderate, significant, or transformative - to each assessed policy process, policy topic or location as relevant for your project. To determine the appropriate level, engage expert(s) or - if you want to promote participation and strengthen ownership - organise participatory workshop(s) (e.g. with CSOs and other partners).
Using a numerical score (e.g. 0-5, as suggested in step 6) instead of level names can make comparisons easier.
If desired, aggregate results to show how many policy processes or policy topics fall into each level.
Consider engaging an external expert or evaluator to substantiate/validate your results during the project’s evaluation.
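The aggregation suggested above can be sketched as a short routine that counts how many assessed policy processes fall into each level and computes an average numerical score for comparison across reporting periods. The level names and scores are the illustrative ones from step 6, not a prescribed scale.

```python
from collections import Counter
from statistics import mean

# Illustrative scores for the example rubric levels in step 6;
# substitute your project-specific rubric.
LEVEL_SCORES = {"none": 0, "minimal": 1, "emerging": 2,
                "moderate": 3, "significant": 4, "transformative": 5}


def aggregate_levels(assigned_levels: list[str]) -> tuple[Counter, float]:
    """Count policy processes per rubric level and return the average score."""
    normalised = [level.strip().lower() for level in assigned_levels]
    counts = Counter(normalised)
    average = mean(LEVEL_SCORES[level] for level in normalised)
    return counts, average
```

Running this at baseline and at each reporting point makes shifts in the distribution of levels (e.g. from mostly "emerging" towards "significant") easy to report.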
8) Report on the indicator. Provide a narrative description of the indicator’s achievement using the collected evidence and the assigned rubrics level (step 7), as well as any minutes or documentation from the participatory scoring workshop(s) if available. Describe the policy changes or developments identified, explain the extent of CSO and partner contributions, and outline the pathways through which their advocacy, engagement, or dialogue influenced the policy process. In your reporting, combine any available quantitative information - number of policy processes assessed - with qualitative interpretation that explains the extent of the contributions observed. Use the rubric results to summarise overall patterns or shifts in the extent of CSO/partner contribution.
Disaggregate by
Report and interpret findings with reference to relevant contextual factors such as the type of policy process or policy topic, level of government (local, regional, national), and the type of contributing actors (CSO, private sector, academic, network), as feasible and appropriate.
Important Comments
1) Use this indicator if you want to focus on the extent, not just the occurrence, of contribution. Use clear qualitative criteria (e.g. rubrics) to describe how influential or deep CSO/partner engagement was, rather than simply whether it happened.
2) Consider using Outcome Harvesting methodology to assess and verify CSO and partner contributions to policy change. Outcome Harvesting is well suited to this indicator, as it helps systematically identify, describe, and validate concrete instances where CSO or partner actions influenced policy content or processes. Take advantage of the guidance on Outcome Harvesting methodology provided in the documents below. For each “harvested outcome,” document:
What changed? (policy content or process)
Who changed it? (which institution or decision-maker)
How did the change happen? (advocacy, evidence, coordination)
How significant was this change?
How did the project contribute? (e.g. facilitation, capacity strengthening)
3) To track progress over time, apply the rubric at baseline and again at planned reporting points (e.g. annually and/or at endline) to assess if the level of CSOs’ and partners’ contribution shifts over time - e.g. from “emerging” to “significant.”
4) Interpret results and scoring within the policy, institutional, and political context. For example, a local CSO influencing a small municipal guideline may be as meaningful as a national coalition shaping elements of a major sector policy. When assigning a score, encourage partners to explain what steps would be needed to reach the next level of progress in the following period, reflecting on enabling conditions - openness of government, capacity of networks, timing, donor support - as well as barriers that may need to be addressed. This helps avoid inappropriate comparisons across contexts and provides insight into the political space in which CSOs and partners operate.
5) Given that civic space has many actors, examine how project activities may have influenced the extent of CSO and partner contributions to policy development. This will help you to understand your contribution in more depth. Determine whether the observed shifts can be linked to project support (e.g. capacity strengthening, coordination, facilitation of dialogue spaces, funding, evidence generation, networking efforts). When assessing contribution, check whether (a) the project’s activities align with the outcome, (b) stakeholders confirm project influence, and (c) there is a stronger alternative explanation. Document contribution pathways using interviews, reflection sessions, or Outcome Harvesting to understand how the project helped strengthen CSO engagement in the policy process.
6) If resources allow, consider also alternative or external factors contributing to change. These can be assessed by asking questions such as:
How has the political context influenced this change/outcome, either positively or negatively?
How did cooperation with other actors affect the achievement of this change/outcome? Which actors were involved, and in what ways did their involvement help or hinder progress?
7) If your project aims to strengthen participation and ownership of the key stakeholders, engage them in the indicator methodology design and/or indicator results validation. Involve CSO/partner and/or government representatives in developing the rubric criteria, reviewing the assigned levels / numerical ratings, and discussing their shared understanding of contribution levels.
8) If your project has a strong Gender Equality and Social Inclusion component, consider assessing whether marginalised or underrepresented groups (e.g. women, youth, persons with disabilities, ethnic minorities) are engaged in advocacy, stakeholder engagement, or policy dialogue processes. Examine whether their perspectives are reflected in policy proposals or drafts and whether they benefit from the resulting policy changes. Document barriers these groups face in contributing to or influencing policy development, and use these insights to recommend ways to make advocacy and policy dialogue processes more inclusive and equitable.
9) If using rubrics is too resource-intensive or if sufficient data cannot be collected to reliably justify scores (e.g. due to project scale or partner coordination constraints), consider simplifying the approach. You may use Outcome Harvesting independently without rubric scoring to document and verify results in a qualitative, results-based, and evidence-driven way. You can also opt for a simpler quantitative indicator such as Number of government policies at [local / provincial / national] level developed or revised with active CSO participation, which focuses on counting verified instances of policy engagement rather than assessing the depth of influence. These alternatives reduce measurement burden while still capturing meaningful CSO engagement.
10) For EU-funded projects, consider the following OPSYS indicator instead (more options can be found on the Predefined indicators for design and monitoring of EU-funded interventions website): Extent to which the EU-funded intervention supported effective civil society advocacy promoting the expansion of the social protection system (OPSYS core indicator).
11) Consider other IndiKit advocacy indicators:
number of engagements with relevant decision-makers regarding the advocacy efforts
increased representation in fora relevant to the advocacy objectives
number of existing and new partnerships for implementing advocacy efforts
number / % of [specify the target group] actively involved in designing the advocacy actions
number / % of [specify the target group] actively involved in implementing the advocacy actions
Access Additional Guidance
- Better Evaluation - Outcome Harvesting
- INTRAC (2017) Outcome Harvesting (.pdf)
- Outcome Harvesting
- Use of Rubrics
- INTRAC (2024) Outcome Mapping (.pdf)