Indicator Level
Indicator Wording
Indicator Purpose
How to Collect and Analyse the Required Data
Determine the indicator’s value by using the following methodology:
1) Set clear indicator definitions and reference period.
Specify which local decision-making processes are being assessed. These might include community consultations, planning meetings, participatory budgeting sessions, development planning, or service committees.
Identify the target population (e.g. citizens or civil society organisation representatives who have participated in at least one such process within the past [define the period]) and relevant authorities.
Define the reference period for assessing satisfaction (e.g. “in the last 12 months”).
Clarify what “responsiveness” means in the local context, such as acknowledging citizens’ inputs, explaining decisions, or implementing citizens’ suggestions. Ensure that all enumerators use the same interpretation.
2) Design the satisfaction survey questions using the examples below:
RECOMMENDED SURVEY QUESTIONS (Q) AND POSSIBLE ANSWERS (A)
Q1: In the past [specify the time period], have you participated in [specify the local decision-making process(es)]?
A1: yes / no / doesn’t remember
(ask the following question only if the previous answer is YES)
Q2: During this process, did you mainly observe, or did you actively participate (e.g. by sharing views, asking questions, or submitting suggestions)?
A2: mainly observed / actively participated
Q3: Overall, how satisfied were you with how [specify the local authority(ies)] responded to citizens’ opinions or suggestions during this process? Were you very satisfied, fairly satisfied, rather dissatisfied or very dissatisfied?
A3:
1) very satisfied
2) fairly satisfied
3) rather dissatisfied
4) very dissatisfied
5) doesn’t remember/won’t say
3) Collect data through household or participant surveys with a representative sample of participants in local decision-making processes.
4) To calculate the indicator’s value, divide the number of people who were “very satisfied” or “fairly satisfied” by the total number of respondents who participated in local decision-making processes (exclude those who didn’t remember or refused to respond). Multiply the result by 100 to convert it to a percentage.
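The short Python sketch below illustrates this calculation on hypothetical, made-up response data; the record structure, field names and answer codes are assumptions for illustration only, not a prescribed data format.

# Illustrative only: hypothetical coded survey responses (assumed structure).
# q1 = participation (Q1), q3 = satisfaction (Q3) coded 1-5 as in the answer list above.
responses = [
    {"q1": "yes", "q3": 1},    # very satisfied
    {"q1": "yes", "q3": 2},    # fairly satisfied
    {"q1": "yes", "q3": 4},    # very dissatisfied
    {"q1": "yes", "q3": 5},    # doesn't remember / won't say -> excluded
    {"q1": "no",  "q3": None}, # did not participate -> excluded
]

# Keep only participants who gave a substantive satisfaction answer (codes 1-4).
participants = [r for r in responses if r["q1"] == "yes" and r["q3"] in (1, 2, 3, 4)]
satisfied = [r for r in participants if r["q3"] in (1, 2)]  # very or fairly satisfied

indicator_value = len(satisfied) / len(participants) * 100
print(f"{indicator_value:.1f}% of participants were satisfied")  # 66.7% in this example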
Disaggregate by
The data can be disaggregated by gender, age, population group, urban/rural location, or other context-relevant categories, as feasible and appropriate.
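As an illustration, the sketch below builds on the hypothetical data structure above and assumes each record also carries the respondent’s disaggregation attributes (here a “gender” field); it computes the same indicator value separately for each group.

# Illustrative only: indicator value per disaggregation group (here, gender).
from collections import defaultdict

groups = defaultdict(lambda: {"satisfied": 0, "total": 0})
for r in participants:                 # 'participants' filtered as in the sketch above
    g = r.get("gender", "unknown")     # assumed field recorded during the survey
    groups[g]["total"] += 1
    if r["q3"] in (1, 2):
        groups[g]["satisfied"] += 1

for g, counts in groups.items():
    print(g, round(counts["satisfied"] / counts["total"] * 100, 1), "%")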
Important Comments
1) It is highly recommended to include additional survey questions to better understand the main reasons for respondents’ (dis)satisfaction with the authorities’ responsiveness. The following questions can be used as examples:
Q4: What were the main reasons for your satisfaction with how local authorities responded?
A4: pre-define the options based on the context and pre-testing (e.g. “my suggestions were acted upon,” “other citizens’ suggestions were acted upon,” “authorities explained decisions clearly,” “authorities followed up,” “authorities provided feedback,” etc.) + include an option “other – specify: …………………………………….”
Q5: What were the main reasons for your dissatisfaction with how local authorities responded?
A5: pre-define the options based on the context and pre-testing (e.g. “my suggestions were not acted upon,” “other citizens’ suggestions were not acted upon,” “authorities did not explain decisions clearly,” “authorities promised follow-up but did not act,” “no feedback was provided,” “consultation felt symbolic or formal only,” etc.) + include an option “other – specify: …………………………………….”
Ask Q6 only if the respondent has actively participated in the local decision-making process(es).
Q6: After the process, were you informed or did you learn how your inputs (or citizens’ inputs) were used in the final decisions?
A6: yes / no / partly (received some information)
If deeper insights into the reported reasons for people’s (dis)satisfaction are needed, collect this information through focus group discussions, key informant interviews, or additional survey questions. Document the key drivers and use these insights to interpret trends and inform improvements in how authorities engage with citizens.
2) Use a larger sample. Unless you know in advance that all your respondents have participated in local decision-making processes during the given period, your survey sample will need to be relatively large, so that even after you exclude people who have not participated, you still retain a representative number of respondents (see the sample-size sketch after these comments).
3) If you do not have the capacity to conduct a quantitative survey on a representative sample, consider changing the indicator to a qualitative one, for example “Extent to which the target population involved in local decision-making processes were satisfied with the responsiveness of the relevant authorities”, and use qualitative approaches and methodologies instead.
4) If you want to understand how satisfaction changes over time, establish a baseline during the first data collection cycle and repeat the survey at defined assessment intervals (e.g. annually and/or at endline) using the same tools and sampling strategy to ensure comparability.
5) If your resources allow, consider verifying survey responses against meeting records or decision documents to confirm whether citizen proposals were addressed. Cross-check citizens’ perceptions with local authority representatives to triangulate findings.
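Relating to comment 2 above, the sketch below shows one common way to gross up the sample: start from a standard sample-size formula for estimating a proportion and divide by the expected share of respondents who have actually participated in local decision-making processes. The 95% confidence level, 5% margin of error and 30% expected participation rate are illustrative assumptions; adjust them to your context and sampling design.

import math

# Standard sample size for estimating a proportion (illustrative defaults).
z = 1.96   # z-score for a 95% confidence level
p = 0.5    # assumed proportion (0.5 gives the most conservative sample size)
e = 0.05   # desired margin of error (+/- 5 percentage points)
n_needed = math.ceil((z**2) * p * (1 - p) / e**2)   # ~385 completed interviews

# Gross up for screening: only a share of respondents will have participated
# in local decision-making processes (assumed 30% here).
participation_rate = 0.30
n_to_survey = math.ceil(n_needed / participation_rate)

print(n_needed, n_to_survey)   # 385 completed interviews -> survey roughly 1,284 people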