Open Government is all the hype these days. A mythical fairy dust that can improve effectiveness of institutions, enhance efficiency of service deliveries, squash corruption, bolster democracy, and empower citizens all in one flick of the wrist. But do these claims hold true?
This is quite a loaded question. It seems that at the moment we are focused more on the supply side of these initiatives (what new app is being developed to track service delivery, how many datasets have been publicly published, how many citizens are subscribed to send in their opinions through an SMS platform?) and less on the demand side (who is using these initiatives, what is their level of motivation, what are the incentives?). When it comes to looking at the impact of open government, let’s shift our focus for a second away from the open and towards the government. How is the government using these initiatives to change its processes?
The challenges are twofold when it comes to measuring the impact of open government. As my previous post noted, there is difficulty, and a lack of consensus, in defining what constitutes impact. Are we trying to improve service delivery? Curb corruption? Amplify voices? Start a conversation? In order to measure what effect open government has on governance, we need to define what we are trying to accomplish.
Enter challenge #2: the inadequacy of relevant evaluation frameworks for measuring open government’s success or failure. These broad-stroke aims of open government – improving institutional processes, systems, and structures, and empowering citizen participation and mobilization – exist in a web of human interaction at the individual and group level. The impact evaluations I read about, however, track linear, cause-and-effect processes (A caused B) and don’t seem to have the bandwidth to take on this type of evaluation. When it comes to impact evaluations, you have three traditional trails to travel.
1) Experimental Designs: This category of evaluation involves selecting a sample of participants and randomly dividing them into two groups: those who will receive the intervention (the treatment group) and those who will not (the control group). These two groups should be statistically equivalent to each other, and measures should be taken prior to the random allocation of the intervention to ensure this is the case. The control group then becomes the counterfactual that most evaluations seek to have. Estimating the impact in this design is relatively straightforward: the results are interpreted by comparing the means of the two groups on the indicator you are estimating. This method is generally considered the gold standard for estimating a project’s impact.
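To make that comparison of means concrete, here is a minimal sketch in Python. The data are invented toy numbers (a binary indicator, say, whether a citizen reports improved service delivery), not figures from any real study:

```python
# Hypothetical outcomes: 1 = citizen reports improved service delivery, 0 = not.
# In a real experiment these groups would be formed by random allocation.
treatment = [1, 1, 1, 0, 1, 0, 1, 1]  # received the intervention
control   = [1, 0, 0, 1, 0, 0, 1, 0]  # the counterfactual group

def mean(values):
    return sum(values) / len(values)

# The impact estimate is simply the difference between the two group means.
impact = mean(treatment) - mean(control)
print(f"Estimated impact: {impact:.3f}")  # 0.750 - 0.375 = 0.375
```

With real samples you would also report a confidence interval or significance test alongside this difference, since random allocation only guarantees equivalence in expectation.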
2) Quasi-Experimental Designs: A step below the experimental design is the quasi-experimental design, which involves constructing your own control or comparison group through matching and comparison methods. Unlike a randomly assigned control group, this design seeks out a group of non-program participants that is as similar as possible to the program participants. The groups are matched on observable characteristics – a set of shared characteristics expected to affect the intervention’s outcomes. Propensity score matching is a common matching technique, but reflexive comparisons are also used; here, program participants are compared to themselves before the intervention took place.
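A toy sketch of the matching idea, again with invented numbers: each unit carries a propensity score (its predicted probability of participating, estimated from observable characteristics) and an outcome, and each participant is paired with the non-participant whose score is closest:

```python
# Hypothetical (propensity score, outcome) pairs -- not real data.
participants = [
    (0.80, 72), (0.60, 65), (0.40, 58),
]
non_participants = [
    (0.75, 60), (0.55, 55), (0.35, 50), (0.10, 40),
]

# Nearest-neighbour matching on the propensity score.
def nearest(score, pool):
    return min(pool, key=lambda unit: abs(unit[0] - score))

effects = []
for score, outcome in participants:
    _, matched_outcome = nearest(score, non_participants)
    effects.append(outcome - matched_outcome)

# Average treatment effect on the treated (ATT) under this matching.
att = sum(effects) / len(effects)
print(f"Estimated ATT: {att:.1f}")
```

In practice the propensity scores would themselves be estimated (typically with a logistic regression on the observable characteristics), and matching would be done with replacement rules and caliper checks; this sketch only shows the pairing-and-differencing step.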
3) Non-Experimental Designs: The lowest on the statistically credible totem pole appears to be the non-experimental design. These designs are used when it is not possible to randomly select a control group or to identify a comparison group through propensity score matching or reflexive comparisons. Instead, we compare the program participants with a group of non-program participants, using statistical methods to account for the differences between them. Correcting for selection bias is a major concern and challenge of this design.
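One of the simplest statistical adjustments of this kind is stratification: compare participants and non-participants within subgroups that share an observable characteristic, then average the within-subgroup differences. A minimal sketch with invented numbers (participants here deliberately skew towards the "high" education stratum, so a raw comparison would be biased):

```python
# Hypothetical units: (education level, outcome, participated?) -- toy data only.
units = [
    ("high", 80, True),  ("high", 78, True),  ("low", 60, True),
    ("high", 75, False), ("low", 55, False),  ("low", 52, False),
]

# Compare participants with non-participants within each stratum.
def stratum_diff(level):
    treated = [y for edu, y, p in units if p and edu == level]
    comparison = [y for edu, y, p in units if not p and edu == level]
    return sum(treated) / len(treated) - sum(comparison) / len(comparison)

strata = ["high", "low"]
adjusted = sum(stratum_diff(s) for s in strata) / len(strata)
print(f"Stratification-adjusted impact: {adjusted:.2f}")
```

This only corrects for differences on the characteristics we can observe and stratify on; selection on unobservables (motivation, say) is exactly the bias this design struggles to remove.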
These quantitative methods have their place, of course. But how appropriate are they for measuring the impact of open government and e-participation mechanisms? When it comes to the question of the impact of citizen feedback data on government, we must move away from measuring outputs quantitatively and towards measuring experiences, perceptions, personalities, context, and so on. So when I ask – is there any place for ethnography within evaluations? – I say, why yes, yes there is.
So what is ethnography exactly, and how could it fit within an evaluation? Ethnography is simply defined as the study of people and cultures – think of it as the methods anthropologists use to collect and analyze their qualitative data. I believe ethnography holds great potential for understanding the impact of initiatives such as U-Report. If this burgeoning field is meant to deliver such celebrated social and institutional change, we must understand the human system and the individual choices and actions that create it. Social change – that which open government is meant to create – is too complex a matter to quantify linearly. It is highly context specific and immensely influenced by norms and preferences. Additionally, it takes place within a larger structure of processes, each intertwining with and affecting the others, creating a myriad of factors that may be shaping a program’s outcome. A collection of individual experiences can tell us wonders about a program’s potential success. It can help provide insight into questions such as the following:
- How do citizens view their relationship with their government after participating in an open government program?
- How do they express this change?
- What has influenced this expressed change?
- Is there greater two-way communication? Are both sides talking about the same thing?
- Have people become more politically engaged?
We must move beyond this “build it and they will come” mentality. We must gather answers to questions like those posed above to begin to understand what impact these programs have, and where we need to alter our approach if the intended benefits of open government initiatives are not manifesting. We must loosen our grip on the assumption that governments will have the capacity and motivation to respond, and that citizens will be energized and able to demand accountability.
Several studies have incorporated this practice into their M&E to better understand the impacts of their programs. The University of Sussex took part in a study focused on the empowerment of young people and their sexual health in Uganda. Based on the study’s findings, the team was able to improve program principles and assumptions, understand the socio-cultural logic, understand the barriers to program delivery and participation, and identify appropriate outcomes. These lessons learned were then incorporated into policies relating to HIV and sexual health. Complementing my point of view, much of the research and literature on evaluating the impact of open government initiatives is either inconclusive, simply asks how we might go about doing this, focuses on qualitative collection methods, or outright says experimental designs have no place in these types of evaluations. Questions remain and consensus is lacking, but this only means there is great potential for creative solutions and new ways of doing old business.
Through an ethnographic framework, we can better understand the reality of the process of social change and how the individual fits within it. We can begin to explore the motivations behind why people became engaged. Traditional results-based M&E still has its place and will continue to be needed for accountability and efficiency measures, but within the field of open governance, where there is a lack of consensus on definable outcome indicators, an ethnographic approach offers us the opportunity to learn more about the complexity of identifying change within current processes.