Program Evaluation in India: The Perspective of Evaluators and Practitioners

Binoy Cherian and Vishwas V Patel bring in varied perspectives from an important set of stakeholders – impact assessment agencies – and share the state of the art in the space in terms of methods and technologies.

9 mins read
Published On : 30 March 2022
Modified On : 13 November 2024

The domain of monitoring and evaluation (M&E) has evolved over the years and secured a firm standing in the development sector in India today. Aided by technological tools, and driven by the imperative of accountability on programs – both internally to organizations themselves and externally to funders – M&E has become an integral part of every program. “Coming up with a strong M&E [framework] with a justifiable resource allocation can also make a proposal very strong,” says a practitioner with over 18 years of experience in the development sector, underscoring the importance of having an M&E system in place even before program rollout.

Here, it is pertinent to note a distinction between regular project monitoring and overall program impact evaluation – M&E cannot always be viewed as a monolithic domain. While monitoring has become indispensable to every program for accurate record keeping, tracking Key Performance Indicators (KPIs) and reporting to donors, impact assessment is more discretionary. Impact evaluation (IE) is undertaken depending on the nature, scale and time-frame of the project, the available budget, and whether there is a real need for evaluation evidence – for instance, for scaling up or replicating the project.

While large implementing organizations have well-resourced, independent M&E teams, smaller organizations confine themselves to monitoring through standard approaches such as the logframe and theory of change (ToC). Where program evaluation is required, it is outsourced to external agencies. It is not only that evaluation requires a more sophisticated, systematic approach with corresponding skills and capacities, but also that an external professional agency imparts more credibility. And while IE may not be required for each and every program, rigorous evaluation evidence becomes imperative from a policy-making perspective – to see whether programs are scalable, not just for organizations themselves but for policymakers in general.

This article, based on hour-long, semi-structured interviews with twelve practitioners from nine leading implementing/evaluation organizations, presents some of the current practices as well as challenges with regard to M&E systems and practices in India.

Current M&E Practices: Some Salient Aspects

As with other sectors, technological advancement has had a major bearing on M&E practices. Some of the common data collection tools include KOBO, SurveyCTO, ODK Collect, etc. Frequently used data analysis tools include Excel, SPSS, STATA, R, NVivo and Atlas.ti. And popular geographic mapping tools include Google Earth Pro and QGIS.

A more sophisticated aid, Airtable, is used to map the questionnaire to the outcomes outlined in the ToC. The length of the questionnaire tends to be inversely proportional to the quality of data collected. “Applications like Airtable help with keeping the questionnaire short while corresponding with all the outcomes,” opines a researcher at a top evaluation organization.
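The underlying idea can be illustrated independently of any particular tool: maintain an explicit mapping between questions and outcomes, then check it for gaps. Below is a minimal Python sketch; the question IDs and outcome names are hypothetical.

```python
# Minimal sketch: map survey questions to ToC outcomes and flag
# coverage gaps. Question IDs and outcome names are hypothetical.
question_to_outcomes = {
    "q1_household_income": ["improved_livelihood"],
    "q2_meals_per_day": ["food_security", "dietary_diversity"],
    "q3_school_attendance": ["education_access"],
}

toc_outcomes = {"improved_livelihood", "food_security",
                "dietary_diversity", "education_access"}

# Outcomes with no corresponding question signal a coverage gap;
# questions mapped to no outcome are candidates for trimming.
covered = {o for outs in question_to_outcomes.values() for o in outs}
print("Uncovered outcomes:", toc_outcomes - covered)
print("Questions without outcomes:",
      [q for q, outs in question_to_outcomes.items() if not outs])
```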

Proper data collection is a crucial task on which the rest of the M&E work rests. It should be done in a timely manner and with integrity. Some organizations have developed a large cadre of community data collectors (CDCs), equipped with smartphones and digital data collection applications. Otherwise, data collection can become tedious and hamper other program-related work of development practitioners.

Delegating data collection – with suitable compensation – to trained data collectors is a welcome practice. This indirectly strengthens outreach and engagement with the community too. The upskilled CDCs can also find work with other organizations in need of data collection. Training workshops for CDCs keep them up to date with best practices and technological upgrades regarding data collection.

Here, as a good practice, care is exercised to contextualize the questionnaire, so that survey respondents easily relate to the questions. For example, in the words of a senior practitioner, “bigha is a unit of measurement of land used in various parts of the country. There should be an option to collect data in bigha, and later during analysis it may be converted into acre or other standard units.”
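Such unit handling is straightforward to build into an analysis pipeline. The Python sketch below illustrates the idea; note that the size of a bigha varies by region, so the conversion factors shown are assumptions for illustration only and must be verified locally.

```python
# Illustrative conversion of land area from bigha to acres.
# Bigha sizes differ across states; these factors are assumed
# values for illustration and must be verified locally.
BIGHA_TO_ACRE = {
    "uttar_pradesh": 0.625,  # assumed factor
    "rajasthan": 0.4,        # assumed factor
}

def to_acres(value_bigha: float, region: str) -> float:
    """Convert a bigha measurement to acres for analysis."""
    return value_bigha * BIGHA_TO_ACRE[region]

print(to_acres(2.0, "uttar_pradesh"))  # 1.25 acres under the assumed factor
```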

Viewed through a methodological lens, there is a growing appreciation of the importance of context in program evaluation. Though the RCT (Randomized Controlled Trial) is considered the gold standard in research methodology, there is an increasing trend of adopting a mixed methods approach to obtain more contextual insights. Here, in addition to impact evaluation, process evaluation receives due emphasis, relying on suitable qualitative methods to address not only the ‘what’ question but also the ‘why’ and ‘how’ questions of program impact.

Having a separate vertical in charge of quality assurance mechanisms is another good practice. Some organizations use audio audits as a mechanism for back checks in survey data collection as part of quality assurance. SurveyCTO, for instance, provides an option for audio recording for different durations.

By randomizing such recordings across data fields, attempts to game the survey process can be effectively deterred. Using these audio audits for back checks – as opposed to conducting them physically by revisiting a sample of households – is a major innovation in M&E field practice.
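The selection logic behind such randomized audits can be sketched in a few lines of Python. This is not SurveyCTO’s actual mechanism; the field names, audit rate and seed below are hypothetical.

```python
import random

# Sketch: randomly flag a subset of interviews, and fields within
# them, for audio audit, so enumerators cannot predict which
# responses will be recorded. Names and rates are hypothetical.
FIELDS = ["consent", "household_income", "meals_per_day"]

def audit_plan(interview_ids, audit_rate=0.2, fields_per_audit=2, seed=42):
    rng = random.Random(seed)
    n_audits = max(1, int(len(interview_ids) * audit_rate))
    selected = rng.sample(interview_ids, k=n_audits)
    return {i: rng.sample(FIELDS, k=fields_per_audit) for i in selected}

print(audit_plan([f"intv_{n}" for n in range(50)]))
```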

Obtaining informed consent from research participants is a necessary requirement, and some organizations are comparatively more stringent about it. There are efforts to uphold data privacy and integrity, under the larger ambit of research ethics. Data privacy agreements between organizations, stipulating protocols for the management and protection of data, are followed in some organizations. For instance, personally identifiable information must not be leaked under any circumstances. Data sharing in these organizations happens legitimately and through encrypted files only. In the event of a data breach, remedial protocols are laid out.
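One common way to protect personally identifiable information before a dataset leaves an organization is to replace direct identifiers with salted pseudonyms. The Python sketch below illustrates the idea; actual organizational protocols will differ, and the field names and salt are hypothetical.

```python
import hashlib

# Sketch: replace direct identifiers with salted hashes before
# sharing, keeping the salt internal so the pseudonyms cannot be
# reversed by the recipient. Field names are hypothetical.
SALT = b"keep-this-secret-internally"  # illustrative placeholder

def pseudonymize(record: dict, pii_fields=("name", "phone")) -> dict:
    out = dict(record)
    for field in pii_fields:
        if field in out:
            digest = hashlib.sha256(SALT + str(out[field]).encode()).hexdigest()
            out[field] = digest[:12]  # short, stable pseudonym
    return out

print(pseudonymize({"name": "A. Devi", "phone": "9999999999", "income": 12000}))
```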

Some organizations are expanding the scope of M&E by linking it with research and advocacy work. They produce status reports pertaining to their domain of work – for instance, on the state of the agrarian economy or adivasi livelihoods. Reports providing a broad picture of such domains may already exist; incorporating insights from M&E complements this knowledge repository.

“There is a subtle distinction, with the emphasis M&E brings on data and metrics,” points out a senior practitioner. “For example, details such as mean income of adivasis in Jharkhand; percentage of adivasi people lacking food security; data on their dietary diversity, etc. add richness to reports.”

In addition, some organizations tie up with university researchers to conduct evaluation research. Such studies often aim for methodological innovations to uncover program insights, ultimately contributing to evaluation literature.

Challenges and Way Ahead

While the M&E space is witness to many good practices, there remain several challenges too. One of them is the power dynamics between funding organizations and implementing organizations. There is a need for more dialogue and deliberation between donors and development practitioners, and for better cross-sectoral knowledge sharing. The corporate sector, from which donors predominantly hail, is incongruous with the development sector in several respects.

For instance, owing to the social and politico-economic realities on the ground, the latter operates amidst peculiar problems and constraints. An appreciation of this is often missing among donor agencies, which – as perceived by development practitioners – typically do not have sufficient exposure to these ground-level realities.

Also, there is much difference between the corporate sector and development sector in terms of strategy, timelines, stakeholder engagement, and ways of functioning in general. So, in designing project proposals, and determining project outcomes and indicators, there is a need for consensus building with inputs and expertise drawn from across sectors.

In a similar vein, there is a need for better appreciation among donors of the importance of qualitative methodologies, in addition to quantitative ones, in evaluations. Given their high cost, owing to longer timelines and the requirement of highly skilled professionals, qualitative studies are usually disregarded, despite the rich contextual and process-related insights they can offer on projects. The advent of the mixed methods approach, with its inherent triangulation benefits, is encouraging; yet this methodological open-mindedness should be genuine and not merely cosmetic.

In this regard, a recent innovation within qualitative methodology is worth highlighting. Audio diaries – recordings made on a device, typically the ubiquitous mobile phone – are used to capture (often semi-literate) participants’ “practices, feelings, reflections, and interactions with their physical and social environment in real time” (O’Reilly et al., 2022).

Turning to methodologies: while the RCT is considered the gold standard of evaluation, it may not always be feasible. An essential feature of the RCT is the random assignment of intervention and control groups from the same eligible population – but in reality this strict requirement is often hard to satisfy. There can also be conflicting interests of implementers and evaluators: while the latter may ask for certain interventions to be delayed for control groups, the former might have to deliver their interventions in a time-bound manner.

In such scenarios, deft negotiation is called for. And, in lieu of an RCT, a suitable difference-in-differences (DD; either double difference or triple difference) estimator could be employed. With DD, intervention and control groups are drawn from naturally occurring settings (a quasi-experimental design) – as opposed to an RCT, which requires randomly assigned, statistically equivalent groups (a strict experimental setting) – and differences in outcomes over time between the groups are compared for analysis.
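In the simple two-period case, the DD estimate is the change in outcomes among the intervention group minus the change among the control group, which equals the coefficient on the treatment-by-period interaction in a regression. A minimal Python sketch using statsmodels follows; the toy data and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy two-period panel: 'treated' marks the intervention group,
# 'post' marks the period after the intervention.
df = pd.DataFrame({
    "outcome": [10, 11, 12, 16, 9, 10, 11, 12],
    "treated": [1, 1, 1, 1, 0, 0, 0, 0],
    "post":    [0, 0, 1, 1, 0, 0, 1, 1],
})

# The coefficient on treated:post is the DD estimate:
# (treated post - treated pre) - (control post - control pre).
model = smf.ols("outcome ~ treated * post", data=df).fit()
print(model.params["treated:post"])  # 1.5 for this toy data
```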

Regarding the operational and procedural challenges of M&E, some senior executives candidly revealed that there is a sense of dissatisfaction, if not suspicion, about M&E among practitioners. Often the response is that nothing new or striking is revealed by M&E – with the refrain being “This was always known.” At the other extreme, doubts are expressed about the validity of M&E results, with the objection that insights and experience gathered from regular fieldwork run counter to them. Some practitioners also feel that M&E is used for surveillance of field staff, which should not be the case.

This fragile state of affairs can only be effectively addressed by designing and executing participative, comprehensive and professional evaluation studies. Further, “There should be a concerted effort to elevate M&E into MEL: A Learning framework should complement M&E activities,” as suggested by a senior practitioner. This would facilitate drawing lessons from the work of various programs and projects for the professional growth of practitioners, as well as for the growth of the organization as a whole. It would help counter the perception that M&E is ‘limited,’ ‘extractive’ and ‘ritualistic,’ catering only to the requirements of external donors, and would encourage more willing participation in M&E from the staff as well.

There is also much talk around participatory M&E. Generally, community participation, and inclusivity therein, is limited to the stage of data collection alone. To enhance the participatory value of M&E, ensuring meaningful community participation in the design of evaluation studies is advisable. Efforts are also needed to take the findings and evidence of evaluations back to the community for review and deliberation, presented in their language and in a way that is simple and easy to understand. This would improve community investment and participation in enhancing the success of developmental programs. In the words of another senior practitioner, good M&E should see the “community as partners rather than as beneficiaries.”

The observance of transparency, reproducibility and ethics (TRE) in M&E practice must become more commonplace. It has to go beyond the standard practices of ethical research such as obtaining informed consent from research participants, respecting their rights, maintaining confidentiality, etc.

There must be more emphasis on ensuring transparency and reproducibility of data throughout the program cycle so that other, independent researchers could verify the evaluation findings and results through their own analyses, enhancing the credibility of evaluation studies.

Often there is a bias toward positive outcomes, yet there remain valuable lessons to be learnt from failed programs. By making the effort to publish zero-result and negative-result studies as well, key insights and evidence can be tapped that would be helpful for future programs and interventions.

Conclusion

Over the last few decades, M&E has gained momentum within the development sector in India. One sees a terminological advance with MEL, MERL (including Research and Learning) and MEAL (including Accountability and Learning) – signifying the broadening scope and ambit of M&E activities. In concert, several best practices are witnessed in this space, encompassing technological aids, procedural and methodological innovations, and a growing emphasis on data and research ethics. In tandem, however, are the challenges that need to be tackled. Navigating competing intersectoral interests, the under-appreciation of qualitative approaches, and poor community engagement are some areas in need of further work. With a growing movement of M&E practice, research and advocacy, the road ahead promises durable solutions and more good practices.

References

O’Reilly, K., Ramanaik, S., Story, W. T., Gnanaselvam, N. A., Baker, K. K., Cunningham, L. T., Mukherjee, A., & Pujar, A. (2022). ‘Audio Diaries: A Novel Method for Water, Sanitation, and Hygiene-Related Maternal Stress Research.’ International Journal of Qualitative Methods, 21. https://doi.org/10.1177/16094069211073222

Acknowledgement:

We express our deep gratitude to the development practitioners and evaluation professionals for participating in our interviews and sharing their insights. Thank you: Anuradha Navalkar, Clement Dayal, Dibyendu Chaudhuri, Divyanshu Chaturvedi, Indira Patil, Jacob John, Preeti Misra, Rajeev Kumar Singh, Ruchinilo Kemp, Satyanarayana Ramanaik, Shaurya Gupta, and Tania Tauro. The views they shared were personal and do not necessarily reflect those of their respective organizations. We also express heartfelt thanks to our University faculty, Rahul Mukhopadhyay and Suraj Jacob, for their guidance and support throughout this study.

Binoy Cherian
Binoy Cherian is an engineer with interest in leveraging technology and data for the development sector. He is currently pursuing a master’s degree in Education from Azim Premji University.
Vishwas V Patel
Vishwas V Patel is a final-year post-graduate student of M.A. Public Policy and Governance at Azim Premji University (2020-22), with interests in education practice, research and policy.