Impact Assessments in Education

In their essay, Anuradha De and Meera Samson discuss aspects of the impact assessment process in the domain of education interventions.

7 min read
Published on: 26 March 2022
Modified on: 8 November 2024

Impact assessments in education have become an increasingly visible strand of educational research. In the past two decades, multiple initiatives have been introduced in government schools in India to improve access, quality and equity. These initiatives have generally been implemented with resource and technical support from the private sector (for-profit or non-profit), UN bodies or foreign aid.

For example, the Activity Based Learning pedagogy was designed by the Rishi Valley School in Andhra Pradesh. It was introduced by the Tamil Nadu government in schools in Chennai in 2003 with support from UNICEF.

Other well-known initiatives include Pratham's Read India Campaign, introduced in 2007, and Teach for India, which provides volunteer teachers and was introduced in 2009; both are supported by multiple donors.

Impact assessments seek to provide reliable evidence on the impact of such education interventions. This is an important exercise as it seeks not only to estimate the value added by the intervention, but also to provide insights into what works, and what does not, in a particular context. Assessments are usually undertaken in response to donors' requirements to see the impact of the initiatives they are funding. They are also used to showcase an intervention, to persuade the government to scale up a particular intervention in government schools, or to convince the management of private schools to implement it.

When any new initiative is introduced into the school system, it is expected to lead to a series of changes in how the system functions, including classroom processes, and these changes in turn are expected to lead to improved schooling outcomes such as a decrease in dropout rates or improved learning levels. These outcomes depend on many factors. To isolate the impact of the initiative, assessments have generally involved comparing the outcomes in the schools where the intervention has been introduced (intervention schools) with those in schools where there has been no intervention (control schools).

Such assessments are primarily done using a quantitative lens – in the sense that interventions and outcomes are measured by certain indicators and the results interpreted as the change in outcome for a unit change in inputs. For example, studies have estimated the impact of appointing an additional female teacher in school, or the construction of girls’ toilets, on the changes in female enrolment and attendance. Other examples include the impact on enrolment, attendance, and learning levels of introducing midday meals, appointing a volunteer teacher, as well as multi-pronged initiatives spread over a number of years.

The Need for a Critical Look at the Findings of Impact Assessments

The results of such impact assessments are not always conclusive. The estimated impacts can be very different from those expected, and interventions tend to show diverse impacts in different contexts. The results may also depend on how long after implementation the assessment is conducted. So when findings from such studies are presented, they need to be interpreted with care. We discuss below some factors which are important to consider.

The Intervention

Schooling outcomes depend on the characteristics of schools, teachers, students and parents. The intervention could be a change in one or more of these; we have mentioned some examples earlier. When examining the results of any assessment, it is important to reflect on the following questions. What is the initiative that has been undertaken? Is there a clear theory of change behind the selection of the initiative, that is, which factors are expected to change and why? How will the intervention affect school functioning and classroom processes? How likely are these changes to bring about the expected outcomes? Which factors might constrain the effectiveness of the initiative? Does the theory of change appear to be consistent with the way in which the education system functions?

For example, when a volunteer teacher is appointed to a school, they may well impact classroom processes positively, and so improve students' learning outcomes. But the impact will also depend on the actions of the existing teachers: whether they continue to work as before, reduce their teaching efforts, or work harder. The final outcome will depend on the extent to which the intervention accounts for these alternative possibilities. A more important concern may be that such an initiative is not sustainable, and does not deal with the systemic issues of teacher shortage and teacher accountability.

Study Design

A second set of concerns deals with the research design and methodology selected for the study. Most impact assessments in education are done using quantitative methods, as policy makers and donors feel that numerical data are a powerful source of evidence and are also useful for comparisons between different sites and over time.

As mentioned earlier, this methodology mostly involves studying one group of schools in which the intervention is introduced (intervention group) and a second group where there is no intervention (control group), collecting data on certain variables before the initiative is introduced (baseline) and again after a period when some impact might be expected to be visible. This could be midway through the project (midline) and at the end of the project (endline).

It is important to know whether the intervention and control groups had similar characteristics at the baseline and were selected randomly (to avoid introducing any bias in the final results), whether the time frame is long enough for the changes to be visible, and what variables have been selected to capture the impact of the intervention (impact variables).
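To make this comparison concrete, the short sketch below uses entirely hypothetical numbers, not drawn from any study discussed here, to show how a simple "difference-in-differences" style estimate contrasts the change in intervention schools with the change in control schools between baseline and endline.

```python
# A minimal, illustrative sketch with invented data: how the change in
# intervention schools can be compared with the change in control schools.

# Hypothetical mean learning scores (out of 100) at baseline and endline.
scores = {
    "intervention": {"baseline": 42.0, "endline": 55.0},
    "control":      {"baseline": 41.5, "endline": 48.0},
}

def change(group: str) -> float:
    """Change in the mean score between baseline and endline for one group."""
    return scores[group]["endline"] - scores[group]["baseline"]

# The estimated impact is the change in the intervention schools over and
# above the change observed in the control schools; this logic assumes the
# two groups were similar at baseline, which is why randomized selection and
# baseline comparability matter.
estimated_impact = change("intervention") - change("control")
print(f"Intervention change: {change('intervention'):.1f}")
print(f"Control change:      {change('control'):.1f}")
print(f"Estimated impact:    {estimated_impact:.1f}")
```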

A major limitation of a quantitative study is that it is applicable only when the impact variables are measurable, or when the impact on them can be captured through some other variables. For example, if teachers are given training to provide inclusive education, changes in their attitudes towards students with disabilities can be captured by analyzing their responses to a series of questions using a Likert scale.
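As a purely illustrative sketch, the snippet below shows one way such Likert-scale responses might be scored before and after a training. The items, the 5-point coding and all of the responses are invented for illustration and are not taken from any study mentioned in this essay.

```python
# Hypothetical Likert-scale scoring: responses coded 1 (strongly disagree)
# to 5 (strongly agree), with higher scores indicating a more inclusive
# attitude. Each inner list is one teacher's responses to the same items.
before = [[2, 3, 2, 3], [3, 3, 2, 2], [2, 2, 3, 3]]
after  = [[4, 4, 3, 4], [4, 3, 4, 4], [3, 4, 4, 3]]

def mean_score(responses) -> float:
    """Average item score across all teachers and all items."""
    flat = [item for teacher in responses for item in teacher]
    return sum(flat) / len(flat)

print(f"Mean attitude score before training: {mean_score(before):.2f}")
print(f"Mean attitude score after training:  {mean_score(after):.2f}")
print(f"Average shift on the 5-point scale:  {mean_score(after) - mean_score(before):.2f}")
```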

Qualitative research methods such as in-depth interviews, observation and focus group discussions are ideal for capturing many aspects of the education process, including the quality of teaching and learning. But they are resource and time intensive and do not easily lend themselves to capturing changes over time or across sites.

Assessment studies have sometimes used a mixed-methods approach, one which combines quantitative and qualitative research methods. The quantitative data provide evidence on the measurable variables across a wider sample of schools, while the qualitative data provide more detailed information on a smaller sample of schools selected from within the wider set. Through triangulation of the findings from these different strands of research, a study can provide a more nuanced understanding of impact, which is critical in the field of education research.

Implementation Issues

It is often observed that the implementation of an initiative is quite different from the way it was conceived. One important reason is the context in which it is implemented. There are interstate, inter-district and even inter-block differences related to socio-economic and geographical characteristics, as well as to levels of governance. So even an initiative like the construction of a girls' toilet in each school may not show a positive impact in an area where there is water scarcity, for example.

More important, there are major differences in the functioning of the school system. The same initiative is likely to be implemented differently in areas where schools have a shortage of teachers and poor infrastructure, compared to areas where schools are better resourced.

Even within a specific area, there could be varying degrees of cooperation from the teachers and the Head. When teachers play an important role in implementing an initiative, they are usually required to put in additional time and effort, so the quality of implementation depends on their availability, motivation and ability. When new staff are recruited for program implementation, the cooperation and support given by the school staff also vary.

The assumption behind an impact assessment is that the initiative is introduced right after the baseline survey and is implemented with regularity and the same intensity throughout the intervention period. This is often a challenge, as introducing the initiative requires active support from different people within the system. For example, organizations implementing the initiative may get formal permission at the state level to intervene in government schools, but at the district level they could face additional delays in receiving the go-ahead. This may affect the timing of when the initiative can be introduced.

For example, an intervention which is meant to take place at the beginning of the school year may not be undertaken until later in the year, when teachers and students are already stressed by the upcoming annual examinations. The initiative then becomes much less effective.

Maintaining the regularity and intensity of the initiative is also difficult, as it depends on the resources and motivation of the implementers. The regular transfer of government officials also slows down implementation.

All these factors are critically important when looking at the findings presented, yet such challenges are typically not reported in an impact assessment study. During dissemination, most studies focus only on the impact and the final outcomes.

Some Concluding Thoughts

Education interventions are quite complex and require the active involvement of many individuals. If successfully implemented, they may have a positive impact on the education system in some contexts. However, they are difficult to replicate.

Their impact depends on conditions within the school and external to the school, all of which vary greatly in different contexts and are changing over time. The more serious issue is how sustainable they are, and to what extent they can strengthen the education system in the long run.

A nuanced and holistic approach is required to assess the impact of these initiatives. Often a quantitative approach is assumed to be most suitable, particularly given the value placed on findings from a large and randomly selected sample. Going wide is more appreciated than going deep.

While quantitative studies are useful, they would be of greater value if they were integrated with qualitative research, including insights on the process of change, if any. While impact assessments have the potential to provide useful evidence for policy makers, it is important to be aware of their limitations.

Anuradha De
Anuradha De is a researcher at Collaborative Research and Dissemination (CORD), a not-for-profit research organization based in New Delhi, India. She is an economist by training and has been involved in surveys and research on issues related to education, labour and governance. Her particular focus has been on education statistics, finance and policy.
Meera Samson
Meera Samson is a researcher in CORD. She has a background in Economics, and has done extensive field research in the area of school education, public and private school provision, and compliance with the Right to Education Act.