The last stage of the analysis is evaluating program effects once implementation is complete or while it is still underway. This evaluation usually includes three components: i) results evaluation, ii) impact evaluation, and iii) cost-effectiveness analysis.
This stage intends to answer five fundamental questions about the program.
Answering these questions not only serves as an evaluation of what has happened but also provides valuable information for the project’s continuity and application in various contexts.
What is an evaluation?
Evaluations are periodic, objective assessments of a planned or ongoing project, policy, or program. They comprise a comprehensive process of observing, collecting, measuring, systematically analyzing, and interpreting information about a program's performance and effectiveness, and of communicating that information as input for policy formulation and decision-making1.
Evaluations differ in the program components they cover: some evaluate design, some evaluate outcomes, and some evaluate both. An evaluation provides an in-depth understanding of an intervention, whether public or private, allowing an evidence-based assessment of its design, implementation, results, and impact. Evaluation is therefore essential for formulating policies based on empirical evidence.
What is the evaluation for?
Evaluation guides decision-making for an efficient and effective allocation of resources and helps to identify obstacles.
In terms of accountability, transparency, and negotiation, it is useful to:
– Know how resources are being used and whether planned objectives are being achieved.
– Guide investment decisions.
In terms of improving the design and operation of investment programs and projects, evaluation is useful to:
– Generate greater knowledge about the type of investments needed to obtain better results.
– Determine if what is being done works and the mechanisms that lead to this result.
– Optimize the operation of a program or project.
Factors to keep in mind
During evaluation, we must consider the four previous stages: diagnosis, design, follow-up and monitoring, and implementation. This makes it possible to verify whether the objectives have been met and the relevant questions have been answered.
Types of evaluation
Two types are discussed here: results evaluation and impact evaluation.
Results evaluation: Responds to descriptive questions about what is being implemented.
Impact evaluation: Responds to questions of causality, i.e., to what extent the program or project directly affects the desired outcome.
This stage also covers cost-effectiveness analysis, although it is not a type of evaluation in itself.
Results evaluation
What is a results evaluation?
Results evaluation focuses on the gross effects of the program, i.e., the extent to which the specific objectives of the program or project are met, without discounting the influence of external factors. It allows us to measure and monitor program results in the short and long term.
In this type of evaluation, it is necessary to distinguish between the goods or services delivered and the results they produce.
The results evaluation examines:
The changes in the beneficiaries' conditions after a certain period of exposure to the intervention.
The contributions generated by the organization toward a broader result.
The factors that affect the outcome among the beneficiaries exposed to the intervention.
The contributions generated by alliances with partners to change the outcome.
How to carry out a results evaluation?
A results evaluation implies measuring the changes experienced by beneficiaries over time.
Measuring program results is done by comparing the indicator of interest before (baseline) and after (follow-up) the intervention.
To carry out the measurement we must:
Establish specific objectives
Determine the criteria to be evaluated
Conduct periodic evaluations
Use the results of the evaluation
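As a minimal sketch of the baseline/follow-up comparison described above (the data and column names are hypothetical):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=1)

# Hypothetical indicator measured for the same beneficiaries at
# baseline (before the intervention) and at follow-up (after it).
df = pd.DataFrame({"beneficiary_id": range(100)})
df["baseline"] = rng.normal(50, 10, size=100)
df["followup"] = df["baseline"] + rng.normal(5, 3, size=100)

# Gross change in the indicator of interest. A results evaluation
# does not discount external factors, unlike an impact evaluation.
change = df["followup"].mean() - df["baseline"].mean()
print(f"Average change in the indicator: {change:.2f}")
```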
Remember that the factors that can influence the result are multiple, complex, interrelated, and constantly changing. The following are the steps to identify these factors3:
01. Collect and analyze information
02. Identify the most important contributing factors that drive change
03. Review local sources of knowledge about the factors influencing the result
04. Address the unintended second-order effects of the results
When assessing results, it is crucial to consider the organization’s contributions and those arising from partnerships to modify the outcome.
Organizational contributions span the full range of activities and initiatives, within and outside projects4. These contributions can be established by:
A. Analyzing the consistency between the organization's strategy and its overall operational management with respect to changes in results.
B. Verifying whether the organization's planning and intervention management are aligned to leverage synergies that contribute to results.
C. Determining whether or not individual outputs are effective in contributing to results.
Results are influenced by a complex range of factors. Change invariably requires the joint action of several institutions and stakeholders5. The contributions generated by partnerships can be established by:
Examining the degree of mutual support between partners.
Examining how partnerships were formed and their performance.
Conducting joint evaluations.
Impact evaluation
What is an impact evaluation?
Impact evaluation is a type of evaluation focused on answering questions related to the causal relationship between a result and the specific program or project. It aims to identify observable changes in the result that can be directly and uniquely attributed to the evaluated program or project.
Questions that an impact evaluation seeks to answer
All impact evaluations answer a cause-effect question.
These questions can be applied in any setting, whether they assess the impact of the program itself or of a particular program modality or design innovation.
Here are some examples of questions that an impact evaluation might address:
What is the causal effect (impact) of providing scholarships to students on school attendance and academic performance?
Does improving roads and access routes increase access to the labor market and boost household incomes?
How does replacing dirt floors with concrete floors impact children's health?
Does class size affect student learning?
To answer cause-effect questions, it is necessary to understand the concept of potential or counterfactual results.
The counterfactual is the hypothetical scenario that would have occurred in the absence of the program. In particular, for measuring the treatment effect, the counterfactual is the outcome that program participants would have obtained had they not participated in the program.
While it is not possible to observe the counterfactual for each individual, it is possible to construct counterfactuals if our objective is to analyze averages. This is why, when we talk about the effect of a program or intervention, we usually talk about the average treatment effect.
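In standard potential-outcomes notation, the average treatment effect (ATE) can be written as:

```latex
% Conventional potential-outcomes notation, consistent with the
% discussion above. Y_i(1): outcome of individual i with the program;
% Y_i(0): outcome of the same individual without it. The individual
% effect is never fully observed, but its average is identifiable when
% treatment and comparison groups are comparable.
\[
\tau_i = Y_i(1) - Y_i(0)
\]
\[
\text{ATE} = \mathbb{E}\big[Y_i(1) - Y_i(0)\big]
           = \mathbb{E}\big[Y_i(1)\big] - \mathbb{E}\big[Y_i(0)\big]
\]
```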
Usually, to achieve correspondence between the treatment and comparison groups, we use statistical models that depend on the program design and the targeting method used in the program. Thus, depending on the targeting method, impact evaluation designs are classified as experimental or non-experimental6.
In experimental designs, individuals are randomly assigned to the treatment or control group. Assigning program beneficiaries in this way ensures that the two groups are, on average, comparable in all observable and unobservable characteristics.
Experimental designs can assign treatment in different ways depending on the situation of the project to be evaluated, always maintaining the criterion of randomization. Thus, randomization can be:
Simple randomization
Stepwise randomization
Randomization by strata
Block randomization
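As a minimal sketch of how random assignment can be implemented (shown for simple randomization and randomization by strata; the data and column names are hypothetical):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)  # fixed seed for reproducibility

# Hypothetical roster of eligible individuals.
df = pd.DataFrame({
    "id": range(1, 201),
    "region": rng.choice(["north", "south"], size=200),
})

# Simple randomization: each individual has a 50% chance of treatment.
df["treat_simple"] = rng.integers(0, 2, size=len(df))

# Randomization by strata: randomize separately within each region so
# that treatment and control are balanced inside every stratum.
df["treat_strata"] = 0
for region, idx in df.groupby("region").groups.items():
    treat = np.zeros(len(idx), dtype=int)
    treat[: len(idx) // 2] = 1
    df.loc[idx, "treat_strata"] = rng.permutation(treat)
```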
The effects of a program with an experimental design can be obtained through the difference in the average outcomes of the treatment and control groups, once the program has been under implementation for a considerable time.
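Concretely, under random assignment the impact estimate reduces to a difference in means (standard notation, not reproduced from the source):

```latex
% \bar{Y}_T and \bar{Y}_C are the average follow-up outcomes of the
% treatment and control groups, respectively.
\[
\hat{\tau} = \bar{Y}_T - \bar{Y}_C
\]
```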
Non-experimental designs are those in which treatment assignment is not randomized and, therefore, multiple observable and unobservable characteristics are related to the inclusion of individuals in the program. Given these dissimilarities, the simple difference in the average results of the two groups does not provide sufficient information on the impact of the intervention. It is therefore necessary to employ more robust statistical methods, such as:
Differences-in-Differences (DiD)
Matching
Instrumental variables
Regression discontinuity
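As one example, here is a minimal difference-in-differences sketch using an OLS regression with statsmodels; the data and variable names are hypothetical, and the other methods listed above require their own designs:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(seed=0)

# Hypothetical repeated cross-section: outcome observed for treated and
# comparison groups, before (post=0) and after (post=1) the program.
n = 500
df = pd.DataFrame({
    "treated": rng.integers(0, 2, size=n),
    "post": rng.integers(0, 2, size=n),
})
true_effect = 2.0
df["y"] = (
    1.0                                          # baseline level
    + 0.5 * df["treated"]                        # pre-existing group gap
    + 1.5 * df["post"]                           # common time trend
    + true_effect * df["treated"] * df["post"]   # program effect
    + rng.normal(0, 1, size=n)
)

# The DiD estimate is the coefficient on the interaction term.
model = smf.ols("y ~ treated + post + treated:post", data=df).fit()
print(model.params["treated:post"])  # should be close to true_effect
```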
It is worth noting that the two main factors in choosing a non-experimental method are the program's allocation method and the availability of information. Therefore, not every method is feasible in every situation.
Types of impact evaluations according to time
Impact evaluations can be classified as prospective or retrospective depending on how the result variables are measured. Prospective evaluations are usually more robust in that they measure program results from the beginning of the program. Retrospective evaluations are usually conducted when there is very little information on program implementation and design.
An impact evaluation is much more robust if designed jointly with the program.
How do you decide whether or not to do an impact evaluation?
Not all programs or projects require an impact evaluation; one is carried out when you want to answer a cause-effect question that provides specific information about the program or project.
Keep in mind that, if information is unavailable, you must collect data, and this can make the evaluation very costly.
If you are starting a new program, or are thinking of expanding or innovating an existing one, a series of guiding questions can help you determine whether or not to conduct an impact evaluation.
Cost-effectiveness analysis
Cost-effectiveness analysis is a method used to measure the ratio between a program or intervention's monetary cost and the results obtained in the impact evaluation stage. By aggregating total costs and benefits, it makes it possible to quantify the effectiveness obtained for each additional unit of cost.
In addition, it is possible to compare different interventions through their cost-effectiveness analyses and, based on these, determine which program is the best alternative in financial terms.
Requirements for a cost-effectiveness analysis
A cost-effectiveness analysis requires detailed consideration of the impacts and costs of the relevant program.
Regarding program costs, a detailed cost analysis should identify:
01. The components and materials required for program implementation
02. The quantities and unit costs of these materials
The cost categories to consider include:
Program administration
Targeting costs
Staff training
Participant training
Implementation costs
Beneficiary costs
Averted costs
Monitoring costs
Ingredient method for cost-effectiveness analysis
In line with J-PAL, we believe it is advisable to follow the guide proposed by Dhaliwal et al. (2012) to perform cost-effectiveness analysis. This guide facilitates the collection of disaggregated program costs using an ingredient method, which consists of listing the program's ingredients, estimating their unit costs, and tracking how much of each is used over the course of the intervention. According to Dhaliwal et al. (2012), the costs associated with program design, evaluation, and research should not be considered in such an analysis7.
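As a minimal sketch of ingredient-based costing (the ingredients, quantities, and prices below are hypothetical):

```python
# Hypothetical ingredient list for a tutoring program: each entry maps
# an ingredient to (quantity used over the intervention, unit cost in USD).
ingredients = {
    "tutor_hours":      (4_000, 8.00),
    "student_booklets": (1_200, 3.50),
    "staff_training":   (20, 150.00),
    "transport":        (300, 12.00),
}

# Total program cost = sum of quantity * unit cost over all ingredients.
# Following Dhaliwal et al. (2012), design, evaluation, and research
# costs are excluded from this total.
total_cost = sum(qty * unit_cost for qty, unit_cost in ingredients.values())
print(f"Total program cost: ${total_cost:,.2f}")
```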
Once all the program costs and effects are obtained, they can be aggregated to compute the program's cost-effectiveness ratio.
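A standard formulation consistent with the description above (effectiveness obtained per unit of cost) is:

```latex
% Cost-effectiveness ratio: total program effect per unit of total
% cost, often rescaled (e.g., effect per 100 USD spent) to compare
% alternative interventions.
\[
CE = \frac{\text{Total program effect}}{\text{Total program cost}}
\]
```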
- Gertler, P. J., Martínez, S., Premand, P., Rawlings, L. B. and Vermeersch, C. (2011). La evaluación de impacto en la práctica. Washington D.C.: World Bank. Retrieved from: https://openknowledge.worldbank.org/entities/publication/f090e5a0-16f6-5795-a3b2-7f711c8b7eb7
- PNUD (2002). Informe sobre desarrollo humano 2002: Profundizar la democracia en un mundo fragmentado. Madrid/Barcelona: Mundi-Prensa Libros. Retrieved from: https://hdr.undp.org/system/files/documents/hdr2002espdf.pdf
- Ibid.
- Ibid.
- Ibid.
- Morra-Imas, L. G. and Rist, R. C. (2009). El camino hacia los resultados: diseño y realización de evaluaciones eficaces para el desarrollo. Washington D.C.: World Bank.
- Dhaliwal, I., Duflo, E., Glennerster, R. and Tulloch, C. (2012). Comparative Cost-Effectiveness Analysis to Inform Policy in Developing Countries: A General Framework with Applications for Education. Abdul Latif Jameel Poverty Action Lab (J-PAL), MIT. Retrieved from: https://www.povertyactionlab.org/sites/default/files/research-resources/CEA%20in%20Education%202013.01.29_0.pdf