Dustin Homer and Dr. Daniel Nielson, Political Science
Abstract
We use AidData to assess the effectiveness of education aid. Focusing on the subset of the world’s poorest nations, we empirically evaluate the effects of educational foreign aid on primary school enrollment rates over the period 1975-2005. While past literature suggests that aid has had positive effects on education, this new and more comprehensive database allows a broader evaluation of aid’s impact in poor countries. Our findings – subjected to a variety of robustness tests – indicate that education aid dollars have, at best, mixed effects and, at worst, no significant impact on school enrollments in less-developed nations.
My intent in pursuing this research was to use AidData, a comprehensive database of foreign aid projects compiled by BYU researchers, to evaluate the effect of foreign aid directed specifically toward the education sector. AidData enables more thorough evaluations of foreign aid effectiveness than were previously possible because it is the most comprehensive collection of aid information ever created. As such, we were able to work at the cutting edge of international development research. The work consisted of a comprehensive literature review on education aid research, followed by rigorous statistical tests of our data on education aid and education outcomes.
We approached the research with the hypothesis that the millions of dollars directed toward improving education in developing countries should have had some aggregate effect on education outcomes over the years. Essentially, we hoped to measure the effect that aid has on improving education in poor countries.
First, we chose primary school enrollment rates to serve as a proxy for education outcomes, in accordance with previous literature and available data. Essentially, we assumed that the proportion of children in a nation who attend primary school should reflect the country’s educational quality. While not without its faults, this measure was the easiest to obtain and the most logical of the available education proxies. We also tested literacy rates as a second dependent variable, but the data there were spotty and the results inconclusive.
After determining that primary enrollment rates would serve as our dependent variable, we obtained the independent variable data on education aid amounts from the AidData database. While this data was relatively easy to obtain, we had to perform extensive cleaning and organizing to make it usable. We then used Stata statistical software to perform regression analysis, including a number of important control variables, and tested the overall effect that education aid has had on enrollment rates in the developing world over the past 30 years. After several robustness tests, we found no conclusive evidence that aid has had any significant relationship with enrollment over past decades.
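The analysis itself was run in Stata, but the general shape of such a panel regression can be sketched in Python. The sketch below is purely illustrative: the data are synthetic, and the variable names (`enroll`, `aid`, `gdp`) and the choice of a two-way fixed-effects specification with country-clustered standard errors are assumptions for demonstration, not the paper's exact model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Build a hypothetical country-year panel, 1975-2005, for illustration only.
rng = np.random.default_rng(0)
rows = []
for c in [f"C{i}" for i in range(20)]:
    base = rng.uniform(40, 80)  # country-specific baseline enrollment rate
    for y in range(1975, 2006):
        aid = rng.gamma(2.0, 5.0)        # education aid (e.g., millions USD)
        gdp = rng.lognormal(7.0, 0.5)    # GDP per capita control
        enroll = base + 0.3 * (y - 1975) + rng.normal(0, 5)
        rows.append((c, y, enroll, aid, gdp))
df = pd.DataFrame(rows, columns=["country", "year", "enroll", "aid", "gdp"])

# Two-way fixed-effects OLS: country and year dummies absorb unobserved
# heterogeneity; standard errors are clustered by country.
model = smf.ols("enroll ~ aid + np.log(gdp) + C(country) + C(year)", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["country"]})

# The coefficient and p-value on `aid` are the quantities of interest.
print(result.params["aid"], result.pvalues["aid"])
```

With real data, a statistically insignificant coefficient on `aid` across such specifications would correspond to the "no conclusive relationship" finding described above.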
While disappointing, these findings were important. They suggest that the trillions of dollars rich countries have spent on foreign aid may have questionable effectiveness. By learning when aid works and when it does not, we are better equipped to shape more effective aid policies.
We presented these findings first at the Utah Conference on Undergraduate Research in February 2010. A few weeks later, we took the paper to the “Aid Transparency and Development Finance: Lessons from AidData” conference at Oxford University. The opportunity to meet so many distinguished international development scholars and receive feedback from them was truly beyond compare.
Taking this feedback, we drafted a paper that has been reviewed by World Development. We are currently in the revise-and-resubmit phase and hope for a prestigious publication in the coming year.
In the future, we hope to expand these evaluations to other sectors of foreign aid – e.g. governance, health, etc. – and continue to critically evaluate the effectiveness of foreign aid.