Megan Spencer and Dr. Michael Findley, Department of Political Science
Abstract
In today’s NGO environment, evaluations are frequent, but NGOs rarely have outside expectations on which to base their assessments. NGOs thus have strong incentives to make their evaluations strictly contextual. Consequently, NGO stakeholders (donors, beneficiaries, and local government officials) have little information about the quality of a given organization (Barr et al. 2004). This leads to a wide spectrum of evaluation techniques and environments in which information asymmetries on both sides of the market hinder NGO improvement (Edwards and Hulme 1998). We believe that a standardized evaluation mechanism for NGOs can help alleviate some of these obstacles. However, it is unclear whether NGOs would be willing to participate in such an evaluation; indeed, little research addresses NGO willingness to be evaluated in general. What factors influence NGO willingness to be evaluated?
Summary
To begin to address the information asymmetries previously discussed, we developed a standardized evaluation mechanism called the NGO Scorecard. With help from large NGO networks comprising several hundred organizations across Uganda, we identified NGO policy structures and development strategies that are applicable across all nonprofit areas. So far, over 140 organizations have participated in the program, in which we evaluated how well their internal structures adhered to these policies and strategies. We plan to publish the first set of scorecards before the end of the year.
Using my ORCA grant, I was able to conduct an experiment that tested NGO attitudes toward the NGO Scorecard. The experiment was designed to help us better understand both the effect of published evaluation (as opposed to unpublished evaluation) on willingness to be evaluated and the effect of the scorecard itself on that willingness.
The experiment consisted of an email sent to 1,200 NGOs in South Africa informing them of a new evaluation program recently implemented in Uganda. Half of the sample received an email stating that the evaluation in this program would be published; the other half was told that the evaluation would not be published. At the end of the invitation, each organization was asked whether it would be interested in participating in this evaluation opportunity if we decided to expand to South Africa. This division into “published” and “non-published” groups constituted the first treatment, allowing me to measure the effect of publication on willingness to participate in the evaluation.
Each organization that clicked “Yes” in either the published or non-published group was then directed to the second treatment. The click led them to an example scorecard (hereafter referred to as a “report card”). Each organization randomly received one of three report cards showing 1) no grade, 2) an A grade, or 3) a D grade. They were then asked whether the organization would be interested in participating in our evaluation, which produces a report card like the one they saw. By randomizing the grades, I was able to find out how willing organizations are to participate in the evaluation once they know exactly what the evaluation report entails, and to understand the effect of positive and negative publicity on willingness to participate. The following table graphically shows these primary and secondary treatments.
Thus, there are two independent variables in this design: whether the evaluation is published (primed through the emails) and the grade on the report card shown to each organization that requested more information (no grade, a good grade, or a bad grade).
The dependent variable is willingness to participate in the evaluation, measured twice: once after the publication treatment and once after the grade treatment.
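The two-stage randomization described above can be sketched in a few lines of code. This is only an illustration of the assignment logic, not the actual survey software used; the identifiers and group labels are hypothetical, and the sample size (1,200) comes from the design.

```python
import random
from collections import Counter

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical identifiers standing in for the 1,200 South African NGOs.
ngos = [f"ngo_{i}" for i in range(1200)]

# First treatment: shuffle, then tell half the sample the evaluation
# will be published and the other half that it will not.
random.shuffle(ngos)
half = len(ngos) // 2
email_treatment = {
    ngo: ("published" if i < half else "non-published")
    for i, ngo in enumerate(ngos)
}

# Second treatment: each organization that clicks "Yes" is randomly
# shown one of three report cards (no grade, an A grade, or a D grade).
report_cards = ["no_grade", "A", "D"]
card_treatment = {ngo: random.choice(report_cards) for ngo in ngos}

print(Counter(email_treatment.values()))  # 600 in each email group
```

Because the first treatment splits a shuffled list in half, the published and non-published groups are exactly balanced; the grade treatment uses independent draws, so its three cells are only balanced in expectation.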
My initial results actually show the opposite of what I expected: organizations were more likely to decline the evaluation opportunity if their evaluation was to be published, and the grades did not change willingness to be evaluated at all.
I had posited that organizations might view publication as a way to decrease the “publicity gap” between themselves and stakeholders. However, it is also possible that publication is simply too risky for many organizations. If donor funding is their highest priority, organizations might fear complete accountability to all stakeholders, especially if the organization is new and under-developed. I had hoped that organizations would be willing to take that risk, as organizations in Uganda consistently were; these results, however, indicate that publication does not encourage evaluation.
The grades did not affect organizations’ willingness in any way, although the manipulation check on the grades was relatively weak. These results are not discouraging, however: most of the organizations that viewed a report card clicked “yes” to participating. In other words, among the organizations that were originally willing to seek more information, the report card did not further deter involvement, and whether they viewed the positive or the negative grade made no difference. This is extremely encouraging for the NGO Scorecard in Uganda.
This study was presented as a poster at the 70th Annual Midwest Political Science Association Conference in Chicago in April of this year. It was also presented at various BYU discussions and at the 26th Annual National Conference on Undergraduate Research in Ogden in March 2012.
References
- Barr, Abigail, Marcel Fafchamps, and Trudy Owens. 2004. The governance of non-governmental organizations in Uganda. Forthcoming (August). Oxford University.
- Edwards, Michael, and David Hulme. 1998. Too close for comfort? The impact of official aid on nongovernmental organizations. World Development 24, no. 6: 961–973.