Guest post by Marc Maxson.
"Calls for aid reform assert that better evidence will lead to better policy."
What if this isn’t true?
What if policymakers don’t really care whether their policies align with the evidence? Are we screwed?
No. Good systems can still achieve progress and coordination in spite of the people within them. The example I know best is
scientific peer review.
You might think that the peer review process is a system to weed out garbage and improve publications – but that’s a pleasant side effect.
The actual “system” is built around these three components:
1) One “canon” – Scientists within a field implicitly agree that there will be one shared body of knowledge, to which everyone contributes. This knowledge can be spread across hundreds of journals because (a) all use the same peer review mechanism and (b) specialized search engines (
PubMed, Web of Knowledge, and
Scopus) index virtually everything – reducing the odds that an important paper will go unnoticed. Grantmakers also consult this one canon before allocating funds.
2) Forced confrontations – Scientists must face their critics and respond to them. Most neuroscience papers are submitted at least 3 times, meaning you get to read about your professional inadequacies a hundred times over a typical career. Peer review also alerts your most successful competitors to your work before anyone else sees it, further pressuring you to address the weaknesses in your paper and resubmit.
Dialogue results.
3) A reputation system for scientists (see the
H-index) – This system fairly reflects your breadth and depth, ignores non-peer-reviewed work, and requires that your work not only pass peer review but also be valuable to others (frequently cited).
Science is simpler than international development. Knowledge is the only measurable output, and the
system outlined above reinforces quality. Moreover,
the relationship between scientists and grantmakers is driven by the quality of that knowledge, which is determined by one's peers – not by the grantmakers. The NIH program officer doesn't need the knowledge he is funding; he wants to know how many people cited it and what the author's cumulative impact (H-index) is.
What happens when people try to game the system?
Let's assume for the sake of argument that scientists are only concerned with their own reputations. Scientific facts become a means to an end: prestige.
A scientist could try to publish a bunch of “facts” to vault his career, but only peer-reviewed “facts” affect his prestige. It takes four or more quasi-publications to carry the weight of one peer-reviewed paper; that's a lot of wasted effort.
A scientist who promotes his “facts” outside of journals won’t get cited and could get “scooped” by a competitor. These non-canonical publications may win over the media and the public, but they lose in grant competitions. The “herd” protects itself because the H-index never lies.
Groups of scientists colluding to publish each other’s papers and move their reputations forward also fail, for a number of reasons I explain in my
longer post.
Dialogue
The best part of the system is that scientists are forced to confront the other viewpoint
in order to publish and be heard by the larger community. This is badly needed in aid, where such confrontation is currently an afterthought and disorganized. Some discussion questions:
Q: What incentive do aid practitioners have to discuss work with their peers?
Q: How can grantmakers in international development work from a common set of knowledge, as science grantmakers do?
Q: What compels people to consider different viewpoints before acting?
Q: What is the basis of personal reputation in international development?
Q: What happens when we replace "experts" in the peer review model with "crowds" of beneficiaries?
Q: Could a system that guides grantmakers in this way work in international development?