Does research output increase significantly when funding allocation is concentrated on a small number of elite researchers?
We examined this question by exploiting the design features of funding mechanisms used by the South African National Research Foundation (NRF). The NRF allocates substantial funding ($150,000–$300,000 per annum for a minimum of 5 years, renewable to a maximum of 15 years) to 80 research chair holders selected by the NRF as world-class in their fields. The introduction of this mechanism in 2008 provides a discrete time point after which the performance of the research chair holders can be contrasted with that of comparable researchers who did not receive the funding allocation associated with the chairs.
Appropriate control groups can be constructed for the South African context by means of at least two methodologies. First, the NRF itself operates a rating system that ranks researchers in a set of categorical tiers based on peer review of the quality of their output, independently of the research chair selection mechanism. While the NRF publishes an array of ratings, the A and B categories are those held to indicate world-class research output, and they offer an immediate control group against which to compare the performance of the research chairs. Importantly, the research chairs are themselves subject to this peer review rating system. Second, we also identify a control group through propensity score matching on objective bibliometric performance measures of researchers.
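To make the second approach concrete, a minimal propensity score matching sketch follows. This is our illustration, not the paper's estimation code: the covariate names (prior_pubs, prior_cites), the logistic propensity model, and the nearest-neighbour matching rule (with replacement) are all assumptions made for the sketch.

```python
# Minimal propensity-score-matching sketch (illustrative assumptions only).
# Expects a DataFrame with a treatment indicator `chair` (1 = chair holder)
# and pre-award bibliometric covariates `prior_pubs` and `prior_cites`.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def match_controls(df: pd.DataFrame) -> pd.DataFrame:
    """Pair each chair holder with the nearest-propensity unfunded control."""
    X = df[["prior_pubs", "prior_cites"]].to_numpy()
    y = df["chair"].to_numpy()

    # Step 1: estimate the propensity of receiving a chair from
    # pre-award bibliometric performance.
    model = LogisticRegression().fit(X, y)
    df = df.assign(pscore=model.predict_proba(X)[:, 1])

    treated = df[df["chair"] == 1]
    controls = df[df["chair"] == 0]

    # Step 2: nearest-neighbour matching on the estimated propensity
    # score, with replacement (a control may serve several treated units).
    pairs = []
    for _, t in treated.iterrows():
        j = (controls["pscore"] - t["pscore"]).abs().idxmin()
        pairs.append({
            "treated_id": t.name,
            "control_id": j,
            "treated_pscore": t["pscore"],
            "control_pscore": controls.loc[j, "pscore"],
        })
    return pd.DataFrame(pairs)
```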
Our principal findings are as follows.
First, over the 2009-12 period there is no statistically significant difference between the average performance of the NRF chairs and that of A-rated researchers without chair funding, nor between the average NRF chair and the researchers without chair funding who, under the propensity score matching methodology, had the strongest pre-award research performance. Despite a funding advantage of at least 15:1, the NRF research chairs thus show no statistically observable superior research performance.
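The kind of mean comparison behind this finding can be sketched as follows, assuming the matched dataset df from the sketch above with a hypothetical post-award output column post_pubs; the paper's actual estimators may differ.

```python
# Two-sample comparison sketch: test whether mean post-award output of
# chair holders differs from that of unfunded controls.
# Column names are hypothetical, not the paper's dataset.
from scipy import stats

treated_out = df.loc[df["chair"] == 1, "post_pubs"]
control_out = df.loc[df["chair"] == 0, "post_pubs"]

# Welch's t-test (unequal variances); a non-significant p-value is
# consistent with "no statistically observable superior performance".
t_stat, p_value = stats.ttest_ind(treated_out, control_out, equal_var=False)
print(f"difference in means = {treated_out.mean() - control_out.mean():.2f}, "
      f"t = {t_stat:.2f}, p = {p_value:.3f}")
```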
Second, there is strong dispersion of performance amongst NRF research chairs. The stronger the research performance of chair recipients prior to the award of the funding, the stronger the subsequent improvement in performance, under both the NRF peer review and propensity score matching methodologies. By contrast, research chair recipients with poor prior research records performed worse than the control groups despite receipt of the substantial research funds. Strikingly, more than 50% of the South African research chair recipients had an NRF rating that indicates no international peer recognition. Correspondingly, under the propensity score matching methodology the probability of receiving a research chair is maximized amongst researchers with the lowest research performance prior to the award. The peer-based selection of the research chairs thus appears to have been significantly biased away from the stated goal of selecting for research excellence.
Third, funding shows a differential rate of return across disciplines. Consistent, statistically significant returns are evident only for the Biological, Medical and Physical sciences; returns are weak for the Chemical and Engineering sciences, and absent for the Business, Economic and Social sciences (which include all Humanities).
Policy inferences are immediate.
The productivity impact of selective funding is enhanced if allocation is more responsive to prior research performance. Funding needs to go to the strongest researchers.
But the evidence suggests that the marginal returns to increased funding are sharply diminishing. In the South African case, even for the most productive research chair recipients, the cost per additional publication is 22 times as high as for comparable researchers without chair funding, and 32 times as high per additional citation. If the objective of research funding is to raise the level of output and impact across an entire research system, a more broad-based, inclusive funding approach that gives smaller awards to more researchers may carry more promise.
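Read as arithmetic, the comparison simply divides funding by the marginal output it buys. The notation below is ours; only the 22:1 and 32:1 ratios come from the text.

```latex
% Our notation, not the paper's: F = funding received over the window,
% \Delta P = publications in excess of the matched control,
% \Delta C = citations in excess of the matched control.
c_P = \frac{F}{\Delta P}, \qquad c_C = \frac{F}{\Delta C},
\qquad
\frac{c_P^{\text{chair}}}{c_P^{\text{control}}} \approx 22,
\qquad
\frac{c_C^{\text{chair}}}{c_C^{\text{control}}} \approx 32.
```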
The differential rates of return across academic disciplines also suggest that adjusting funding allocations to reflect these differences can raise aggregate levels of output and impact.
Finally, if funding allocation is to follow revealed productivity, productivity has to be transparently and objectively monitored. Using the growing number of objective bibliometric measures, at least in part, in reaching allocative decisions is an obvious supplement to reliance on peer review alone. All the more so since peer review is itself not immune to bias, as the South African case demonstrates.