Back in May, after some criticism of the Marsden Fund processes made it into the media, I wrote about Te Pūnaha Matatini investigator Adam Jaffe’s study of the Marsden Fund. Adam presented his preliminary findings at our Launch workshop in February, and today they were released as a Motu working paper.
There is a short media release here, but the upshot is that the study shows that receiving a Marsden grant leads to higher productivity and impact, at least as measured by papers published and citations received. This won’t surprise many, but it is very exciting to see the benefits of Marsden funding quantified for the first time.
In fact I think this is a watershed study. It is the first rigorous evaluation of a New Zealand research funding process ever undertaken, and it has thrown up some fascinating insights. It also demonstrates the benefits of the sustained collection and retention of science and innovation data, and the Marsden Fund should be commended for its commitment to doing so.
Unfortunately, the Ministry of Business, Innovation and Employment and its predecessors have done a poor job of curating their data since New Zealand moved to a contestable funding system in the early 1990s, which means that much of our funding system remains opaque. I understand, however, that the Ministry is working on a plan to put in place systems and practices that will enable these sorts of evaluations to be made in coming decades.
What sort of data do you need? The difficulty in evaluating contestable research funding is that funding agencies go to great lengths to select the best projects and the best applicants. You can’t just compare the performance of those applicants who got funded to those who didn’t, because any difference in performance might simply mean that the application process is doing its job in sorting performers from non-performers, rather than reflecting any effect of the funding itself.
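To make the problem concrete, here is a minimal simulation of my own (every number in it is invented for illustration; none of this comes from the study) in which funding has no effect at all, yet funded applicants still appear to outperform, purely because the selection process picks on quality:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Latent researcher quality drives both selection and later output.
quality = rng.normal(size=n)

# A "perfect" selection process funds the top 10% of applicants by quality.
funded = quality > np.quantile(quality, 0.9)

# Suppose the funding itself does nothing: output depends on quality alone.
output = quality + rng.normal(size=n)

# The naive funded-vs-unfunded gap is large anyway: pure selection bias.
print(output[funded].mean() - output[~funded].mean())  # close to 2
```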
One way to avoid this selection bias would be to allocate funding randomly, but few funding agencies are willing to do this. And even if we decided that a randomized controlled trial was a good idea, we’d still have to wait a decade or so to acquire data for the study.
Instead, Adam and his team have made use of the panel scores that are used to rank applicants’ projects in the second round of the Marsden Fund. These scores can be used to estimate the selection bias in the performance data, enabling you to back out the effect of the funding itself. The Marsden Fund has kept the panel scores for both successful and unsuccessful projects for a number of years, and these have been matched with bibliometric data for applicants to measure their subsequent performance.
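Continuing the invented example above (again, just a sketch of the idea, not the paper’s actual specification), controlling for the score that funding was awarded on lets you separate the effect of the grant from the effect of being the kind of applicant who wins one:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
true_effect = 0.5  # the (invented) effect of funding on output

quality = rng.normal(size=n)
score = quality + rng.normal(scale=0.5, size=n)  # panels see quality with noise
funded = (score > np.quantile(score, 0.9)).astype(float)
output = true_effect * funded + quality + rng.normal(size=n)

# The naive gap mixes the funding effect with selection on quality.
naive = output[funded == 1].mean() - output[funded == 0].mean()

# Regressing output on funding while also controlling for the panel score
# (the variable funding was actually assigned on) strips out the selection
# component and leaves an estimate close to true_effect.
X = np.column_stack([np.ones(n), funded, score])
beta, *_ = np.linalg.lstsq(X, output, rcond=None)
print(f"naive gap: {naive:.2f}, score-adjusted effect: {beta[1]:.2f}")
```

The assumption doing the work in this toy version is that the score captures everything the panel acted on, so that conditional on the score the funding cut-off is effectively arbitrary.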
The most interesting finding from this data is that the expert panels that evaluate Marsden Fund proposals do not seem to have any selection bias! You see a jump in performance for those applicants who were funded, but otherwise applicants’ subsequent performance seems to be independent of their panel ranking. The panels are not able to pick winners, but those they do give the money to go on to win.
As a panelist myself, I seldom felt that we were making meaningful selections at the second round – almost all the proposals we looked at seemed eminently fundable. This inability to pick winners does not necessarily mean that the panels are redundant. I expect that there might still be benefits that accrue from encouraging researchers to develop research plans that can stand up to scrutiny from these panels. It does suggest, though, that we should be cautious about using success in the Marsden Fund as a proxy for research quality, particularly when it comes to career advancement.
Perhaps the best news for researchers is that the study suggests that there would be no diminishing returns if we were to double or treble the size of the Marsden Fund. If we could fund all second-round applicants, we would be unlikely to see any decrease in the quantity and impact of the research carried out, just a step change in performance across the research sector.
There are some caveats to the study, so it is well worth reading in its entirety (here it is again). For instance, the lift in performance measured could be indirect. If winning a Marsden grant increases your chances of getting funding from other sources, then some of the boost in performance might come from other funding rather than Marsden. If we had good data from MBIE, we might be able to tell …
It is also worth noting that the Marsden Fund is there to do more than generate papers and citations. Ultimately we would like to be able to measure impacts in other ways. The sort of study that might come next would be to look at the subsequent careers of Marsden-funded PhD students. Does working at the cutting edge of science set you up for a successful career?
Declaration: I was a Principal Investigator on two Marsden-funded projects during the period that this study covers (in 2006 and 2008), and I was on the Physics, Chemistry and Biochemistry Panel from 2010 to 2012.
This is a very useful piece of work. It’s good to have solid data that increasing the size of the fund would not dilute research quality.
Has there been any analysis that looks for a possible “success goes to the successful” bias? That is, what is the probability of being funded for an applicant who has never received a Marsden grant, versus the probability for an applicant who has previously received one or more grants?
The hypothesis is that the panels are more likely to give a high score to applicants who are good at writing applications. If so, one would predict that previous grant winners will be over-represented in the list of successful applicants. I am not suggesting that this is the case, but it would be reassuring to know that the hypothesis is false.
It would be interesting if, for three years, half the grants were awarded by a lottery among the “fundable” applications.
Success certainly breeds success in the sense that a track record of previous publication leads both to higher panel rankings and to higher post-proposal publication.
We did not look at whether previous Marsden success bred current Marsden success. I suspect that given our sample size, and the relative rarity of multiple second-round proposals, it would be hard to pin that down statistically. But maybe we’ll take a look when we get to revisions of the paper!