Editorial - 29 September 2009


 


Redesigning the ARC's System of Peer Review

 

PDF file available from Australasian Science

 

On 14 September 2009 the Australian Research Council (ARC) announced a consultation process to overhaul its system of peer review.

 

Peer Review Processes Consultation Paper

The Consultation Paper outlines a range of issues and potential improvements the ARC is considering as part of its review of peer review processes.

The ARC released the Consultation Paper on 14 September 2009 to seek feedback on specific issues relating to ARC’s peer review processes.

The consultation period will close on 19 October 2009.

Peer Review Processes Consultation Paper [28 pages]

PDF Format (828kb) - RTF Format (2.68mb)

Peer Review Processes Consultation Paper response template [13 pages]

PDF Format (103kb) - RTF Format (473kb)


 

Reading through the Consultation Paper and the response template, it becomes clear that the proposed revisions amount to minor adjustments: the ARC will continue attempting to minimise risk through what we might term hierarchical sieving, at the expense of funding research that is problematic but possibly of outstanding value.

 

The overall claim by the ARC is that:

 

The ARC Peer Review Processes Consultation Paper includes a number of proposed changes to existing ARC peer review processes, including the:

In fact what is needed is a far-reaching simplification of the implementation together with a marked improvement of the cohort of assessors.

 

At present the system consists of three tiers, and it will be morphed into another three-tier system with different nomenclature. At each step, just as is now the case, second- and third-tier assessors will be further removed from the proposals than those in the preceding tier, and in the end game a pseudo-quantitative assessment (score) will be given to each proposal.

____________________________________

 

A foremost requirement of the ARC, if Australian researchers are to obtain sound and equitable support for their research, is the separation of funding for proposals from young, midterm, and established investigators. Once those parameters are determined, a sensible, streamlined peer-review system should be instituted which has adequate funds at its disposal.

  1. Qualified members of the ARC administration should assess all proposals and return those (but only those) that are clearly inadequate. Where appropriate, applicants should be given the opportunity to submit a revision within 14 days. Where rejection is outright, individuals should be notified immediately and advised of the reason(s).

  2. The ARC must have at its disposal the resources to engage qualified individuals (reviewers) to evaluate proposals that have passed the initial administrative assessment.

  3. Based on their expertise, reviewers recommend either that a proposal be funded or that it be rejected, giving their reasons. Where a proposal is considered multidisciplinary, reviewers should evaluate those sections in which they are qualified.

  4. If a reviewer requires clarification of certain elements of the proposal, he/she may request it from the principal investigator through the ARC.

  5. Proposals that are in effect requests for renewal of funding for research supported by a previous grant should be evaluated on the same terms as new proposals.

  6. Once the reviewers' recommendations are collected the ARC administration should eliminate those proposals where there is a clear consensus for rejection.

  7. The reviewers' comments with regard to the remaining proposals should then be considered by qualified ARC administrators, and a final decision made to recommend or reject funding. Note that a simple count of reviewers for and against is not appropriate; the comments by individual reviewers are an essential aspect.

  8. At this point all proposals accepted for funding should be treated as being of equal merit. If available funds are inadequate to fund all of the proposals, funding should be determined by lottery. To suggest that the current quasi-quantitative system is not a surreptitious form of lottery is humbug.

  9. The ARC should be prepared initially to fund proposals for up to five years -- the time period to be specified by the investigator -- and the period requested should have no bearing on the reviewers' evaluation (other than whether it is considered appropriate). The enforced use of short granting periods is a sure road to mediocrity.
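Steps 6 and 8 above are, in effect, a simple algorithm: discard proposals with a clear consensus for rejection, treat the survivors as equal in merit, and draw lots if funds are short. The following Python sketch is purely illustrative -- the function name, the data shapes, and the "no reviewer recommends funding" consensus rule are assumptions, and step 7 (weighing individual reviewers' comments) is qualitative and deliberately not modelled:

```python
import random

def select_for_funding(recommendations, budget_slots, rng=None):
    """recommendations: dict mapping proposal id -> list of 'fund'/'reject' votes.

    Returns a sorted list of proposal ids selected for funding.
    """
    rng = rng or random.Random()
    # Step 6: eliminate proposals with a clear consensus for rejection
    # (here assumed to mean: no reviewer recommends funding).
    survivors = [pid for pid, votes in recommendations.items()
                 if any(v == "fund" for v in votes)]
    # Step 8: all survivors are of equal merit; if funds are
    # inadequate, selection is by lottery.
    if len(survivors) <= budget_slots:
        return sorted(survivors)
    return sorted(rng.sample(survivors, budget_slots))

# Hypothetical example: four proposals, funds for only two.
recs = {
    "P1": ["fund", "fund"],
    "P2": ["reject", "reject"],   # clear consensus for rejection
    "P3": ["fund", "reject"],
    "P4": ["fund", "fund"],
}
print(select_for_funding(recs, budget_slots=2))
```

The point of the sketch is that once the qualitative decisions of steps 1-7 are made, the remaining mechanics are trivial -- which is precisely the editorial's argument against an elaborate multi-tier scoring apparatus.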

Either the ARC administration is going to obtain competent primary reviewers to read and evaluate the proposals they are given or the administration should be replaced by those who can make such selections. The use of a multi-tiered approach to overcome what is tacitly perceived as a less than adequate group of primary reviewers is no solution.

 

The British non-profit science lobby Sense About Science has published a preliminary assessment of peer review of scientific papers submitted for publication. The findings also have a bearing on the peer review of funding proposals. The poll revealed that 91% of respondents said that their last paper was improved through peer review, and most think that more secrecy could improve the process: only 20% supported the idea of "open peer review", in which reviewers' names are revealed, while 76% favoured 'double-blind' peer review, in which just the editor knows who the reviewers are. A significant percentage, 41%, responded that being paid would make them keener to review papers.

It is noteworthy that "just 15% of respondents felt that 'formal' peer review could be replaced by usage statistics". 

 

The survey was "an electronic survey conducted between 28th July 2009 and 11th August 2009; 40,000 researchers were randomly selected from the ISI author database, which contains published researchers from over 10,000 journals. Altogether 4,037 researchers completed the survey. The error margin was ± 1.5% at 95% confidence levels; reviewers answered a subset of questions aimed specifically at reviewers (3,597 - a subset of the base) the error margin for this group was ± 1.6% at 95% confidence levels".
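The quoted error margins are consistent with the standard worst-case formula for a sample proportion at 95% confidence, z·sqrt(p(1−p)/n) with z = 1.96 and p = 0.5. A quick check, purely illustrative (the function name is mine, not the survey's):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a sample proportion; p=0.5 gives the worst case."""
    return z * math.sqrt(p * (1 - p) / n)

print(round(100 * margin_of_error(4037), 1))  # full sample -> 1.5
print(round(100 * margin_of_error(3597), 1))  # reviewer subset -> 1.6
```

Both values match the figures reported for the survey.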

 

The overall finding is that the mechanism of peer review, while in need of improvement, should be retained.

 

To obtain a detailed PowerPoint presentation of the preliminary findings Click Here.

_______________________________________________

 

 

Below are reprinted the selection criteria and scoring system currently used by the ARC.* 

 

Should a competent reviewer really require this quasi-quantitative sort of leading by administrators?

 

 

________________________________

 

* Page 22 of Peer Review Processes Consultation Paper, PDF Format (828kb) - RTF Format (2.68mb)

 

 

Alex Reisner

The Funneled Web