Viewpoint -- 16 March 2008

 

 The Dissection of Peer Review

or

 Should We Just Leave It to the Computer?

 

Credit: ACS Chemical Biology

 

Evelyn Jabri, in her July 21, 2006 editorial in ACS Chemical Biology, wrote:

 

Experimenting with Peer Review

The current peer-review system began in the late 1800s. More than a century later, we at ACS Chemical Biology (ACS CB) and many of our colleagues in the publishing world are discussing how the system will evolve. The process of peer review is much like the ongoing discussions that take place in small communities, such as journal clubs, and among a larger group of scientists at conferences. In both cases, data are presented, questions are asked, and points are clarified with the expectation that the discourse will improve the science. In effect, peer review is an ongoing conversation between reviewers and authors. The emergence of the Internet and electronic publishing is changing the way we disseminate, discuss, and sanction scientific content. Can the new electronic tools be used to improve the quality of the review process and provide a more fruitful interaction among reviewers, authors, and readers? Answers might come from recent experiments with alternative open peer-review systems.

 

More recently, the US National Institutes of Health instituted a critical review of the peer review system used to judge applications for its research grants. And while Dr Jabri places her emphasis on improving the quality of peer review, the question of streamlining the process, while also improving it, is exercising many editors as well as researchers and funding bodies, private and governmental alike.

 

And Australia's National Health and Medical Research Council (NHMRC) and Australian Research Council (ARC) are examining their peer review methodologies.

 

Yet certain statements made over the past couple of weeks by Margaret Sheil, CEO of the ARC, and the intention of Kim Carr, Minister for Innovation, Industry, Science and Research, to introduce an Excellence in Research for Australia (ERA) system for the awarding of block grants, suggest that neither has an understanding of the problems facing current peer review.

 

Surely introducing a system such as that envisioned by Senator Carr for the ERA -- one that not only underwrites duplication of assessment but also overemphasises past performance -- isn't in the interest of raising the quality of the nation's research effort.

 

Since the beginning of January, the journal Science has published four letters dealing with peer review -- two in the January 4 issue and two (in reply) in the issue of March 7.

 

And then there is Nature's contribution of February 28, 2008, "From the Blogosphere" -- I'll leave that as the tailpiece.

 

The letters deal with reviewing manuscripts submitted to scientific journals, but they are equally pertinent to reviewing the quality of grant applications or published papers.

 

In their letters of January 4, 2008, Robert Zucker puts forward a "Peer Review How-To", while William Perrin discusses the tribulations entailed in running down appropriate reviewers.

 

First, Professor Zucker offers some cogent advice:

 

As a member of three editorial boards, author of 90-some scientific papers, and reviewer of over 900 manuscripts in the past 30 years, I have seen my share of scientific reviews.

 

Reviewers should highlight a paper's strengths and weaknesses, but they need not delineate strengths in very weak papers nor stress minor weaknesses in strong papers.

 

Reviewers make two common mistakes... Avoid demanding that further work apply new techniques and approaches, unless the approaches and techniques used are insufficient to support the conclusions... authors need not exclude every possible explanation for their results... if the evidence does not distinguish between reasonably likely alternatives, recommend that the editor reject the manuscript.

 

[As a reviewer] do not reject a manuscript simply because its ideas are not original, if it offers the first strong evidence for an old but important idea. Do not reject a paper with a brilliant new idea simply because the evidence was not as comprehensive as could be imagined. Do not reject a paper simply because it is not of the highest significance, if it is beautifully executed and offers fresh ideas with strong evidence.

 

Finally, step back from your own scientific prejudices in order to judge each paper on its merits and in the context of the journal that has solicited your advice.

 

William Perrin brings up a matter which appears to be glossed over by both Professor Sheil and Senator Carr:

 

As a past editor of Marine Mammal Science and a present associate editor of the Journal of Mammalogy, I have had great difficulty in lining up reviewers. Sometimes it takes 8 or 10 tries to find someone who will agree to review a paper. The typical excuse is "I'm too busy."

 

First I try the people who have published the most relevant and recent papers on the topic in question. Then I move down the range of choices. The temptation, and sometimes the need, is to turn to potential reviewers in less-related fields or those who are not so "busy" (i.e., are not producing much themselves). This inevitably leads to less-knowledgeable reviewers and often reviews of lesser quality, which of course complicates the editor's job and sometimes enrages the authors.

 

Doing a fair share of peer reviews should be a recognized and expected part of the job for scientific professionals; it should be written into the job descriptions of salaried scientists and be considered in evaluating junior faculty for tenure. The caution should be "Publish and review, or perish."

 

Unfortunately, stand-over tactics are no guarantee that the peers you line up will be qualified and dedicated enough to produce a competent assessment.

 

Matthew Metz, after reading Zucker's and Perrin's letters, writes:

 

In the crucible of academic advancement, scientists have staggering demands on their time... Editors will best succeed in getting reviewers not by simply making reviewing a requirement, but by doing their part to see that it is rewarded.

 

The editorial establishment could... encourage willing reviewers at little added expense. They could leverage a primary currency of academic science -- prestige -- and present an award to their best reviewer(s) each year. They could also help make reviewing a component of researchers' competitiveness for funding by encouraging funding agencies to include a count of average manuscript reviews per year on applicant CVs.

 

Of course this raises the question of who will review the reviewers, which leads to Gary Marchionini's recommendations:

 

As the Editor-in-Chief of ACM Transactions on Information Systems and a member of a dozen editorial boards over the years, I resonate with the Letters by W. F. Perrin... and R. S. Zucker..., and I encourage even more vigorous consideration of peer reviewer responsibilities and merits... Traditional peer-reviewed journals are increasingly adopting electronic manuscript management systems, which provide databases of reviews... These systems will increasingly lead to tangible ways to at least count reviewer participation and perhaps assess the quality of reviewer participation in a scientific community over time...

 

Reviewers should be encouraged to actively participate in the scholarly discourse of publication and be rewarded for this participation. Young researchers must especially understand that their participation is not only expected, but that the ability to assess this participation will increase over time.

 

Which brings us to Nature's February 28, 2008 "From the Blogosphere", and throws into stark relief the apparent lack of comprehension by both Senator Carr and Professor Sheil of the complexities of obtaining peer reviews of merit.

 

Some years back I received a request from one of our granting bodies to review a large research grant application which was not, in my opinion, in my area of competence. I explained this view to the agency, suggested several overseas researchers I believed to be appropriate, and returned the material. A few days later I received a phone call explaining that "time was short", wouldn't I reconsider, and I was then given the names of the individuals (all Australian) who had agreed to review the application. Leaving aside the matter of being told the reviewers' names, which ought to have been treated as confidential, they were, in my opinion, though deservedly respected researchers, no more competent to judge the grant application than I was. I again refused the request, and I have no idea whether the application was successful, but the episode, together with the matters raised above, brings home the difficulty of obtaining competent reviewers.

 

I would suggest that we should expend our resources on improving peer review for our public research funding bodies, revising the structure for awarding on-costs, and improving our universities' basic research and teaching infrastructures, on which properly resourced competitive research grants can be effectively utilised. To layer an ERA on top of that is no more rational than bringing in an RQF or an RAE.

 

In short, it's "Daft"; perhaps we'd be better off teaching a Heuristically programmed Algorithmic Computer such as HAL 9000 to evaluate funding requests.

 

A view of HAL 9000's Central Core

 

Alex Reisner

The Funneled Web