Editorial - 01 June 2012

The Future of Peer Review?

 

 

PDF file available from Australasian Science

 

On October 18, 2010, Subra Suresh assumed the directorship of the US National Science Foundation (NSF). One of his priorities is to increase collaboration between US scientists and those of other nations, but among other difficulties, agreeing on common standards for peer review when assessing bilateral or multilateral grant applications could prove onerous.

 

As an immediate step to overcome the obstacles impeding international collaborations, Dr Suresh proposed the formation of a Global Research Council (GRC). Forty-four nations (including Australia) have responded, and last month the GRC issued a "Statement of Principles for Scientific Merit Review".

 

The GRC considers it beyond question that proposals must be evaluated expertly, transparently, impartially, appropriately, confidentially, ethically and with integrity, but a set of agreed common standards will also be essential. In toto, what is being proposed is a Herculean undertaking that will require a radical redesign of the methodology of review by one's research peers.

 

To date, little progress has been made in improving either the efficiency or the predictive quality of peer review for setting priorities in awarding research grants, yet the consensus remains that peer review is the best means of evaluation. However, progress over the past several years in two areas suggests that now may be the time to seriously support new approaches to peer review: specifically, initial assessment of proposals by machine, and assessment of short-listed proposals through computer-simulated environments.

 

Over three shows broadcast from February 14 to 16, 2011, the television program Jeopardy! pitted IBM's Watson computer against two former Jeopardy! champions, Ken Jennings and Brad Rutter, in a two-game match. Watson won the grand prize of $1 million, which IBM divided between two charities (World Vision and World Community Grid).

 

Dr David Ferrucci, IBM Fellow and principal investigator of the IBM research team that created Watson, has commented: "With the Jeopardy! challenge, we accomplished what was thought to be impossible – building a computer system that operates in the near limitless, ambiguous and highly contextual realm of human language and knowledge."

 

By the second half of 2011, IBM had issued the following statement: "Watson’s Deep Question Answering (QA) technology has already driven progress in new fields such as the healthcare industry. IBM is working with Nuance Communications, Inc. to explore and develop applications to help critical decision makers, such as physicians and nurses, process large volumes of health information in order to deliver quicker and more accurate patient diagnoses. Working with universities and clients, IBM is identifying many potential uses for Watson’s underlying QA technology."

 

Last week, on May 25, David Ferrucci fronted PCAST (US President Barack Obama's Council of Advisors on Science and Technology) to describe the design and function of Watson.

 

Click on the image below to access the video, which also includes Anthony Levandowski's account of Google's self-driving car; the Q&A session with the PCAST members begins 37'25" into the video.

 

Wouldn't now be an ideal time to begin developing machine evaluation of research grant proposals, initially following the approach developed for Watson?

 

The second part of a redesigned system of peer review would be to truly internationalise the evaluation of proposals short-listed by a "Son-of-Watson", through refined use of computer-simulated environments based on, for example, Second Life. Such an approach has been pioneered by the National Science Foundation's William Sims Bainbridge; during 2010, six NSF panels met on the agency's "island in Second Life, known as IISLand". Dr Bainbridge says: "Real world panellists are provided with some resources so it was felt appropriate to provide them with the cost of a decent set of virtual clothes." The format of the meetings followed a traditional schedule, and all of the work was completed on time.

 

In addition to the travel time saved by reviewers working in a computer-simulated environment, Dr Bainbridge estimates that $10,000 would be saved for every NSF panel that met in the virtual world. And there is every likelihood that significant internationalisation of review panels would follow.

 

 

PI David Ferrucci Explains the TJ Watson Lab's Jeopardy! Challenge to PCAST

 

To date, the Australian Research Council, the National Health and Medical Research Council and the federal ministries responsible for them have given little more than lip service to improving and modernising peer review for research allocations. On the other hand, Frank Larkins, Professor Emeritus in the School of Chemistry at the University of Melbourne and that university's former Deputy Vice-Chancellor, Dean of Science and Dean of Land and Food Resources, last year estimated the cost of the retrospective Excellence in Research for Australia (ERA) process at $100 million.

 

Surely it's well past time for those in positions of responsibility to embrace the technologies becoming available in order to significantly improve the methodology of supporting the national research effort.

 

 

Alex Reisner

The Funneled Web