News & Views item - October 2008

Journal Impact Factors and the ERA. (October 12, 2008)

  Kai Simons 

Rather than abandoning the previous government's Research Quality Framework (RQF) as a mechanism for allocating public research funding and developing a mechanism for more direct support of the best researchers, the Minister for Innovation, Industry, Science and Research, Kim Carr, has morphed the RQF into the awkward, and awkwardly named, Excellence in Research for Australia (ERA).

 

The minister has repeatedly stated his intention to rely heavily, though not exclusively, on citation metrics (keep Gresham's Law in mind, however) as a proxy for evaluating departmental and institutional research prowess. And part of the mix will be journal impact factors.

 

The most recent issue of the journal Science (October 10, 2008) addresses the caveats regarding reliance on journal impact factors when evaluating the quality of research. In his editorial, Kai Simons*, current president of the European Life Scientist Organization, writes: "One measure often used to determine the quality of a paper is the so-called 'impact factor' of the journal in which it was published. This citation-based metric is meant to rank scientific journals, but there have been numerous criticisms over the years of its use as a measure of the quality of individual research papers. Still, this misuse persists. Why?

     "[They] are increasingly used to assess individual papers, scientists, and institutions. Thus, governments are using bibliometrics based on journal impact factors to rank universities and research institutions. Hiring, faculty-promoting, and grant-awarding committees can use a journal's impact factor as a convenient shortcut to rate a paper without reading it. Such practices compel scientists to submit their papers to journals at the top of the impact factor ladder, circulating progressively through journals further down the rungs when they are rejected."

 

Professor Simons goes on to make the point that changes are afoot, at least in some quarters: "The Howard Hughes Medical Institute is now innovating their evaluating practices by considering only a subset of publications chosen by a scientist for the review board to evaluate carefully [our emphasis]." And by "evaluate carefully" is meant that simply taking citation numbers and journal impact factors is not sufficient.

 

Judge for yourself: does the president of the European Life Scientist Organization have a valid argument? He concludes:

 

"There are no numerical shortcuts for evaluating research quality. What counts is the quality of a scientist's work wherever it is published. That quality is ultimately judged by scientists, raising the issue of the process by which scientists review each other's research. However, unless publishers, scientists, and institutions make serious efforts to change how the impact of each individual scientist's work is determined, the scientific community will be doomed to live by the numerically driven motto, 'survival by your impact factors.'"

_________________

*Since 1998, Director of the Max Planck Institute of Molecular Cell Biology and Genetics, Dresden.

 

_________________________________

 

In a letter in the same issue, Abner Notkins at the NIH writes: "This situation [of journal impact factors] has become so extreme that in some institutions the impact factor of each published paper in a scientist's bibliography is being requested and/or checked, junior scientists have become reluctant to initiate experiments that may not lead to publication in high-impact factor journals, and candidates for certain positions are being told that their chances are slim if they don't have papers in Science, Nature, or the like. As a result, many scientists are now more concerned about building high-impact factor bibliographies than their science."

 

And in another letter, Paolo Cherubini, who notes that he is the editor of a small international scholarly journal, writes: "[Perhaps] the most debilitating illness plaguing the scientific community [is what] I call the 'impact factor fever.' The exacerbated pressure to publish we all suffer from is induced by an exaggerated reverence for the impact factor.
     "Scientific achievement cannot be soundly evaluated by numbers alone. As Albert Einstein reputedly said, 'Not everything that can be counted counts, and not everything that counts can be counted.' How long must we wait until an antidote against the impact factor fever is developed?"

 

And all the while, Senator Carr continues to exhort the researchers he has dragooned into giving up their time to develop a needless and flawed system for evaluating research quality.

 

It would be interesting to know whether the soon-to-be-installed Chief Scientist, Penny Sackett, will be prepared to use her office to evaluate the ERA critically and to put forward proposals on the evaluation of research, researchers and peer review that could foster better allocation of research funding.