News & Views item - June 2008

Citation Statistics -- A Report from the IMU in Cooperation with the ICIAM and the IMS (June 22, 2008)

The Joint IMU/ICIAM/IMS Committee on Quantitative Assessment of Research, consisting of Robert Adler (Technion, Israel Institute of Technology), John Ewing (Chair; American Mathematical Society) and Peter Taylor (University of Melbourne, Australia), has released a critical analysis of citation statistics and journal impact factors.

The defining statement of the report is summarised as:

The drive towards more transparency and accountability in the academic world has created a "culture of numbers" in which institutions and individuals believe that fair decisions can be reached by algorithmic evaluation of some statistical data; unable to measure quality (the ultimate goal), decision-makers replace quality by numbers that they can measure. This trend calls for comment from those who professionally "deal with numbers": mathematicians and statisticians.

As Kim Carr, the Minister for Innovation, Industry, Science and Research, continues his drive for an overwhelmingly citation-based ERA (Excellence in Research for Australia), a layer additional to the assessment of research grant applications by the Australian Research Council and the National Health and Medical Research Council, the fact is that it will utilise no more data than is already available to the ARC and NHMRC.

In fact, the information utilised will be scantier and older, and the mechanism of assessment shallower.

The joint committee makes a number of principal points in its 26-page report, but whether or not the Prime Minister and his Cabinet are prepared to take notice is a moot question.

The report continues: "Using citation data to assess research ultimately means using citation-based statistics to rank things: journals, papers, people, programs, and disciplines. The statistical tools used to rank these things are often misunderstood and misused." It then details this viewpoint.

And the report's authors give us one of those memorable "think on this" assessments:

In her 1989 paper on the meaning of citations, Cozzens [1] asserts that citations are the result of two systems underlying the conduct of scientific publication, one a "reward" system and the other "rhetorical". The first kind has the meaning most often associated with a citation: an acknowledgment that the citing paper owes an "intellectual debt" to the cited. The second, however, has a quite different meaning: a reference to a previous paper that explains some result, perhaps not a result of the cited author at all. Such rhetorical citations are merely a way to carry on a scientific conversation, not to establish intellectual indebtedness. Of course, in some cases a citation can have both meanings.

Cozzens makes the observation that most citations are rhetorical.

They hammer the point home with: "This mystical belief in the magic of citation statistics can be found throughout the documentation for research assessment exercises, both national and institutional. It can also be found in the work of those promoting the h-index and its variants... [I]n most cases in which citation statistics are used to rank papers, people, and programs, no specific model is specified in advance. Instead, the data itself suggests a model, which is often vague. A circular process seems to rank objects higher because they are ranked higher (in the database). There is frequently scant attention to uncertainty in any of these rankings, and little analysis of how that uncertainty (for example, annual variations in the impact factor) would affect the rankings. Finally, confounding factors (for example, the particular discipline, the type of articles a journal publishes, whether a particular scientist is an experimentalist or theoretician) are frequently ignored in such rankings, especially when carried out in national performance assessments."
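
To make concrete just how simple the statistics under discussion are, here is a minimal sketch in Python using hypothetical figures (the data and function names are illustrative only). The h-index is the largest h such that an author has h papers each cited at least h times; a journal's two-year impact factor is the number of citations received in a given year to items it published in the two preceding years, divided by the number of citable items published in those years. The arithmetic is trivial; as the report argues, it is the interpretation that is fraught.

    # A toy illustration (hypothetical figures) of the two statistics
    # discussed above: an author's h-index and a journal's two-year
    # impact factor.

    def h_index(citations):
        """Largest h such that h papers each have at least h citations."""
        h = 0
        for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    def impact_factor(citations_this_year, citable_items_prev_two_years):
        """Citations received this year to items published in the previous
        two years, divided by the number of citable items in those years."""
        return citations_this_year / citable_items_prev_two_years

    # A hypothetical author with seven papers:
    print(h_index([25, 8, 5, 4, 3, 2, 0]))  # -> 4
    # A hypothetical journal: 210 citations in 2007 to its 140 items of 2005-06:
    print(impact_factor(210, 140))          # -> 1.5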

We have been told by Professor Margaret Sheil, CEO of the ARC, that the Department of Innovation, Industry, Science and Research, in consultation with the ARC and the NHMRC, will develop the criteria for the ERA.

Are we really to believe that the resources to be expended on this exercise will be worthwhile in raising the quality of Australian research, or is Brian Cox spot on when he says: "The notion that scientists will make a more valuable contribution to the economic and social wellbeing of the world if their research is closely directed by politicians is the most astonishing piece of nonsense I have had the misfortune to come across in a long time"?

Concurrent with the publication of Citation Statistics, Nature published this cameo:

From the blogosphere

Does research need new measuring sticks? The Nature Network group 'Citation in Science' (http://tinyurl.com/6afj8a) hopes to find common ground among researchers, funders, information providers and others concerning the measures of research output.

Allan Sudlow of the British Library lists common ways in which citations are manipulated or otherwise abused. 'The art of counting', a post by Nature product developer Ian Mulvany, is a useful account of how the impact factor and the H-index are calculated, and concludes that there are many growing areas of contribution such as blogs and open data sets that, at present, are ignored by such metrics. Another post explores whether the number of times an article is downloaded from the Internet could be more informative than its citation counts.

Biologist David Colquhoun of University College London argues that publication metrics are inappropriate for assessing people: "The pressure to produce cheap headline-grabbing work will be enormous. The long-term reputation of UK science will surely be damaged by this sort of bean-counting approach."

_______________________________

[1] Cozzens, Susan E. What do citations count? The rhetoric-first model. Scientometrics, Vol. 15, Nos. 5-6 (1989), pp. 437-447.
  http://dx.doi.org/10.1007/BF02017064