Editorial - 30 August 2010

[Photo: Minister Kim Carr]

A Metric Too Far

 

 

PDF file available from Australasian Science

 

 

A couple of years ago the British particle physicist Brian Cox commented: "The notion that scientists will make a more valuable contribution to the economic and social wellbeing of the world if their research is closely directed by politicians is the most astonishing piece of nonsense I have had the misfortune to come across in a long time." At about the same time a joint committee of the International Mathematical Union, the International Council for Industrial and Applied Mathematics and the Institute of Mathematical Statistics released a critical analysis of citation statistics and journal impact factors.

 

At the time TFW detailed the findings of the report and noted that its defining statement is: The drive towards more transparency and accountability in the academic world has created a "culture of numbers" in which institutions and individuals believe that fair decisions can be reached by algorithmic evaluation of some statistical data; unable to measure quality (the ultimate goal), decision-makers replace quality by numbers that they can measure. This trend calls for comment from those who professionally "deal with numbers", mathematicians and statisticians.

 

After detailing the committee's concerns, the report continues: Using citation data to assess research ultimately means using citation-based statistics to rank things: journals, papers, people, programs, and disciplines. The statistical tools used to rank these things are often misunderstood and misused...

 

   [The] mystical belief in the magic of citation statistics can be found throughout the documentation for research assessment exercises, both national and institutional. It can also be found in the work of those promoting the h-index and its variants... [I]n most cases in which citation statistics are used to rank papers, people, and programs, no specific model is specified in advance. Instead, the data itself suggests a model, which is often vague. A circular process seems to rank objects higher because they are ranked higher (in the database). There is frequently scant attention to uncertainty in any of these rankings, and little analysis of how that uncertainty (for example, annual variations in the impact factor) would affect the rankings. Finally, confounding factors (for example, the particular discipline, the type of articles a journal publishes, whether a particular scientist is an experimentalist or theoretician) are frequently ignored in such rankings, especially when carried out in national performance assessments.
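The committee's point about uncertainty is easy to demonstrate. The h-index, for instance, is simply the largest number h such that h of a researcher's papers have at least h citations each. The following minimal Python sketch, using invented citation counts purely for illustration, shows how a single additional citation can erase the apparent gap between two researchers with very similar records:

    # Illustrative sketch only: the citation counts are invented, not real data.
    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        counts = sorted(citations, reverse=True)
        h = 0
        for rank, c in enumerate(counts, start=1):
            if c >= rank:
                h = rank
            else:
                break
        return h

    researcher_a = [12, 9, 7, 6, 5, 3, 1]   # h-index = 5
    researcher_b = [10, 8, 7, 6, 4, 4, 2]   # h-index = 4

    print(h_index(researcher_a), h_index(researcher_b))  # 5 4 -- A "outranks" B
    researcher_b[4] += 1                                 # one extra citation to B's fifth paper
    print(h_index(researcher_a), h_index(researcher_b))  # 5 5 -- the gap disappears

Rankings built on such single-number summaries inherit exactly this fragility, which is the neglected uncertainty the committee complains of.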

 

Finally, the report took pains to call attention to a 1989 paper by Susan Cozzens (What do citations count? The rhetoric-first model. Scientometrics, Vol 15, Nos 5-6, 1989, pp. 437-447). The committee wrote: In her 1989 paper on the meaning of citations Cozzens asserts that citations are the result of two systems underlying the conduct of scientific publication, one a 'reward' system and the other 'rhetorical'. The first kind has the meaning most often associated with a citation—an acknowledgment that the citing paper owes an 'intellectual debt' to the cited. The second, however, has a quite different meaning—a reference to a previous paper that explains some result, perhaps not a result of the cited author at all. Such rhetorical citations are merely a way to carry on a scientific conversation, not to establish intellectual indebtedness. Of course, in some cases, a citation can have both meanings.

   Cozzens makes the observation that most citations are rhetorical.

 

However, Senator Kim Carr, the Minister for Innovation, Industry, Science and Research, was undeterred and continued inexorably to press for a predominantly metric- and citation-based assessment of research published between 2003 and 2008 as the basis for multimillion-dollar future block grants through the ERA (Excellence in Research for Australia).

 

Now, something over two years later, the neuroscientist Mike Calford, deputy vice-chancellor (research) at the University of Newcastle, writes in an opinion piece for The Australian's Higher Education Supplement: "In late July, universities submitted their data for the Excellence in Research for Australia initiative... a structured peer-review system, [will assess their data] using evaluation committees of eight research-field clusters [comprising 157 fields of research (FoR)]."

 

Professor Calford then issues a stinging criticism of the ERA's methodology of assessment: "[T]he real basis of the problem is the inability of the FoR system to capture and collate research outputs of many staff. At Newcastle, about 1900 publications from 230 staff will not be considered by FoR group analysis... there are two problems: FoR groups do not map well on to research disciplines and they unequally divide the research landscape. The ERA process does not take account of either of these... More than 250 of our researchers had publications spread across four or more groups. This is not a multi-disciplinary research-related issue but a problem stemming from the fundamental nature of the FoR classification system... Researchers who address a broad question with multiple methodologies will find their outputs spread across many FoR codes and multiple ERA clusters."

 

On the other hand, Professor Calford notes on the credit side of the assessment exercise: "Notwithstanding these issues, ERA will give some individual researchers a chance to shine. Many disciplines do not figure well in the present metrics of total publications, research income and national competitive grant success."

 

But the deputy vice-chancellor concludes: "[T]he fundamentally unequal nature of the FoR groups signifies that the currency that will develop around the number of five [top] ratings achieved will need much scrutiny."

 

Just why the contortionist manoeuvrings entailed in implementing the ERA should be considered an improvement to the fabric of university research has yet to be explained, let alone justified. To any reasonable and objective individual the findings of the Joint Committee on Quantitative Assessment of Research would read as a damning indictment of the methodology being employed to implement the ERA. Professor Calford's judgement simply adds to that conclusion by noting specific difficulties with the particular approach taken for the ERA.

 

Whatever utility the current government believes the ERA will have in comparison with improving the peer-review mechanisms of the Australian Research Council and the National Health and Medical Research Council, and allocating sufficient oncosts to grantees, improving the quality of university research will certainly not be it.

 

Rather, it is an additional, ill-conceived layer of bureaucracy contrived in an environment of short-sighted political self-interest. Nevertheless, it is unlikely that a change in government would engender improvement.

 

 

Alex Reisner

The Funneled Web