News & Views item - August 2006

 

 

Caveats When Using Metrics in a Research Assessment (Quality) Exercise (Framework). (August 9, 2006)

Over the past several years the use of citation indices has become increasingly popular in assessing the worth of research.

 

Concomitantly, various cautionary qualifications have been pointed out, but because it has become progressively easier to obtain quantitative citation data, together with various scaling factors such as journal impact, such information has assumed increasing significance in judging the worth of research and of the individuals undertaking it.
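
 

For readers unfamiliar with the most commonly quoted of those scaling factors, the two-year journal impact factor is, roughly speaking, calculated as

    impact factor for year Y
        = (citations received in year Y to items the journal published in years Y-1 and Y-2)
          / (number of citable items the journal published in years Y-1 and Y-2)

That is, it is an average over a journal's recent output rather than a property of any individual paper.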

 

On the other hand, factoring in the importance of such matters as the latency between research, publication and citation means that a significant period will often have elapsed between when the work was undertaken and when it "rises above the pack". Consequently, young researchers will have the cards stacked against them.

 

In addition, sheer publication numbers play an important role, so that in extreme examples, such as the minuscule publication rates of scientists like Barbara McClintock or Richard Feynman, appropriate judgements and support would be precluded. Fortunately, the assessment of their contributions by their peers played a key role in establishing their worth.

 

And of course there are the clever manipulators of citation indices, whether through blatant self-citation or, more subtly, through groups of researchers citing each other's papers for the purpose of inflating their citation counts.

 

Then there is the matter that different disciplines cannot be judged on a single scale, a point often made by those working in the humanities but one that is also true within the science, technology, engineering and mathematical disciplines.

 

None of this means that citation numbers and impact factors, based for example on the journals in which papers are published, shouldn't be used, but they must be used cautiously, and a persuasive case can be made for such data to be incorporated into good peer review by the Australian Research Council or the National Health and Medical Research Council when allocating grant funding.

 

But for such an approach to be effective, grants must be accompanied by an appropriate on-cost factor.

 

And block funding?

 

Over a period of 10 years it should increasingly follow the researcher. The natural result will be a progressive sorting of universities, which will show strengths in certain fields without precluding adequate representation in others.

 

At a time of increasing awareness of the importance of interdisciplinary research and teaching, do we really want to foster the sort of situation occurring at the University of New England, where the mathematics department is being decimated? Or are we really to be awarded, by the "guiding hand" of the Minister for Education, Science and Training, a liberal arts university which teaches no science or mathematics, while at the same time the Minister extols the virtues of Wellesley College with its broad base of science, mathematics and humanities departments?

 

Perhaps it oughtn't to be surprising, seeing as the Minister for Health demonises somatic cell nuclear transfer research and the production of human embryonic stem cells for therapeutic research, while the Minister for Communications tells us that the broadband communications we have are really all that we need (so what's all the fuss about?), and the Minister for Immigration sets out to abuse Radio National Breakfast's anchorwoman for daring to ask about reports of the killing of Afghans who were repatriated to Afghanistan.

 

What a bunch.