Bibliometricians Widely Misunderstood
...I've been intrigued to see how often, in the recent discussions on the dumping of journal rankings, bibliometricians have been identified as their creators.
As one of the three bibliometricians on the group set up by the
Australian Research Council to advise on the development of ERA
indicators, I cannot remember a single instance of any of us promoting
the use of journal rankings.
Professor Ton van Raan from Leiden University and Jonathan Adams from Evidence Ltd in the UK were the other two bibliometricians in the group, and our concern was with citation-based measures. It was on these that
we advised the ARC. Our only interest in journals was the development of
coherent sets that would align as closely as possible to the field of
research classification that the ARC wished to use, and for which
reliable sets did not then exist.
To us the rankings were of mild interest, but largely irrelevant, as citation measures look at the actual impact of articles and don't have to rely on a journal's prestige as an imperfect surrogate for assessing it.
An intriguing question for me is why the science community embraced journal rankings in the first place. They knew that citation
analysis would be used, so why put so much time, effort and money into
creating rankings? Journal sets based on fields of research were
required, but most of the angst and argument has always been around the
rankings.
But bibliometricians make easy scapegoats, and with the recent closure
of the Research Evaluation and Policy Project at the ANU, there is now
no academic focal point for rebutting these beliefs.
Linda Butler
Program visitor,
School of Politics and International Relations,
Australian National University.
(June 29, 2011)
Higher Education at the Crossroads
Submission in response to Ministerial Discussion Paper
Linda Butler
26 June 2002
There are several significant problems with the publications component
of the IGS and RTS funding formulas:
►It rewards quantity, not quality. A
university is allocated the same amount of funding whether its publication is a ground-breaking article in Nature or a very pedestrian piece in the Canberra Journal of Frostbite Studies. The publication must appear in a refereed journal, but that definition is very inclusive. As a result, we have seen an explosion of publications from Australian universities appearing in the lowest-impact journals.
►The collection of the information required for this component is expensive, both in relation to the auditing of the universities' returns and to their compilation by the institutions.
►Many universities have adopted the totally inappropriate practice of using an identical formula to distribute the money obtained through the IGS internally to the faculties, departments, and even the individual researchers that 'earned' it.
It is essential that the funding formulas be amended to overcome these
problems by:
-- incorporating quality into the equation
-- using externally available and verifiable data
The publications component of the formula should be dropped immediately,
and replaced after extensive research and consultation on the efficiency
and efficacy of possible alternatives. Recognising that a number of
institutions rely on this element of the formulas for significantly more than the notional 10%, distribution of funds under these schemes should be set at an average of the most recent three years' data.