News & Views item - May 2011

The Tortuous Path of the ERA -- 3½ Years On and No Resolution. (May 30, 2011)

Last week TFW referred to Eugene Garfield, an early architect of the "science" of citation measurement. Dr Garfield is clear about the misuse of citation data in the evaluation of research, which in his view should depend simply on reading articles and making meaningful intellectual judgements.

Despite such an admonition, the Australian Research Council -- driven by the Minister for Innovation, Industry, Science and Research, Senator Kim Carr -- is spending many millions of dollars and taking up thousands of hours of academics' time in implementing a blatantly flawed ERA (Excellence in Research for Australia).

Neither the Prime Minister and the cabinet in residence, nor the leader of the opposition and his shadow cabinet, shows the slightest concern.

As if in response, the Department of Innovation, Industry, Science and Research has released to the media a "Ministerial statement to the Senate Economics Legislation Committee: Improvements to Excellence in Research for Australia (ERA)".

Senator Carr has informed the Senate Economics Legislation Committee that: "The exercise has been an overwhelming success in meeting its objective of providing institutions, researchers, industry and students with a sound, evidence-based means of identifying areas of strength and potential, as well as areas where we need to do better."

Nevertheless we are told by the Senator: "We remain open to suggestions on enhancements to what we know to be a very good scheme, [however], the ARC considers that making a small number of changes to the ERA 2010 methodology could substantially enhance the integrity and acceptance of the ERA 2010 evaluation exercise... [Therefore], I have approved a set of enhancements recommended by the ARC that deal substantially with those sector concerns while maintaining the rigour and comparability of the ERA exercise. These improvements are..." (the six-point list is reproduced in the full statement below).

Of course the devil is in the detail. For example, what is "a journal quality profile, showing the most frequently published journals for each unit of evaluation", what methodology will be employed to determine the profile score and by whom, and how will it be meaningfully different from the A*, A, B and C journal ranking as practised in 2010?

And along with this proposed six-point plan for tinkering with a grossly flawed approach, the Senator tells us: "I have also asked the ARC to continue investigating strategies to strengthen the peer review process, including improved methods of sampling and review assignment."

The vast majority of the research that the Senator proposes the ERA examine had been peer reviewed by the ARC or the NHMRC before being undertaken, with outcomes reported to the Councils on completion.

Notwithstanding that Senator Carr, in framing the ERA, had proposed maximising the use of citation/journal-ranking metrics to minimise cost, the ARC is now being directed to investigate "strategies to strengthen the peer review process, [within the ERA] including improved methods of sampling and review assignment".

Finally, there is acknowledgement of a reprehensible use of the ERA assessments, one that had in fact been foretold to the Senator and the ARC's CEO, Professor Margaret Sheil, but ignored: "There is clear and consistent evidence that the rankings were being deployed inappropriately within some quarters of the sector, in ways that could produce harmful outcomes, and based on a poor understanding of the actual role of the rankings. One common example was the setting of targets for publication in A and A* journals by institutional research managers."

Are we witnessing the further expenditure of many millions of dollars, as well as the pointless misuse of researchers' time and effort, in what looks increasingly like a counter-productive exercise if the aim is to improve the quality of research undertaken in our universities?

Or as Thomas Barlow, now an independent research consultant and formerly science adviser to Brendan Nelson when he was Minister for Education, Science and Training, put it: "It's been a monumental amount of work involving a huge amount of detail. But it hasn't told us much that we didn't already know."

Read Senator Carr's Ministerial Statement in full below and judge for yourself.

_________________________________________

Monday, 30 May 2011

Ministerial statement to the Senate Economics Legislation Committee

Improvements to Excellence in Research for Australia (ERA)

After several years of development, the first round of the Excellence in Research for Australia (ERA) initiative was run in 2010, with results published by the Australian Research Council (ARC) earlier this year in the ERA National Report. The exercise has been an overwhelming success in meeting its objective of providing institutions, researchers, industry and students with a sound, evidence-based means of identifying areas of strength and potential, as well as areas where we need to do better. These assessments were made against international benchmarks using the indicators that have been developed over time – in many instances over many decades – by the disciplines themselves. This has underpinned the strong support for the ERA methodology across the higher education research sector.

I have said all along that we are keen to undertake meaningful consultation. We remain open to suggestions on enhancements to what we know to be a very good scheme. I have been aware for some time of concerns within the sector about certain aspects of the exercise, particularly the ranked journal lists. These concerns have been communicated to me directly, reported in the sector media, and voiced in the ARC’s extensive sector consultations ahead of preparations for the second iteration of ERA in 2012. Additional matters that have been raised include the strength of the peer review process and the capacity of ERA to adequately capture applied and interdisciplinary research.

The ARC has advised me that consultation has revealed that there is a widespread preference for limited change, to ensure that ERA 2010 and ERA 2012 outcomes can be compared. Overall, however, the ARC considers that making a small number of changes to the ERA 2010 methodology could substantially enhance the integrity and acceptance of the ERA 2010 evaluation exercise, without compromising comparability.

As always, we are in the business of making refinements that improve the operation of ERA. I therefore commissioned the ARC to produce an options paper outlining different ways we might be able to utilise these indicators to address these concerns, and to consider any implications arising from the potential adoption of alternatives. I placed particular emphasis on the absolute need to maintain the rigour of the ERA exercise, to ensure the comparability of the results of the next iteration with ERA 2010, and to pay close attention to the detailed concerns of the sector. Within those parameters, however, I wished to explore ways in which we could improve ERA so the aspects of the exercise causing sector disquiet – especially issues around the ranked journals list – could be minimised or even overcome.

As the result of this process, I have approved a set of enhancements recommended by the ARC that deal substantially with those sector concerns while maintaining the rigour and comparability of the ERA exercise. These improvements are:

· The refinement of the journal quality indicator to remove the prescriptive A*, A, B and C ranks;

· The introduction of a journal quality profile, showing the most frequently published journals for each unit of evaluation;

· Increased capacity to accommodate multi-disciplinary research to allow articles with significant content from a given discipline to be assigned to that discipline, regardless of where they are published (this method was successfully trialled in ERA 2010 within Mathematical Sciences);

· Alignment across the board of the low volume threshold to 50 outputs (bringing peer-reviewed disciplines in line with citation disciplines, up from 30 outputs);

· The relaxation of rules on the attribution of patents, plant breeders’ rights and registered designs, to allow those granted to eligible researchers to also be submitted; and

· The modification of fractional staff eligibility requirements to 0.4 FTE (up from 0.1 FTE), while maintaining the right to submit for staff below this threshold where affiliation is shown (through use of a by-line, for instance).

I have also asked the ARC to continue investigating strategies to strengthen the peer review process, including improved methods of sampling and review assignment.

As with some other aspects of ERA, the rankings themselves were inherited from the discontinued Research Quality Framework (RQF) process of the previous government, and were developed on the basis of expert bibliometric advice. Patterns of their utilisation by the Research Evaluation Committees (RECs) and detailed analysis of their performance in the ERA 2010 exercise, however, have made it clear that the journal lists themselves are the key contributor to the judgements made, not the rankings within them.

There is clear and consistent evidence that the rankings were being deployed inappropriately within some quarters of the sector, in ways that could produce harmful outcomes, and based on a poor understanding of the actual role of the rankings. One common example was the setting of targets for publication in A and A* journals by institutional research managers.

In light of these two factors – that ERA could work perfectly well without the rankings, and that their existence was focussing ill-informed, undesirable behaviour in the management of research – I have made the decision to remove the rankings, based on the ARC’s expert advice.

The journal lists will still be of great utility and importance, but the removal of the ranks and the provision of the publication profile will ensure they will be used descriptively rather than prescriptively.

These reforms will strengthen the role of the ERA Research Evaluation Committee (REC) members in using their own, discipline-specific expertise to make judgments about the journal publication patterns for each unit of evaluation.

It is important to note that these changes will be exposed to public comment during July as part of the draft submission guidelines. I am confident that these improvements will strengthen the ERA methodology and minimise the unintended consequences arising from inappropriate external use of the indicators, while maintaining the comparability of future rounds with the ERA 2010 results.

I would like to thank the ARC, led by Professor Margaret Sheil, for the extensive development work that went into producing these improvements, and the ERA 2010 REC members and other key academic leaders for their invaluable advice. I particularly thank the university research sector, whose detailed feedback informed the work, and whose support for ERA overall has been so positive.