News & Views item - April 2011

 

Has the ARC Chief Executive Justified the $35 million ERA Initiative? (April 9, 2011)

The September 30, 2010 issue of Times Higher Education (THE) devoted a page to Paul Jump's "investigation" of "Australia's research assessment programme [Excellence in Research for Australia, which is] causing controversy, especially the rankings".

 

He went on to report that Mark Gibson, a senior lecturer in communications and media studies at Monash University, told him: "Everyone has a story of egregious injustices in classifications," while Graeme Turner, director of the Centre for Critical and Cultural Studies at The University of Queensland, said that the ranking of journals had produced a "fetish" culture among academic researchers "to a ridiculous extent"... Dr Gibson also believes that, "whether or not it was intended, there is a growing tendency within universities to value researchers according to the grade of journal they are publishing in."

 

Mr Jump also noted: "The committees [8 Research Evaluation Committees] have total discretion over how to use the metrics."

 

Now, in THE's April 7, 2011 issue, Mr Jump is back with "Now for the post-launch lessons", in which he devotes a page to an interview with the chief executive of the Australian Research Council (ARC), Professor Margaret Sheil, who explains why, as she sees it, the $35 million exercise is a good thing.

 

According to Mr Jump: "Professor Sheil said it [ERA 2010] went as well as she could have hoped and it also helped to re-establish the ARC as an important policy player." She went on to say: "The feedback we are getting is that it was fair and not too unexpected for universities. Where there has been angst, it has been where people haven't really engaged and have woken up to discover that they have an ERA score they didn't know about or understand." Then, in something of an apologia, she noted that this was the ARC's "first go at this" and that "we can do some things better next time".

 

Top of the list was the A*, A, B, C ranking of journals [see ERA or a return to the dark ages]. "We didn't necessarily get it right" in just a small number of cases, she said: "There are only about 200 [of some 22,000 journals] where people are still really upset." However, according to Mr Jump, she admitted that the ARC had not scrutinised the rankings "as much as we would have liked".

 

A point that Professor Sheil emphasised in her interview with Mr Jump was that, prior to the 2012 ERA, she intended to "educate" academics and administrators against trying to tailor all their research to top-rated journals, noting that journal ranks formed only a minor part of the assessment formula in most of the 157 fields into which research was divided.

 

It would be interesting to see the details on which Professor Sheil bases that rather sweeping statement. Certainly in the hard sciences, mathematics and engineering, citation data and journal rankings do not, as we understand it, form a minor part of the assessment formula.

 

Just for argument's sake, for what percentage of the 330,000 "research items" from 50,000 researchers did journal ranks form "only a minor part of the assessment formula", and just how minor is minor?

 

You might have believed that the powers that be, having determined that instituting the ERA was A Good Thing, had also determined exactly what the information was to be used for. According to Mr Jump, you would have been wrong.

 

A consultation on how to use the ERA results is still under way, but Professor Sheil said their primary purpose would be to help the government, the ARC and the universities make "strategic decisions" about which subjects to invest in. This could mean putting more money into vital and popular underperforming subjects rather than concentrating on areas of international excellence.

 

If you feel bemused now, wait for the tag line. Professor Sheil said: "It will be more of a threat to expensive disciplines that are not performing and don't have high student numbers."

 

Wouldn't it have been far more useful to have improved the peer review systems of the ARC and NHMRC, brought oncosts to a level commensurate with requirements, and increased the proportion of competitive grant applications the two agencies fund?

 

Interestingly enough, the Chief Scientist and Scientific Engineer for New South Wales, Professor Mary O'Kane, told the annual dinner of The University of Sydney Electrical and Information Engineering Foundation on March 30 that there was no need for them to try to pick winners. "I suggest," she said, "what you need to do is spot, not pick, them." The former vice-chancellor of The University of Adelaide then proceeded to outline a modus operandi:

 

    You watch for these characteristics & watch over time. Where there is:

    These groups seem often to have a knack of causing economy-boosting activities to grow around them.

That of course leaves open the matter of support for the young and brilliant who don't have all that many runs on the board, but then that should be one of the avenues for improvement of peer-reviewed competitive grants.

__________________

 

In an October 20, 2010 News & Views item, TFW wrote in part:

 

In the obsession with the ERA, there has been a shameful neglect of worthwhile, long overdue improvements to the peer review assessments of grant applications from principal investigators, as well as of satisfactorily increasing funding for research oncosts and reducing the micromanagement of research by both government and university administrators.

 

According to The Australian's Jill Rowbotham: "The ARC received more than 330,000 research pieces from more than 50,000 researchers."

 

On September 21 TFW reported:

 

149 Named to Evaluate Submissions for Excellence in Research for Australia (ERA). (September 21, 2010)

The academics who will comprise the eight groups (clusters) to evaluate the submissions made to Excellence in Research for Australia have been designated.

 

According to the ERA's Website:

 

The evaluation of data submitted for the 2010 ERA initiative by Australia’s higher education institutions will be undertaken by Research Evaluation Committees (RECs).

 

RECs are established at the discipline cluster level and comprise distinguished and internationally-recognised researchers with expertise in research evaluation. There are 149 REC members in total appointed from Australia and overseas, and broadly representative of the disciplines within each Cluster.

 

If those figures are correct, that ought to keep the evaluators out of mischief, as well as from doing worthwhile research, for some time -- divided evenly, they'll each get a mean of 2,215 "research pieces" to assess.

 

Assuming the average evaluator spends four hours a day, seven days a week, evaluating the "research pieces" (s)he is assigned, and that (s)he spends an average of 6 minutes to evaluate and score each "research piece", that works out to 55.4 days, or just on 8 weeks.
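
For anyone inclined to check the back-of-the-envelope arithmetic, here is a minimal sketch in Python, assuming (as above) that the 330,000 items are divided evenly among the 149 evaluators and that each item is assessed by a single evaluator:

    # Back-of-the-envelope check of the evaluators' workload.
    # Assumptions (as in the text): 330,000 items divided evenly among
    # 149 evaluators, each item assessed once, 6 minutes per item,
    # 4 hours of evaluation a day, 7 days a week.
    items = 330_000
    evaluators = 149
    minutes_per_item = 6
    hours_per_day = 4

    items_each = items / evaluators                    # ~2,215 items each
    total_hours = items_each * minutes_per_item / 60   # ~221.5 hours
    days = total_hours / hours_per_day                 # ~55.4 days
    weeks = days / 7                                   # ~7.9 weeks

    print(f"{items_each:,.0f} items each; {days:.1f} days; {weeks:.1f} weeks")

On those assumptions the figures above hold: about 2,215 items per evaluator and roughly eight weeks of solid assessment each.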

 

All things considered, a significant proportion may require psychological counselling comparable to that being afforded the 33 Chilean miners rescued from the San José mine.

 

On the other hand, perhaps the evaluators have been picked for their aptitude for multitasking.

 

Note: if the meaning of "a piece of research" is ambiguous and the apparent effort required is therefore incorrect, TFW will publish an update.

 

[As of April 9, 2011, a request to the ARC to confirm or deny the calculation remains unanswered.]