News & Views item - September 2011

 

 

 Brian Martin: ERA: Adverse Consequences. (September 17, 2011)

 

The following opinion piece is reprinted from the Australian Universities' Review, vol. 53, no. 2, 2011.

Brian Martin*

ERA: adverse consequences

Excellence in Research for Australia has a number of limitations: inputs are counted as outputs, time is wasted, disciplinary research is favoured and public engagement is discouraged. Most importantly, by focusing on measurement and emphasising competition, ERA may actually undermine the cooperation and intrinsic motivation that underpin research performance.

 

Excellence in Research for Australia (ERA) has the laudable aim of improving the quality of Australian research. Its approach is straightforward: measure the quality and quantity of research in different fields, attach the prospect of funding to good results, and thereby stimulate better performance. However, this approach has many adverse consequences.

 

Misleading journal rankings

 

In the first ERA round, assessments of the quality of research teams were based, in part, on the quality of articles published in journals, assumed to correlate with the journal rankings of A*, A, B or C. On 30 May 2011, Senator Kim Carr announced that these rankings would be dropped and replaced by ‘journal quality profiles.’ How ERA panels will use these profiles is not clear. In any case, it is worth reviewing shortcomings of journal rankings.

 

On the surface, it seems sensible to judge the quality of research by the journals it is published in. However, trouble arises in the steps between journal rankings and the quality of research.

 

The first step is to establish a ranking for each journal, with expert panels relying on input from people in relevant disciplines. Inevitably, subjective factors are involved. For example, panel members might be inclined to rank highly a journal in which they had published and not so favourably inclined towards an unfamiliar journal.

 

Then there is the assumption that each journal warrants a single, uniform ranking. Many journals are less than consistent in their treatment of submissions, owing to invited articles (sometimes published without refereeing), guest editors and special issues filled with papers from conferences.

 

The reputation of a journal often depends on its impact on the field, which in turn is due to a small number of articles that are widely known and cited. Other articles in the journal may be unexceptional. Another problem is that impact factors can be manipulated (Arnold and Fowler 2011).

 

Even if journal rankings were accurate, they would not translate into accurate quality ratings for individual articles, because journal standards only set minimums. An article’s quality does not go down just because it is submitted to a C-ranked or unranked journal rather than an A* journal. Judging quality by where an article appears is like judging a person’s wealth by their address: moving to a lower-status suburb doesn’t reduce one’s wealth.

 

Simon Cooper and Anna Poletti (2011) argue that ERA’s journal-ranking process actually undermined the production of high quality research, by discouraging collegiality and international networking and by not recognising the way academics access materials digitally.

 

Many academics saw journal rankings as the most objectionable feature of ERA. Although dropping the rankings may give the impression that the rest of ERA is acceptable, there are plenty of other problems, some of them just as serious.

 

Inputs counted as outputs

 

In Australia, grant successes seem to be treated as a measure of research success more than in most other countries (Allen 2010). Peer review of grant applications is one measure of quality, but grant monies themselves are inputs to research, not outputs. ERA continues the emphasis on grants.

An alternative would be to look at output/input ratios. Imagine a scholar who spends one-third of their time on research, time valued at $30,000. A scholar with the same time commitment plus a $30,000 grant commands twice the research inputs, and so should be expected to produce twice the outputs, or much higher quality outputs. But this is not how the game is played. Big grants are seen as more prestigious, even when there are no more outputs.
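To spell out the arithmetic behind that comparison (the dollar figures are the article's own illustrative ones; O1 and O2 simply denote the outputs of the scholar without and with the grant), equal output/input ratios would require

\[
\frac{O_1}{\$30{,}000} \;=\; \frac{O_2}{\$30{,}000 + \$30{,}000}
\quad\Longrightarrow\quad
O_2 = 2\,O_1 .
\]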

 

Time wasted

 

Preparing and assessing ERA submissions is time-intensive. It draws in many of each university’s most productive researchers, diverting them into ERA administration rather than their own work.

 

Disciplines dominant

 
ERA categories are built primarily around disciplines. Interdisciplinary researchers often publish in a range of journals, so their outputs are spread over several different research codes, weakening a university’s claim to have concentrations of excellent research. The result is that more narrowly specialised research is encouraged at the expense of cross-disciplinary innovation.

Many of today’s most pressing issues cut across traditional academic boundaries. By sending a signal that interdisciplinary research is less valued, ERA encourages a retreat from engaging with real-world problems.

 

Misleading narratives

 

ERA rewards the existence of groups of researchers in nominated fields. This provides an incentive to create, on paper, artificial groupings of researchers whose outputs collectively seem significant. Then, to fit ERA expectations, a narrative needs to be composed about how the research of these groupings fits together in a coherent package. Many of these narratives are largely fiction, especially in fields like the humanities where researchers seldom work in teams.

The narratives serve the interests of the Australian Research Council (ARC). Groups are expected to show high-quality outputs from ARC grants, so outputs are attributed to grant support even when they might have happened anyway. Researchers without grants are downgraded. The result is a self-fulfilling process: in essence, the ARC sets the expectations for ERA reporting, and that reporting then shows how wonderfully effective ARC funding is for research.

Many researchers give misleading pictures of their own research — on their CVs and grant applications — for example by claiming more credit for their work than deserved. ERA institutionalises incentives to create misleading narratives about research groups and concentrations. Creative research managers might be tempted to deceptively reclassify outputs, for example by dumping articles in lower-status journals into a ‘reject’ category in order to boost rankings in other categories.

 

Peers, not the public

 

Because the benchmark for research quality is what impresses other researchers, there is an incentive to be more inward-looking. By default, applied research and public engagement are discouraged (Brett 2011; Shergold 2011).

Public engagement — including writing articles for newspapers, blogs and other online forums — requires a different style from that of the usual academic journal article. Value is placed on accessibility and relevance. Jargon is to be avoided. Public engagement is a vital contribution to society, but it is given little or no credit in ERA.

Similarly, applied research useful to outside groups — government, industry or community — receives less kudos than research pitched to peers. ERA gives no formal attention to social impact, a criterion that would favour applied research.

 

Susceptibility to misuse

 

ERA is supposed to be used to measure the performance of institutions and research groups, not individuals. However, it did not take long before university managers began enforcing ERA-related measures on individual academics, for example by rewarding those who published in A and A* journals or brought in research grants. Academics are at risk of missing out on appointments or promotions, or even losing their jobs, if their performance falls short in ERA measures, no matter how outstanding they might be otherwise. The psychological effect on those whose outputs are deemed irrelevant to ERA performance can be severe. ERA may inspire better work by some but at the cost of demoralisation of many others.

University managers could be blamed for inappropriate use of ERA measures. On the other hand, the conception of ERA itself is part of the problem, because it is so susceptible to abuse.

 

Competition

 

The ERA system is competitive, with every university and research unit trying to do better than the others. However, no one has presented evidence that competition is the most effective way of boosting research quality and output. In his classic book No Contest, Alfie Kohn (1986) found that competition serves as the guiding philosophy in education and work despite a lack of evidence to support it.

Competition stimulates some undesirable behaviours. Universities, in their race for status, put considerable effort into bidding for top performers, yet this does not increase overall output in the system. In a highly competitive system, researchers are more likely to hide or disguise their ideas to prevent others from obtaining an advantage. Universities emphasise protecting intellectual property rather than contributing to the public domain, even though few universities make much money from intellectual property. Competition puts an enormous strain on researchers and can lead to excessive and damaging work practices, a type of self-exploitation (Redden 2008).

The alternative is cooperation, well known to be a stimulus for research in collaborations and research teams. Cooperation in producing software has generated some of the highest quality products in the world, such as the Linux operating system. Online tools now enable easy collaboration across continents. MIT has put its course materials on the web, leading a move towards sharing rather than hoarding intellectual outputs.

 

Measurement not improvement

 

The massive effort involved in ERA culminates in assessments of research excellence. That is all very well, but does measurement actually improve either the quality or quantity of research? There is no evidence that it does.

 

The effort and attention given to ERA might be better spent on programmes directly designed to improve research. Collectively, the Australian academic community has immense knowledge and experience concerning research. Sharing this knowledge and experience could be promoted through training and mentoring schemes.


Research suggests that the key attribute of successful researchers is persistence, not intelligence (Hermanowicz 2006). Stories of continued effort despite failure would provide motivation for junior researchers. However, senior researchers seldom tell the full story of their struggles — including rejections of their work — as this might detract from their lustre (Hall 2002). In a more cooperative, supportive research environment, such lessons would be easier to provide.

 

Most experienced researchers are driven by intrinsic motivation, including intellectual challenge, fascination in developing new understandings, and satisfaction in working on something worthwhile. Intrinsic motivation can be undermined by offering external sticks and carrots, which is exactly what ERA does. Too many rules and external incentives can be counterproductive. Barry Schwartz and Kenneth Sharpe in their book Practical Wisdom describe how this can happen in law, education and medicine. They say ‘Rules are set up to establish and maintain high standards of performance, and to allow the lessons learned by some to be shared by all. But if they are too strict or too detailed or too numerous, they can be immobilizing, counterproductive, and even destructive.’ (Schwartz and Sharpe 2010: 255).

 

Schwartz and Sharpe (2010) say that people need opportunities to exercise discretion, balancing rules against circumstances to help achieve the goals of the activity wisely. Arguably, one of the reasons for the vocal opposition to journal rankings was that they removed academics’ discretion over where best to publish their research. Although journal rankings have been dropped, the basic incentive system remains. It would be paradoxical if ERA’s apparatus for measuring output and providing incentives for particular types of output actually sabotaged the very thing it is supposed to improve.

 

What to do?

 

Some academics have accepted ERA as a fact of life and seek to comply with directives of university managers, for example to submit papers only to the most prestigious journals. Others, though, think ERA is so flawed that they must resist, either individually or collectively.

One option is to carry on with research as before, ignoring ERA imperatives, for example submitting papers to the most appropriate journals, whatever their academic status. This option is easiest for those who have opted out of the struggle for promotions and status through the research game, or who are senior enough that they no longer need to impress others.

Another option is to refuse to participate in ERA exercises, for example declining to lead panels, do peer assessments or contribute statements and publication lists to ERA panel leaders. These forms of individual resistance make a statement but have limited impact unless they become widespread.

A different sort of response is voicing dissent against ERA. This includes careful deconstructions showing its damaging effects and vocal complaints to anyone who will listen, including letters and articles in newspapers and blogs. Academics know a lot of people from different walks of life, which means that informal complaints to friends and critiques in professional forums will filter through to politicians and other decision-makers. As well as rigorous critiques, criticism of ERA can take the form of humour: creativity is needed to generate the most powerful forms of satire. (I wrote this paragraph before journal rankings were dropped from ERA, a change directly reflecting the power of complaint).

Another response is to set up alternative systems for promoting research and assessing performance, systems that address ERA’s shortcomings. This is a big challenge but definitely worth the effort. Critique is all very well, but critics need an answer to the question ‘If not ERA, then what?’

 

Conclusion

 

Some of ERA’s limitations are matters of design, for example counting grants as outputs rather than inputs. Others are matters of orientation, notably the emphasis on disciplinary research. Yet others are deeper: ERA assumes that competition and measurement are worthwhile, though both are questionable.

ERA is all about promoting research, but curiously enough there is little research available to justify the approaches adopted by ERA itself. It is not evidence-based; indeed, there seems to have been no systematic comparison with alternatives. Rather than the government imposing a competitive measurement scheme, a different approach would be to open up space for diverse proposals to improve research.

 

Acknowledgements

 

For useful comments and discussion, I thank John Braithwaite, Judith Brett, Don Eldridge, Anne-Wil Harzing, Tim Mazzarol, Anna Poletti, Guy Redden and others who prefer to remain unnamed.


*Brian Martin is Professor of Social Sciences at the University of Wollongong, Australia.

__________________________

 

References

 

Allen, J. (2010). Down under exceptionalism. University of Queensland Law Journal, 29(1), 143–154.

Arnold, D.N. & Fowler, K.K. (2011). Maths matters: nefarious numbers. Gazette of the Australian Mathematical Society, 38(1), 9–16.

Brett, J. (2011). Results below par in social sciences. The Australian, 9 February, 36.

Cooper, S. & Poletti, A. (2011). The new ERA of journal ranking. Australian Universities’ Review, 53(1), 57–65.

Hall, D.E. (2002). The Academic Self: An Owner’s Manual, Ohio State University Press, Columbus, OH.

Hermanowicz, J.C. (2006). What does it take to be successful? Science, Technology, & Human Values, 31, 135–152.

Kohn, A. (1986). No Contest: The Case Against Competition, Houghton Mifflin, Boston, MA.

Redden, G. (2008). From RAE to ERA: research evaluation at work in the corporate university. Australian Humanities Review, 45, 7–26.

Schwartz, B. & Sharpe, K. (2010). Practical Wisdom: The Right Way to Do the Right Thing, Riverhead, New York.

Shergold, P. (2011). Seen but not heard. Australian Literary Review, 4 May, 3–4.