Editorial - 01 October 2010

The Schlemiel & the Schlimazel

[Photo: Senator Kim Carr]

Traditionally, the schlemiel is the individual who spills the soup; the schlimazel is the one on whom he spills it, and in Australia the result can be a shemozzle, i.e. a state of utter confusion and chaos.

____________________________________________________________

 

US National Academies Finally Release Assessment of US Graduate Research Programs

A Data-Based Assessment of Research-Doctorate Programs in the United States, prepared by the US National Research Council, was released earlier this week. It is the product of a six-year study of data collected for the 2005-06 US academic year. The report consists of a descriptive volume and a comprehensive Excel data table giving characteristics and ranges of rankings for over 5,000 programs in 62 fields at 212 institutions.

What is immediately striking is the avoidance of simplistic ranking of programs. For example: "The degree of uncertainty in the rankings is quantified in part by calculating the S- and R-rankings of each program 500 times. The resulting 500 rankings [for a given program] were numerically ordered and the lowest and highest five percent were excluded. Thus, the 5th and 95th percentile rankings -- in other words, the 25th highest ranking and the 475th highest ranking in the list of 500 -- define each program's range of rankings, as shown in the Excel spreadsheet."
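
Expressed as code, that trimming procedure might look like the minimal Python sketch below. The function name, the simulated rankings and the assumed field of 120 ranked programs are illustrative assumptions, not the NRC's implementation.

```python
import numpy as np

def ranking_range(rankings, trim=0.05):
    """Return the 5th-95th percentile range of one program's rankings.

    `rankings` holds the rankings a program received across repeated
    calculations (500 in the NRC study). The lowest and highest five
    percent are excluded, so with 500 values the range runs from the
    25th to the 475th value in numerical order.
    """
    ordered = np.sort(np.asarray(rankings))
    n = len(ordered)
    lo = int(n * trim) - 1        # index 24 -> the 25th value when n == 500
    hi = int(n * (1 - trim)) - 1  # index 474 -> the 475th value when n == 500
    return ordered[lo], ordered[hi]

# Illustrative use: 500 simulated rankings for one hypothetical program
# in a field of, say, 120 ranked programs.
rng = np.random.default_rng(0)
simulated = rng.integers(1, 121, size=500)
print(ranking_range(simulated))   # -> (lower bound, upper bound)
```

The width of that interval, rather than any single rank, is what the report presents as a program's standing.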

The two ranking systems are in turn defined as follows (both are illustrated in the code sketch after the definitions):

The S (or survey-based) rankings reflect the degree to which a program is strong in the characteristics that faculty in the field rated as most important to the overall quality of a program.

 

The R (or regression-based) rankings are based on an indirect approach to determining what faculty value in a program. In further explanation: "First, a sample group of faculty were asked to rate a sample of programs in their fields. Then, a statistical analysis was used to calculate how the 20 program characteristics would need to be weighted in order to reproduce most closely the sample ratings. In other words, the analysis attempted to understand how much importance faculty implicitly attached to various program characteristics when they rated the sample of programs. Weights were assigned to each characteristic accordingly -- again, these varied by field -- and the weights were then applied to the data on these characteristics for each program, resulting in a second range of rankings." See the chart of two examples below.
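
To make the contrast concrete, the sketch below scores a set of hypothetical programs both ways: the S-rankings apply weights that faculty stated directly, while the R-rankings recover implicit weights by regressing faculty ratings of a sample of programs on the 20 characteristics. Everything here (the random data, the sample of 40 rated programs, the ordinary least-squares fit) is an assumption for illustration; the NRC's actual procedure varies the weights by field and repeats the calculation to produce ranges of rankings.

```python
import numpy as np

rng = np.random.default_rng(1)
n_programs, n_traits = 120, 20               # 20 characteristics per program
X = rng.normal(size=(n_programs, n_traits))  # standardized characteristic data

# S-rankings: weights come straight from faculty importance ratings.
survey_weights = rng.random(n_traits)
survey_weights /= survey_weights.sum()       # normalize the stated importances
s_scores = X @ survey_weights
s_rank = s_scores.argsort()[::-1].argsort() + 1   # rank 1 = highest score

# R-rankings: weights are inferred, not stated. Fit the weights that best
# reproduce faculty ratings of a sample of programs, then apply them to all.
sample = rng.choice(n_programs, size=40, replace=False)
sample_ratings = rng.random(40) * 5          # stand-in faculty ratings, 0-5
r_weights, *_ = np.linalg.lstsq(X[sample], sample_ratings, rcond=None)
r_scores = X @ r_weights
r_rank = r_scores.argsort()[::-1].argsort() + 1

print("program 0:", "S-rank", s_rank[0], "R-rank", r_rank[0])
```

Because the stated and the inferred weightings generally disagree, a program's S- and R-ranks can differ substantially, and that spread is exactly what the chart discussed below makes visible.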

 

As Science's Jeffrey Mervis sums it up: "[T]he committee chose 20 characteristics—including research activity, student support and outcomes, and diversity—to measure the quality of any graduate program, and then conducted two separate faculty surveys to figure out what weight to give each characteristic."

The chart below shows the range of rankings within and between the two systems for just two of the 62 fields analysed and, if nothing else, it is a damning indication of the inadequacy of the naive but costly methodology being followed to determine Australian governmental block research funding through Senator Carr's negatively-geared Excellence in Research for Australia (ERA).

[Chart: ranges of S- and R-rankings for programs in two of the 62 fields. GRE = Graduate Record Examination.]

 

From the viewpoint of improving research quality, the much preferred option would be a significant reduction of block research funding to university administrations and its redirection through improved peer review by the nation's research councils, free from governmental micromanagement and including appropriate oncosts. Of course, that would lessen not only the ministerial grip on academic research but also that of university administrators.

 

Yet the present government, like its predecessor, wastes millions of dollars and thousands upon thousands of hours of academics' time chasing an approach which is ultimately detrimental to the nation's well-being.

Alex Reisner

The Funneled Web