News & Views item - March 2011

 

Are Studies Assessing Impact of Undergraduate Science Teaching Methods Poorly Designed? (March 14, 2011)

A paper in the March 11, 2011 issue of Science* calls into question the quality of studies that attempt to assess the efficacy of innovative methods for teaching science subjects to university undergraduates.

 

As the short summary of Ruiz-Primo et al.'s paper puts it: "Despite revealing some positive impacts, studies too often suffer from weak design and inadequate reporting."

 

After examining 868 articles on course innovations, the authors arrived at a final pool of 98, 26, 38, and 148 usable studies in biology, chemistry, engineering, and physics, respectively.

 

The authors come to the following conclusions:

 

This evidence suggests that undergraduate course innovations in biology, chemistry, engineering, and physics have positive effects on student learning.

 

But

 

[a]lmost half of the comparative studies collected for review had to be excluded because they lacked the simple descriptive statistics needed to compute an effect-size estimate.

 

[I]t is difficult to rule out plausible threats to the internal validity of most studies, because there are few examples in which students were randomly assigned to treatment and control conditions.

 

[A] substantial number of studies fail to administer pretests, making it impossible to rule out preexisting differences in achievement between groups...

 

[There was] a lack of attention to technical characteristics of the instruments used to measure learning outcomes. For example, 71 of the physics studies included (92%) did not provide information about the validity and reliability of the instruments used.

 

And they make the following recommendations:

 

First, all studies need to include descriptive statistics (sample sizes, means, standard deviations) for all treatment and control groups on all testing occasions.
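The descriptive statistics called for here (sample sizes, means, standard deviations for each group) are exactly what is needed to compute a standardized effect size. A minimal sketch of one common such measure, Cohen's d with a pooled standard deviation (the paper does not specify which effect-size formula was used, and the numbers below are purely illustrative):

```python
import math

def cohens_d(n1, mean1, sd1, n2, mean2, sd2):
    """Standardized mean difference between two groups,
    using the pooled standard deviation as the denominator."""
    pooled_sd = math.sqrt(
        ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    )
    return (mean1 - mean2) / pooled_sd

# Hypothetical post-test scores: treatment vs. control
d = cohens_d(n1=40, mean1=78.0, sd1=10.0,
             n2=42, mean2=72.0, sd2=11.0)
print(round(d, 2))  # prints 0.57
```

Without the six inputs above for each group and testing occasion, a reviewer cannot recompute or aggregate an effect size, which is why nearly half of the comparative studies had to be excluded.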

Second, whenever possible, researchers should attempt to randomly assign students to treatment and control conditions.

When this is not possible, efforts should be made to demonstrate that the groups are comparable before the treatment with respect to relevant variables (e.g., prior academic achievement).

Finally, researchers should be attentive to the quality of their outcome measures; if measures are not valid and reliable, subsequent interpretations can become equivocal.

 

As a final caveat: Although the poor quality of some research in this field, and the specific shortcomings that commonly undermine studies, have been discussed before, journals continue publishing these types of papers. We are hopeful that our new analyses provide simpler and more straightforward emphasis on these critical issues. Experts in experimental research and methodology in education, and experts in educational assessment, can contribute a great deal to improving research on instructional innovations in science.

___________________________________________________

 

*"Impact of Undergraduate Science Course Innovations on Learning," Maria Araceli Ruiz-Primo et al., Science, Vol. 331, No. 6022, pp. 1269-1270. DOI: 10.1126/science.1198976