Statistics and Ed Journals
Ed Rigdon responds to the anonymous posting about the role of statistics in pedagogical journals
Should statistical analysis be required in pedagogical journals for articles describing learning exercises? That is actually a pretty big question. You can argue that many research articles in general are not helped by the statistical analysis they contain–the statistics are aimed more at getting past the publication hurdle than at advancing knowledge or resolving a substantive question. Psychologists such as Paul Meehl and Gerd Gigerenzer have argued that not only novices but even regular users of statistical methods don’t understand what they are doing and often misinterpret their own results. It has also been argued lately that journal editors engage in certain editorial practices for the sole purpose of improving the standing of their journals. (That’s understandable, too, as administrators continually raise the bar in faculty evaluations.) In my personal opinion, the review process includes a degree of mere mimicry, where participants ape the behavior they observe elsewhere, with little or no regard for the genuine value of those practices.
That said–and that’s a lot–at this time we could also make an argument for *more* quantitative analysis specifically in our teaching materials. Program accreditors such as the AACSB have been increasing their emphasis on demonstrated assessment of learning outcomes. More and more faculty time must be devoted to demonstrating the effectiveness of pedagogical programs. Today, suppliers of teaching materials who can facilitate this process by validating achievement of learning outcomes can earn a major advantage over competitors. Tomorrow, providing that kind of evidence may be just the cost of doing business.
At the same time, budgets are tight and getting tighter. Class time is precious and scarce–instructors can’t afford to waste any of it. Everything that faculty squeeze into the allotted time must earn its place through demonstrated impact on achievement of learning outcomes. Of course, faculty experiment. They try new things, sometimes based on evidence and sometimes based on judgment, recognizing that some innovations will fail. But it seems wrong to me to suggest that providers of materials–whether writing in journals or writing in textbooks–should be excused from providing evidence of effectiveness.
So we may actually need to set higher standards for all teaching materials, including textbooks and supplements as well as pedagogical journal articles, to affirmatively demonstrate their ability to improve achievement of specific learning outcomes. Providing such evidence does not necessarily mean using any particular methodology, such as the null hypothesis testing typically practiced. The surviving descendant of the work of Jerzy Neyman and Egon Pearson, on the one hand, and Sir Ronald Fisher, on the other, has enough weaknesses to give anyone pause. But I would have to agree with an editor of a pedagogical journal who insisted that a description of material meant for the classroom is incomplete without evidence regarding that material’s effectiveness.