Student Evaluations

Introduction

An essay by Herbert Jack Rotfeld



Starting this year, Auburn is going to use an online method for Student Evaluations of Teaching (SET), and while I’m a bit unclear on what this means in practice, student participation is to be "mandatory" in the sense that students will need to participate to receive grades in their classes.

The College of Business associate dean sent us the following notice:


    …. the College of Business will use the . . . [online SET questions] developed by the University’s Teaching Effectiveness Committee. Later this Fall, the College’s Instruction Committee will consider whether the College should continue to use these ten questions or should develop an alternate set of questions. Any changes to the questions will be implemented in Spring Semester.

As a reply, I sent the following note to him:

    The question many faculty are asking is how the system will weigh or assess responses from students who were not present for classes. In some courses, a number of students receive grades of W, FA, or F from simple non-attendance, and yet they are still listed on rosters for purposes of being able to access this system. Can this be tracked or evaluated in looking at the final data?

    Another, larger question is that while this data collection is mandated and required, other forms of teaching effectiveness data are not. So, at least in our house, there is a need to see what other forms of data on teaching could be collected on a regular basis. When this passed the Senate in the Summer, the faculty discussions expressed ongoing concerns for how the data would be used or, rather (to slip into paranoia mode), abused. The final Senate resolution noted how the data should not be used to rank-order faculty. It would be interesting and helpful if your office made an important, but heretofore unattempted, effort to assess how department chairs and faculty actually use the data.

We’ve never conducted our own tests of construct validity anywhere on campus.

The issue of the use, non-use, or misuse of data is really why this becomes a hot-button issue for so many faculty. The debate on this last Summer at our University Senate was a case in point: when someone in the Senate or other forums says the SET scores are reliable and valid, he or she never explains how or in what ways. More important, it is unclear whether they mean the colloquial English usage of the words or the specific applications in social science, since there is a difference. Is it test-retest reliability, alternative-forms reliability, or internal consistency reliability? On validity, do they mean content validation (i.e., face validity), criterion validation, construct validity (be it convergent validity, discriminant validity, or nomological validity), internal validity, or external validity as a measurement tool? And anyone who isn’t familiar with the terms of this paragraph in a social science research sense should not be asserting before the Senate that the measurements by the SET are "reliable and valid," though the meeting transcript lists professors of history, philosophy, and religion doing just that.
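To make just one of those terms concrete: internal consistency reliability is conventionally reported as Cronbach’s alpha, and anyone defending the instrument could compute it from the raw responses. Here is a minimal Python sketch; the ratings below are invented for illustration and have nothing to do with any actual Auburn instrument or data.

    import numpy as np

    def cronbach_alpha(responses):
        # responses: one row per student, one column per SET item (1-5 ratings)
        x = np.asarray(responses, dtype=float)
        k = x.shape[1]                          # number of items
        item_vars = x.var(axis=0, ddof=1)       # variance of each item
        total_var = x.sum(axis=1).var(ddof=1)   # variance of summed scores
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Invented ratings from six students on a four-item form.
    ratings = [[4, 5, 4, 4], [3, 3, 2, 3], [5, 5, 5, 4],
               [2, 3, 2, 2], [4, 4, 5, 4], [3, 2, 3, 3]]
    print(f"Cronbach's alpha: {cronbach_alpha(ratings):.2f}")

Note that even a high alpha only says the items move together; it says nothing about whether they measure teaching effectiveness, which is the construct validity question.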

More importantly, I have never seen a faculty report that did any tests of reliability or validity of the specific instruments we use. I hear of many items included simply because students want to express concerns about them. I know of administrators and P&T committee members who, in the past, looked at a single summed score on the SET, with whatever instrument was in use, as if it summarized all the ineffable aspects of teaching effectiveness.

But here is a basic question that I want answered before I could accept any statement of the SET as a meaningful tool: is there evidence that SET scores go down for ALL faculty who never ask students to do any writing outside group projects, who only use multiple-choice tests even in classes with 30 or fewer students, or who simply do not seem to impose any strong academic learning standards for students to receive high grades? Unless scores go down for everyone who does not deliver a challenging, thinking class, I’d have difficulty ever agreeing that the SET scores possess any meaningful validation.
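As a sketch of what that evidence would even look like: suppose sections could be classified as "demanding" or "undemanding" by the criteria above (the classification and every number below are entirely hypothetical). The validity claim then predicts systematically lower scores in the undemanding group, an ordinary two-sample comparison in Python:

    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(42)

    # Entirely hypothetical section-mean SET scores on a 1-5 scale,
    # for sections classified (somehow) as demanding vs. undemanding.
    demanding = np.clip(rng.normal(3.9, 0.5, 50), 1, 5)
    undemanding = np.clip(rng.normal(4.1, 0.5, 50), 1, 5)

    # Welch's t-test; the validity claim predicts undemanding < demanding.
    t_stat, p_value = ttest_ind(undemanding, demanding, equal_var=False)
    print(f"mean demanding:   {demanding.mean():.2f}")
    print(f"mean undemanding: {undemanding.mean():.2f}")
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

If the undemanding sections do not score reliably lower, then by the argument above the SET lacks exactly the kind of criterion evidence its defenders keep asserting.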

Herb

Herbert Jack Rotfeld
Professor of Marketing, Auburn University
(soon to be ex-) Editor, Journal of Consumer Affairs
President, American Academy of Advertising
rotfeld@auburn.edu
http://www.auburn.edu/~rotfehj