Student Evaluations of Teaching

Introduction

Construct Validity of SET Questions: An Essay by Herbert Jack Rotfeld


This past fall, our administration started an online survey for student evaluations of teaching (SET). Afterward, some faculty discussed what questions to use, but one person asked whether "the deciders" were considering psychometric issues for the questions they retained: had validity and reliability been established for the 10 questions in the survey?

That is the really important question, isn't it? It is not a matter of whether anyone personally found the quantitative or open-ended responses on your campus questionnaires to provide useful information for improving a class, but rather what others who read the SET could conclude about a teacher's quality.

Interestingly enough, you sometimes find an item that can illustrate just how well these sorts of questions work. On our campus, the new forms had one such item, "The instructor provided timely feedback on graded materials," and we have an instructor whose grading practices do not allow for any range of personal interpretation.

One person had 71 total undergraduate students for the fall. When students handed in their essay tests, they went to a table at the front of the room where they were required to read a typed answer key for the test they had just completed. All graded tests were then returned before the very next class. There is no question of "timely" feedback here, since faster feedback is impossible. Yet, of the 44 students answering this item on the survey, fewer than half (21) marked "strongly agree," while 19 marked "agree" and four marked "slightly agree."

Since the fastest possible feedback was not rated in the survey as "timely," students' responses were obviously unrelated to the substance of what was asked. While this does not specifically address the validity of other responses on the SET questionnaire, or its validity for other faculty in general, it should raise doubts as to whether the surveys, on any question or for anyone, measured what they claimed to measure.

This is a common test of false positives used in survey research. In one form of old television ratings, respondents were given a list of shows that supposedly aired the night before, and the list also included some shows that were not on. These distractors assess the noise in the ratings, the degree to which answers are given because they were available options rather than because the shows were actually recalled. In the case above from our SET, the responses show that student scores were not based on the literal question asked, which should raise doubts about the validity of answers on other specific items in the questionnaire.

Before these online questionnaires were implemented, several Auburn faculty members expressed concerns about whether "enough" students would answer the survey. Others questioned whether all of the students who responded had attended enough classes to provide meaningful information (e.g., in some business courses, more than 10 percent of students might have failing grades caused by too many missed classes). In both cases, the faculty concern is that some administrators or P&T committees would base future employment decisions on the average scores for the two SET items asking about "overall teaching effectiveness" or the students' rating of their "learning" in the class. This example calls into question the validity of using these surveys as a strong element in evaluating the quality of instruction.

Several years ago, I ran a forum for students on student evaluations of teaching, a focus group discussion with about thirty seniors from a variety of majors. Toward the end, I asked one direct question: have you, or anyone you know, ever given a lower score on teaching quality to an instructor whose graded course work requirements were mindless, minimal, or busy work? The answer was consistently no.

In the end, you have to recognize that student surveys can provide information for teachers, but they are of limited utility as a guide for evaluating teaching quality.

Herb

Herbert Jack Rotfeld

Professor of Marketing, Auburn University
President-elect, AU chapter of AAUP
Former editor, Journal of Consumer Affairs (2002-2011)
Past President, American Academy of Advertising

 

