Discontinuing the inappropriate use of student questionnaires on courses and teaching - Vol. 7, No. 4

In the 10 years since Amanda Moehring came to Western as a biology professor, she has learned to take negative student feedback in stride. She expects and welcomes constructive commentary, but every now and then she receives comments about aspects of her personality that are irrelevant to her teaching effectiveness.

“I think if a tool is flawed and has bias, it is the employer’s responsibility to stop using that tool and to come up with a tool that does not have those flaws and that bias, if they want to evaluate their employees,” said Moehring, a Canada Research Chair and member of UWOFA’s Ad Hoc Working Group on Student Questionnaires on Courses and Teaching.

Discontinuing the use of student questionnaires on courses and teaching for personnel decisions such as promotion, tenure, annual review and hiring is one of UWOFA’s goals during this round of faculty collective bargaining.

When used as a means of providing formative feedback to teachers, student questionnaires can help teachers make improvements to their course and their teaching approach. They can also affirm that particular classroom strategies were successful and be a source of motivation for teachers to continue in successful approaches. There is little debate in the literature as to the benefit of student questionnaires for such feedback.

However, the use of student questionnaires for career-shaping decisions raises serious concerns about how faculty teaching is evaluated. Indeed, arbitrator William Kaplan recently ruled that student questionnaires cannot be used to measure teaching effectiveness in promotion and tenure decisions at Ryerson University. Kaplan ruled in favour of the Ryerson University Faculty Association and ordered that its faculty collective agreement be amended to reflect the decision, which addressed other measures as well. Moreover, the literature on faculty evaluation shows that student questionnaires are prone to bias. A 2015 study found implicit bias among students in an online course who were asked to rate a male and a female instructor teaching different sections of the course (MacNell, Driscoll, and Hunt 2015). Each instructor taught under two gender identities, one male and one female. Students gave higher ratings to the male identity, regardless of the instructor’s actual gender.

“This study demonstrates that gender bias is an important deficiency of student ratings of teaching,” the authors note. “Therefore, the continued use of student ratings of teaching as a primary means of assessing the quality of an instructor’s teaching systematically disadvantages women in academia.”

Both male and female students exhibit that bias, Moehring noted, “and indeed we know that implicit bias is not something within men that’s harboured against women – it’s something that we all have the potential to carry.” Any single evaluation may carry only a subtle bias, she added, but that subtle bias at every step along the way can compound over the course of a career. This is particularly troubling for contract faculty members, whose work focuses primarily on teaching.

The literature on faculty evaluation also clearly shows that no single data source can support a reasonable assessment of an individual’s teaching. Other data sources, such as self-reports or ratings by colleagues, are better able to assess aspects of teaching such as course design, delivery methods, appropriateness of course materials, and grading standards. Without these additional sources, student questionnaires cannot provide a reasonable and accurate reflection of teacher effectiveness and student learning. A 2016 meta-analysis of student questionnaires unequivocally found that “students do not learn more from professors with higher student evaluation of teaching ratings” (Uttl, White, and Wong Gonzalez 2016). Those researchers suggest that universities whose primary focus is student learning should give minimal weight to student questionnaire ratings, but that universities may want to emphasize such ratings if their focus is on student perceptions or satisfaction.

For Moehring, a professor’s investment in teaching should not be measured with a flawed or biased instrument, because doing so undermines a core commitment of most professors.

“I care deeply about the teaching that I do,” Moehring said. “Like most professors, I invest significant time and effort into my courses in order to make complex material clear and engaging.”

Works Cited

MacNell, Lillian, Adam Driscoll, and Andrea N. Hunt. 2015. “What’s in a Name: Exposing Gender Bias in Student Ratings of Teaching.” Innovative Higher Education. Accessed June 12. https://link.springer.com/article/10.1007/s10755-014-9313-4.

Uttl, Bob, Carmela A. White, and Daniela Wong Gonzalez. 2016. “Meta-Analysis of Faculty’s Teaching Effectiveness: Student Evaluation of Teaching Ratings and Student Learning Are Not Related.” Studies in Educational Evaluation. Accessed September 21, 2016. doi:10.1016/j.stueduc.2016.08.007.