At the end of every semester, students fill out a course evaluation form called the USRI, or the Universal Student Ratings of Instruction. The USRI is designed to be objective, but new research suggests that students evaluate female professors more harshly than their male colleagues and demonstrate racial bias. This has prompted General Faculties Council to review the student evaluation system this academic year.
The review will culminate in a final report on USRIs, to be released on April 30, 2017, which will recommend that General Faculties Council keep, change, or abandon the current rating system. Sarah Forgie, chair of the Committee on the Learning Environment (CLE), said in an email that her committee will research instruction rating mechanisms in university courses, review the U of A’s current evaluation tools, and look into multifaceted assessment methods.
A statistical study published by the research network ScienceOpen last January found that teaching evaluations are better at revealing bias against women than at measuring teaching quality. The study, by the Paris Institute of Political Studies and the University of California, Berkeley, prompted English professor and former GFC councillor Carolyn Sale to propose the review.
If the USRI is discriminatory, it could be detrimental to some professors’ careers, because the teaching quality reported by the surveys affects hiring, promotion, and tenure decisions. Diversity in academic labour has improved since the initiation of an employment equity plan 22 years ago, but diversity gaps persist in 2016, according to a study by the Academic Women’s Association. Male full professors currently outnumber female professors of all ranks at the U of A.
“The academy is more diverse, racially and ethnically, than it was 20 years ago,” Sale, now the president of the Association of the Academic Staff of the University of Alberta (AASUA), said. “If a certain kind of person tends to have an advantage in faculty assessments, we need to take account of that.”
Results of the U of A’s USRI have never been analyzed for sexist, racial, or linguistic bias. The original, peer-reviewed USRI system implemented in the ’90s was designed to be bias-proof, said Heather Kanuka, former CLE chair. However, some USRI questions have since been added or changed from the original, and those changes were not peer-reviewed.
Kanuka added that research on student evaluations can be difficult to interpret: some studies suggest evaluations do more harm than good, while others find the opposite.
“There is no other area in higher education that has been more researched than student evaluations,” she said. “You will find any kind of data that supports your individual view.”
In 2009, Kanuka led a CLE review of teaching evaluations that concluded systems similar to USRIs are not significantly biased.
Student evaluations of teaching were introduced at many post-secondary institutions following the student revolts of the ’60s. By the ’70s, various universities had introduced the evaluations to hold professors accountable for their teaching. During those years, the U of A’s faculties and departments administered their own teaching evaluations, and the results did not have to be made public. When the U of A implemented the USRI across all faculties in 1994, the Students’ Union advocated for the ratings to be made available to students to help them select courses. Students were given access to an online database of USRI results in 1999, which is still updated after every semester. However, many students aren’t aware of the database and use ratemyprofessors.com instead.
SU Vice-President (Academic) Marina Banister will take part in the USRI review as a member of the CLE, representing students. She said the SU will be looking to make sure the feedback professors receive from USRIs is “useful, usable, and consistent.” She’ll also push for class time to be given to students to fill out evaluations, and for better communication around the importance of USRIs and their role in professors’ careers.
“We need to make sure we’re asking the right questions and (that) the data’s being collected in an appropriate manner,” Banister said.
USRIs aren’t the only tool used to evaluate teaching. In the Faculty of Science, the USRI score is a “number that starts a conversation,” according to Vice-Dean John Beamish. A low USRI score can prompt a department chair to suggest ways the professor might improve. The Faculty of Science uses eight other metrics besides the USRI to evaluate teaching, including peer assessment and assessment of graduate student supervision. A similar multifaceted approach that includes the USRI is also used in Arts departments, Dean Lesley Cormack said.
Some academic staff are skeptical of the USRI system. In a 2012 survey, 402 academic staff respondents raised concerns about flaws in the statistical interpretation of USRIs and about the abusive nature of some comments from students. Staff were also concerned that the USRI measures only student experience, not learning.
The 2012 survey led the academic staff association to argue that USRI comments should be confidential rather than anonymous, to hold students accountable for derogatory comments, and that opportunities for training, peer consulting, mentoring, and professional feedback should be offered alongside USRIs to improve teaching. History professor Andrew Gow said he has been skeptical of USRIs since his time on AASUA’s teaching and learning committee from 2003 to 2010.
“The (USRIs) reproduce the prejudices of class, gender, ethnicity, and racial prejudices of the student body,” Gow said. “They do so in a way that’s unconscious. The bias is hidden because the evaluations are almost universally believed to be objective.”
For now, it’s unclear whether the U of A’s USRI is biased, Kanuka said. While some research indicates sexist biases, Kanuka said large-scale, peer-reviewed studies find no significant gender differences in student evaluations. And while ideal teaching evaluations would involve experts sitting in on classes and student focus groups, Kanuka said large institutions lack the resources to provide such in-depth evaluations for all professors. The U of A has more than 4,000 academics on staff, and each one needs to be evaluated in some way, which can be accomplished with the USRI despite its limitations.
“We simply don’t have all of those resources, so we have to rely on an instrument,” Kanuka said. “Is it the best instrument? That would be one thing we don’t know in this institution … if we have a reliable and valid instrument.”
Alternatives to relying on the USRI for every class do exist: more intensive reviews, such as audits by experts and student focus groups, could be reserved for a smaller number of classes chosen randomly or as needed. Audit-style reviews could also be used for classes or instructors the USRI flags, with the USRI serving as a starting point rather than the final evaluation in itself.