Making Student Ratings More Useful: Guidelines for Students and Instructors

As we approach the end of the term, students will be asked to provide feedback to instructors through U-M's course evaluation system. At CRLT, we often hear from faculty and GSIs who are discouraged about several issues related to student ratings, including the tone of some written comments, relatively low response rates, and uncertainty about how to use the results productively. This post offers resources for each of these concerns.

Student Ratings Questionnaire Example

1) Minimizing Unhelpful Comments: Student ratings comments can be unhelpful when vague or irrelevant, whether positive ("Great course!") or negative (e.g., criticism of instructor attributes not linked to the learning environment). To encourage students to avoid rude or personally hurtful comments, CRLT worked with ADVANCE at U-M on a handout that instructors can give to students before they fill out their evaluations. The handout, Course Evaluations: Providing Helpful Feedback to Your Instructors, asks students to keep three key issues in mind when completing their ratings:

  • Comments are intended to provide instructors with feedback to inform future iterations of their courses.
  • Specific constructive feedback is more useful than vague critique or praise (see examples provided in the handout).
  • Comments not related to student learning (especially insults or remarks about an instructor’s appearance) are not helpful and actually diminish the value of the feedback students provide.

Recent research finds that coaching from peers or near-peers can also help students write more useful feedback. Resources from the University of California, Merced, include short videos and a rubric you can share with your class to help students craft more effective open-ended feedback statements.

2) Raising Low Response Rates: The Registrar’s Office administers the U-M ratings system and suggests a number of strategies for increasing response rates. For example, you can let students know how you have incorporated past feedback into your courses, or ask students to bring laptops or other mobile devices to class and set aside time (usually 10-15 minutes at the start of class) for them to complete their ratings. The new system is quite user-friendly on mobile devices, making this a particularly viable approach.

3) Interpreting the Results of Ratings: CRLT’s webpage on Student Ratings of Instruction includes a section on using ratings data that offers several resources for instructors who wish to get the most from their students' feedback. These include several helpful webpages:

  • Lehigh University’s Center for Innovation in Teaching and Learning offers eight suggestions for interpreting ratings results, including looking for general trends rather than dwelling on outliers, focusing on the elements of the evaluation that matter most, and choosing one or two areas for improvement.
  • IUPUI’s teaching center offers guidance for reviewing the numeric scores and the written comments and then thinking through how best to proceed.
  • This edition of Essays on Teaching Excellence outlines a process for sorting written comments by overall rating and then analyzing the results based on patterns identified in the comments.

Finally, a number of studies indicate that ratings are most likely to have a positive impact on teaching when instructors discuss results with a third party, such as a colleague or a member of the teaching center. To set up an appointment with a CRLT consultant to discuss results of your student ratings, call us or complete our consultation request form.
