Grading Exams

We'll now turn our focus to grading exams and discuss strategies for ensuring that grading is efficient, fair, and consistent, and that it provides useful information for students and instructors about student learning. Please share your successful strategies in the Comments section.

Grading Efficiently

Some question types, such as multiple choice and matching, can be graded very efficiently, either by hand or using technologies like auto-graded online quizzes or Scantron forms. Open-ended question types, such as essays, take considerably more time and effort to grade. Preparing and validating rubrics can make grading open-ended questions more manageable.

  • Prepare a rubric as you write the exam. The rubric should outline what constitutes a complete and correct answer (for one way to lay out such criteria, see the sketch after this list). By developing these criteria before giving the exam, you give yourself an opportunity to revise the question so that it better communicates to students what you expect. You can find more about developing effective rubrics in CRLT's Occasional Paper on writing and grading exams.
  • Make sure the rubric explains what role writing mechanics and other factors play in scoring. In a course that isn't about writing, do organization, clarity, grammar, punctuation, or neatness of figures or drawings matter, independent of the course content being covered in the question? Will students lose points for including extra information that wasn't asked for? 
  • Refine the rubric by using it to provisionally score a subset of completed exams, perhaps 5-10. It's possible that students will answer the question in unexpected ways. Revising the rubric to take into account any unexpected responses will help you to score all exams more efficiently. 
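
If you keep your rubrics in a spreadsheet or script your grading, it can help to think of a rubric as a fixed list of criteria and point values set before the exam is given. The sketch below is a minimal illustration in Python; the criteria, point values, and sample response are invented for this example, not drawn from CRLT's paper.

```python
# Hypothetical rubric for one open-ended question: (criterion, maximum points),
# decided before the exam is administered.
rubric = [
    ("States the correct conclusion", 2),
    ("Supports the conclusion with evidence from the course", 4),
    ("Explains the reasoning connecting evidence to conclusion", 3),
    ("Organization and clarity of the written answer", 1),
]

def score_response(points_awarded):
    """Sum the points awarded per criterion, capped at each criterion's maximum."""
    return sum(min(awarded, maximum)
               for (_, maximum), awarded in zip(rubric, points_awarded))

# Example: a response with full credit for the conclusion, evidence, and
# organization criteria, but no credit for the reasoning.
print(score_response([2, 4, 0, 1]), "out of", sum(m for _, m in rubric))  # 7 out of 10
```

The structure itself is the point: because the criteria and maximum points are fixed in advance, every grader scores against the same standard, and the weighting (for example, the single point reserved here for organization and clarity) is explicit rather than implied.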

Grading Fairly and Consistently

Grading consistently can be a challenge even for an individual grader working across many exams. Your reaction to each response is easily influenced by prior student responses and by how long you have been grading. When multiple people are grading a single exam, there are additional challenges to ensuring fairness and consistency. Rubrics can help with this, but there are some additional strategies to consider:
  • If possible, score student responses anonymously to avoid any unconscious bias for or against individual students.
  • Score all answers to one question before moving on to the next question (rather than scoring an entire exam at once). It is easier to keep a question's criteria in mind when you grade every response to it in one pass, which improves the consistency of your grading.
  • Shuffle the exams between questions. Grading fatigue can influence scoring, so it's best to avoid any one student's exam being always first or last in the pile. 
  • When multiple people are grading the same question or exam, hold a norming session in which all graders score a subset of responses and compare notes (see the sketch after this list). If graders differ in how they score certain responses, discuss the reasons for the differences and clarify those points on the rubric.
  • Keep communication open among graders so that any questions about how to score particular responses can be resolved collaboratively and applied consistently by everyone.
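
If your graders record provisional scores in a shared spreadsheet or script, a norming pass can be as simple as flagging the responses where scores diverge. The sketch below is a minimal illustration in Python; the grader data, response IDs, and two-point disagreement threshold are all made up for this example.

```python
# Provisional scores (out of 10) that two graders assigned to the same
# subset of responses during a norming session; all values are invented.
grader_a = {"resp01": 8, "resp02": 5, "resp03": 9, "resp04": 4}
grader_b = {"resp01": 7, "resp02": 8, "resp03": 9, "resp04": 2}

DISAGREEMENT_THRESHOLD = 2  # assumed: gaps of 2+ points get discussed

for resp_id in sorted(grader_a):
    gap = abs(grader_a[resp_id] - grader_b[resp_id])
    if gap >= DISAGREEMENT_THRESHOLD:
        print(f"{resp_id}: scores differ by {gap} points; discuss and clarify the rubric")
```

Responses the script flags become the agenda for the norming discussion, and whatever the graders agree on gets written back into the rubric so later scoring stays consistent.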

Providing Useful Information about Student Learning 

What would it mean if every student received an A on your next exam? As described in CRLT's Occasional Paper on writing and grading exams, in a criterion-referenced grading system it would indicate that all students had mastered the predetermined set of skills and knowledge the exam covered. In contrast, a norm-referenced grading system rates each student's performance against that of the class as a whole, so a result of all A's is unlikely to arise. One common example of norm-referenced grading is grading on a curve. Criterion-referenced grading rewards students for their effort and encourages collaboration among students. Norm-referenced grading can be useful for identifying exceptional students within a cohort and for combating grade inflation. Consider which system you are using, and communicate your reasons clearly to students so they understand what information their grades are conveying.
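
To make the contrast concrete, here is a minimal sketch in Python using made-up scores, illustrative letter-grade cutoffs, and arbitrary z-score bands; none of these numbers come from the Occasional Paper, they are only assumptions chosen to show how the two systems behave.

```python
from statistics import mean, stdev

# Made-up exam scores (percent correct) for a small class.
scores = {"Student 1": 92, "Student 2": 78, "Student 3": 65,
          "Student 4": 63, "Student 5": 55}

# Illustrative fixed cutoffs for criterion-referenced grading.
CUTOFFS = ((90, "A"), (80, "B"), (70, "C"), (60, "D"))

def criterion_referenced(score):
    """Grade against pre-announced standards: any student who reaches a
    cutoff earns that grade, no matter how classmates perform."""
    for cutoff, grade in CUTOFFS:
        if score >= cutoff:
            return grade
    return "E"

def norm_referenced(score, all_scores):
    """Grade 'on a curve': compare each score to the class distribution
    (here via a z-score with arbitrary bands), so grades depend on peers."""
    z = (score - mean(all_scores)) / stdev(all_scores)
    if z >= 1.0:
        return "A"
    if z >= 0.0:
        return "B"
    if z >= -1.0:
        return "C"
    return "D"

all_scores = list(scores.values())
for name, score in scores.items():
    print(f"{name}: {score}"
          f"  criterion: {criterion_referenced(score)}"
          f"  curve: {norm_referenced(score, all_scores)}")
```

Under the criterion-referenced scheme the cutoffs are fixed before the exam, so every grade is attainable by every student; under the curve, a student's grade shifts whenever the class distribution shifts, which is exactly the situation in the quote below, where a 65 against a class average of 63 can still translate into a B.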

While norm-referenced grading has some benefits, one important caveat is the impact that low scores can have on student motivation, even when students are doing well relative to their peers. Consider this quote from a white female engineering student:

"In the weed-out courses, they make the exams so needlessly difficult that people just drop out.  They study as hard as they can, go to class every day, try to get their homework, and then they take the exam and get a 65 on it -- and the class average is 63.  And it makes you feel terrible.  But, even if you get wise, and realize that you’re going to get a B out of it, what’s the point, if I feel like I learned nothing from the course?" (from Seymour and Hewitt, 1997, p. 111) 

Motivation is strongly influenced by an individual's perception of how effective their efforts will be. When a student knows that their success is not determined by their learning alone, but by that of all the other students in the class, it can have a demoralizing effect. Especially in the sciences, this can have a profound impact on student motivation, even causing them to reconsider their major. Although the woman quoted above is a "non-switcher" -- a student who persisted in her chosen major -- many students leave the sciences because of competitive grading or the perception that courses are designed to weed them out. Underrepresented students such as women and racial minorities are especially likely to be affected. 

Before moving on to the next section, consider sharing your responses to any of the strategies mentioned here, or share your own effective strategies for grading, in the Comments below. 

Next Section

Click to go to the next section: Supporting Student Learning and Performance