Design Principles

Course Type:
All

What is acceptable evidence of student learning in your course? How will you know if your students have reached the objectives you laid out? We call the answer to this question our assessment strategy.

In this section, we shift from focusing on the whole course to discussing different types of individual assignments you might choose to implement as part of your assessment strategy. These may include exams and quizzes, projects and papers, low-stakes formative assessments, or authentic assessments. We also discuss how to ensure that what we are asking students to demonstrate in our assignments actually matches our goals for student learning in the course (we call this alignment).

Align Assignments with Course Learning Objectives

Without a course design model to rely on, instructors often turn to the norms of their field or department to determine an assessment strategy. That might look like:

When I’ve seen my colleagues teach Introduction to Macroeconomics, they’ve used a three exam approach. I guess I’ll go with that.

It’s an English literature course. Students write papers in English literature!

The backwards design model (Wiggins and McTighe, 2005) suggests a different approach: return to the learning objectives and ask what kinds of student work would demonstrate that students have reached them. The learning objectives and the assessment strategy should be aligned.

Let's illustrate misalignment with an easy starting example. You decide you need a break from all work, all the time. You want to develop a new hobby: baking! You sign up for a community baking class, and your instructor says on the first day that by the end of the class, you'll be able to bake delicious pies at home. You come into the last class full of excitement. You'll bake pies for sure this week! Instead, you watch the instructor bake a pie and complete a worksheet on what they did well and where they went off track. Needless to say, you're disappointed. On day one, you were told you'd be able to bake a pie, and by the end you still don't know whether you can. Not to mention the instructor doesn't know whether they accomplished what they set out to do either! (Example from Nestor & Nestor, 2013.)

How might misalignment look in a university course? An assessment strategy might be misaligned if a learning objective says that students will be able to analyze or evaluate or synthesize, but the course assessments involve only exam questions asking for knowledge recall. Similarly, if a learning objective says that students will be able to create, build, or design something at the end of the semester, but the major assessments ask students to critique someone else’s work, there may be misalignment.

When you are designing a course assessment strategy, we encourage you to keep an open mind about the types of student work that demonstrate whether a student has reached an objective. We often fall back on traditions, such as the three major exams approach. Or we assign "junior" versions of academic scholarship in our field; that is, we ask students to produce academic writing. Let's say, though, that your course learning objective involves students' ability to analyze. Maybe it's a piece of music, a cultural artifact, a data visualization, or a public health policy. While your field might traditionally look for this evidence in an academic paper, analytical skills can also be demonstrated through other products, such as a conversation, a letter to the editor, a podcast, or a museum placard. When you expand your understanding of what would constitute evidence, you can help students see themselves as belonging in your course, even when they don't see themselves as future academics. You can also remove hidden barriers to their success: you want all students who have the ability to analyze the artifact to succeed with the assessment, regardless of whether they know the hidden curriculum and norms of academic writing in your discipline.

Consider Moving from One or Two High-Stakes Assessments to More Regular, Lower-Stakes Assignments

Relying on a small number of high-stakes assignments (e.g., a midterm and final exam, a single end-of-term research paper) can have negative consequences for learning. A substantial body of research suggests that more frequent, lower-stakes assignments and opportunities for regular feedback help students avoid cramming, leading to longer-term retention. In addition, having more frequent assignments, and having those count toward the final grade, ensures that a poor performance on any single assignment will not have outsized consequences. This can be accomplished in a variety of ways that do not necessarily increase the grading burden for instructors dramatically. For example, a single high-stakes test can be broken into shorter, regular quizzes that encourage students to keep up with the course material while taking little time to administer and grade. Longer writing assignments can be scaffolded, broken into smaller pieces (e.g., a preliminary outline, a literature review, a first draft) over the course of the term (see the section on papers and projects [link]). This approach also increases equity by ensuring that all students understand the components of successful project completion, rather than privileging students from higher-resourced backgrounds.

In all cases, it is important for students to get feedback on assignments so that they know how they can improve. While this can come from instructor comments, it can also be accomplished by using rubrics [hyperlink] to help students understand their grades, peer review to discuss work with other students, or "exam wrappers" [hyperlink] to promote student self-reflection.

In addition, you can consider incorporating ungraded feedback mechanisms that are designed to provide students with opportunities for practice and to give them and you insight into learning rather than to provide a grade. Common versions of this approach include:

  • sample exam questions using electronic response systems in class
  • programs such as Problem Roulette for practice outside of class
  • "minute papers" and "muddiest point" exercises that ask students to articulate what they have learned and what is still confusing, giving instructors insight into topics that may need to be reviewed

A description of these and other such strategies can be found in the section on Formative Assessment.

Communicate Transparently About Assignments

Making assessments transparent – ensuring students understand how to succeed on an assignment or exam – can reduce anxiety and ensure that their performance is based on their understanding of course material rather than their ability to navigate the instructions or format of a given assignment or exam. Transparency can apply to the following categories:

  • Assessment purpose: In student-friendly language, share with the students the learning objectives—the knowledge and skills this assignment or exam is designed to assess. Frame this explanation in terms of what they will need to do rather than the topics to be covered.
  • Assessment format: For exams, provide information about the types of questions (e.g., multiple-choice, short answer, essay) and how they will be weighted, including a rubric for any essay questions. For assignments, describe the final product (e.g., a paper or presentation) and outline the steps, in sequence, needed to produce it. Explain any technical constraints (e.g., for online exams, whether students can move backward and forward among questions), time limits, and any due date flexibility.
  • Preparation: For exams, this could include practice exam questions in class or as a supplement for studying (for example, see Problem Roulette). For assignments, consider how you can "scaffold" the work so that students have intermediate deadlines and can receive feedback as their work progresses.
  • Feedback: Provide clear feedback so that assessments are not only about earning grades but also about learning. Return exams to students so that they can clearly see the relationship between the questions, their answers, and your feedback. Feedback can include an answer key so that students understand what a correct answer looks like. You might also consider "exam wrappers," which ask students to address their errors, with points awarded for doing so (for examples, see this Eberly Center website and this Inside Higher Ed story).