Collecting Data about Student Learning

 

"For assessment to be successful, it is necessary to put aside the question, 'What’s the best possible knowledge?' and instead to ask, 'Do we have good enough knowledge to try something different that might benefit our students?'"

- Blaich, C. F., & Wise, K. S. (2011). From gathering to using assessment results: Lessons from the Wabash National Study (NILOA Occasional Paper No. 8). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.


Key Definitions & Frameworks

Three types of data sources are useful to consider in assessing student learning:

  1. Evidence of learning outcomes

    Direct measures of learning

    • These allow students to demonstrate their learning so that faculty can assess how well a program's students are meeting the expected level of proficiency in key skills or knowledge. Examples include capstone projects, papers, standardized tests, observations of students in a clinical setting, and quiz questions that align with a key area of required knowledge.
      • Embedded assessments are direct measures of student learning that serve two purposes: as a course requirement for students (i.e., a normal work product such as a paper, quiz, or design project) and as evidence for a program's assessment processes. They are most frequently collected from required courses, capstone courses, or key classes where students must demonstrate mastery of a specific learning objective important to the department.

    Indirect measures of learning

    • These gather students' perceptions of and satisfaction with their learning. Common examples are focus groups and student and alumni surveys.

    Multiple methods of assessment

    • Approaches that pair direct and indirect measures are most valuable because:
      • Research indicates that students are not always able to self-assess their learning accurately, so relying on indirect measures alone may give an inaccurate picture.
      • Some outcomes (e.g., attitudes) can be assessed only through surveys, interviews, or focus groups.
      • Indirect measures (e.g., student dissatisfaction with some aspect of their learning experience) can help explain results seen through collection of direct measures (e.g., low student performance on a key learning outcome).
  2. Background information about the students in the course or curriculum (i.e., inputs)
    • For example, what academic preparation do students bring into a course or academic program? What are the demographics of students in a class or program? What are their career or post-U-M educational aspirations? 
  3. Documentation of the learning experience
    • In other words, what is the nature of the learning experience for students?

U-M Data Sources on Student Learning in the Curriculum and Co-Curriculum

This section describes measurement tools and data sources that faculty and departments can use to assess student learning.

Indirect Evidence of Learning

Large Student Surveys Used at U-M

For each survey below, a description of the data source is followed by information on how the survey is coordinated and how findings are distributed.

National Survey of Student Engagement (NSSE): The survey is administered to first-year students and seniors and asks students to report on in-class and out-of-class experiences associated with rich learning.

NSSE is coordinated nationally at Indiana University. The Office of Budget and Planning (contact: Karen Zaruba) houses U-M data and makes periodic reports on key findings.

The last NSSE implementation was Winter 2012, with a response rate around 15%.

Here is the NSSE instrument.

A summary of 2008-09 data for seniors can be found on the U-M Portrait. Because U-M has a better-than-predicted NSSE score, it is also profiled in Kuh et al. (2010, 2nd ed.), Student Success in College: Creating Conditions That Matter.

 

University of Michigan Asks You (UMAY): This survey is open to all U-M undergraduates and is wide-ranging. Each respondent answers a core set of questions about time use, academic and personal development, academic engagement, evaluation of the major, overall satisfaction, and climate.

UMAY is part of a nationwide study, the Student Experience in the Research University (SERU) survey, based at UC Berkeley. Local coordination of this project is through the Office of Budget and Planning (contact: Karen Zaruba). Some responses from the 2015 survey are presented on the Office of Budget and Planning website.

U-M's 2015 UMAY response rate was approximately 20%.

Here is the UMAY instrument.

 

 

Cooperative Institutional Research Program (CIRP) Freshman Survey: Examines entering students' preparation, previous activities, expectations of college, confidence levels, intended major, and future goals.

The study is coordinated nationally at UCLA and implemented by U-M's Office of Student Life (contact: Malinda Matney). Here is the CIRP instrument.

Surveys are administered during New Student Orientation, with response rates around 80%.

Student Life distributes periodic reports on these data, and CRLT also distributes summary data at its faculty and GSI orientations.  Additionally, select findings are available online.

College Senior Survey (CSS): This instrument is administered to graduating fourth-year U-M students and examines their college experience and activities, expectations of next steps, confidence levels, and future goals.

The CSS is coordinated nationally at UCLA and implemented at U-M by the Office of Student Life, which distributes periodic reports on these data. A summary of 2008 data is available on the U-M Portrait, but the last implementation was in 2012.

Surveys are administered online, with response rates around 15%.

Here is a copy of the CSS instrument.

Destination Surveys: These studies examine the first experiences after college of LSA alumni, medical school applicants, and law school applicants, looking at the "first destination" of job, graduate school, volunteer life, or family, and at how well the university prepared them for that step.

 

The LSA study is conducted annually by The Career Center. Select findings are presented on The Career Center website (http://careercenter.umich.edu/article/first-destination-profile).

Alumni and Exit Surveys

  • Summary results from U-M's 2010 university-wide alumni survey are available here:
    http://www.accreditation.umich.edu/reports/2009_alumni_survey.php
    In preparation for the 2010 accreditation, U-M surveyed six cohorts of undergraduate alumni (1998-2000 and 2004-2006). The survey asked respondents how their U-M experiences helped prepare them for their careers or further study, overall and by specific skill or attitude (e.g., appreciation for the arts).
  • Sample exit surveys from U-M departments can be found here: http://www.crlt.umich.edu/assessment/lsa-assessment-resources
  • CRLT also consults with many departments on customized exit/alumni survey design and analysis for assessment. To learn more, please contact Mary Wright, Director of Assessment, at mcwright@umich.edu.

Other validated surveys (some are fee-based)

Focus groups

  • Focus groups involve a discussion among 8-10 students who reflect on the curriculum. Focus groups can be useful for allowing students to hear other students' experiences and to reflect collectively on the achievement of key learning goals for a course, curriculum, or educational innovation. CRLT has conducted numerous focus groups for departments and for postsecondary educational grant evaluation. (For a full list, please see CRLT's recent assessment project list.) To learn more, please contact Mary Wright, Director of Assessment, at mcwright@umich.edu.

Sample assessment project at U-M using indirect evidence of learning

  • In Fall 2009, CRLT collaborated with LSA to assess its Quantitative Reasoning (QR) requirement. The evaluation was based on a survey of LSA first- and second-year students about the quantitative reasoning gains they reported making in their QR1 or non-QR Fall Term courses. Most of the survey was derived from a University of Wisconsin assessment study of its QR requirement, which validated a survey of student self-reported learning gains against pre- and post-tests of authentic QR-related problems (Halaby, 2005). The instrument was developed by a study team that included UW's Director of Testing & Evaluation and other quantitative researchers. In addition to the 14 UW gains items, the U-M survey asked students whether they felt the course met LSA's goals for the QR requirement, whether they could give an example of an application of the course, and what instructional methods helped them learn. Key findings of the survey are presented here: http://www.crlt.umich.edu/assessment/lsaqrassessment.

Direct Evidence of Learning

Rubrics are commonly used as an assessment tool for papers or projects.

Quiz questions, linked to specific key learning objectives

Concept inventories

  • Concept inventories are reliable and valid tests designed to assess students' knowledge of key concepts in a field. Often, they can be used to make comparisons in student learning over time (e.g., a student's performance at the beginning and end of a course) or between students at different universities. They are most commonly used in science, math, and engineering.
  • Examples of concept inventories from engineering have been collected by the Foundation Coalition: http://www.foundationcoalition.org/home/keycomponents/concept/index.html.
  • A list of concept inventories developed for scientific disciplines has been collected by Julie Libarkin, MSU.
  • An example of a department using this type of assessment data is the Mathematics Department. In Fall Term 2008, the department administered the Calculus Concept Inventory, a nationally validated test designed to assess key concepts of differential calculus. The inventory was given to all sections of Math 115 in a pre-/post-test design. On the post-test, students were also asked to rate the interactivity level of the classroom and the percentage of class time spent on interactive engagement activities. Summary findings are presented here: http://www.math.lsa.umich.edu/news/continuum/ContinuUM09.pdf. (A minimal sketch of how pre-/post- scores can be summarized follows this list.)
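
One common way to summarize scores from a pre-/post- concept inventory design is the class-average normalized gain, i.e., the fraction of the possible improvement that a class actually achieved. The sketch below is a minimal illustration of that calculation; it is not a description of the Mathematics Department's actual analysis, and the score lists are hypothetical.

```python
# Minimal sketch: class-average normalized gain for pre-/post- concept
# inventory scores. Assumes scores are percentages (0-100); the sample
# data below are hypothetical, not actual Math 115 results.
from statistics import mean

def normalized_gain(pre_scores, post_scores, max_score=100):
    """Class-average normalized gain: (<post> - <pre>) / (max - <pre>)."""
    pre_avg = mean(pre_scores)
    post_avg = mean(post_scores)
    return (post_avg - pre_avg) / (max_score - pre_avg)

if __name__ == "__main__":
    pre = [35, 42, 28, 50, 31]    # hypothetical pre-test scores (%)
    post = [60, 71, 55, 78, 58]   # hypothetical post-test scores (%)
    print(f"Class-average normalized gain: {normalized_gain(pre, post):.2f}")
```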

Multiple methods of assessment

ePortfolios

Background information about the students in a course or curriculum

LSA Academic Reporting Toolkit (ART) (Requires authentication)

  • An information service that enables LSA faculty to create course-specific data reports on a range of topics related to student performance, such as enrollment and grade histories of students in a given course, enrollment and grade connections between courses, and relations of course grades to pre-college measures (ACT/SAT scores and AP exams). Each tool has customizable input, and each is designed to return anonymous data (no student names or IDs) in both graphical and tabular form. Access to the site is restricted and requires authentication. To request access, please contact Rob Wilke, LSA Management and Information Systems, Dean's Office. (A rough sketch of the kind of grade-by-preparation tabulation these reports automate appears below.)
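
As a rough illustration of the kind of grade-by-preparation tabulation that ART automates, the sketch below computes mean course grade points by ACT math score band from a hypothetical anonymized CSV export. ART itself is a web-based reporting tool; the file name and column names here are assumptions made for illustration only, not ART's actual output format.

```python
# Sketch only: mean course grade points by ACT math band, read from a
# hypothetical anonymized export with columns "act_math" and "grade_points".
# The file layout is an assumption; ART delivers its own graphical and
# tabular reports through its web interface.
import csv
from collections import defaultdict
from statistics import mean

def grade_by_act_band(path, band_width=3):
    grades = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if not row["act_math"] or not row["grade_points"]:
                continue  # skip records missing either measure
            band_low = (int(row["act_math"]) // band_width) * band_width
            grades[band_low].append(float(row["grade_points"]))
    # e.g., {"24-26": 2.81, "27-29": 3.12, ...} -- hypothetical output
    return {f"{lo}-{lo + band_width - 1}": round(mean(vals), 2)
            for lo, vals in sorted(grades.items())}

if __name__ == "__main__":
    print(grade_by_act_band("course_grades_anonymized.csv"))
```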

U-M Data Warehouse

  • The U-M Data Warehouse is a collection of data that supports reporting activity for University business. The M-Pathways Student Records Data Set contains academic data for students who have matriculated at the University of Michigan, Ann Arbor. The data includes students' personal information (demographic data), enrollment, courses, grades, degrees, and transfer credit. For more information on the data available, see the student records data dictionary. To request access to these data, instructors should contact their school/college's data steward or the Office of the Registrar.

Documentation of the learning experience 

Common measures to document learning activities include:

  • Syllabi (LSA Syllabus Archive)
  • Instructor reports of key instructional activities.
  • CTools use data (i.e., how frequently students download a resource on CTools or view iTunesU lectures).
    To obtain use data, contact Steve Lonn at the USE Lab or Dan Kiskis at ITS. For iTunesU data, contact Cathy Crouch, who manages U-M's iTunesU service.

CRLT staff work with groups of faculty in departments or schools/colleges to collect assessment data that will be useful for educational grant evaluation or curricular decisions. For example, CRLT staff use interviews, focus groups, and surveys of students, faculty, and alumni to provide feedback about:

  • the effectiveness of a curriculum/program as a whole or a particular sequence of courses; 
  • the effectiveness of a unit's training program for graduate student instructors; or
  • a unit's climate for teaching and learning.