Blog: Idea Exchange

Evolving the Student Course Evaluation Process for Greater Insights

Peter Pravikoff October 1, 2019

Higher education institutions seek to measure course quality, instructional quality, and learning outcomes. So why does the student course evaluation process rarely include measurement of outcomes, and typically limit its focus to student ratings of the course and instructor?

With more than 20 years of experience working with higher ed institutions, Kevin Hoffman, president of EvaluationKIT by Watermark, has collaborated with educators to maximize the insights that institutions gain from the course evaluation process. He sat down with us recently to share his observations.

Student course evaluations have been around for a century. How has technology changed what institutions can learn from them?

The way student course evaluations are conducted is a defining factor in what institutions can capture. The process took its first leap forward with optical mark recognition (OMR) technology, which gave rise to the "fill in the bubble" sheets used to collect student feedback since the 1970s. OMR made it easier to get feedback from every student in every course, but it required a "one size fits all" set of questions, usually created by university administration. Because it was difficult for instructors to ask their own course-specific questions, instructors, departments, and programs created their own surveys whenever they wanted more specific student feedback.

That approach had important downsides, including the effort and cost required to survey consistently at each of these levels, and a poor experience for students, who were inundated with surveys from a variety of sources at the end of each term.

Describe how course evaluations are evolving, and what information they can now surface.

Throughout higher ed, there’s a growing understanding that capturing the right data can help institutions improve processes and outcomes. Online student course evaluation, which arrived in the 1990s, allows stakeholders across the institution, from administrators to faculty, to easily add relevant questions to course evaluations without having to create a series of separate paper-based surveys.

Student data can now be used to tailor course evaluations to each student. Students receive only relevant questions, and in a consistent, recognizable format, which makes it easier for them to provide their feedback.

In addition, institutions can provide all levels of stakeholders with key student feedback related to their specific needs, whether it’s managing and reviewing faculty, improving instruction, or providing data to administration related to program and institutional outcomes.

What do you see as the next step in the evolution of course evaluations?

I see institutions using data from student course evaluations to inform a wider range of improvement processes on campus. For example, institutions face a challenge in obtaining the range of assessment data needed to evaluate and improve program and institutional outcomes. Student feedback on their achievement is a common and important indirect measure of course and program outcomes.

Historically, gathering this feedback has been separate from student course evaluation, but now the two can be streamlined into a single survey, so institutions can efficiently connect course objectives and curriculum maps to course evaluation. This allows them to answer questions like, “By the end of Biology 265, can students demonstrate that they have mastered the skill of interpreting scientific research defined in the program-level outcomes, according to performance on a key assignment and their own self-reports?”

For example, the Office of Academic Assessment and Office of Institutional Research at Embry-Riddle Aeronautical University have implemented a collaborative process to integrate program outcomes measurement into course evaluation. Feedback on courses and instructors flows to instructors for the courses they teach and to administrators for the courses they oversee, and indirect measures of assessment are gathered for each program outcome tied to a course. This data goes to the Office of Assessment and is stored in their online assessment and planning system, where it's conveniently organized for annual program reviews, among other uses.

Connecting assessment to course evaluation provides a more efficient, streamlined experience to students, who are able to provide more meaningful feedback. Their responses are tied back to assessment and captured in a single database to inform a range of institutional processes including program review, faculty review, and accreditation reporting.

Because the data reside together, they can also be reported on in innovative ways that unlock new insights to inform decision making, support improvement, and better serve students.

Author
Peter Pravikoff
EvaluationKIT by Watermark