At the end of each term, you dutifully distribute course evaluations and collect feedback from students, striving for 100% response rates. Now that you’ve gathered all this data, are you taking the right steps to review and analyze it?
Your campus may pull information from evaluations for specific scenarios – faculty annual reviews, departmental planning sessions, accreditation reporting – but there are greater opportunities to use course evaluation data as part of your data-driven decision making process.
Reporting vs. analysis
First, it’s important to understand the difference between reporting and analysis. Reporting is the process of gathering information and feedback (data points) about what is happening on campus. Analysis requires using the data to generate insights and answer strategic questions.
Reporting and analysis are both essential steps in the data-driven decision making process, but they’re only truly valuable if they drive action. That means creating reports that gather the specific information you need to uncover opportunities for improvement, analyzing that data with a critical eye, and then building a plan of action to apply what you learned and make adjustments.
Course evaluations: What’s your goal?
When it comes to course evaluations, there are many ways to build reports to slice and dice the feedback you gather from students each term, including data on instructor performance, course resources and structure, and the overall learning experience.
EvaluationKIT’s standard reports let you review results by course, but you can also aggregate data to more closely examine different areas of the institution, run batch reports to pull multiple and combined reports for offline discussion, and pull detailed feedback for administrators and instructors.
At the end of an evaluation period, faculty and administrators review how the last term went and apply those insights as they plan the next one. Standard reports can be run for each course to review metrics specific to a term. These reports offer a static snapshot of the data, which can be downloaded as a PDF.
Throughout the year, it’s also important to review evaluation data at the department, college, and university level to monitor trends and make adjustments. The report builder gives you more flexibility and allows users to create customized reports that pull specific data for review. These reports can be copied and reused in future terms and across departments. By using the same evaluation structure every term, you’re able to create a body of data that allows you to track trends over time. One valuable report in EvaluationKIT is the Instructor Means report, which generates the average scores for all instructors who report up into a department, school, or other area.
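The idea behind an Instructor Means report can be sketched as a simple grouped average. This is an illustrative sketch only, not EvaluationKIT’s implementation; the record fields and sample scores are hypothetical.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical evaluation records: (department, instructor, score on a 1-5 scale).
responses = [
    ("Biology", "Dr. Alvarez", 4.5),
    ("Biology", "Dr. Alvarez", 4.0),
    ("Biology", "Dr. Chen", 3.5),
    ("History", "Dr. Okafor", 5.0),
    ("History", "Dr. Okafor", 4.0),
]

def instructor_means(records):
    """Average evaluation score per instructor, grouped by department."""
    by_instructor = defaultdict(list)
    for dept, instructor, score in records:
        by_instructor[(dept, instructor)].append(score)
    return {key: mean(scores) for key, scores in by_instructor.items()}

print(instructor_means(responses))
# {('Biology', 'Dr. Alvarez'): 4.25, ('Biology', 'Dr. Chen'): 3.5, ('History', 'Dr. Okafor'): 4.5}
```

Because the grouping key includes the department, the same averages can be rolled up to the school or college level by dropping or broadening that key.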
Best practices for data collection and analysis
A report is only as good as the data within it, and data quality depends on how you run your course evaluations and the strength of your response rates.
- Don’t change your evaluation or survey every time you run it. While many faculty members review their results right after the evaluation period closes, digest the feedback, and implement changes the next time they teach the course, there is also value in building a longitudinal set of data over time. By keeping your course evaluation questions and structure consistent, you’re better able to spot trends and review shifts both for individual instructors and across the department as a whole.
- Take steps to strengthen your response rates. There are many tactics that can help boost response rates, including integration with your LMS, a thoughtful communication strategy, and strategic evaluation structure. Check out our recent guide for more helpful tips to boost response rates.
- Consider the timing. When you’re launching digital course evaluations, it’s important to think not only about how long to leave evaluations open, but also about when you start and stop collecting information. Consider when you’re distributing evaluations, be strategic in developing your evaluation structure and question format, and eliminate any points of friction for students (for example, blocking students from taking their final exam in the LMS until their evaluation is complete may lead to rushed answers and skewed results).
Qualitative vs. quantitative data
Write-in questions often generate valuable feedback, but reading through all of the commentary to find gold nuggets can be time consuming. EvaluationKIT’s text analytics feature can help cut through the noise and provide an overview of what students are saying about their learning experience. It uses natural language processing and machine learning to transform open-ended survey responses into useful data and insights. The tool generates a sentiment score based on word choice, highlighting whether the content is positive, negative, or neutral. When you know what students are saying, you can respond more specifically and personally, which makes students feel heard and increases their engagement.
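To make the idea of a sentiment score concrete, here is a minimal lexicon-based sketch. This is not EvaluationKIT’s actual model (which uses natural language processing and machine learning); the word lists and sample comments are hypothetical, chosen only to show how word choice can map to a positive/negative/neutral label.

```python
# Hypothetical word lists -- a real model learns these signals from data.
POSITIVE = {"great", "clear", "helpful", "engaging", "excellent"}
NEGATIVE = {"confusing", "boring", "unfair", "rushed", "unhelpful"}

def sentiment(comment: str) -> str:
    """Label a comment by counting positive vs. negative words."""
    words = [w.strip(".,!?") for w in comment.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("Lectures were clear and engaging"))     # positive
print(sentiment("The pacing felt rushed and confusing")) # negative
```

Aggregating these labels across all comments for a course gives the kind of at-a-glance overview described above, so reviewers can prioritize which write-in responses to read in full.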
Don’t leave valuable data sitting in your course evaluation system. If you’re looking for ways to take your course evaluations and institutional survey processes to the next level, contact our team. We’re here to help!