Around the time she published her latest book, Five Dimensions of Quality, Linda Suskie led a workshop for us in the Taskstream offices. One of the key points that has stuck with me from that day (something for the “Good Stuff to Keep” #GSTK folder I mentioned in my last blog post) was Linda’s definition of good assessment data: it is useful and used. What matters most in assessment is that the resulting data informs productive conversations about the quality of teaching and learning. But, as we all know, that isn’t always so easy.
Which is why I was particularly inspired by the stories from two of our client institutions at the AAC&U General Education and Assessment conference in February…
Central Connecticut State University
In the session “Technology to Advance Faculty-Driven Assessment of Student Work,” Yvonne Kirby, Director of Institutional Research and Assessment at Central Connecticut State University, described how her institution generated usable general education data on a very short timeline. They did so by applying the model from the Multi-State Collaborative to Advance Learning Outcomes Assessment (a.k.a. the MSC) to their local general education assessment and using Outcomes Assessment Projects (formerly Aqua) by Taskstream to manage the process.
With a role combining assessment and IR, Yvonne lives and breathes data, and that facility has made her particularly effective at advancing CCSU’s assessment efforts. For the MSC, participating institutions need to collect samples of student work from existing course assignments that align with the outcomes of interest (written communication, quantitative analysis, and critical thinking for the first year of the project). The project specifies sampling guidelines for the institutions to follow. For example:
- Work should be from upper-division students (those who have completed 75% of their coursework)
- Institutions should limit the number of artifacts from a given course or faculty member
- Only one piece of work (e.g., paper) per student should be included, and
- Each artifact should be used for assessing only one outcome
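As a rough illustration, guidelines like these can be enforced programmatically once the artifacts are inventoried. A minimal Python sketch, with invented field names (`student_id`, `course_id`, etc.) and an assumed per-course cap; the MSC’s actual specification is more detailed:

```python
import random

def apply_sampling_rules(artifacts, max_per_course=10, seed=0):
    """Apply MSC-style sampling guidelines: keep one piece of work per
    student and cap how many artifacts any single course contributes.

    `artifacts` is an iterable of (student_id, course_id, faculty, outcome)
    tuples; each artifact is assumed to be tagged with exactly one outcome.
    """
    pool = list(artifacts)
    random.Random(seed).shuffle(pool)  # randomize before applying caps
    seen_students, per_course, sample = set(), {}, []
    for student, course, faculty, outcome in pool:
        if student in seen_students:
            continue  # only one paper per student
        if per_course.get(course, 0) >= max_per_course:
            continue  # limit artifacts from any one course/faculty member
        seen_students.add(student)
        per_course[course] = per_course.get(course, 0) + 1
        sample.append((student, course, faculty, outcome))
    return sample
```

Filtering to upper-division students (75%+ of coursework completed) would happen upstream, when the artifact list is first pulled from the SIS.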
Yvonne showed us some of her tricks for collecting an appropriate sample of work for her institution. Given her IR background, she started with data from her institution’s SIS (student information system) that included course enrollments, student demographics, and faculty information. Then, through some sleight of hand in Excel (well, just pivot tables, really), she identified faculty teaching courses with greater numbers of students who would meet the sampling profile. Now she had a target list of faculty to recruit for the effort. Many “cold calls” later, she had the sample she needed. (Of course, it didn’t hurt to have institutional leadership behind the effort.)
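In code terms, that pivot-table step amounts to a group-and-count over the enrollment data. A minimal Python sketch, using invented column names and sample rows (not CCSU’s actual SIS schema):

```python
from collections import defaultdict

# Hypothetical SIS export rows: (course_id, instructor, student_id,
# pct_coursework_completed). Field names are illustrative only.
enrollments = [
    ("ENG301", "Prof. A", "s1", 80),
    ("ENG301", "Prof. A", "s2", 90),
    ("ENG301", "Prof. A", "s3", 60),
    ("MAT210", "Prof. B", "s4", 85),
    ("MAT210", "Prof. B", "s5", 40),
]

def rank_recruitment_targets(enrollments, min_pct=75):
    """Count, per (course, instructor), the students who meet the
    upper-division sampling profile, then rank sections by that count
    so the best recruitment targets come first."""
    eligible = defaultdict(set)
    for course, instructor, student, pct in enrollments:
        if pct >= min_pct:
            eligible[(course, instructor)].add(student)
    return sorted(eligible.items(), key=lambda kv: len(kv[1]), reverse=True)

targets = rank_recruitment_targets(enrollments)
```

The sorted list plays the role of the pivot table: the sections at the top have the most students fitting the profile, so their instructors are the first “cold calls” to make.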
Yvonne shared how CCSU benefited from participating in the MSC last year and how they have applied the model back at their institution for gen ed assessment. One of the key benefits CCSU saw in participating was the model’s adaptability for institutional use, providing a “sustainable model to assess general education.” She also noted that participation in the MSC fostered greater faculty engagement across campus and can provide evidence for the VSA/College Portrait. Too good to be true?
While the work was not without its challenges, CCSU felt this past year was very successful. They collected artifacts from students representing 75% of undergraduate majors, contributed by faculty across 45% of their academic departments. Building on their collection effort for the MSC, they engaged in local scoring efforts – training faculty to score with the VALUE rubrics. With Outcomes Assessment Projects, Yvonne was able to manage their scoring retreat by herself and had usable results within the month! The retreat took place in January, and she was already sharing the results with faculty when we met at the conference in February.
Yvonne’s colleague Jim Mulrooney wasn’t listed as a presenter for the session, but his contributions were great, fielding questions from his perspective as a CCSU faculty member involved in the general education scoring effort. Jim is not new to assessment; I believe he recently completed a six-year tenure as chair of the university’s assessment committee. He echoed the success of the effort and was optimistic about this approach going forward. Given the right opportunity, I think he and Yvonne would have shouted from the rafters to all attendees, “we have usable data for general education and you can, too!”
Wright State University
Although Renee Aitken from Wright State in Ohio wasn’t able to join us at the meeting, she shared her presentation slides and spoke with us in advance to tell her institution’s story. Wright State adopted Outcomes Assessment Projects (formerly Aqua) by Taskstream to support their gen ed assessment. Several years ago, as part of their participation in the HLC Assessment Academy, they began a project to assess their institution’s seven core outcomes.
The approach they took was very similar to that of the MSC and CCSU – sampling student work from courses aligned with the outcome of interest, “deidentifying” the papers (i.e., redacting student identifiers), recruiting faculty to score the work using a common rubric, and bringing those faculty together for calibration and scoring. Prior to adopting Outcomes Assessment Projects, they had done all of this work manually, scoring on paper, and it used to take them all summer to complete the reporting and feedback. Because they were able to automate much of the process this time around, they will be able to share the results and feedback with faculty before the end of the semester.
In addition to cutting down the processing time, because all of the scoring took place online, Outcomes Assessment Projects made it possible for Wright State to give faculty more time to complete the work: instead of restricting scoring to one in-person meeting on a Saturday, they gave faculty a three-week window to complete their scoring.
I always enjoy talking about Taskstream’s role in the MSC and sharing our story behind Outcomes Assessment Projects (formerly Aqua), but it’s so much more interesting to hear first-hand from institutions that are engaging in meaningful assessment, particularly when they are realizing success in their efforts. It was a pleasure sharing Wright State’s story and hearing from Yvonne and Jim at the meeting. Yvonne will be presenting again at this year’s AIR Conference (again in New Orleans!) at the beginning of June. If you’re planning to be there, I highly recommend you attend her session!