Good Stuff to Keep (#GSTK) from AAC&U Gen Ed 2016

September 18, 2023 Courtney Peagler

“Assessment is messy… when done well, at least.”

– Kate McConnell, Senior Director for Research and Assessment at AAC&U

While many enjoyed the warm weather and good food, there was far more talk about general education and assessment than about beignets and po-boys at this year’s AAC&U General Education and Assessment Meeting in New Orleans!

In the spirit of continuous learning and improvement, I am going to follow Kate McConnell's advice and file the following notes under "Good Stuff To Keep" from this year's conference (or #GSTK for those who were tweeting at #aacugened16).


AAC&U Survey Shows More Use of Rubrics for Assessment

In the opening plenary, Debra Humphreys from AAC&U shared some of the key findings from the third and final report they released on their most recent survey of Chief Academic Officers. At a high level:

  • More institutions have outcomes and are assessing those outcomes.
  • More institutions are assessing in departments than in general education.
  • Of those institutions assessing in general education, 87% are now using rubrics.

We’re planning a webinar with Debra for later this month on these survey results and more. Check our Events page for more details and to register soon.

Overcoming Language Barriers Around Assessment

In the session on “Overcoming the Faculty Language Barrier,” three presenters from the University of Nevada, Las Vegas (UNLV) shared some of the things they’re doing to change the conversation on their campus. Nearly all faculty and administrators are concerned about student learning, but the presenters found that many faculty members on their campus didn’t see the connection between assessment and student learning; instead, they thought it was all about compliance. At Taskstream, we talk all the time about how important it is for institutions interested in more systemic, sustainable improvement to move away from a culture of compliance. It’s interesting to consider how “language barriers” contribute to the difficulty of achieving true culture change of this nature.

All too often, when faculty are approached by administrators and assessment professionals about assessment, the faculty “thought bubble,” as the presenters referred to it, goes something like this: “Tell me what I have to do to make you go away.” Part of the problem, the presenters surmised, is that these groups aren’t speaking the same language. When they asked faculty on their campus, “How many of you have done assessment that is meaningful to you?” relatively few raised their hands. But when pressed further, some revealed that they had, indeed, identified a deficit in student learning through one means or another, which inspired them to make changes to address that deficit. They just didn’t realize that what they were doing fit the definition of “meaningful assessment” the administrators had in mind when posing the question.

When asked to report on assessment efforts in their programs, faculty may not report the “right” thing because they are not working with the same definition of assessment, or they don’t know how what they’re doing does or does not fit within the realm of assessment. One of the ways UNLV overcomes this challenge is through a simple review process for assessment reports. I often talk about the value of providing feedback to programs so they don’t feel like their assessment plans and reports are going into a black hole, and about how institutions manage this type of review through our AMS product, so it was great to hear this idea reinforced and to see how UNLV structures their review.

From what I gathered, the assistant director of assessment reviews everything, and members of the assessment committee complete peer reviews of other programs’ plans. They use a simple rating scale of Good, Adequate, and Needs Revision, and the review form asks reviewers to note the following:

  • Which outcomes were assessed?
  • How were the outcomes assessed (was there at least one direct measure)?
  • Is there a university learning outcome component?
  • What was learned from the results?
  • How did the program respond to what was learned?
  • What is the overall quality of the report?   

It sounds like the process is working well for them, which is always great to hear.

An Interesting Case of Applied Design Thinking and e-Portfolios

I was intrigued by how Philadelphia University applied a “design thinking” approach when revamping their general education program. The way Dr. Thomas Schrand introduced the idea of “design thinking” (studying a product or process from the user’s perspective to identify and analyze pain points, then engaging in a cyclical, iterative process to develop solutions) reminded me of the research-driven, user-centered design approach that our product team has been applying, particularly in the design of our latest product, Outcomes Assessment Projects (formerly Aqua) by Taskstream. In the case of Philadelphia University, they started with a design approach that helped them identify a common set of key learning goals for all students, regardless of major, and then considered the “value proposition” for general education at the university from the student’s perspective.

Of course, I was also pleased to hear that this approach led them to a solution that uses e-portfolios to guide and track student achievement of the institution’s eight learning goals. Students post evidence and reflections for each learning goal and are encouraged to revisit them at different points throughout their education. The university sees potential in the metacognitive benefits that come with students reflecting on their learning, and in the e-portfolios helping make students more aware of the expected learning outcomes. Loyola University Chicago has been working with us to implement a similar approach for their students.

Wait, Did You Say You Have Usable Data for Gen Ed?

It was a pleasure meeting and collaborating with Yvonne Kirby and Jim Mulrooney from Central Connecticut State University (CCSU) and, virtually, with Renee Aitken from Wright State University in Ohio on our conference presentation, “Technology to Advance Faculty-Driven Assessment of Student Work.” I’ll share more about the presentation and the ensuing discussion in my next blog post, but I’ll say here that it was quite refreshing to hear about a campus that has been able to generate usable assessment data for general education in a relatively short timeframe. As a participating institution in the Multi-State Collaborative to Advance Learning Outcomes Assessment (a.k.a. the MSC), CCSU was able to build on their collection efforts, apply the model for local scoring at their institution, and use Outcomes Assessment Projects (formerly Aqua) to manage the process. They were able to get everything set up in the technology in one week and had data ready for analysis within the month. Yvonne, their Director of Institutional Research and Assessment, managed the scoring retreat by herself in January and is already having conversations with faculty about the resulting data!

Unfortunately, Renee Aitken from Wright State University wasn’t able to join us in New Orleans, but I was able to talk with her before the event so that I could share her institution’s story through her slides. They have also been taking a similar approach for their gen ed assessment and have found a number of benefits from using Outcomes Assessment Projects (formerly Aqua) to support the work. Again, more to come in my next blog post….  

Optimism for Continued Success of the MSC

On Friday evening, we hosted a reception for MSC participants at the meeting. It was a nice, informal gathering to celebrate the pilot year success and toast to continued success for this year and beyond. Everyone I spoke with had only positive things to say about their experience with the MSC. I learned how Snow College, a two-year college in Utah, has used the data from the pilot year and created dashboards for faculty to see how their students performed. They have submitted their sampling plan for this year and are looking forward to beginning the next round. I also enjoyed a conversation about data sharing and use at another table with participants from St. Olaf College and Hamline University, both in Minnesota. I mentioned to them that we are planning a webinar series with SHEEO and AAC&U on a variety of topics related to the MSC, including data sharing and use, faculty engagement, and more. Stay tuned for more details about the series soon.  

Competency-Based Programs Hold Promise for Some Students and Some Institutions

In the session on “Competency-Based Practices in General Education,” Debra Humphreys led a great discussion about some of the emerging models and trends around competency-based education (CBE). Debra is working on a white paper on quality dimensions in competency-based education for the Lumina Foundation. Cori Gordon from Northern Arizona University (NAU) shared how her university developed three online, competency-based bachelor’s programs, and there were several people in the audience from the University of Wisconsin’s Flex program. Cori encouraged others to look to the LEAP initiative as a starting point for developing a general education or liberal arts program, while NAU looked to industry partners when developing competencies for their more applied programs.

There was a good conversation about the definition of “outcome” vs. “competency.” While Debra views outcomes at a higher level than competencies, which she sees as more discrete, the discussion showed that we still don’t have a clear common definition in higher education for these terms and their relationship to each other.  

One thing that is common to all CBE programs is the need for authentic, well-aligned assessments and materials to support competency development. Joe St. John from Western Governors University (the recognized leader in online, competency-based education and a Taskstream client for over a decade) was in the session’s audience. Among other things, he noted the importance of mentoring for all students, which has been a key factor in their success. With over 60,000 graduates, they know a thing or two about running successful CBE programs. 🙂

CBE programs aren’t for all students or all institutions. From Debra’s research, the ideal student is a self-directed learner with some prior, successful college experience. Institutions need to do the research and analysis to see whether there’s a market for a CBE program (e.g., is there a large enough population of students who will complete it?) and whether it is appropriate given the institutional context (e.g., open access, geographic location) before “going all out” to develop new programs. Institutions can find more support for their CBE efforts in WGU’s forthcoming Journal of Competency-Based Education, the first issue of which is due to be published later this spring.

Overall Observation: This Work Isn’t Easy, But We’re Up For the Challenge!

It was clear from the conversations I had that the record number of attendees at this year’s meeting wasn’t due to the desire for a free trip to the Big Easy — well, maybe not entirely ;). Despite the acknowledged complexity of the work, I heard far more optimism and creative problem-solving than cynicism or defeat at tackling the challenge. Thanks for the great conference, AAC&U!
