Skeptical eyebrows and inquisitive looks are rarely aimed in my direction anymore when I ask functional area leaders about their operational assessment and continuous improvement plans. By now, we’re all familiar with these words. We may not have progressed to an overarching sentiment of elation or volunteerism, but we have come a long way from silo-driven assessment toward an equilibrium between naysayers and enthusiasts.
While our motivations for operational assessment may differ (as my favorite Taskstream squeeze ball indicates), we do share a common definition and common processes across academic and operational areas, resulting in timely, consistent, and measurable documentation that demonstrates ongoing mission fulfillment.
If you paused when you read that last sentence, you’re not alone. I have yet to see any institution, even my own, with that much clarity and buy-in. However, we are all moving together in the right direction.
For my fellow Accountability Management System (AMS) administrators or institutional effectiveness leaders, let me take a moment to validate you. If you say any of these statements aloud (or in your head), you are among friends:
- No problem. We can just move those 19 attachments.
- Yes, we have talked about this for the last three years, but I can see how the same process is still confusing.
- Isn’t it great that there are directions under the little drop-down arrow and instructions I have repeatedly sent out?
- That is an improvement, not an assessment.
- How about I just load that for you?
I’ve determined that some of my simplest strategies for training on assessment and continuous improvement are grounded in math. I’m here to share a few of them (or perhaps just to remind you that it is worth it to our students, employees, and other constituents that we keep at it).
1. Force the Order of Operations: If you find that users are posting evidence into requirement areas that shouldn’t contain evidence, remove the potential for error. Just as parentheses force an equation to resolve in a specific order, you can constrain behavior in the system. At my institution, we use the assessment plan and continuous improvement plan areas primarily for narratives, yet I repeatedly discovered users posting substantiating evidence in the plan areas rather than in the assessment findings or status report areas. Once I turned off their ability to attach files to the plans (i.e., saved them from themselves), they looked elsewhere to document evidence. By working with the system’s capabilities, we adjusted a behavior that was creating inefficiencies and inconsistencies. If this particular fix doesn’t meet your institution’s needs, consider other ways AMS (or other Taskstream products) might improve your processes by turning off misused, unused, or unnecessary features.
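To make the metaphor concrete: 2 + 3 × 4 evaluates to 14, but (2 + 3) × 4 evaluates to 20. The parentheses don’t change the numbers involved; they remove any ambiguity about what happens first. Turning off attachments in the plan areas works the same way: the evidence is the same, but there is now only one acceptable place to put it.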
2. Assessment 0s and 1s: Users are often confused about what distinguishes an assessment from an improvement. After all, they both include plans and actions, right? “It’s a simple issue of 0s and 1s. It’s binary,” I share. “Is your root statement an assessment verb (0) or an improvement verb (1)?” In general, assessment verbs are actions with a null effect: the collection of data does not directly create change, but it informs it. Continuous improvement verbs, which arise from the insights and analysis drawn from the assessment, elicit change… a +1 to the higher education environment. (See the examples in the table below, followed by a short illustrative sketch.)
| Assessment Plan Verb Examples | Continuous Improvement Plan Verb Examples |
| --- | --- |
| Analyze | Add |
| Appraise | Acquire |
| Assess | Align |
| Audit | Change |
| Calculate | Consolidate |
| Compare | Coordinate |
| Determine | Create |
| Evaluate | Decrease |
| Explore | Develop |
| Examine | Document |
| Inspect | Enhance |
| Measure | Implement |
| Review | Improve |
| Study | Retain |
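To make the binary literal, here is a minimal sketch in Python (purely illustrative; the function name is mine, and the verb lists come straight from the table above):

```python
# Illustrative only: classify a plan's root verb as assessment (0) or
# improvement (1), using the example verbs from the table above.
ASSESSMENT_VERBS = {
    "analyze", "appraise", "assess", "audit", "calculate", "compare",
    "determine", "evaluate", "explore", "examine", "inspect", "measure",
    "review", "study",
}
IMPROVEMENT_VERBS = {
    "add", "acquire", "align", "change", "consolidate", "coordinate",
    "create", "decrease", "develop", "document", "enhance", "implement",
    "improve", "retain",
}

def classify(root_verb: str) -> int:
    """Return 0 for an assessment verb, 1 for an improvement verb."""
    verb = root_verb.strip().lower()
    if verb in ASSESSMENT_VERBS:
        return 0  # null effect: gathers data and informs change
    if verb in IMPROVEMENT_VERBS:
        return 1  # elicits change: a +1 to the environment
    raise ValueError(f"'{root_verb}' is not in either example list")

print(classify("Evaluate"))   # 0 -> belongs in the assessment plan
print(classify("Implement"))  # 1 -> belongs in the continuous improvement plan
```

The error on an unknown verb is deliberate: if a plan’s root verb fits neither column, it probably needs a rewrite before it enters the system.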
3. The Assessment Ordinal Set is 1, 2, 3, 4 (Not 1, 2, 4, 3): To understand and effectively document an assessment and continuous improvement cycle, users must differentiate between distinct activities and the order in which they occur. In our ideal world, our decisions (continuous improvements) are grounded in sound assessments (data and analysis). Confusion often lies in the fact that planning and acting each occur twice, and in a particular order: 1. Plan, 2. Assess, 3. Plan, 4. Improve (and back to the beginning). Frequently, I see the process roll out like this: 1. Plan, 2. Assess, 4. Improve, 3. Plan (call it pseudo-planning, reflective planning, or after-the-fact planning… “Oh, yeah. We meant to do that.”). The difference separates intentional behavior from a ready-fire-aim philosophy. The challenge here is both conceptual and behavioral: while institutional effectiveness leaders can educate and inform, we are not always able to enforce. A strong assessment ambassador embedded across departments may be useful in these situations.
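If it helps to see the ordinal set literally, here is a minimal sketch (the step labels and function name are mine, purely illustrative, not an AMS feature):

```python
# Illustrative only: the intentional documentation order is
# Plan, Assess, Plan, Improve (1, 2, 3, 4).
IDEAL_ORDER = ["plan", "assess", "plan", "improve"]

def is_intentional_cycle(documented_steps: list[str]) -> bool:
    """Return True only when the steps follow the 1-2-3-4 order above."""
    return [step.lower() for step in documented_steps] == IDEAL_ORDER

print(is_intentional_cycle(["Plan", "Assess", "Plan", "Improve"]))  # True: 1, 2, 3, 4
print(is_intentional_cycle(["Plan", "Assess", "Improve", "Plan"]))  # False: 1, 2, 4, 3
```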
4. X ≠ Y: While it would make assessment and continuous improvement processes easier if everyone had the same preferences and learning styles, the reality is that flexibility is essential. For example, the four-step model appears linear, which may appeal to your finance office. However, a process with one assessment driving multiple improvements (or vice versa), overlapping in timelines, may be more appropriate for academic affairs. A linear requirement is an unnecessary constraint on quality assessment and continuous improvement, especially when multiple variables are in play. The key is ensuring that each process and its related documentation occur within the assessment and continuous improvement cycle as a whole, with freedom for leaders to identify the best practices for their respective areas. So, although X may not equal Y, both may still be true.
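For those who like to see the algebra, here is a minimal sketch of the non-linear case (all names below are hypothetical examples, not data from any institution):

```python
# Illustrative only: one assessment may drive several improvements, so a
# strictly linear, one-to-one model would be an unnecessary constraint.
improvements_by_assessment = {
    "Evaluate advising satisfaction survey": [
        "Add evening advising hours",
        "Develop an online appointment system",
    ],
    "Audit course completion rates": [
        "Align prerequisites across course sections",
    ],
}

for assessment, improvements in improvements_by_assessment.items():
    for improvement in improvements:
        print(f"{assessment} -> {improvement}")
```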
Just as math is universal, assessment and continuous improvement can be, too. By consistently using and documenting distinct processes, we expand our capacity to increase effectiveness, enhance mission fulfillment, demonstrate integrity, and serve our students.
Looking to further centralize your planning and assessment processes? Watermark Planning & Self-Study can help unify your planning, assessment, and outcomes in one integrated hub.
Request a demo to see for yourself!