Blog: Idea Exchange

This Little Piggy: Change vs. Improvement and Other Dilemmas of Assessment


Recently, we attended the Assessment Network of New York (ANNY) Fall Regional Event – Shifting Cultures: From Assessment to Improvement, which was co-hosted by Lehman College, City University of New York (CUNY). The event had around 80 participants from institutions across New York, which was a great turnout considering there was a torrential downpour that never seemed to let up! Everyone was a little wet as they arrived, especially us, as we only had one umbrella (thank you, Dara), but the wonderful ANNY board members and Lehman staff were there to greet us with enthusiasm (and croissants, coffee, and paper towels).

All participants received “stress” pigs

The workshop was led by Dr. Keston Fulcher (Executive Director of the Center for Assessment and Research Studies (CARS) at James Madison University) and Dr. Megan Good (Director of Academic Assessment at Auburn University). Pre-workshop reading included the NILOA publication, A Simple Model for Learning Improvement: Weigh Pig, Feed Pig, Weigh Pig, of which Dr. Fulcher and Dr. Good were co-authors. In short, their paper posits that although many more campuses across the country are engaging in assessment work (yay!), this increased activity rarely produces demonstrable improvements in student learning and performance (uh oh?!). Now, before you scream “hold on a second!” let’s take a moment to dig a little deeper into their Weigh Pig, Feed Pig, Weigh Pig concept – also known as Program Learning Assessment, Intervention, and Re-assessment (PLAIR). In essence, it is a very simple assessment cycle (i.e., assess, intervene, re-assess), but the last part, “re-assess,” is often overlooked. Or, more explicitly, as the article details: we must assess (weigh the pig), intervene/implement changes (feed the pig), and then re-assess at a later date (weigh the pig again). (See Figure 1)

Figure 1¹: Assessment, program change, re-assessment


For example, many versions of the traditional assessment cycle graphic end with something along the lines of “use results” or “make improvements” (see Figure 2).


Figure 2: Example Assessment Cycle

As a group, we also discussed why continuous-improvement graphics are presented as closed systems, and why “using results” would prompt us to (re)define our outcomes. The question emerged: shouldn’t the cycle be more open or spiral in nature (and, in keeping with the pig metaphor, almost like the curl of a pig’s tail)? However, you will be hard-pressed to find graphical depictions that also include re-assessment, or that offer a more spiral, iterative representation (see Figure 3).


Figure 3: Example of a re-imagined open system that incorporates changes, re-assessment, and eventually, improvements

Similarly, during accreditation visits we are often asked, “What program improvements have you made in accordance with your assessment results?” Generally, we recite changes that have been made to curriculum or pedagogy (such as offering additional writing workshops to improve written communication), which we presume will naturally result in improvements (why wouldn’t they?!), but we do not always follow up and re-assess to actually find out, mainly because we are not explicitly asked to or, in many cases, do not have the time. But are mere changes enough? According to the Weigh Pig, Feed Pig, Weigh Pig model, we must differentiate between change and improvement. This is not just a simple question of using appropriate terminology; change and improvement are not synonymous, yet we commonly assume they are when it comes to assessment.

However, actually getting to the stage where results can be effectively used to suggest and implement changes is by no means easy; it often requires a lot of work, support, negotiation, and follow-up. But even when changes are made to pedagogy, curriculum, or even the assessment process/methodology, they are just that: changes. Seldom do we actually demonstrate improvements in student learning that may (or may not) occur over time as a result of such changes.

So how do we actually know whether any of these carefully thought-out changes, supported by assessment data, actually lead to improved student performance? We must assess (weigh the pig), intervene/implement changes (feed the pig), and then re-assess at a later date (weigh the pig again); only then can we see evidence of learning improvement. Attending this ANNY workshop really brought this home to us, challenged us to rethink the difference between “changes” and “improvements,” and made us wonder whether you have faced similar experiences or considered the following reflective questions:

  • Are changes in curriculum or pedagogy always taken for granted as programmatic improvements?
  • Do you think changes naturally lead to improvements?
  • Do you always re-assess after making changes?
  • Do we always need to make sure we can evidence improvements over time?
  • Have you ever made a change, re-assessed, and then realized the change had little or even a negative effect?

As we delve deeper into longitudinal assessment and implementation practices, perhaps we should consider the Weigh Pig, Feed Pig, Weigh Pig model, and who knows… maybe we will go “hog wild” over it! 🙂

We would love to hear from you! Let us know if you have had any of these conversations on campus or reflected on the differences between “changes” and “improvements” in assessment activities. Feel free to email us at (Matthew) or (Dara).


¹ Erbacher, Monica K., Foelber, Kelly J., Harris, Heather D., Horst, S. Jeanne, & Pyburn, Elisabeth M. What Goes Around Comes Around: Continuous Improvement and Impact Measurement Using the Assessment Cycle [PowerPoint slides]. Retrieved from


Dara Wexler, Ph.D.
Matthew Gulliford