Blog: Idea Exchange
Using Taskstream by Watermark to Help Faculty Close the Loop on Assessment
Colleges and universities need to know, more than ever before in the history of the academy, whether students are learning what the faculty intend them to learn. With increased competition among institutions for the best students, as well as the expectation that graduates are work ready, the accountability movement is stronger than ever (National Institute for Learning Outcomes Assessment, 2016). The purpose of outcomes assessment is to do just that – give faculty the tools they need to understand student learning. When done correctly and regularly, assessment reveals much about student learning, along with clues about what to change in either what is taught or how it is taught. While in 2017 many college and university faculty around the country do take a genuine interest in improving teaching and learning, it has become increasingly evident that our institutions need to take a more proactive role in providing faculty opportunities to be professionally self-reflective.
The Assessment Learning Cycle above has long been accepted as a standard within the academy. It is important to note that the assessment process is uniform – identical – across all programs (Banta, 2002; Suskie, 2004). The key here is that faculty must articulate the best possible program-level outcomes and take steps to measure them. The entire process is an opportunity for faculty to be "professionally self-reflective." The focus of this blog is step 4 of that cycle – "Redesign program to improve learning." Traditionally, this step receives the least attention from faculty, yet it should receive the most, as it embodies the essence of outcomes assessment. The objective of outcomes assessment is continuous improvement – the data-driven exercise of observing student performance and making adjustments to either what is taught or how that material is taught.
In general, faculty spend much of their time and energy on the front end of assessment: articulating program-level outcomes, identifying measurable criteria, and collecting data. However, as we will point out, there is still a level of reticence among faculty to "air their dirty laundry" by identifying what in their program is not working well and may need adjustment. This reticence is understandable given that for nearly two millennia the academy was not required to substantiate the education it delivered or to engage in institutional effectiveness practices. The present authors submit that this history is likely the single greatest impediment to faculty willingness to engage in effective continuous improvement activities. Moreover, as we begin a deliberate adjustment of campus culture that encourages faculty professional self-reflection, we should see substantial changes in how outcomes assessment unfolds.
Campus Culture and Outcomes Assessment
For as much as we have been discussing and debating outcomes assessment in higher education since the 1980s, we still have quite a distance to go before it is universally addressed at colleges and universities around the country. Faculty at many institutions still view the process as extra work, often done with angst and little thought toward genuine continuous improvement. We use the analogy of a bus: all of the seats in the bus have been reserved for select passengers – long-standing institutional functions such as syllabus construction, course assignment development, grade submission, financial aid distribution, and student registration, to name but a few. However, because outcomes assessment is a newcomer on the scene, it has yet to acquire a seat on the bus; that function has been hanging on the back bumper on roller blades. Part of the reason is that the culture of higher education has yet to incorporate effectiveness practices as a featured and automatic function within the academy. There is a need to create a seat on the bus for outcomes assessment. How can we begin to re-engineer the culture of our campuses to best accommodate this crucial function?
The initial element that needs adjustment is the cultural climate on campus toward assessment and toward identifying what needs to change based on assessment data. Faculty must feel comfortable, collectively, putting forward evidence that either their curriculum or their pedagogy should be adjusted. They must feel that they will not be penalized for unmet assessment targets or for anything else that points to the need for change. Faculty must (1) see that their assessment efforts are fruitful and that students benefit from them, and (2) know that their administration will not sanction them for missed assessment targets. Indeed, a keen focus on student learning outcomes, along with strategies to improve instruction and educational experiences, should be cause for celebration. For this to come to pass, the sentiment and attitude toward continuous improvement among faculty and administration must be addressed. This is where the cultural shift must take place – our institutions will do well to become more transparent and imbued with a fresh sense of continuous improvement, as well as "appreciative inquiry" (Cooperrider & Srivastva, 1987).
Using Taskstream by Watermark to Enhance Institutional Transparency, Continuous Improvement & Appreciative Inquiry
The use of a good assessment management system, such as Taskstream by Watermark, makes the assessment process more logical and manageable, while also promoting transparency, continuous improvement, and appreciative inquiry. In particular, the Accountability Management System (AMS) is designed to promote transparency by recording faculty responses to collected assessment data (evaluations of student work). As can be seen in the screen capture below, the "Assessment Findings" template is well designed for recording course-level assessment. Indeed, the majority of assessment findings are based on data gathered at the course level. Many faculty members find the layout of the AMS intuitive, which helps them better organize their thoughts about assessment results. As seen in this screen capture for the Master of Science in Criminal Justice program, faculty are able to easily reflect on the results of the assessment relevant to computer applications in crime analysis. The "Recommendations" and "Reflections/Notes" fields are ideal for recording the implications of the assessment at both the course and program level. As will be seen, the "Operational Plan" is best equipped to record the implications of the assessment at the program level.
As can be seen in the screenshot below, the "Operational Plan" template is ideal for recording program-level assessment. The "action details" and "measures" fields make recording program-level implications simple and on point. The layout of the Operational Plan lends itself well to a more holistic look at the assessment and how it informs decisions and changes at the program level. Continuous improvement is embedded in the assessment process: faculty must plan and implement change when necessary, gather feedback following the change, and continue to make further adjustments if needed to ensure student mastery of the assigned program learning outcomes (Aggarwal & Lynn, 2012). Through transparency and continuous improvement, ideally, a cultural shift can occur that takes assessment even further as faculty engage in appreciative inquiry. A culture that embraces appreciative inquiry regularly defines what the focus of the assessment process will be, discovers what has worked, dreams of what could be in light of past achievements and successes, designs the change, and delivers the change (Lehner & Ruona, 2004). The AMS is designed as a simple yet pragmatic template for recording data interpretation, as well as other qualitative observations, toward the achievement of student mastery.
The Qualitative Part of the Assessment Process
As part of efforts to adjust campus culture to help faculty more fully embrace continuous improvement of their academic programs, an informal campaign was launched at a small, private, teaching university in the Midwest. Faculty were asked to reflect on their programs on a purely qualitative level. Full-time faculty, as well as adjuncts, were asked to prepare brief statements that addressed (1) what worked well in their courses, (2) what did not work as well, and (3) what changes they would make to those courses if they taught them again in the future. The design here was simple and straightforward – encourage all faculty to reflect on the programs within which they teach and begin to establish a more holistic perspective of what the program of study is intended to do.
Faculty were reminded that the data collected in assessed courses is only part of what they should consider when specifying changes to instruction and other educational experiences. As a general rule, faculty do well to consider their own observations, effective communication given the modality (seated, hybrid, or online), and how students are putting it all together (moving up Bloom's Taxonomy). This qualitative approach encouraged faculty to take a good, critical look at their programs: Does the design of the program enhance student learning so that students will be proficient at the baccalaureate or graduate level? How does the program unfold for students on the whole? Are course sequences logical? Are knowledge and skill development properly "ramped up" so that students come out of the program able to display the mastery of knowledge, skills, and abilities necessary to be work ready? Faculty were also encouraged to take a "deeper dive" into the assessment process, looking at how students are performing at the course level, from the introduction of program learning outcomes through the display of mastery. Where students were not performing at the preferred level, faculty were encouraged to collaborate with colleagues to make adjustments to the curriculum. Emphasis was also placed on the significance of the continual assessment cycle, always assessing components both internal and external to the university to ensure that students are prepared effectively upon graduation. Well-vetted and well-developed academic programs are much like living organisms: they walk students through information and experiences that ultimately qualify them as appropriately educated at the baccalaureate or graduate level.
This blog contribution considered the current state of outcomes assessment in American higher education. Regional accrediting commissions are keenly focused on faculty actively and regularly "closing the loop" on outcomes assessment. Due to the history of higher education, as well as the culture of our campuses, faculty have been reticent to make transparent the changes needed to update and improve their programs. We would do well to address this campus culture and encourage faculty to be openly and professionally self-reflective. The AMS is uniquely designed to capture these important faculty reflections on assessment data, as well as qualitative observations of their academic programs.
References
Aggarwal, A. K., & Lynn, S. A. (2012). Using continuous improvement to enhance an online course. Decision Sciences Journal of Innovative Education, 10(1), 25-48. doi:10.1111/j.1540-4609.2011.0033x
Banta, T. W., & Associates. (2002). Building a scholarship of assessment. San Francisco, CA: Jossey-Bass.
Cooperrider, D. L., & Srivastva, S. (1987). Appreciative inquiry in organizational life. In R. W. Woodman & W. A. Pasmore (Eds.), Research in organizational change and development (Vol. 1, pp. 129-169). Stamford, CT: JAI Press.
Lehner, R., & Ruona, W. (2004, March). Using appreciative inquiry to build and enhance a learning culture. Paper presented at the Academy of Human Resource Development International Conference, Austin, TX.
National Institute for Learning Outcomes Assessment. (2016). Higher education quality: Why documenting learning matters. A policy statement from the National Institute for Learning Outcomes Assessment. Retrieved from http://files.eric.ed.gov/fulltext/ED567116.pdf
Suskie, L. (2004). Assessing student learning: A common sense guide. Bolton, MA: Anker Publishing.