The Push for Greater Assessment
Throughout most of America’s history, few people questioned the value of a college education. Indeed, the premise that professors were fulfilling higher education’s promise of enabling learning went pretty much unchallenged until the mid-1980s, when intense reexamination of the quality of teaching and learning at all levels of education revealed that there were gaps – sometimes considerable ones – between what was thought to have been taught and what was actually learned.
The decades that followed were rich with attempts to close that gap, as public and political entities demanded that colleges and universities increase and demonstrate their effectiveness. The proliferation of campus Teaching and Learning Centers, the increased focus on high-quality teaching in hiring, tenure, and promotion policies, the attention to monitoring learning through a "culture of evidence," and the establishment and expansion of a new research discipline called the "Scholarship of Teaching and Learning" (SoTL) are some of the major indicators of the academy's push for improved teaching and learning.
Along with these efforts to improve learning have come increased efforts to measure it. Institutions of higher education have added offices of assessment, assessment specialists, and assessment committees, all engaged in the work of articulating what students should learn and providing evidence of whether they have in fact done so.
What About Faculty?
My colleague Elizabeth Barkley and I argue that faculty should be included in this list of individuals. That is, faculty are integral to the process of identifying learning goals and outcomes and determining whether students in their courses have achieved them. Faculty can understand nuances in data and provide an insider perspective about what is going on in a given course that can complement perspectives of those examining data outside of the course. And we believe that they should do so through a seamless process of identifying learning goals, identifying a learning activity to accomplish those goals, and identifying a way to determine whether those goals have been achieved. That is, when faculty do course-level assessment well, students should not be able to tell whether they are being taught or assessed. In this way, then, faculty members can gather course-level data that can inform and contribute to departmental and institutional assessment efforts.
The challenge with this argument is that while enabling learning is one of our primary jobs as college faculty, few of us have had formal instruction on how to do it well; even fewer of us have had training on how to provide evidence of what students are learning in ways that are acceptable to external stakeholders. That's why Elizabeth and I wrote Learning Assessment Techniques: A Handbook for College Faculty (published with Jossey-Bass). We wanted to support our fellow college professors who strive to be excellent teachers and who need strategies for gauging and reporting learning results in their classrooms not only to students, but to a variety of other interested parties ranging from hiring, tenure, and promotion committees to department chairs, assessment professionals, and committees gathering data for institutional assessment initiatives.
To provide this support, we have developed a three-stage process to guide course-level learning assessment. We have also identified 50 active learning techniques that involve the creation of assessable learning artifacts. For example, the following techniques illustrate the connection between teaching, learning, and production of data that stand as evidence of learning:
- Guided Reading Notes – LAT 4: Students receive a copy of notes summarizing content from an upcoming assigned reading, but with key information left blank. As students read, they supply the missing content to create a complete set of notes that may be used as a study guide. Students must therefore take an active stance toward reading, which also produces a tangible product of their effort.
- Digital Story – LAT 36: Digital storytelling at its most fundamental level is the practice of using computer-based tools, such as video, audio, graphics, and Web publishing, to tell stories. The stories may be personal or academic, but for either focus, students share relevant life experiences as they attempt to connect to an audience about a given issue, which in turn leaves the story as an assessable learning artifact.
- Contemporary Issues Journal – LAT 24: Students look for recent events or developments in the real world that relate to their coursework readings and assignments, then analyze these current affairs to identify connections to course material, recording their analyses as entries in a journal, which in turn may be assessed.
- Personal Learning Environment (PLE) – LAT 50: A PLE is a set of people and digital resources an individual can access for the specific purpose of learning. Students illustrate these potential connections by mapping the set as a visual network, in which nodes represent the resources and ties suggest the relationships between them. A PLE is thus a visual representation of a learner's informal learning processes, and a concrete demonstration of an individual's capacity for future learning.
For each of these techniques, we provide specific information about how to implement it, how to collect the learning artifacts, how to analyze the data, and how to report the results.
On April 26, 2016 at 2:00pm EDT, I will be discussing how to use Learning Assessment Techniques to support significant learning and to gather course-level evidence of student learning. I hope you will join me for this webinar.