The Path to Outcomes Assessment Projects (formerly Aqua)

September 18, 2023 Jeff Reid

Last week Taskstream launched Aqua, our response to the complexities of outcomes assessment. Getting to the launch of a new product that we believe offers a simpler path to more meaningful assessment took a lot of questioning, research, and reflection. It also took a fair amount of redo along the way (the fancy word is “iteration”). It has been a lot of fun and a lot of work. We should all count ourselves lucky when the two intersect.

I would not say it has been easy by any stretch, but those of us on the product and user experience team benefit every day from three critical ingredients that make our jobs easier and increase our chances for success:

  1. We have passionate customers who are dedicated to their work and more than happy to talk to us about not just their use of Taskstream but their goals and challenges in their jobs.
  2. Inside Taskstream, our colleagues include former faculty members, assessment professionals, and, I’ll say it, assessment junkies who have been supporting the mission of higher education for years.
  3. We get to work side-by-side in our NYC office with talented engineers who are eager to move the needle in education.

Why we ultimately built Outcomes Assessment Projects (formerly Aqua) and how we did it says a lot about Taskstream. When our marketing team invited me to take over our blog for a few days to write about the work and thinking that went into the product, I jumped at the chance. OK, they asked for one post. But once I finally sat down to write, it turned out I had so much to share that I had to break it up into two posts.

In Part One, I will try to answer perhaps the biggest question:

Why did we build a new product?

Last fall, we found ourselves at Taskstream exploring opportunities to make giant leaps forward in the experiences we could provide our users. We were looking for transformative, not incremental changes – the kind of changes that could not be easily met by simply adding new features to our existing products. Given the increasing interest in meaningful outcomes assessment across campuses, we were especially interested in finding solutions that could make it easy for an institution to start quickly, yet scale broadly.

To get inside our users’ heads and frame the problem with a fresh eye, the product team took over our largest conference room and began mapping out key user workflows and supporting activities on the wall in a story map. And we reached out to assessment leaders on campuses to understand the work that was going on in their world, outside of our products. Doing this put some definition and structure around the most common activities we support for institutions and shed light on those we believed were most critical to the work of assessing student learning.


Organizing the work by actual human activities and workflows, rather than system features, can be eye-opening. We found many areas that were well covered by our current products and others that were not. What we were trying to trace in our map was the simplest path that would allow institutions to collect direct evidence of student learning outcomes.

If our research did anything, it confirmed one obvious reality: Software doesn’t do assessment. People do. And those people, faculty members and assessment coordinators, have multiple jobs and pressures upon them. They are committed to the work of collecting meaningful evidence about learning, but often face the seemingly quixotic task of engaging others in the process.

As we interviewed faculty and assessment professionals from a number of institutions, we looked for the common challenges we could best address with our software. Before defining features, we synthesized what we heard into a few principles that would guide our design work:

  1. We could help institutions build a positive assessment culture by achieving an appropriate level of faculty and student involvement. It may not be realistic to expect faculty to participate at all steps of an assessment initiative. Institutions need to engage them only where indispensable. And if faculty and students touch the system, the experience has to be remarkably simple and straightforward.
  2. The product should offer a guided path, or narrative, that reflects best practices and raises a user’s expectations of success. In our interviews, we found that most assessment efforts followed a common narrative built around a series of simple questions:
    a. What are we measuring?
    b. Where are we collecting the evidence?
    c. How are we measuring it?
    d. Who is evaluating the evidence?
    e. What did we learn?
  3. Make it easy to use. Users have a goal in mind every time they log in. Our job is to figure that goal out and remove any obstacle in their way. We should help users avoid cognitive overload and keep them from falling down “rabbit holes.”

Image 2: Sr. User Experience Lead Monste Lobos, with Product Owners Sairah Anwar, Janine Fusco, and Andrea Costa

This process produced a sort of charter for our work, but it did not tell us what to build first. Where could we quickly deliver the most value without waiting the year it might take to build a nearly “full” system?

Like most companies, Taskstream has made the shift over the years toward an agile software development methodology, which assumes you can never get it all right up front and pushes you to ship a product or feature set to customers as soon as it is “viable.” This allows for faster feedback loops and forces development teams to be fairly ruthless about prioritizing features. The main goal is to not build features users will not need or use.

We landed on a starting point: the process of scoring student work using rubrics. That may seem counter-intuitive, especially to users of our LAT product, where we reliably support millions of evaluations per year. In most other products, scoring for assessment is buried relatively deep within a course and grading workflow. But scoring is such a critical point of engagement in direct assessment that we believed it deserved our attention. Leveraging newer technologies, we were confident we could offer a great experience tailored to users who participated in scoring efforts.

The opportunity to demonstrate that we could provide a simple, intuitive scoring experience was in front of us: in February, we would have hundreds of college faculty from around the country, most of whom had never used or seen Taskstream before, log in to score dozens of student artifacts each as part of the Multi-State Collaborative to Advance Learning Outcomes Assessment (MSC).

Earlier in 2014, Taskstream was selected as the technology partner for this exciting pilot project, supporting AAC&U and SHEEO in their effort to see how institutions might scale efforts to get meaningful evidence of how students are achieving important learning outcomes. It was a project of remarkable ambition: Nearly 70 two- and four-year institutions from 9 states would collect student work for direct assessment using a selection of AAC&U’s VALUE rubrics.

We had already built a simple utility, released that fall, for MSC institutions to upload student work as they began collecting it. Given that these users would only need to score in the system, and would do so with little time for training, this felt like a perfect test for our new approach. Could we make the scoring of student work even more efficient, engaging, and failsafe?

We challenged ourselves with one very hard question: How easy can we make this?

In part two, I will try to answer that question!

