How Easy Can We Make This?

September 18, 2023 | Jeff Reid


How easy can we make this? Janine Fusco, Andrea Costa, Sairah Anwar, and Montse Lobos from the Taskstream Product and UX team digest information from user interviews.

In Part 1 of my blog post, “The Path to Outcomes Assessment Projects (formerly Aqua),” I wrote about how we approached the decision to build Outcomes Assessment Projects (formerly Aqua) by Taskstream, the product that provides a simpler path to more meaningful assessment. In Part 2 here, I’ll try to shed more light on how we worked to turn those ideas into reality in the product.

Once we made the decision to support the Multi-State Collaborative (MSC) with a new scoring experience, we challenged ourselves with the question: “How easy can we make this?” Our research and our own experience with customers were telling us that to make direct assessment scalable and sustainable, the system had to be remarkably simple for users each time they touched it.

We got to work prioritizing our requirements (we call them user stories) and long-term goals (the things we thought would be useful and cool someday), then zeroed in on the critical pieces we would need to provide a great experience for the scorers from 69 colleges and universities distributed across the country, each expected to score approximately 75 student assignments in a system they had never used before.

I had some clever ideas (or so I thought) about what the interface and experience should look like. Then Montse Lobos, our Sr. User Experience Lead, and our product team got to work upending those ideas as we conducted interviews with a set of “power” evaluators who spend the better part of their days scoring student work. We sought first to understand how they view student work and use the rubrics, but we also learned about a host of other tips, tricks, and tools they use to manage their productivity.


Early Sketches

These insights made their way rapidly into paper prototypes, which then evolved into electronic versions that we tested with more users.

By early February, just over two months later, we delivered the first iteration of the scoring application we called the “VALUE Rubric Scoring System.” The first finished product blew me away. For a product manager, the experience is like being an architect and having your designs come to life in the hands of skilled master builders (in this case, our engineering team). “Beautiful” was a comment we heard more than once from users, and it made us smile each time. But accolades for aesthetics are not meaningful success metrics. A highly usable interface can be dull (think Google and Craigslist), and beautiful screens can still send users down paths to failure. What we remain concerned with is knowing whether our users accomplished what they set out to do. Could they stay on task? Was the workflow intuitive? Was it enjoyable? Were they engaged?


Janine Fusco presenting an overview of the scoring experience during an AAC&U training event.

So, how easy did we make it?

Later in February, we were invited to join a scorer calibration and training session AAC&U hosted for Multi-State Collaborative participants in Kansas City. Most of the time was spent in groups reviewing sample student work and engaging in discussion about the three VALUE Rubrics being used: Critical Thinking, Written Communications, and Quantitative Literacy. On day two, we gave the scorers a 30-minute presentation and Q&A session about our system, after which they returned to their campuses and began scoring a few weeks later.

The feedback at the end of the scoring period was remarkably positive and gratifying. When asked to rate key tasks on a 5-point scale (very hard to very easy), the overwhelming majority of scorers (over 90%) indicated that reading, scoring, and interacting with the rubrics were “easy” or “very easy.” But it was the comments we received that told us we were on the right path to achieving our goal.

“I was impressed by the design of this process. Functions were intuitive, everything was easy to find and use. The interface made the most of screen real estate for easy reading and referencing of criteria.”

“It’s way more convenient and a lot less stressful (stacks of paper on my desk = stress). It allowed me to stay organized and keep track of my progress easily. It also saved a lot of trees! I really liked it!”

Most important of all, AAC&U and SHEEO considered the pilot a success and have been able to expand participation for the upcoming “Demonstration Year.”

Aqua emerges

The feedback we received and the mounting interest from institutions both inside and outside of the MSC helped sharpen our vision for a product that would allow institutions to make the daunting work of assessing student learning outcomes more achievable.

While we continued to collect feedback from MSC users (we surveyed everyone and conducted countless follow-up interviews), we had also begun developing the additional components we believed could help institutions scale their own assessment efforts on campus.

The scoring functionality was only one phase, or “slice” (in our geek-speak), of an assessment project lifecycle. To make this useful for campuses applying it to their own local efforts, we needed to support more of the overall workflow. We wanted to give those accountable for assessment efforts (such as assessment coordinators and directors) a similarly simple set of tools they could use to easily manage and track their initiatives. Our idea for encapsulating this workflow was to package it inside a “project” that could support a single phase or measure in a general education or program assessment plan. The project would allow them to configure the outcomes for measurement and the courses and evaluators that would participate.

We wanted to create a simple workflow to map to these key questions: “What are we measuring? Where are we collecting the evidence? How are we measuring it? Who is evaluating the evidence?”
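
To make that idea a little more concrete, here is a minimal sketch in TypeScript of how a project could bundle the answers to those four questions. It is purely illustrative; the names and structure are simplified assumptions for this post, not the product’s actual data model.

```typescript
// A simplified, illustrative sketch of the "project" concept.
// The names and types here are assumptions, not the product's real data model.

interface AssessmentProject {
  name: string;         // one phase or measure in a gen ed or program assessment plan
  outcomes: string[];   // What are we measuring?
  courses: string[];    // Where are we collecting the evidence?
  rubrics: string[];    // How are we measuring it?
  evaluators: string[]; // Who is evaluating the evidence?
}

// Example: a single measure in a general education assessment plan.
const project: AssessmentProject = {
  name: "Gen Ed Critical Thinking, Spring Measure",
  outcomes: ["Critical Thinking"],
  courses: ["PHIL 101", "BIO 210"],
  rubrics: ["VALUE Critical Thinking Rubric"],
  evaluators: ["evaluator-1@example.edu", "evaluator-2@example.edu"],
};
```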


The work of managing evidence (collecting the assignments and work artifacts) had to be not just simpler for assessment coordinators and faculty, but also more connected to the context of the project.

We had a great foundation for the scoring experience, but we iterated on it to support additional use cases, like scoring an assignment with more than one rubric and scoring multiple files submitted for a single assignment.


All of those phases – the project setup, the collection of work and the scoring – are critical points of engagement, but are ultimately means to an end: steps to get to meaningful data about student learning outcomes.


So we sought to remove the friction at each of those steps and offer a payoff at the end, in the form of intuitive, interactive reports about student performance.


What comes next?

The work to get to the first release of Outcomes Assessment Projects (formerly Aqua) spanned several cycles of designing, testing our designs with users, and then rapidly iterating on those designs. If you saw some of our prototypes earlier in the summer (we offered a sneak peek at our CollabEx Live user conference), you know that the product we released looks radically different than it did even a few months ago. That’s because we continued to ask ourselves, “How easy can we make this?”

As more institutions adopt Outcomes Assessment Projects (formerly Aqua), and we get more feedback from our clients, we will keep asking that question and try to find innovative ways to make assessment easier to scale and sustain.

