This provocative statement anchored our panel discussion on institutional strategy featuring Dr. George Still, Associate Vice President of Institutional Effectiveness at Fresno State; Dr. Matthew Winn, Senior Analyst at The Tambellini Group; and Dr. Leeshawn Cradoc Moore, Director of Institutional Research and Assessment and ALO at Pitzer College, moderated by Dr. Glenn Phillips, Senior Insights Consultant at Watermark. The takeaway was clear: Assessment should not be a post-mortem performed to explain why a project failed or succeeded. Instead, it must be a vital sign, a real-time indicator that actively shapes the next move.
Too often, assessment lives inside accreditation cycles instead of everyday decision-making. Strategic planning processes operate separately from outcomes reporting. Institutions often say they want to be data-informed, yet the structures that shape culture and governance lag behind. The panel made a clear case that institutional effectiveness is not about producing more reports. It is about sustainable alignment across strategy, evidence, and action. For leaders looking to implement high-quality education improvement strategies, this shift is both necessary and urgent.

From compliance to continuous improvement
For many institutions, assessment still feels like a compliance exercise.
Faculty and administrators gather evidence, compile documentation, and prepare reports primarily to satisfy accreditation requirements. While these processes support self-study and continuous improvement in higher education accreditation, they often occur only during major review cycles.
That timing creates a problem.
When institutions wait until year-end reviews or accreditation milestones to examine outcomes, opportunities for improvement are already lost. Data becomes historical documentation rather than a tool for mid-course correction.
Assessment should function as an ongoing feedback system that helps institutions adjust strategy while programs are still unfolding. When outcomes data is reviewed regularly, institutions can identify emerging issues earlier, refine programs faster, and better support students in real time.
The discussion also emphasized that relying on a single metric, such as retention, limits institutional insight. Retention remains important, but it captures only part of the student experience.
A broader approach to outcome-based strategy considers multiple dimensions of success, including:
- Classroom learning outcomes
- Co-curricular engagement
- Skill development and workforce readiness
- Program alignment with institutional goals
True progress requires moving from episodic reporting to high-quality education improvement strategies that embed assessment into ongoing review cycles. This isn't just an operational change; it is a philosophical one. It's the difference between collecting evidence to satisfy a requirement and using evidence to guide a vision.

Culture is the hardest (and most critical) shift
While tools and frameworks matter, the panel emphasized that the biggest barrier to change is not technology. It is perception.
In many institutions, faculty feel they are being asked to supply data rather than participate in shaping strategy. When assessment feels disconnected from teaching, learning, or institutional priorities, engagement naturally declines.
Building a culture of evidence requires addressing this disconnect.
Leaders must clearly communicate why outcomes data matters and how it informs real institutional decisions. Many institutions are now exploring how to build a culture of assessment in higher education to better connect teaching, evidence, and strategic planning.
The panel referenced a common framework for change: the 30–50–20 model.
- 30% are early adopters, ready to innovate.
- 50% are the “persuadables,” waiting for proof of value.
- 20% are the skeptics, often due to “initiative fatigue.”
This dynamic reinforces why culture change cannot rely on one-time initiatives. It requires sustained conversation, visible leadership commitment, and consistent follow-through.
Equally important is what panelists described as relational capital.
Before institutions can have productive conversations about difficult data, such as program outcomes or performance gaps, leaders must build trust across departments and governance groups. Without that trust, evidence may trigger defensiveness rather than progress.
Transparency is the currency here. When leadership collects data but fails to act on it, credibility erodes quickly. People disengage if evidence appears to disappear into reports without influencing decisions. In contrast, when leaders demonstrate that data leads to tangible improvements, participation grows. Over time, that pattern lays the foundation for a continuous improvement culture in higher education.
Technology enables alignment — it doesn’t create it
The conversation also highlighted the rapid expansion of data infrastructure across higher education.
Institutions are investing in dashboards, analytics platforms, data lakes, and near-real-time reporting systems. These tools promise greater visibility into institutional performance, but technology alone cannot deliver strategic alignment.
Panelists emphasized a critical shift in how institutions think about data access.
In the past, many organizations focused on simply providing raw data to stakeholders. Today, the goal is to design insights around specific decision points. Instead of overwhelming users with information, institutions are beginning to structure data around questions leaders actually need to answer. This approach supports strategic planning in higher education by connecting evidence directly to planning processes.
At the same time, many institutions face another challenge: system fragmentation.
Over time, different departments adopt specialized tools to meet local needs. The result is a growing patchwork of platforms that do not communicate easily with one another. This fragmentation increases operational friction and makes it harder to connect planning, assessment, and outcomes reporting. Modernization must harmonize four critical pillars: People, Processes, Data, and Technology.
Technology should strengthen collaboration and shared governance rather than simply automate compliance tasks.
When systems are connected and aligned, institutions gain the ability to link planning priorities directly to outcomes data. That connection is essential for sustaining institutional effectiveness over time.
Data as a strategic lever, not just a reporting mechanism
The panel concluded with examples of how institutions are using evidence to guide forward-looking decisions.
In several cases, leaders used assessment data to realign academic tracks with updated learning outcomes. In others, labor market insights helped institutions launch new programs or reshape existing ones to better serve regional needs.
These examples highlight the growing importance of outcome-based strategy.
Enrollment shifts, demographic changes, and workforce transformation are forcing institutions to evaluate program viability more carefully than ever. Evidence helps leaders understand which programs support long-term institutional goals and which may require redesign.
At the same time, institutions must balance market demand with their core mission, which panelists described as the organization’s “DNA.”
The conversation also acknowledged the growing influence of artificial intelligence on learning environments. As AI reshapes expectations around knowledge, assessment practices will continue to evolve. Institutions will need to rethink how learning is measured and how skills are demonstrated.
Another key lesson emerged during this portion of the discussion: data should not be used to confirm decisions that have already been made.
Confirmation bias undermines the entire purpose of evidence-based decision-making. Instead, data should challenge assumptions and open new strategic possibilities. Even well-informed decisions sometimes fail. What matters is the ability to learn quickly, adapt strategy, and move forward.
In this sense, evidence becomes most powerful when it supports courageous, forward-looking leadership.
Sustaining a culture of evidence
The Tambellini panel made one idea clear: building a strategy-led institution requires more than improved reporting.
It requires sustained alignment between evidence, planning, and action.
When institutions embed assessment into everyday decision-making, they transform it from a compliance task into a core driver of improvement. The institutions leading this shift ensure that:
- Evidence informs decisions at every level.
- Planning and assessment operate as a connected system.
- Leadership demonstrates visible follow-through on insights.
Together, these practices support the creation of a continuous improvement culture in higher education, one in which outcomes inform strategy and strategy drives measurable impact.
Ultimately, the goal of assessment is not simply to document results. It is to guide better decisions about what comes next. When institutions fully embrace that principle, institutional effectiveness becomes more than a reporting function. It becomes a strategic capability that supports sustainable progress and lasting student impact.