The Problem with Aiming for a Culture of Assessment

In recent years I’ve heard a lot of higher ed talking heads imploring colleges and universities to adopt a “culture of assessment.” As far as I can tell (at least from a couple of quick Google searches), the phrase has been around for almost two decades and varies considerably in what it actually means. Some folks seem to think it describes a place where everyone uses evidence (some folks use the more slippery term “facts”) to make decisions, while others seem to think that a culture of assessment describes a place where everyone measures everything all the time.

There is a pretty entertaining children’s book called Magnus Maximus, A Marvelous Measurer that tells the story of a guy who gets so caught up measuring everything that he ultimately misses the most important stuff in life. In the end he learns “that the best things in life are not meant to be measured, but treasured.” While there are some pretty compelling reasons to think twice about the book’s supposed life lesson (although I dare anyone to float even the most concise post-modern pushback to a five-year-old at bedtime and see how that goes), the book delightfully illustrates the absurdity of spending one’s whole life focused on measuring if the sole purpose of that endeavor is merely measuring.

In the world of assessment in higher education, I fear that we have made the very mistake that we often tell others they shouldn’t make by confusing the ultimate goal of improvement with the act of measuring. The goal – or “intended outcome” if you want to use the eternally awkward assessment parlance – is that we actually get better at educating every one of our students so that they are more likely to thrive in whatever they choose to do after college. Even in the language of those who argue that assessment is primarily needed to validate that higher education institutions are worth the money (be it public or private money), there is always a final suggestion that institutions will use whatever data they gather to get better somehow. Of course, the “getting better” part seems to always be mysteriously left to someone else. Measuring, in any of its forms, is almost useless if that is where most or all of the time and money is invested. If you don’t believe me, just head on down to your local Institutional Research Office and ask to see all of the dusty three-ring binders of survey reports and data books from the last two decades. If they aren’t stacked on a high shelf, they’re probably in a remote storage room somewhere.

Measuring is only one ingredient of the recipe that gets us to improvement. In fact, given the myriad of moving parts that educators routinely deal with (only some of which educators and institutions can actually control), I’m not sure that robust measuring is even the most important ingredient. An institution has no more achieved improvement just because it measures things than a chef has baked a cake by throwing a bag of flour in an oven (yes, I know there are such things as flourless tortes … that is kind of my point). Without cultivating and sustaining an organizational culture that genuinely values and prioritizes improvement, measurement is just another thing that we do.

Genuinely valuing improvement means explicitly dedicating the time and space to think through any evidence of mission fulfillment (be it gains on learning outcomes, participation in experiences that should lead to learning outcomes, or the degree to which students’ experiences are thoughtfully integrated toward a realistic whole), rewarding the effort to improve regardless of success or failure, and perpetuating an environment in which everyone cares enough to continually seek out things that might be done just a little bit better.

Peter Drucker is purported to have said that “culture eats strategy for lunch.” Other strategic planning gurus talk about the differences between strategy and tactics. If we want our institutions to actually improve and continually demonstrate that, no matter how much the world changes, we can prepare our students to take adult life by the horns and thrive no matter what they choose to do, then we can’t let ourselves mistakenly think that maniacal measurement magically perpetuates a culture of anything. If anything, we are likely to just make a lot more work for quantitative geeks (like me) while excluding those who aren’t convinced that statistical analysis is the best way to get at “truth.” And we definitely will continue to tie ourselves into all sorts of knots if we pursue a culture of assessment instead of a culture of improvement.

Make it a good day,

Mark

 

3 thoughts on “The Problem with Aiming for a Culture of Assessment”

  1. John Nugent says:

The new book Using Evidence of Student Learning to Improve Higher Education does a decent job of framing assessment work as something that has to be done with an eye firmly on using the results. To that end, they write, faculty have to be deeply involved at every stage, and it has to be motivated by internal goals and interests rather than compliance with external entities (accreditors, etc.). It’s a pretty decent read.

    JN

  2. PM says:

What’s interesting is how few people have commented on this reflection in the last 2 years. Unfortunately this provides a sad testimony to the way educators have drunk the Kool-Aid prepared by what can only be described as the “artificial climate” of assessment (I don’t even want to dignify the movement by using the term “culture”). This bizarre obsession with trying to atomistically quantify learning is nothing more than a bureaucratic trope. Not only do we lose sight of the ultimate goal and mistake the means for the end, but by embracing this assessment ideology, we destroy the authentic, existential relationship between the teacher and student, as well as eliminate any possibility of spontaneity, creativity, free thinking, or genuine learning. The last in this list I loosely define as acquiring the rational ability to see the simple universal among the almost infinite particulars, allowing for a true architectural understanding of the classical university disciplines and the knowledge within their respective provinces. For centuries this criterion of universality has been the determining factor in establishing the ordo cognoscendi. Rather than see the correspondence between universality and scientific value, the universities have become a bastion of intellectual particularism. This paradigm shift is nowhere more visible than in the trade school quality that universities have adopted over the past few decades; and this particularism plays right into the wheelhouse of the so-called assessment culture.
