Many of my insecurities emerge from a very basic fear of being wrong. Worse still, my brain takes it one step further, playing this fear out through that infamous, squirm-inducing dream in which I am giving a public presentation somewhere, only to discover in the middle of it that my pants lie in a heap around my ankles. But in my dream, instead of acknowledging my “problem,” buckling up, and soldiering on, I inexplicably decide that if I just pretend not to notice anything unusual, then no one in the audience will notice either. Let’s just say that this approach doesn’t work out so well.
It’s pretty hard to miss how ridiculous this level of cognitive contortionism sounds. Yet this kind of foolishness isn’t the exclusive province of socially awkward bloggers like me. In the world of higher education we sometimes hold obviously contradictory positions in plain view, trumpeting head-scratching non sequiturs with a straight face. Although this exercise might convince many, including ourselves, that we are holding ourselves accountable to our many stakeholders, we actually make it harder to meaningfully improve because we don’t test the underlying assumptions that set the stage for these moments of cognitive dissonance. So I’d like to wrestle with one of these “conundrums” this week: the ubiquitous practice of benchmarking in the context of our collective uncertainty about the quality of higher education – admitting that I may well be the one who ends up pinned to the mat crying “uncle.”
It’s hard to find a self-respecting college these days that hasn’t already embedded the phrase “peer and aspirant groups” deep into its lexicon of administrator-speak. This phrase refers to the practice of benchmarking – a process to support internal assessment and strategic planning that was transplanted from the world of business several decades ago. In benchmarking, an institution uses two groups of other institutions to assess its own success and growth. It starts by choosing a set of metrics to identify two groups of colleges: a set of schools that are largely similar at present (peers) and a set of schools that represent a higher tier of colleges toward which it might strive (aspirants). The institution then uses these two groups as yardsticks to assess its efforts toward:
- improved efficiency (i.e., outperforming similarly situated peers on a given metric), or
- increased effectiveness (i.e., equaling or surpassing a marker already attained by colleges at the higher tier to which the institution aspires).
Sometimes this practice is useful, especially in setting goals for statistics like retention rates, graduation rates, or a variety of operational measures. However, sometimes this exercise can unintentionally devolve into a practice of gaming, in which comparisons with the identified peer group too easily shine a favorable light on the home institution, while comparisons with the aspirant group are too often interpreted as evidence of how much the institution has accomplished in spite of its limitations. Nonetheless, this practice seems to be largely accepted as a legitimate way of quantifying quality. So in the end, our “go-to” way of demonstrating value and a commitment to quality is inescapably tethered to how we compare ourselves to other colleges.
At first, this seems like an entirely reasonable way to assess quality. But it depends on one fundamental assumption: the idea that, on average, colleges are pretty good at what they do. Unfortunately, the last decade of research on the relative effectiveness of higher education suggests that, at the very least, the educational quality of colleges and universities is uneven, or at worst, that the entire endeavor is a fantastically profitable house of cards.
No matter which position one takes, it seems extraordinarily difficult to assert that the quality of any given institution is somewhere between unknown and dicey while simultaneously using a group of institutions – most of which we know very little about beyond some cursory, surface-level statistics – as a basis for determining one’s own value. It’s sort of like the sixth-grade boy who justifies his messy room by suggesting that it’s cleaner than all of his friends’ rooms.
My point is not to suggest that benchmarking is never useful or that higher education is not in need of improvement. Rather, I think that we have to be careful about how we choose to measure our success. We need to be much more willing to step forward and spell out what we think success should look like, regardless of what other institutions are doing or not doing. In my mind, this means starting by selecting a set of intended outcomes, defining clearly what success will look like, and then building the rest of what we do in a purposeful way around achieving those outcomes. Not only does this give us a clear direction that can be simply described to people within and beyond our own colleges, but it also gives us all the pieces necessary to build a vibrant feedback loop for assessing and improving our efforts and our progress.
I fully understand the allure of “best practices” – the idea that we can do anything well simply by figuring out who has already done it well and then copying what they do. But I’ve often seen the best of best practices quickly turn into worst practices when plucked out of one setting and dropped wholesale into a different institutional culture. Maybe we’d be better off paying less attention to what everyone else does and concentrating instead on designing a learning environment that starts with the end in mind and uses all that we already know about college student development, effective teaching, and how people learn. It might look a lot different from the way we do things now. Or it might not look all that different, despite being substantially more effective. I don’t know for sure. But it’s got to be more effective than talking, albeit eloquently, out of both sides of our mouths.
Make it a good day,
Mark