“We all want to belong, yeah …”

I just watched a wonderful TEDx talk by Terrell Strayhorn, Professor of Higher Education at (the) Ohio State University, called “Inalienable Rights: Life, Liberty, and the Pursuit of Belonging.” With enviable ease, Dr. Strayhorn walks his audience through the various factors that impede college persistence and demonstrates why a sense of belonging is so important for student success. He concludes his talk with his remarkably smooth singing voice, crooning, “We all want to belong, yeah . . .”

If you've been following my blog over the last year, you've seen me return to our student data that reveals troubling differences in sense of belonging on campus across various racial and ethnic groups. The growing body of research on belongingness and social identity theory continues to demonstrate that the factors that shape a sense of belonging are extensive. While these complicated findings might gratify the social scientist in me, the optimistic activist part of me has continued to beg for more concrete solutions: things that individuals within a community can do right away to strengthen a sense of membership for anyone in the group who might not be so sure that they belong.

So here are a couple of ideas that poured some of the best kind of fuel onto my fire over the weekend: Micro-Kindness and Micro-Affirmations. Both terms refer to a wonderfully simple yet powerful idea. In essence, both concepts recognize that we live in an imperfect world rife with imperfect interactions and, if we want the community in which we exist to be better than it is (no matter how good or bad it is at present), then individual members of that community have to take action to change it. Applied to the ongoing discussion of microaggressions and their potential impact on individuals within a community (particularly those from traditionally marginalized groups), both ideas assert that there are things that we can do to emphasize to others that we welcome them into our community and to reduce the occurrence of microaggressions. These actions can be as simple as opening a door for someone and smiling at them, making eye contact and saying hello, or engaging in brief but inclusive conversation. Instructors can have a powerful micro-affirmative impact simply by taking the time to tell a student who might be hesitant or struggling that they can succeed in the class.

Researchers at the Higher Education Research Institute at UCLA have found that validating experiences, much like the micro-kindnesses and micro-affirmations described above, appear to have a significant impact in reducing perceptions of discrimination and bias. In fact, after accounting for the negative impact of discrimination and bias on a sense of belonging, interpersonal validations generated by far the largest positive effect on a sense of belonging.

Research on the biggest mistakes that people make in trying to change behavior has found that trying to eliminate bad behaviors is much less effective than instituting new ones. Since microaggressions are often perceived in situations where no slight was intended, eradicating everything that might be perceived as a slight or snub seems almost impossible. But if each of us were to make the effort to enact a micro-kindness or a micro-affirmation several times each day, we might set in motion a change in which we

  1. substantially improve upon the community norms within which microaggressions might occur, and
  2. significantly increase a sense of belonging among those most likely to feel like outsiders.

Make it a good day,

Mark

 

Applying a Story Spine to Guide Assessment

As much as I love my assessment compadres, sometimes I worry that the language we use to describe the process of continual improvement sounds pretty stiff. "Closing the loop" sounds too much like teaching a four-year-old to tie his shoe. Over the years I've learned enough about my own social science academic nerdiness to envy those who see the world through an entirely foreign lens. So when I stumbled upon a simple framework for telling a story called a "Story Spine," it struck me that this framework might spell out the fundamental pieces of assessment in a way that just makes much more sense.

The Story Spine idea can be found in a lot of places on the internet (e.g., Pixar and storytelling), but I found out about it through the world of improv. At its core, the idea is to help improvisers go into a scene with a shared understanding of how a story works so that, no matter what sort of craziness they discover in the course of their improvising, they know that they are all playing out the same meta-narrative.

Simply put, the Story Spine divides a story into a series of sections, each starting with one of the following phrases. As you can tell, almost every story you might think of would fit into this framework.

Once upon a time . . .

And every day . . .

Until one day . . .

Because of that . . .

Because of that . . .

Until finally . . .

And ever since then . . .

These section prompts can also fit into four parts of a cycle that represent the transition from an existing state of balance (“once upon a time” and “every day”), encountering a disruption of the existing balance (“until one day”), through a quest for resolution (“because of that,” “because of that,” and “until finally”), and into a new state of balance (“and ever since then”).

To me, this framework sounds a lot like the assessment loop that is so often trotted out to convey how an individual or an organization engages assessment practices to improve quality. In the assessment loop, we are directed to “ask questions,” “gather evidence,” “analyze evidence,” and “use results.” But to be honest, I like the Story Spine a lot better. Aside from being pretty geeky, the assessment loop starts with a vague implication that trouble exists below the surface and without our knowledge. This might be true, but it isn’t particularly comforting. Furthermore, the assessment loop doesn’t seem to leave enough room for all of the forces that can swoop in and affect our work despite our best intentions. There is a subtle implication that educating is like some sort of assembly line that should work with scientific precision. Finally, the assessment loop usually ends with “using the results” or, at its most complex, some version of “testing the impact of something we’ve added to the mix as a result of our analysis of the evidence.” But in the real world, we are often faced with finding a way to adjust to a new normal – another way of saying that entering a new state of balance is as much a function of our own adjustment as it is the impact of our interventions.

So if you've ever wondered whether there's a better way to convey how we live out an ideal of continual improvement, maybe the Story Spine works better. And maybe if we were to orient ourselves toward the future by thinking of the Story Spine as a map for what we will encounter and how we ought to be ready to respond, maybe – just maybe – we will be better able to manage our way through our own stories.

Make it a good day,

Mark

Some comfort thoughts about mapping

I hope you are enjoying the bright sunshine today.  Seeing that we might crack the 70 degree mark by the end of the week makes the sun that much more invigorating!

As you almost certainly know by now, we have been focusing on responding to the suggestions raised in the Higher Learning Commission accreditation report regarding programmatic assessment. The first step in that response has been to gather curricular and learning outcome maps for every major.

So far, we have 32 out of 45 major-to-college outcomes maps and 14 out of 45 courses-to-major outcomes maps.  Look at it as good or look at it as bad – at least we are making progress, and we’ve still got a couple weeks to go before I need to have collected them all. More importantly, I’ve been encouraged by the genuine effort that everyone has made to tackle this task. So thank you to everyone.

Yet as I’ve spoken with many of you, two themes have arisen repeatedly that might be worth sharing across the college and reframing just a bit.

First, many of you have expressed concern that these maps are going to be turned into sticks that are used to poke you or your department later. Second, almost everyone has worried about the inevitable gap between the ideal student’s progress through a major and the often less-ideal realities of the way that different students enter and progress through the major.

To both of those concerns, I'd like to suggest that you think of these maps as perpetually working documents instead of some sort of contract that cannot be changed. The purpose of drawing out these maps is to make the implicit explicit, but only as a starting point from which your program will constantly evolve. You'll change things as your students change, as your instructional expertise changes, and as the future for which your program prepares students changes. In fact, probably the worst thing that could happen is a major that never changes anything no matter what changes around it.

The goal at this point isn't to produce an unimprovable map. Instead, the goal is to put together a map that is your best estimate of what you and your colleagues are trying to do right now. From there, you'll have a shared starting point that will make it a lot easier to identify and implement adjustments that will in turn produce tangible improvement.

So don’t spend too much time on your first draft. Just get something on paper (or pixels) that honestly represents what you are trying to do and send it to me using the templates I’ve already shared with everyone. Then expect that down the road you’ll decide to make a change and produce a second draft. And so on, and so on. It really is that simple.

Make it a good day,

Mark

Transparency Travails and Sexual Assault Data

The chill that dropped over campus on Monday seems like an apt metaphor for the subject that's been on my mind for the past week. Last spring, Augustana participated in a multi-institutional study focused on sexual assault campus climate that was developed and administered by the Higher Education Data Sharing Consortium (HEDS). We hoped that the findings from this survey would help us 1) get a better handle on the nature and prevalence of sexual assault and unwanted sexual contact among our students, and 2) better understand our campus climate surrounding sexual assault and unwanted sexual contact. We actively solicited student participation in the survey, collaborating with student government, faculty, and administration to announce the survey and encourage students to respond. The student response was unusually robust, particularly given the sensitivity of the topic. Equally important, many people across campus – students, faculty, administrators, and staff alike – took note of our announced intentions to improve and repeatedly asked when we would have information about the findings to share with the campus community. You saw the first announcement of these results on Sunday in a campus-wide email from Dean Campbell. If you attended the Monday night screening of The Hunting Ground and the panel discussion that followed, you likely heard additional references to findings from this survey. As Evelyn Campbell indicated, the full report is available from Mark Salisbury (AKA, me!) in the IR office upon request.

It has been interesting to watch the national reporting this fall as several higher ed consortia and individual institutions have begun to share data from their own studies of sexual assault and related campus climate. While some news outlets have reported in a fairly objective manner (Inside Higher Ed and The Chronicle of Higher Education), others have tripped over their own feet trying to impose a tale of conspiracy and dark motives (Huffington Post) or face-planted trying to insert a positive spin where one doesn't really exist (Stanford University). Moreover, the often awkward word choices and phrasing in the institutional press releases (e.g., Princeton's press release) announcing these data seem to accentuate the degree to which colleges and universities aren't comfortable talking about their weaknesses, mistakes, or human failings (not to mention the extent to which faculty and college administrators might need to bone up on their quantitative literacy chops!).

Amidst all of this noise, we are watching two very different rationales for transparency play out in entirely predictable ways. One rationale frames transparency as a necessary imposition from the outside, like the piercing beam of an inspector’s flashlight pointed into an ominous darkness to expose bad behavior and prove a supposition. The other rationale frames transparency as a disposition that emanates from within, cultivating an organizational dynamic that makes it possible to enact and embrace meaningful and permanent improvement.

For the most part, it seems that most of the noise being made in the national press about sexual assault data and college campuses comes from using transparency to beat institutions into submission. This is particularly apparent in the Huffington Post piece. If the headline, "Private Colleges Keep Sexual Assault Data Secret: A bunch of colleges are withholding sexual assault data, thanks to one group," doesn't convey their agenda clearly enough, then the first couple of paragraphs walk the reader through it. The problem with this approach to transparency is that the data too often becomes the rope in a giant tug-of-war between preconceived points of view. Both (or neither) points of view could have parts that are entirely valid, but the nuance critical to actually identifying an effective way forward gets chopped to bits in the heat of the battle. In the end, you just have winners, losers, and a lifeless coil of rope that no one cares about anymore.

Instead, transparency is more likely to lead to effective change when it is a disposition that emanates from within the institution's culture. The folks at HEDS understood this notion when they designed the protocol for conducting the survey and conveying the data. The protocol they developed specifically prohibited institutions from revealing the names of other participating institutions, forcing institutions to focus the implications of their results back on themselves. Certainly, a critical part of this process at any institution is sharing its data with its entire community and collectively addressing the need to improve. But in this situation, transparency isn't the end goal. Rather, it becomes part of a process that necessarily leads to improvement and observable change. To drive this point home, HEDS has put extensive effort into helping institutions use their data to create change that reduces sexual assault.

At Augustana, we will continue to share our results across our community and tackle this problem head-on. Our own findings point to plenty of issues that, once addressed, should improve our campus climate and reduce sexual assault. I'll write about some of these findings in more detail in the coming weeks. In the meantime, please feel free to send me an email requesting our data. I'll send you a copy right away. And if you'd like me to bring parts of the data to your students so that they might reflect and learn, I'm happy to do that too.

Make it a good day,

Mark

Welcome back to a smorgasbord of ambiguity!

Every summer I get lonely.  Don’t get me wrong, I love the people I work with in Academic Affairs and in Founders Hall . . . probably more than they love me sometimes.  But the campus just doesn’t feel right unless there is a certain level of manageable chaos, the ebb and flow of folks scurrying between buildings, and a little bit of nervous anticipation in the air.  Believe it or not, I genuinely missed our student who sat in the trees and sang out across the quad all last year!  Where are you, Ellis?!

For those of you who are new to Augustana, I write this column/blog every week to try to drop a little dose of positive restlessness into the campus ether.  I first read the phrase “positive restlessness” in the seminal work by George Kuh, Jillian Kinzie, John Schuh, and Liz Whitt titled Student Success in College. This 2005 book describes the common threads the authors found among 20 colleges and universities that, no matter the profile of students they served or the amount of money squirreled away in their endowment portfolio, consistently outperformed similar institutions in retention and graduation rates.

More important than anything else, the authors found that the culture on each of these campuses seemed energized by a perpetual drive to improve. No matter if it was a massive undertaking or a tiny little tweak, the faculty, staff, and students at these schools seemed almost hungry to get just a little bit better at who they were and how they did what they did every day. This doesn't mean that the folks on these campuses were some cultish consortium of maniacal change agents or evangelical sloganeers. But over and over it seemed that the culture at each of the schools featured in this study coalesced around a drive to do the best that they could with the resources that they had and to never let themselves rest on their laurels for too long.

What continues to strike me about this attribute is the degree to which it requires an optimistic willingness to wade into the unknown. If we were to wait until we figured out the failsafe answer to every conundrum, none of us would be where we are now and Augustana would have almost certainly gone under a long time ago.  Especially when it comes to educating, there are no perfect pedagogies or guaranteed solutions. Instead, the best we can do is continually triangulate new information with our own experience to cultivate learning conditions that are best suited for our students. In essence, we are perpetually focused on the process in order to increase the likelihood that we can most effectively influence the product.

The goal of this blog is to present little bits of information that might combine with your expertise to fuel a sense of positive restlessness on our campus. Sometimes I point out something that we seem to be doing well. Other times I'll highlight something that we might improve. Either way, I'll try to present this information in a way that points us forward with an optimism that we can always make Augustana just a little bit better.

By a lot of different measures, we are a pretty darn good school. And we have a healthy list of examples of ways in which we have embodied positive restlessness on this campus (if you doubt me, read the accreditation documents that we will be submitting to the Higher Learning Commission later this fall). We certainly aren't perfect, but frankly chasing perfection would be a fool's errand because perfection is a static concept – and maintaining an effective learning environment across an entire college campus is by definition a perpetually evolving endeavor.

So I raise my coffee mug to all of you and to the deliciously ambiguous future that this academic year holds.  Into the unknown we stride together.

Make it a good day!

Mark

 

So after the first year, can we tell if CORE is making a difference?

Now that we are a little over a year into putting Augustana 2020 in motion, we’ve discovered that assessing the implementation process is deceptively difficult. The problem isn’t that the final metrics to which the plan aspires are too complicated to measure or even too lofty to achieve. Those are goals that are fairly simple to assess – we either hit our marks or we don’t. Instead, the challenge at present lies in devising an assessment framework that tracks implementation, not the end results. Although Augustana 2020 is a relatively short document, in actuality it lays out a complex, multi-layered plan that requires a series of building blocks to be constructed separately, fused together, and calibrated precisely before we can legitimately expect to meet our goals for retention and graduation rates, job acquisition and graduate school acceptance rates, or improved preparation for post-graduate success. Assessing the implementation, especially at such an early point in the process, by using the final metrics to judge our progress would be like judging a car manufacturer’s increased production speed right after the company had added a faster motor to one of the assembly lines. Of course, without having retrofitted or changed out all of the other assembly stages to adapt to this new motor, by itself such a change would inevitably turn production into a disaster.

Put simply, judging any given snapshot of our current state of implementation against the fullness of our intended final product doesn't really help us build a better mousetrap; it just tells us what we already know ("It's not done yet!"). During the process of implementation, assessment is much more useful if it identifies and highlights intermediate measures that give us a more exacting sense of whether we are moving in the right direction. In addition, assessing the process should tell us whether the pieces we are putting in place will work together as designed or whether we have to make additional adjustments to ensure the whole system works as it should. This means narrowing our focus to the impact of individual elements on specific student behaviors, testing the fit between pieces that have to work together, and tracking the staying power of experiences that are intended to permanently alter our students' trajectories.

With all of that said, I thought it would be fitting to try out this assessment approach on arguably the most prominent element of Augustana 2020 – CORE. Now that CORE is finishing its first year at the physical center of our campus, it seems reasonable to ask whether we have any indicators in place that could tell us if this initiative is bearing the kind of early fruit we had hoped for. Obviously, since CORE is designed to function as part of a four-year plan of student development and preparation, it would be foolhardy to judge CORE's ultimate effectiveness on some of the Augustana 2020 metrics until at least four years have passed. However, we should look to see if there are indications that CORE's early impact triangulates with the student behaviors or attitudes necessary for improved post-graduate success. This is the kind of data that would be immediately useful to CORE and the entire college. If the indicators suggest that we are moving in the right direction, then we can move forward with greater confidence. If the indicators suggest that things aren't working as we'd hoped, then we can make adjustments before too many other things are locked into place.

In order to find data that suggests impact, we need more than just the numbers of students who have visited CORE this year (even though it is clear that student traffic in the CORE office and at the many CORE events has been impressive). To be fair, these participation patterns could simply be an outgrowth of CORE’s new location at the center of campus (“You’ve got candy, I was just walking by, why not stop in?”). To give us a sense of CORE’s impact, we need to find data where we have comparable before-and-after numbers. At this early juncture, we can’t look at our recent graduate survey data for employment rates six months after graduation since our most recent data comes from students who graduated last spring – before CORE opened.

Yet we may have a few data points that shine some light on CORE’s impact during its first year. To be sure, these data points shouldn’t be interpreted as hard “proof.” Instead, I suggest that they are indicators of directionality and, when put in the presence of other data (be they usage numbers or the preponderance of anecdotes), we can start to lean toward some conclusions about CORE’s impact in its first year.

The first data point we can explore is a comparison of the number of seniors who have already accepted a job offer at the time they complete the senior survey. Certainly the steadily improving economy, Augustana’s existing efforts to encourage students to begin their post-graduate planning earlier, and the unique attributes of this cohort of students could also influence this particular data point. However, if we were to see a noticeable jump in this number, it would be difficult to argue that CORE should get no credit for this increase.

The second data point we could explore would be the proportion of seniors who said they were recommended to CORE or the CEC by other students and faculty. This seems a potentially indicative data point based on the assumption that neither students nor faculty would recommend CORE more often if the reputation and results of CORE's services were no different from the reputation and results of similar services provided by the CEC in prior years. To add context, we can also look at the proportion of seniors who said that no one recommended CORE or the CEC to them.

These data points all come from the three most recent administrations of the senior survey (including this year's edition, to which 560 of 580 eligible seniors have already responded). The 2013 and 2014 numbers come from before the introduction of CORE, and the 2015 numbers come from after CORE's first year. I've also calculated each proportion using only those students whose immediate plan after graduation is to work full-time, which accounts for the differences in the size of the graduating cohorts (a short sketch of that calculation follows the data below).

Seniors with jobs accepted when completing the senior survey –

  • 2013 – 104 of a possible 277 (37.5%)
  • 2014 – 117 of a possible 338 (34.6%)
  • 2015 – 145 of a possible 321 (45.2%)

Proportion of seniors indicating they were recommended to CORE or the CEC by other students –

  • 2013 – 26.9%
  • 2014 – 24.0%
  • 2015 – 33.2%

Proportion of seniors indicating they were recommended to CORE or the CEC by faculty in their major or faculty outside their major, respectively –

  • 2013 – 47.0% and 18.8%
  • 2014 – 48.1% and 20.6%
  • 2015 – 54.6% and 26.0%

Proportion of seniors indicating that no one recommended CORE or the CEC to them –

  • 2013 – 18.0%
  • 2014 – 18.9%
  • 2015 – 14.4%
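If you ever want to reproduce this kind of calculation from raw senior survey records, here is a minimal sketch in Python. It is not the actual IR code; the field names ("post_grad_plan", "job_accepted") and the sample records are hypothetical stand-ins that simply mirror the 2015 figures above.

```python
# Minimal sketch (hypothetical field names, not the actual IR code).
def job_acceptance_rate(records):
    """Share of seniors planning full-time work who had accepted a job offer."""
    # Restrict the denominator to students whose immediate plan is full-time
    # employment, which accounts for differences in graduating-cohort size.
    full_time = [r for r in records if r["post_grad_plan"] == "full-time work"]
    accepted = sum(1 for r in full_time if r["job_accepted"])
    return accepted / len(full_time) if full_time else float("nan")

# Made-up records mirroring the 2015 figures: 145 acceptances among 321 full-time planners.
sample = [{"post_grad_plan": "full-time work", "job_accepted": i < 145} for i in range(321)]
print(f"{job_acceptance_rate(sample):.1%}")  # -> 45.2%
```

The same function works for any of the recommendation proportions above once you swap in the relevant survey field.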

Taken together, these data points seem to suggest that CORE is making a positive impact on campus.  By no means do these data points imply that CORE should be ultimately judged as a success, a failure, or anything in between at this point. However, this data certainly suggests that CORE is on the right track and may well be making a real difference in the lives of our students.

If you’re not sure what CORE does or how they do it, the best (and probably only) way to get a good answer to that question is to go there yourself, talk to the folks who work there, and see for yourself.  If you’re nice to them, they might even give you some candy!

Make it a good day,

Mark

How many responses did you get? Is that good?

As most of you know by now, the last half of the spring term sometimes feels like a downhill sprint. Except in this case you’re less concerned about how fast you’re going and more worried about whether you’ll get to the finish line without face-planting on the pavement.

Well, it’s no different in the IR Office.  At the moment, we have four large-scale surveys going at once (the recent graduate survey, the senior survey, the freshman survey, and the employee survey), we’ve just finished sending a year’s worth of reports to the Department of Education, and we’re preparing to send all of the necessary data to the arbiter of all things arbitrary, U.S. News College Rankings. That is in addition to all of the individual requests for data gathering and reporting and administrative work that we do every week.

So in the midst of all of this stuff, I wanted to thank everyone who responded to our employee survey as well as everyone who has encouraged others to participate. After last week's post, a few of you asked how many responses we've received so far and how many we need. Those are good questions, but as is my tendency (some might say "my compulsion"), the answer is more complicated than you'd probably prefer.

In essence, we need as many responses as we can get, from as many different types of employees as possible. But in terms of an actual number, defining "how many responses is enough" can get pretty wonky with formulas and unfamiliar symbols. So I shoot for 60% of the overall population. That means, since Augustana has roughly 500 full-time employees, we would cross that threshold with 300 employee survey responses.

However, that magic 60% applies to any situation where we are asking whether a set of responses to a particular item can be confidently applied to the overall population. What if we want to look at responses from a certain subgroup of employees (e.g., female faculty)? In that case, we need responses from 60% of the female faculty, something that isn't guaranteed just because we have 300 of 500 total responses.
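If it helps to see that rule of thumb concretely, here is a minimal sketch in Python. The overall numbers match the 300-of-500 example above; the subgroup counts are made up purely for illustration.

```python
# Minimal sketch of the 60% response-rate rule of thumb (illustrative numbers only).
def meets_threshold(responses, population, target=0.60):
    """Return the response rate and whether it clears the target rate."""
    rate = responses / population
    return rate, rate >= target

print(meets_threshold(300, 500))  # (0.6, True)  -> overall population clears 60%
print(meets_threshold(40, 80))    # (0.5, False) -> a hypothetical subgroup falls short
```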

This is why I am constantly hounding everyone about our surveys in order to get as many responses as possible: we don't know all of the subgroups that we might want to analyze when we start collecting data; those possibilities arise during the analysis. And once we find out that we don't have enough responses to dig into something that looks particularly important, we are flat out of luck.

So this week, I'm asking you to do me a favor. Ask one person who you don't necessarily talk to every day if they've taken the survey. If they haven't, encourage them to do it. It might end up making a big difference.

Make it a good day,

Mark

The Problem with Aiming for a Culture of Assessment

In recent years I've heard a lot of higher ed talking heads imploring colleges and universities to adopt a "culture of assessment." As far as I can tell (at least from a couple of quick Google searches), the phrase has been around for almost two decades and varies considerably in what it actually means. Some folks seem to think it describes a place where everyone uses evidence (some folks use the more slippery term "facts") to make decisions, while others seem to think that a culture of assessment describes a place where everyone measures everything all the time.

There is a pretty entertaining children's book called Magnus Maximus, A Marvelous Measurer that tells the story of a guy who gets so caught up measuring everything that he ultimately misses the most important stuff in life. In the end he learns "that the best things in life are not meant to be measured, but treasured." While there are some pretty compelling reasons to think twice about the book's supposed life lesson (although I dare anyone to float even the most concise post-modern pushback to a five-year-old at bedtime and see how that goes), the book delightfully illustrates the absurdity of spending one's whole life focused on measuring if the sole purpose of that endeavor is merely measuring.

In the world of assessment in higher education, I fear that we have made the very mistake that we often tell others not to make: confusing the ultimate goal of improvement with the act of measuring. The goal – or "intended outcome" if you want to use the eternally awkward assessment parlance – is that we actually get better at educating every one of our students so that they are more likely to thrive in whatever they choose to do after college. Even in the language of those who argue that assessment is primarily needed to validate that higher education institutions are worth the money (be it public or private money), there is always a final suggestion that institutions will use whatever data they gather to get better somehow. Of course, the "getting better" part seems to always be mysteriously left to someone else. Measuring, in any of its forms, is almost useless if that is where most or all of the time and money is invested. If you don't believe me, just head on down to your local Institutional Research Office and ask to see all of the dusty three-ring binders of survey reports and data books from the last two decades. If they aren't stacked on a high shelf, they're probably in a remote storage room somewhere.

Measuring is only one ingredient of the recipe that gets us to improvement. In fact, given the myriad of moving parts that educators routinely deal with (only some of which educators and institutions can actually control), I'm not sure that robust measuring is even the most important ingredient. An institution has no more achieved improvement because it measures things than a chef has baked a cake by throwing a bag of flour in an oven (yes, I know there are such things as flourless tortes … that is kind of my point). Without cultivating and sustaining an organizational culture that genuinely values and prioritizes improvement, measurement is just another thing that we do.

Genuinely valuing improvement means explicitly dedicating the time and space to think through any evidence of mission fulfillment (be it gains on learning outcomes, participation in experiences that should lead to learning outcomes, or the degree to which students’ experiences are thoughtfully integrated toward a realistic whole), rewarding the effort to improve regardless of success or failure, and perpetuating an environment in which everyone cares enough to continually seek out things that might be done just a little bit better.

Peter Drucker is purported to have said that “culture eats strategy for lunch.” Other strategic planning gurus talk about the differences between strategy and tactics. If we want our institutions to actually improve and continually demonstrate that, no matter how much the world changes, we can prepare our students to take adult life by the horns and thrive no matter what they choose to do, then we can’t let ourselves mistakenly think that maniacal measurement magically perpetuates a culture of anything. If anything, we are likely to just make a lot more work for quantitative geeks (like me) while excluding those who aren’t convinced that statistical analysis is the best way to get at “truth.” And we definitely will continue to tie ourselves into all sorts of knots if we pursue a culture of assessment instead of a culture of improvement.

Make it a good day,

Mark

 

A little thing happened while you were away . . .

Welcome back to campus! I hope you enjoyed a restful winter break. Although I was able to find a few days of legitimate relaxation (I actually read fiction for fun!), a little thing happened at the end of last week that yanked me back into focus and kept my mind spinning over the weekend.

Friday morning's big reveal from the higher ed press was the announcement from President Obama that he is proposing a program to make community college free. The details and the obligatory range of reactions were dutifully reported here and here, and by this morning it seems that almost every news outlet with an education beat has polled the usual suspects for comment, analysis, and knee-jerk reaction. The chatter about this policy proposal doesn't need any more faux smart people to weigh in, so I'll refrain from adding an unfocused "thumbs-up" or "thumbs-down" to the mix. However, I think that the mere emergence of this policy proposal holds a couple of important implications that could matter a lot for those of us at Augustana College (as well as other small liberal arts colleges).

First, a big part of this proposal turns on the caveat that "Community colleges will be expected to offer … academic programs that fully transfer credits to local public four-year colleges and universities." This sounds great, except for the fact that the destination institution is the one that determines whether academic programs or credits transfer fully, not the community college from which the student originates. Whether or not the President's policy proposal comes to fruition, I think it reflects an increasingly common belief that students should be able to move seamlessly between higher education institutions, no matter if they are moving between two-year or four-year institutions (not to mention individual online courses, degree programs, or prior learning credits).

If I'm right here, then we will continue to see more and more students transfer credits to and from Augustana as they become less associated with a particular institution and more connected to the degree they intend to earn or the career they intend to pursue. Again, if I'm right, that will make it even more difficult for us to know a) whether our graduates have learned everything that we believe an Augustana degree represents, and b) whether the students sitting in front of us on the first day of the term already possess the prerequisite knowledge and skills to succeed in each class. However we respond to this issue (for example, offering remediation services for students who struggle, signing articulation agreements with individual community colleges to vet prior coursework for transfer students, or designing competency-based assessments for students to demonstrate their readiness for advanced academic work and graduation), the challenges that emerge when students increasingly enter and depart colleges and universities at times other than the beginning and the end of that institution's designed educational experience are, as a 2012 study suggests, likely to become more prevalent.

Second, if this proposal does in fact signal that earning credits from multiple institutions to complete a degree is gaining in both numbers and legitimacy, then we would be smart to take a hard look at all of the ways in which our institutional practices might subtly dissuade transfer students from considering Augustana.  Since our study of transfer students’ experience a couple of years ago, we’ve already made some changes to make Augustana a better destination for transfer students. But we still have some work to do – not because we have dropped the ball in responding to our findings, but because this kind of work is just plain hard.

Third, it seems to me that this trend further emphasizes the degree to which we need to be able to show that the totality of the Augustana experience – not just the academic coursework – produces the critical learning that we intend for our students. Otherwise, we are likely to fall victim to the external framing of what constitutes a college education (aka an accumulation of academic credits valued the same whether earned piecemeal or as a whole), making it even more difficult to differentiate ourselves in product or perception.

I’m sure that you can think of specific issues that we ought to examine if transfer students are going to become an increasingly large segment of the college-going public.  As the number of high school graduates in the Midwest continues to decrease over the next decade or so, it seems that this question becomes that much more important.  If you have some thoughts, please feel free to post them in the comment section below.  Maybe we can have a conversation without having to brave the frigid temperatures outside?

Make it a good day,

Mark

What is your definition of a “Plan B?”

I often get pegged as “the numbers guy.” Even though the words themselves seem pretty simple, I’m never really sure how to interpret that phrase. Sometimes people seem to use it to defer to my area of expertise (and that feels nice). But sometimes it seems vaguely dismissive, as if they’re a little surprised to find that I’ve escaped from my underground statistical production bunker (that doesn’t feel so nice).

With data points, it’s not the numbers by themselves that make the difference; it’s the meaning that gets assigned to them. The same is true with phrases that we all too often toss around without a second thought. I stumbled into a prime example of this issue recently while talking to several folks about the way that they think about helping students prepare for life after college. It turns out that we can run ourselves into a real buzzsaw of a problem if we don’t mean the same thing when we talk to students about developing a “Plan B.”

Essentially, a Plan B is simple – it’s a second plan if the first plan doesn’t work out. But underneath that sort of obvious definition lies the rub. For what purpose does the Plan B exist? Is it to get to a new and different goal, or is it to take an alternative path to get to the original goal?

For some, helping a student construct a Plan B means identifying a second career possibility in case the student’s first choice post-graduation plan doesn’t work out. For example, a student who intends to be a doctor may not have the grades or references to guarantee acceptance into med school. At this point, a faculty adviser might suggest that the student investigate other careers that might match some of the student’s other interests (maybe in another health field, maybe not). This definition of a Plan B assumes a career change and then begins to formulate a plan to move toward that new goal.

But for others, helping a student construct a Plan B doesn’t mean changing career goals at all. Instead, this definition of a Plan B recognizes that there are often multiple pathways to get into a particular career. For the aspiring med school student who may not have slam-dunk grades in biology or chemistry but still wants to be a doctor, one could envision a Plan B that includes taking a job at a hospital in some sort of support role, retaking specific science courses at a local university or college, then applying to medical school with stronger credentials, potentially better references, and more experience. In this case the end goal didn’t change at all. The thing that changed was the path to get there.

In no way am I suggesting that one definition of a Plan B is better than another. On the contrary, both are entirely appropriate. In fact, the student would probably be best served by laying out both possibilities and walking through the relevant implications. But the potential for a real disaster comes when two people (maybe a faculty member and a career adviser) are separately talking to the same student about the need to devise a Plan B, yet the faculty member and the adviser mean very different things when they use the same phrase.

As you can imagine, the student would probably feel as though they are getting conflicting advice. They might well think that the person encouraging a different career choice just doesn't believe in them (and that the person suggesting an alternate path to their original career goal is the one who really cares about them). Moreover, the person encouraging the student to explore another career choice might feel seriously undermined by the person who has suggested an alternative way to continue toward the original career goal. In the end, the student's trust in our ability to guide them accurately and effectively is seriously eroded, and a rift has likely developed between two individuals who both genuinely care about the student in question.

Absolutely, there are times when we have to tell students that they need to explore alternative career plans. We do them no favors by placating them. At the same time, we all know students who, although they seemed to lack motivation and direction when they were at Augustana, kicked it in after graduation and eventually found a way into the career they had always wanted to pursue.

I’m certainly not suggesting that we should adopt one official definition of the phrase “Plan B.” Rather, my suspicion is that this is one of those phrases that we use often without realizing that we might not all mean the same thing. If our goal is to collectively give students the kind of guidance that they need to succeed after graduation, we probably ought to make sure that in each case we all mean the same thing when we talk to a student about a Plan B.

Make it a good day,

Mark