Talking, albeit eloquently, out of both sides of our mouths

Many of my insecurities emerge from a very basic fear of being wrong.  Worse still, my brain takes it one step further, playing this fear out through that infamous anxiety dream in which I am giving a public presentation somewhere only to discover in the middle of it that my pants lie in a heap around my ankles.  But in my dream, instead of acknowledging my “problem,” buckling up, and soldiering on, I inexplicably decide that if I just pretend not to notice anything unusual, then no one in the audience will notice either.  Let’s just say that this approach doesn’t work out so well.

It’s pretty hard to miss how ridiculous this level of cognitive contortionism sounds.  Yet this kind of foolishness isn’t the exclusive province of socially awkward bloggers like me.  In the world of higher education we sometimes hold obviously contradictory positions in plain view, trumpeting head-scratching non sequiturs with a straight face.  Although this exercise might convince many, including ourselves, that we are holding ourselves accountable to our many stakeholders, we actually make it harder to meaningfully improve because we don’t test the underlying assumptions that set the stage for these moments of cognitive dissonance.  So I’d like to wrestle with one of these “conundrums” this week: the ubiquitous practice of benchmarking in the context of a collective uncertainty about the quality of higher education – admitting full well that I may be the one who ends up pinned to the mat crying “uncle.”

It’s hard to find a self-respecting college these days that hasn’t already embedded the phrase “peer and aspirant groups” deep into its lexicon of administrator-speak.  This phrase refers to the practice of benchmarking – a process to support internal assessment and strategic planning that was transplanted from the world of business several decades ago.  Benchmarking is a process of using two groups of other institutions to assess one’s own success and growth.  Institutions start by choosing a set of metrics to identify two groups of colleges: a set of schools that are largely similar at present (peers) and a set of schools that represent a higher tier of colleges for which they might strive (aspirants). The institution then uses these two groups as yardsticks to assess their efforts toward:

  1. improved efficiency (i.e., outperforming similarly situated peers on a given metric), or
  2. increased effectiveness (i.e., equaling or surpassing a marker already attained by colleges at the higher tier to which the institution aspires).
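To make the two yardsticks concrete, here is a toy sketch of that logic – every institution name and retention rate below is invented purely for illustration:

```python
# Hypothetical benchmarking comparison: peer group vs. aspirant group.
# All names and retention rates here are made up.

PEERS = {"Peer A": 0.81, "Peer B": 0.79, "Peer C": 0.83}
ASPIRANTS = {"Aspirant X": 0.90, "Aspirant Y": 0.92}

def benchmark(our_value, peers, aspirants):
    """Return the two comparisons described above: efficiency
    (beating the peer-group average) and effectiveness (reaching
    a marker already attained by the aspirant tier)."""
    peer_avg = sum(peers.values()) / len(peers)
    aspirant_floor = min(aspirants.values())
    return {
        "outperforms_peers": our_value > peer_avg,
        "reaches_aspirant_tier": our_value >= aspirant_floor,
    }

print(benchmark(0.85, PEERS, ASPIRANTS))
# → {'outperforms_peers': True, 'reaches_aspirant_tier': False}
```

A real benchmarking exercise would, of course, run this comparison across dozens of metrics at once – which is exactly where the temptation to pick flattering comparisons can creep in.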

Sometimes this practice is useful, especially in setting goals for statistics like retention rates, graduation rates, or a variety of operational measures.  However, sometimes this exercise can unintentionally devolve into a practice of gaming, in which comparisons with the identified peer group too easily shine a favorable light on the home institution, while comparisons with the aspirant group are too often interpreted as evidence of how much the institution has accomplished in spite of its limitations.  Nonetheless, this practice seems to be largely accepted as a legitimate way of quantifying quality.  So in the end, our “go-to” way of demonstrating value and a commitment to quality is inescapably tethered to how we compare ourselves to other colleges.

At first, this seems like an entirely reasonable way to assess quality.  But it depends on one fundamental assumption: the idea that, on average, colleges are pretty good at what they do.  Unfortunately, the last decade of research on the relative effectiveness of higher education suggests, at the very least, that the educational quality of colleges and universities is uneven – or at worst, that the entire endeavor is a fantastically profitable house of cards.

No matter which position one takes, it seems extraordinarily difficult to simultaneously assert that the quality of any given institution is somewhere between unknown and dicey, while at the same time using a group of institutions – most of which we know very little about beyond some cursory, outer layer statistics – as a basis for determining one’s own value.  It’s sort of like the sixth grade boy who justifies his messy room by suggesting that it’s cleaner than all of his friends’ rooms.

My point is not to suggest that benchmarking is never useful or that higher education is not in need of improvement.  Rather, I think that we have to be careful about how we choose to measure our success.  I think we need to be much more willing to step forward and spell out what we think success should look like, regardless of what other institutions are doing or not doing.  In my mind, this means starting by selecting a set of intended outcomes, defining clearly what success will look like, and then building the rest of what we do in a purposeful way around achieving those outcomes.  Not only does this give us a clear direction that can be simply described to people within and beyond our own colleges, but it also gives us all the pieces necessary to build a vibrant feedback loop to assess and improve our efforts and our progress.

I fully understand the allure of “best practices” – the idea that we can do anything well simply by figuring out who has already done it well and then copying what they do.  But I’ve often seen the best of best practices quickly turn into worst practices when plucked out of one setting and dropped wholesale into a different institutional culture.  Maybe we’d be better off paying less attention to what everyone else does and concentrating instead on designing a learning environment that starts with the end in mind and uses all that we already know about college student development, effective teaching, and how people learn.  It might look a lot different than the way that we do it now.  Or it might not look all that different, despite being substantially more effective.  I don’t know for sure.  But it’s got to be more effective than talking, albeit eloquently, out of both sides of our mouths.

Make it a good day,

Mark

 

Who are the students who said that no one recommended the CEC to them?

Last week I wrote about the way that our seniors’ responses to the question “Who recommended the Community Engagement Center to you?” might reflect the values that we communicate through our actions even if they aren’t necessarily the values that we believe we have embraced.  At the end of my post I promised to dig deeper into our senior survey to better understand the students who said that no one recommended the Community Engagement Center to them.  During the past several days my students and I have been peeling the data back in all kinds of ways.  Based on prior findings on students’ experiences with major advising and its connection to post-graduate planning, we thought that we might be able to identify some pattern in the data that would give us some big answers.  So we laid out a couple of hypotheses to test for the students who said no one recommended the CEC to them.  We thought:

  • These students would be more likely to intend to go to graduate school
  • These students would be more likely to major in a humanities discipline
  • These students would be less involved in co-curricular activities
  • These students would be generally less engaged in their college experience

Here is what we found.

First, these students weren’t more likely to be headed to graduate school.  This hypothesis was based on an earlier finding that students who intended to go to grad school were more likely to work with a professor to guide them through the application process while students planning to get a full-time job would be referred to Career Services.  But our students were distributed across the post-graduate plan options of grad school and work just like everyone else.  So this first hypothesis was a total bust.

Genius IR Shop : 0  —  Data : 1

Second, these students were not significantly more likely to major in humanities disciplines.  This hypothesis evolved from some earlier conversations with students that suggested less of a natural connection between the career center and the more “pure” liberal arts disciplines.  In the end, while some of the humanities disciplines did seem to appear slightly more often than most pre-professional degrees, there were plenty of students from the natural and physical sciences who also said no one recommended the CEC to them.  So even though there was an initial glimmer of possibility, the reality is that this second hypothesis was also a flop.

(Aspiring to be but clearly not yet) Genius IR Shop : 0  —  Data : 2

Third, we couldn’t find much in our data to support our assertion that these students were less involved in co-curricular activities.  Our originating hypothesis was based on the idea that students who are less social might not end up in situations where the CEC would be recommended as often.  Although these students found slightly fewer student groups that matched their interests, they were still involved in at least one student organization or club as often as other students.  Despite looking at this data through the friendliest lens, we just couldn’t say that this group of students’ responses was a function of their lack of co-curricular involvement.

(Nothing but a bumbling shadow of a) Genius IR Shop : 0  —  Data : 3

At this point in the story, you ought to suspect some stress on my part.  It’s not all that much fun to be wrong repeatedly.  Furthermore, our last hypothesis about a general passivity is qualitatively more difficult to test than simply looking at differences across one particular question.  Nonetheless, my minions and I soldiered on.  We looked across all of the questions on the senior survey, identifying significant differences and looking for trends.  Thankfully, we found a host of evidence to support our last hunch.

We found that the students who said no one recommended the CEC to them were less plugged in to their college experience across the board.  Their responses to every one of the advising questions were significantly lower, their responses to many of the co-curricular experiences questions were significantly lower, and their responses to a number of curricular experience questions both in the major and across the curriculum were significantly lower.

(Salvaging the crumbling remains of my) Genius IR Shop : 1  —  Data : 3

What jumps out to me as a result of this exercise is the importance of our informal educational efforts.  There will always be a subset of students who simply, magically do the things we hope they would do, take the initiative to ask the next question, and get themselves ahead of the curve simply because they are the cream of our crop.  However, there will always be a subset of students who stumble out of the gate, drift passively into the fog, and avoid choices simply because they are . . . human.  Because we have many cream of the crop types, it’s all too easy to miss those who suffer from being, well, normal.  So to me, this is why we must take the initiative to ask students if they’ve done the things that might seem completely obvious to us, like recommending to them that they should check out the services at the CEC early in their college career – and tell them exactly why this can matter in the broader scheme of their life’s journey.  If we want all of our students – no matter if they are already perfectly formed adults or if they are bumbling, stumbling, grumbling prepubescents masquerading as undergraduates on the cusp of adulthood – to wring every developmental drop out of their college learning experience, then we have to take on a proactive role to ensure that no one gets left out in the cold, especially those who are more susceptible to floating off with the current du jour.

Remember, this study isn’t about who did and did not use the CEC.  The question we examined asks “who recommended the CEC to you.”  We asked the question this way specifically to give us feedback on the nature of the experience we are delivering to our students – not just to find out what our students did.  And as it turns out, the degree to which we are proactive educators may be one of the most crucial ways in which we might purposefully guide our more passive students.  Not rocket science?  Maybe.  Worth remembering as we bustle through our own madcap world?  Absolutely.

Make it a good day,

Mark

Recommending Students to the Community Engagement Center

Sometimes I worry that I tend to look at our student data through an overly quantitative lens.  I’ll look for significant predictors of specific outcomes or statistically significant differences between two groups.  And as trained, I instinctively take the steps necessary to avoid the most common statistical stumbling blocks, such as claiming significance where there is none or mistaking correlation for causation.  But there are times when this propensity to immediately dive deep into the data means that I miss a critical point that sits in plain view, screaming at the big-nosed, bearded face looming over it, “Hey, you idiot!  I’m right here!”

One such moment came recently while I was revisiting the simple distribution of seniors’ responses to the question, “Who recommended the CEC (Community Engagement Center) to you?”  Students could select as many options as might apply in their case: faculty within my major(s), faculty outside my major(s), my major adviser, my first year adviser, residential life staff, student activities staff, my parents, another student, other administrators, and finally, no one recommended the CEC to me.

As I stared at the percentages under each response option, I began to think that this question might be the type that holds within it an array of discoveries.  First, the distribution of responses appeared to reflect a set of values that we communicate to students about 1) the role of the CEC on campus and, 2)  the way in which we see our educational efforts as a process of preparing students for life after Augustana.  Second, since the CEC often functions as a student gateway to all sorts of other important educational experiences, I began to wonder if students who indicate that no one recommended the CEC to them might also score lower on a host of other experiences that either might follow from an initial interaction with the CEC or might suggest a broader degree of disengagement.

So here is the question followed by the distribution of students’ responses:

Who recommended the CEC (Community Engagement Center) to you? Check as many as might apply.

  • Faculty within my major(s) – 41.5%
  • My major adviser – 28.1%
  • No one recommended the CEC to me – 23.4%
  • Another student – 21.4%
  • Faculty outside my major(s) – 17%
  • Other administrator – 14%
  • My first year adviser – 11.4%
  • My parents – 9.6%
  • Student Activities staff – 5.2%
  • Residential Life staff – 1.6%

These numbers alone tell us something pretty interesting.  Clearly, recommendations to the CEC tend to come out of students’ academic experience in their major.  First, this suggests that these recommendations probably come later in one’s college career – junior or senior year (sophomore year at the earliest).  Further, these recommendations rarely come from the co-curricular side of the student experience.  Thus, it appears that in general we conceive of the role of the CEC as either a) a means of resolving an absence of post-graduate career purpose (students in their later years who still don’t seem to know what they want to do after college or students who are in the midst of searching for a career plan “B”), or b) a support service to help students bundle the totality of their college experience in preparation for the job or grad school search.  Either way, the role we see for the CEC seems more retroactive than proactive.  It doesn’t appear that we have generally thought of the CEC as a student’s compass with which they might plot out – from the moment they arrive on campus – their college experience in a way that allows them to move forward with intentionality.  Nor do we appear to have thought much about linking our students’ co-curricular experiences – one of Augustana’s true, albeit often under-appreciated strengths – with the role of the CEC.  All of this doesn’t seem to comport with our belief that a liberal arts college experience is holistic, developmental, and fully integrated; one that starts, from the very beginning, with the end in mind, and one that believes the whole must be greater than the sum of the parts.

Now there may be lots of lengthy explanations for this particular distribution of responses; some of them might even be entirely legitimate.  But it doesn’t change the nature of the values that we appear to be expressing – or not expressing – as portrayed through student-reported experiences.  In addition, 23.4% of our seniors indicated that no one recommended the CEC to them.  Given the array of services that originate out of the CEC, I’d suggest that we would like that number to be much lower than effectively one-quarter of a graduating class.

Admittedly, there were some interesting anomalies in the data and caveats that we should consider.  A few students indicated that no one recommended the CEC to them AND indicated that another student recommended the CEC to them.  And it was during this cohort of students’ career at Augustana that the CVR (Center for Vocational Reflection) merged with a variety of other services including Career Services to create the CEC – making it possible that some students might not have considered their earlier recommendations to the CVR when responding to this question.  But even in the presence of these caveats, we should be willing to ask ourselves whether our students’ experience mirrors the values that we purport to hold.

The other aspect of this particular question that I find interesting is the degree to which the difference in responses to this question (no one recommended the CEC to me vs. someone recommended the CEC to me) might mask statistically significant differences on many other questions in the senior survey.  Now I’m not claiming that there is a direct relationship between this question and all of the others on which student responses also differed.  However, it seems to me highly possible that, like many other situations in life where one unique opportunity correlates with or begets a series of other opportunities that ultimately separates a person from the pack, interaction with the CEC may indeed open up pathways and ways of thinking about the college experience in the same way that color changes the fundamental nature of black and white film.

It turns out that students who said no one recommended the CEC to them differed significantly (in a statistical sense) on many items on the senior survey that involve the advising experience, the broader curricular experience, and the co-curricular experience.  Next week I’ll talk more about what we might learn from this array of differences.

Make it a good day,

Mark

How Greek Membership Shapes Our Students’ Experience

Listening to some faculty talk, you’d think that fraternities and sororities at Augustana are a deadly concoction of Sodom and Gomorrah, Mardi Gras, Las Vegas, and Carnival, whipped up in a blender and chugged through a fire hose from a second story beer bong.   Yet, we all know of greek organizations – at Augustana and elsewhere – that make important contributions to the local community and the development of their members.  Thankfully, we don’t have to settle for dueling anecdotes.  We have plenty of data on students in Augustana’s greek organizations that allow us to test this clash of narratives.  So, since I’m on a bit of a mythbuster’s kick lately . . . let’s see what we can find out.

When the entering class of 2008 arrived at Augustana, little did they know that they would be studied like no class before.  They provided data three times as a part of the Wabash National Study (beginning of freshman year, end of freshman year, and end of senior year).  They were also the first class to complete the new senior survey in the spring of 2012.

From the data gathered at the end of the freshman year (spring, 2009), we found one set of troubling results among first-year greek members.  Freshmen who joined greek organizations reported larger increases than their independent (non-greek member) peers on three items during the first year.

  • The number of times in a week that they drank alcohol
  • The number of times in a week that they had five or more alcoholic drinks
  • The number of days in the week that they felt sleep deprived

In addition, greek members, on average, earned a lower spring GPA – even after accounting for students’ incoming ACT score and academic motivation.  Unsurprisingly, being male exacerbated each of these differences, while being female minimized them.  Interestingly, despite these potentially negative effects, greek membership did not decrease the likelihood of retention, probably because students don’t join greek organizations until the spring term, and the primary driver of persistence or withdrawal – academic performance – has already culled the herd during the previous winter and fall terms.
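The phrase “after accounting for” incoming ACT score and academic motivation implies a regression of some kind.  The post doesn’t show the underlying model, but a minimal sketch of the idea – with entirely simulated numbers, not our actual student data – might look like this:

```python
# Hedged sketch: regress spring GPA on a greek-membership indicator plus
# ACT and a motivation score, so the membership coefficient reflects the
# gap net of those controls.  All data below are simulated.
import numpy as np

rng = np.random.default_rng(1)
n = 500
act = rng.normal(25, 3, n)            # simulated incoming ACT scores
motivation = rng.normal(0, 1, n)      # simulated motivation scale (standardized)
greek = rng.integers(0, 2, n)         # 1 = greek member, 0 = independent
# Build GPA with a known -0.12 greek "effect" plus noise
gpa = 1.0 + 0.08 * act + 0.15 * motivation - 0.12 * greek + rng.normal(0, 0.3, n)

# Ordinary least squares via numpy's least-squares solver
X = np.column_stack([np.ones(n), act, motivation, greek])
coef, *_ = np.linalg.lstsq(X, gpa, rcond=None)
print(round(float(coef[3]), 2))  # estimated greek gap after controls (negative here)
```

The point of the controls is that a raw GPA gap could simply reflect who joins; the regression asks whether the gap remains among students with similar incoming ability and motivation.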

Fast-forward to the end of the senior year.  At this point, what initially seemed a more negative picture becomes more complicated.  While greek members’ average GPA still trails that of non-greek members, the gap noted in the spring of the first year has shrunk by about 25%.  Again, being female mitigates the difference further, likely making the gap in average GPA between female greek and non-greek members insignificant.

However, greek students’ scores on several senior survey items suggest that this experience provided some important benefits.  On average, greek members responded more positively (defined by differences that proved statistically significant) to these statements:

  • My co-curricular experiences provided numerous opportunities to interact with students who differed from me in race/ethnicity, sexual orientation, religious beliefs, or social/political values.
  • My co-curricular involvement helped me build a network of healthy lasting friendships.
  • My co-curricular involvement helped me develop a better understanding of my leadership skills.
  • I felt a strong sense of belonging on campus.
  • The college emphasized an atmosphere of ethnic and cross-cultural understanding.
  • Augustana faculty and staff welcomed student input on institutional policy and handbook decisions.
  • If you could relive your college decision, would you choose Augustana again?

Taken together, these findings spell out much of the good and the bad of greek life.  On one hand, during the first year it appears that some behaviors emerge among greeks that could – and sometimes do – negatively impact students’ success.  On the other hand, by the time this group of students graduates, at least one of those deficits has been legitimately reduced, and the educational efforts of the college – particularly on the co-curricular side – appear to have produced a series of benefits that match our own educational intentions.

Of course, one important question – and a longstanding one – is how we might eliminate the bad without losing the good.  Our student affairs staff continually works to counter the impact of pledging on student success, even in the face of stiff pushback from many greek members and alumni.  Might there be a role for faculty to play in this endeavor?  Probably.  Does that role include railing against a stereotype of greeks that actually perpetuates a stereotype of faculty among students and, in so doing undermines the very trust necessary to influence students’ behavior outside of class?  Probably not.

But the question that jumps out at me is slightly different.  While it’s great to see graduating seniors from greek organizations respond so positively to all of these questions, should we actually be celebrating this?  What is it about NOT belonging to greek organizations that produces systematically lower scores on so many important markers of the college experience we are trying to deliver?  For example, I’m not comfortable with finding that the greek members’ sense of belonging on campus score was more than half a point higher than non-greek members (4.26 vs. 3.71); not because I begrudge greek organizations, but because I’m not sure I see a compelling reason for greek membership on our campus to produce such a stark difference.

It’s easy to point to anecdotes of the college experience at its best; and we have many wonderful tales of students – greek and non-greek – who have changed fundamentally during their four years at Augustana.  But as I look at these findings, my concern tends toward the students who experience less than our best.  I’d be curious to figure out what we might do to minimize, or even eliminate, the statistically significant differences between greek and non-greek members across all of these senior survey experience questions.

Answers?  You wanted answers?  Oh, grasshopper . . .

Have a great Homecoming week – and let’s not leave anyone on the outside looking in.

Make it a good day,

Mark

 

 

The myth of the vanishing humanities professor

As much as I try to be a kind, sensitive, and empathetic institutional researcher (group hugs every fifth Tuesday – no, not really!), I can’t resist salivating just a little bit whenever word of a new uber-explanatory claim pops up on my radar.  Part of my interest comes merely from a persistent drive to apply evidence to better understand what we do.  Sometimes, we make decisions that produce unintended consequences – and many times the impact of those decisions rises to the surface inductively, through the observations of some who, thankfully, are uniquely predisposed to see it.  However – and I fully own up to my dark side here – the chance to test a claim that has already gotten itself a bandwagon, a theme song, and the specter of pitchforks and torches storming the Bastille is an institutional researcher’s dream chance to “speak truth to power.”  It’s bratwurst to a Bears fan, grog to a Viking, a soy latte to an NPR member . . . you get the picture.

For many, the recent decision to merge the German and Scandinavian programs has felt like another body blow to the core values on which Augustana was founded.  Moreover, this decision all too easily feeds into a larger narrative that Augustana, like many traditional liberal arts colleges before it, has long since abandoned its commitment to the liberal arts even as it has disingenuously held on to the relative prestige of claiming to be something that it is not.

So . . . have we gutted our commitment to the liberal arts?  I purposefully choose this inflammatory language because it is exactly the wording that was used when the claim was made to me – complete with raised intonation and eyebrows.  There are many ways to unpack this question, but I’m writing a blog, not a book.  However, there are a couple of ways that we might examine our data to test this claim.  To that end, I’d like to introduce a couple of data points and one observation that might flesh out this story just a little bit.

One way that an institution might shift its commitment away from the liberal arts would be to move faculty positions away from core liberal arts disciplines like the humanities, foreign languages, and fine arts and add faculty lines to new or existing pre-professional programs.  While this by no means should be considered “smoking gun” evidence, if this were indeed the case, it would lend real support to the claim that Augustana had given up its commitment to the liberal arts.

So I decided to look for any evidence of a shift in faculty distribution over the past ten years. (Whether we should have gone back further to the late sixties or early seventies is an entirely valid critique.)  Nonetheless, we started by building a baseline from 2001.  Thanks to Sarah Horowitz and Jamie Nelson in Special Collections, we tracked down a 2000-01 college directory and manually counted the number of faculty in each discipline.  As best as we can tell (it’s possible that some faculty were not listed in the directory for some reason), there were 78 faculty FTE (full time equivalent) employed by Augustana in the humanities, foreign languages, and fine arts ten years ago.  To put that in context, these 78 faculty FTE made up 49.7% of the 157 total faculty FTE.

So how does the 2000-01 distribution compare to today?  Last year, 2011-12, 114 faculty FTE were employed in humanities, foreign languages, and fine arts disciplines – 53.3% of our 214 total faculty FTE.  In the particular case of foreign languages, in 2000-01 there were 18 faculty FTE teaching in foreign language departments.  In 2011-12, there were 20.33 faculty FTE teaching in foreign language departments (we included classics in this analysis to be sure that Latin and Greek weren’t left out).

This evidence hardly supports the assertion that Augustana is gutting the liberal arts.  Just as a reminder, I am not suggesting that this is “smoking gun” evidence to dismiss the aforementioned claim.  There might be evidence that other academic departments have lost positions to the pre-professional programs or that the relative distribution of full-time and part-time instructors has shifted away from the core liberal arts disciplines, although a cursory glance suggests to me that neither of these possibilities is likely.  So, at least in terms of overall faculty distribution in the traditional liberal arts, the trend over the last ten years suggests an increased investment in the most traditional liberal arts disciplines.

But this data doesn’t mean that there hasn’t been a shift in students’ academic behavior patterns that might translate into a different distribution of majors and minors.  In this context, there certainly might be some perceived winners and losers.  Our institutional data does show some changes in student academic interests over ten years, but the totality of these shifts merely complicates the story.  While the proportion of students declaring their “primary” major in the humanities has declined, the proportion of students declaring a “secondary” major or minor in the humanities has remained strong and maybe even ticked up slightly.  Some of this is due to an overall increase in the number of second majors and additional minors that students now obtain.  So even though this data might reflect a modest shift in student priorities, it’s a long way from suggesting that the college is gutting the liberal arts.

So where does this leave us?  That isn’t my question to answer.  My goal here was only to test the veracity of a claim that seems to be a popular rallying cry in some circles at the moment.  Based on this evidence, and assuming that our investment in and distribution of faculty lines across the college reflect our educational philosophy, it’s pretty hard to make the case that Augustana has abandoned its commitment to the liberal arts.

However, this evidence doesn’t address the question of whether or not our collective emphasis on an interdisciplinary, liberal arts education has waned in the face of increasingly siloed major requirements, a growing belief in the perceived value of a double major and/or a second minor, and institutional policies that waive course requirements fundamental to the liberal arts (e.g., foreign language competency).  But that conversation is a very different one – one that probably involves an examination of our espoused values, a hard look at the ramifications of our actual curricular and co-curricular policies, and a mirror.

Make it a good day,

Mark

 

Hiding under the “average” blanket

Higher ed folks often toss around numbers that supposedly describe the quality of a given college or university.  But a funny thing happens on the road to an “average” score.  Although it might approximate everyone, it rarely describes anyone in particular.  So unless a college hires an Institutional Psychic to predict the individual path of each new student (The Nostradamus Endowed Chair of Student Success?), metrics like an average retention rate or a student-faculty ratio don’t tell us as much about a place as we might like – or want – to think.

But this doesn’t mean that the data is useless.  In fact, we can learn a lot about ourselves by looking for differences between subsets of students on a variety of such metrics.  For example, an overall retention rate could – and often does – mask stark differences in persistence between high and lower ability students, high and lower income students, or men and women.  Identifying the nature of those differences could point us toward the solutions that would help us improve what we do.
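To make the point concrete, here’s a minimal sketch of how a single overall rate can mask a wide gap between subgroups.  The group labels and counts below are invented for illustration; they are not Augustana’s actual retention figures.

```python
# Hypothetical cohort: (students retained, students enrolled) per subgroup.
# Labels and numbers are made up purely to illustrate the point.
cohort = {
    "middle/upper income": (180, 200),
    "lower income": (30, 50),
}

# One healthy-looking overall rate...
overall = sum(r for r, n in cohort.values()) / sum(n for _, n in cohort.values())
print(f"overall retention: {overall:.0%}")

# ...that hides very different subgroup rates.
for group, (r, n) in cohort.items():
    print(f"{group}: {r / n:.0%}")
```

Here the overall rate of 84% sits comfortably between a 90% rate for one group and a 60% rate for the other; only the breakout points us toward where intervention might matter.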

Over the last several years we’ve increasingly employed this approach to squeeze more useful information out of our student experience data.  Many of you have already seen the way that the experiences of your majors might differ from other Augustana students in your senior survey departmental reports.  Taking the same approach that we use to better understand student retention (dividing students by gender, race/ethnicity, socioeconomic status, and academic preparation/incoming ACT score) reveals a layer of nuance that I believe deepens our understanding of the Augustana experience across diverse student types.  It also helps us use evidence to think about how we might engage specific types of students in specific moments to more carefully mitigate these differences.

As an aside, the differences that we spend the most time considering are those that cross a threshold of statistical significance – meaning that there is less than a 5% chance that the observed difference is merely coincidental (the test we used is called a t-test).  In this post I am going to focus on differences between low income and middle/upper income students.  Future posts will consider the differences that emerge across a range of variables including gender, race/ethnicity, and academic preparation.
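For readers curious what such a test actually computes, here is a small sketch using Welch’s t statistic (one common flavor of the t-test).  The response data below are invented for illustration and come from no actual survey.

```python
# Toy two-sample t-test sketch; all numbers are fabricated.
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)  # sample variances
    return (mean(a) - mean(b)) / ((va / na + vb / nb) ** 0.5)

pell = [4, 5, 4, 4, 5, 3, 5, 4, 4, 5]       # hypothetical 1-5 responses
non_pell = [3, 4, 3, 3, 4, 2, 4, 3, 3, 4]
t = welch_t(pell, non_pell)
print(round(t, 2))
```

For samples of this size, a |t| larger than roughly 2.1 corresponds to p < .05; the statistic is then compared against that critical value to decide whether the gap between group means is unlikely to be coincidental.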

Comparing low income students with middle/upper income students presents a great example of the complexities this kind of analysis can provide.  We used Pell Grant eligibility as the marker of lower income – it’s an easy way to categorize financial need, even as it probably over-simplifies the impact of socioeconomic status (SES).  As you look through the items on which differences emerged, think about the possible factors that might produce a statistically significant difference between the two groups’ responses.

Lower income students scored higher than middle/upper income students on several items.

  • My co-curricular activities provided numerous opportunities to interact with students who differed from me in race/ethnicity, sexual orientation, religious beliefs, or social/political values.
  • My out-of-class experiences have helped me connect what I learned in the classroom with real life events
  • In your non-major courses, about how often were you asked to put together ideas or concepts from different courses when completing assignments and during class discussions?
  • My interactions with the librarians helped me improve my approach to researching a topic.
  • Augustana faculty and staff welcomed student input on institutional policy and handbook decisions.
  • When you had questions or concerns about financial issues, were the offices you contacted responsive and helpful?

Conversely, lower income students scored lower than middle/upper income students on one item.

  • My out-of-class experiences have helped me develop a deeper understanding of myself.

The scope of these differences is fascinating.  In some instances low income students’ responses seem comparatively more positive than those of the rest of the student body.  While this might suggest that some of our efforts are indeed providing a compensatory impact, I think these findings highlight the relative lack of pre-college opportunity that lower income students often must overcome (fewer communal resources like libraries or access to technology, less exposure to some of the ideas fundamental to the liberal arts, etc.).  In other cases, these findings might be evidence of the quality of our effort to be sensitive and inclusive to these students (e.g., the relatively more positive interactions for low income students when asking for help with financial issues).  Understanding the nature of these differences could play an important role in shaping our daily interactions with students who may, unbeknownst to us, come from a lower socioeconomic background.

At the same time, some of these differences suggest that certain students’ experiences are less positive.  Given the small number of lower income students at Augustana, it makes sense that they would notice interacting across socioeconomic difference more often than middle/upper income students do.  In some cases this might contribute to a sense of marginalization for low income students.  Finally, the difference in responses on the question about whether “out-of-class experiences helped me develop a deeper understanding of myself” is particularly intriguing.  I’d like to know more about the underlying factors that might influence that difference.

Taken together, these findings replicate the results of many recent studies regarding the impact of social class on college students – an impact that extends far beyond financial constraints.  What have you observed in your interactions with low income students?  Are there things you have done that seem to help these students succeed at Augustana?  As you interact with students this week, I hope these findings expand your sense of the ways in which our students experience Augustana differently, and how our sensitivity to these differences can improve our educational impact.

Make it a good day,

Mark

 

Complicating the extrinsic motivation and getting good grades narrative

Faculty often cringe when students ask, “What do I have to do to get an ‘A’ on this assignment?”  For most educators, this question feels more like an unsolicited back alley proposition than a genuine expression of intellectual curiosity.

Yet from the student’s perspective, grades may represent a very different kind of negotiation.  Not only have grades dictated their access to future educational opportunities, extra-curricular experiences, and sometimes even cash(!) since elementary school, but the categories of “A” student, “B” student, and “C” student have all too often come to represent individual worth and long-term potential – not just the quality of one’s work on a particular assignment.  Sadly, we’ve done a pretty good job of validating this conception.  Remember the “My kid is an honor student at ____ school” bumper stickers that still adorn many a late model mini-van or SUV?

Luckily, disentangling the relationship between our students’ perception of grades and their motivational orientations can be approached as an empirical question.  Last year we began a four-year study of the experiences that shape our students’ intrinsic motivation.  As a part of this study, we included a measure of extrinsic motivational orientation and a question that asked students to indicate the importance they place on getting good grades.

This summer, we tested the relationship between extrinsic motivation and the importance of getting good grades at the end of the first year.  We assumed we’d find a significant relationship between these two variables.  So we were quite surprised to find no significant correlation between extrinsic motivation and importance of getting good grades.  However, we found a statistically significant positive – and moderately sized (.332) – correlation between students’ intrinsic motivational orientation and the importance of getting good grades.  Hmmm . . .
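For anyone who wants to peek under the hood, a Pearson correlation like the ones reported above can be computed in a few lines.  The paired scores below are fabricated for illustration; they are not our study data, and they were deliberately constructed to move together, so the resulting r is much higher than the moderate .332 we observed.

```python
# Toy Pearson correlation sketch; the data are invented.
from statistics import mean

def pearson_r(x, y):
    """Pearson product-moment correlation between two paired lists."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Hypothetical paired scores: intrinsic motivation (1-5 scale) and
# self-reported importance of getting good grades (1-5 scale).
intrinsic = [3.2, 4.1, 2.8, 4.5, 3.9, 3.0, 4.2, 3.6]
grades = [4, 5, 3, 5, 4, 4, 5, 4]
print(round(pearson_r(intrinsic, grades), 2))
```

In practice a correlation is also tested for statistical significance before being interpreted; a moderate value like .332 signals a real but far-from-deterministic relationship.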

At the very least, this suggests that we might need to think more carefully about the assumptions we make when students ask how they can earn an ‘A’ from us.  One student inquiry about earning a high grade might be an indication of the degree to which we simply have not communicated our expectations for an assignment clearly.  Another inquiry might reflect the degree to which a student considers the entire educational enterprise to be about jumping through hoops and collecting credentials.  Still another inquiry might only mean that the student has too many irons in the fire and is simply triangulating their available time, the expectations they perceive that you hold, and the grade they can afford to live with.

There are two additional considerations about grading practices and their relationship to student motivation that are worth noting.  First, letter grades emerged during a time in which the learning expected of students was primarily about content knowledge.  But as content has shifted from an end to a means – with colleges now focused on developing more complex skills and dispositions in addition to content knowledge – we have done very little to ask whether the traditional metric for assessing student performance might benefit from some reconsideration.

In addition, at Augustana we don’t impose a single definition of what a grade represents.  Does an ‘A’ mean that a student has met an externally defined threshold of competence?  Or does it mean that a student has improved substantially over the course of a term?  Or is it some combination of the two that shifts as the course progresses?  Or maybe the role of the course within the larger curriculum should determine whether grading is about improvement or competence.

Faculty employ varying iterations of these conceptions across the array of courses that they offer, and all three approaches seem entirely appropriate for different situations.  But from the students’ perspective, unless they actually understand that there are different approaches to grading, and that these approaches can (and probably should) vary depending upon the course, they are likely to feel blindsided when the conception chosen by the instructor differs from that expected by the student.  Any one of us would likely be frustrated by such a realization, and in that moment it seems entirely reasonable to ask the question, “How DO I get an ‘A’ in this class?”  Moreover, I think we would have good reason to be offended if someone responded to our question by challenging our motives for learning.

Since a large proportion of our students understand the impact of grades on their future prospects for graduate school or the job market, it is likely that many place great importance on getting a high grade regardless of their motivational orientation.  So, it appears that maybe – just maybe – the implications of a student asking, “How do I get an ‘A’ on this paper?” are, let’s just say . . . complicated.

Make it a good day,

Mark

 

 

 

 

The faculty adviser as a student’s GPS

At Augustana, we have always believed in the importance of faculty advising.  And we have solid evidence to support this approach.  In addition to the many proud stories of students who have blossomed under faculty tutelage, our recent senior survey data and our prior NSSE data both suggest that overall, our students are satisfied with the quality of our advising.  In fact, other NSSE data suggests that faculty ask students important advising questions about career aspirations more often than faculty at similar institutions.

Yet many of us share a gnawing sense that we can, and need to, do better.  Because even though these average scores roughly approximate general satisfaction, the degree of variability that lurks beneath them hides an uncomfortable truth.  For each advising relationship that inspires a student to excel, there are others in which students gain little substantive benefit from their advising interactions.

One way to strive for improvement with some measure of confidence is to collectively apply a theoretically grounded framework of advising with a formative assessment feedback mechanism to guide our advising conversations and hone them over time.  One theory of advising, often called developmental advising, positions the adviser as a guide to help students both select and weave together a set of curricular and co-curricular experiences to attain important learning outcomes and post-graduate success.  In many ways, it harkens back to the artisan/apprentice model of learning placed in the context of the liberal arts.  In our senior survey, we included a set of questions informed by this framework to assess the degree to which students experience this kind of advising.  The table below reports the average responses to these questions among students who graduated in 2012.

| Question | Mean | St.Dev. |
| --- | --- | --- |
| My adviser genuinely seemed to care about my development as a whole person.* | 4.13 | 1.003 |
| My adviser helped me select courses that best met my educational and personal goals.* | 3.98 | 1.043 |
| How often did your adviser ask you about your career goals and aspirations?** | 3.55 | 1.153 |
| My adviser connected me with other campus resources and opportunities (Student Activities, CEC, the Counseling Center, etc.) that helped me succeed in college.* | 3.44 | 1.075 |
| How often did your adviser ask you to think about the connections between your academic plans, co-curricular activities, and your career or post-graduate plans?** | 3.31 | 1.186 |
| About how often did you talk with your primary major adviser?*** | 3.47 | 1.106 |

The response options are noted below.

*1=strongly disagree, 2=disagree, 3=neutral, 4=agree, 5=strongly agree
**1=never, 2=rarely, 3=sometimes, 4=often, 5=very often
***1=never, 2=less than once per term, 3=1-2 times per term, 4=2-3 times per term, 5=we communicated regularly throughout the term

 

First, I think it’s useful to consider the way that each question might enhance student learning and development.  In addition, it is important to note the relationship between questions.  It seems that it would be difficult for a student to respond positively to any specific item without responding similarly to the previous item.  Taken together, this set of questions can function as a list of cumulative bullet points that advisers might use to help students construct an intentionally designed college experience in which the whole is more likely to become qualitatively greater than the sum of the parts.

Second, the data we gather from these questions can help us assess the nature of our efforts to cultivate our students’ comprehensive development.  Looking across the set of mean scores reported above, it appears that our students’ advising experiences address optimal course selection more often than they help students connect their own array of disparate experiences to better make the most out of college and prepare for the next stage of their lives.

Yet, if we were to adopt this conception of advising and utilize future senior survey data to help us assess our progress, I am not sure that continuing to convert each question’s responses to a mean score helps us move toward that goal.  The variation across students, programs, student-faculty relationships, and potential pathways to graduation doesn’t lend itself well to such a narrowly defined snapshot.  Furthermore, suggesting that we just increase an overall mean score smells a lot like simply adding more advising for all students instead of adding the right kind of advising at just the right time for those who need it the most.

A more effective approach might be to focus on reducing the percentage of students who select particular responses to a specific item.  For example, in response to the question, “How often did your adviser ask you to think about the connections between your academic plans, co-curricular activities, and your career or post-graduate plans?” 25% of the 2012 graduating students indicated “never” or “rarely.”  It is entirely possible to reduce that proportion substantially without markedly increasing an average score.  For example, if we were to find a way to ask every student to consider the questions outlined in the senior survey once per term while at the same time focusing less on whether students indicate “often” or “very often,” we might find that the proportion of students indicating “never” or “rarely” drops considerably while the mean score remains about the same.  More importantly, I would suggest that at the end of the day we might have become more effective (and dare I say more efficient) in making the advising relationship a positive and influential piece of the educational experience without exhausting ourselves in the process.
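A quick sketch makes the arithmetic of this argument visible.  The two sets of 1-to-5 responses below are fabricated, but they share the same mean while the share of “never/rarely” (1 or 2) answers drops from over a third to zero.

```python
# Toy illustration with invented 1-5 survey responses.
def summarize(responses):
    """Return (mean, share of responses at 'never' or 'rarely', i.e. <= 2)."""
    avg = sum(responses) / len(responses)
    low_share = sum(1 for r in responses if r <= 2) / len(responses)
    return avg, low_share

# Hypothetical distributions before and after asking every student
# the question at least once per term.
before = [1, 1, 2, 4, 5, 5, 5, 5]
after = [3, 3, 3, 3, 4, 4, 4, 4]

print(summarize(before))   # same mean, large "never/rarely" share
print(summarize(after))    # same mean, no "never/rarely" responses
```

Both distributions average 3.5, yet the proportion of students reporting “never” or “rarely” falls from 37.5% to 0% – exactly the kind of improvement a mean score alone would never reveal.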

As we embark on our HLC Quality Initiative to improve our advising efforts, I hope we will think carefully about the way that we utilize our data to understand our progress.  Our goal is to improve our educational effectiveness – not just move a number.

Make it a good day,

Mark

 

 

 

 

 

Faculty impact on students’ preparation for life after college

In our new senior survey we included two items that serve as useful measures of an Augustana education.

  • “If you could relive your college decision, would you choose Augustana again?”  (five response options from “definitely no” to “definitely yes;” scored 1 to 5)
  • “I am certain that my post-graduate plans are a good fit for who I am right now and where I want my life to go.”  (five response options from “strongly disagree” to “strongly agree;” scored 1 to 5)

While these two items should not be misconstrued (and were not intended to function) as a comprehensive accounting of our success or failure, they do provide some sense of 1) the value our students place on their Augustana experience, and 2) the impact of that experience on their immediate future.  Here are the average scores from our May 2012 graduates.

| Question | # of Responses | Mean | Std. Deviation |
| --- | --- | --- | --- |
| Certainty of fit regarding post-graduate plans | 497 | 4.06 | 0.888 |
| Likelihood of choosing Augustana again | 491 | 4.19 | 0.895 |

Even though there might be an audience for whom these scores are particularly important (prospective students, board members, accrediting organizations, etc.), for those of us engaged in the rough-and-tumble work of educating, these numbers don’t tell us much about how we might improve what we do.  For this purpose we need a different type of analysis that tests the relationship between experiences and outcomes.  Furthermore, we must keep in mind that the implications of whatever we find are inevitably going to be nuanced, complicated, and potentially booby-trapped.  One of the critical and maddening mistakes that folks often make in translating student-derived data into actionable change is that they too easily succumb to the belief that there exists a magic wand – or worse still, that they are the magician.

It turns out that among this most recent group of graduates there were four specific student experiences that increased the likelihood of a student saying that they “definitely” would choose Augustana again and that they “strongly agree” that their post-graduate plans are a good fit.  I’ll spare you the technical stuff for the sake of the statistophobic (you know who you are!); let me just note that we utilized several statistical procedures to give us a legitimate degree of confidence in the validity of our findings.  Of course, if you really want to know all of the gory details, just shoot me an email or post a comment below.

One of those influential experiences involves the role of faculty in helping students achieve their post-graduate plans. Students were asked to respond to this statement:

“Faculty in this major knew how to help me prepare to achieve my post-graduate plans.”  (five response options from “strongly disagree” to “strongly agree;” scored 1 to 5)

Students’ responses to this question produced a statistically significant positive effect on both outcome questions.  In other words, as students’ belief that “faculty in their major knew how to help them prepare to achieve their post-graduate plans” increased, the likelihood of 1) definitely choosing Augustana again, and 2) being very certain that their post-graduate plans were a good fit went up.

So how do we translate this into plausible and sustainable improvement?  It would be easy to resort to naive platitudes (“Hey everyone – prepare your students better, ok?”).  Instead, I’d like to suggest three interconnected ideas that might help us deconstruct the nature of faculty influence on students’ post-graduate preparations and maybe identify a few simple ways to enhance your students’ preparation for life after college.

Different Strokes for Different Folks

The lines connecting a given major to a particular career vary and blur considerably across disciplines.  For example, the array of post-graduate plans among humanities majors might seem almost infinite compared to those among business or education majors.  However, we can help students think about connecting their career aspirations to their day-to-day and term-by-term actions so that they are better positioned to seek out the right experiences, ask the right questions, and make the right impression when the opportunity arrives.  Simply asking students to articulate these connections at the very least encourages them to seek their own answers; and in the process increases your impact on their successful preparation.

You Don’t Have to Know Everything Yourself

Just as the message can get lost in the delivery, so too can our delivery accentuate the spirit of our message.  When we take the initiative to direct students to other campus offices that can help them think about and prepare to achieve their post-graduate plans (even when students haven’t overtly asked for help), we express the degree to which we want to help our students succeed.  Not only does this effort help students find practical and individualized information, it also increases the likelihood that students will see faculty as go-to resources for other connections that will help them make the most of their college experience.

From Whence Do Your Students Come?

We all have stories of discovering the extent to which students don’t know what they don’t know.  In many cases, this knowledge gap is shaped by prior assumptions that students bring with them to college.  Maybe they’re from a small town.  Maybe they’re a veteran.  Maybe they’ve just transferred from another school.  Knowing these kinds of details about our students’ background can provide important insights into why someone might not pursue participation in an experience that would seem to be ideal for them.  This knowledge can also help us identify the combination of experiences that best fits each student – but we will never be able to help those students connect the dots if we don’t understand the context from whence they come to college.

As you work with your students this week, remember that they don’t just see you as professor – they see you as a guide.

Make it a good day,

Mark

What’s in a name?

When I first floated the idea of a weekly column, everyone in the Dean’s office seemed to be on board.  But when I proposed calling it “Delicious Ambiguity,” I got more than a few funny looks.  Although these looks could have been a mere byproduct of the low-grade bewilderment that I normally inspire, let’s just say for the sake of argument that they were largely triggered by the apparent paradox of a column written by the measurement guy that seems to advocate winging it.  So let me tell you a little bit about the origins of the phrase “Delicious Ambiguity” and why I think it embodies the real purpose of Institutional Research and Assessment.

This particular phrase is part of a longer quote from Gilda Radner – a brilliant improvisational comedian and one of the early stars of Saturday Night Live.  The line goes like this:

“Life is about not knowing, having to change, taking the moment and making the best of it, without knowing what’s going to happen next.  Delicious Ambiguity.”

For those of you who chose a career in academia specifically to reduce ambiguity, this statement probably inspires a measure of discomfort.  And there is a part of me that admittedly finds some solace in the task of isolating statistically significant “truths.”  I suppose I could have named this column “Bland Certainty,”  but – in addition to single-handedly squelching reader interest – such a title would suggest that my only role at Augustana is to provide final answers – nuggets of fact that function like the period at the end of a sentence.

Radner’s view of life is even more intriguing because she wrote this sentence as her body succumbed to cancer.  For me, her words exemplify intentional – if not stubborn – optimism in the face of darkly discouraging odds.  I have seen this trait repeatedly demonstrated in many of you over the last several years as you have committed yourselves to helping a particular student even when that student seems entirely uninterested in learning.

Some have asserted that a college education is a black box; some good can happen, some good does happen – we just don’t know how it happens.  On the contrary, we actually know a lot about how student learning and development happens – it’s just that student learning doesn’t work like an assembly line.  Instead, student learning is like a budding organism that depends on the conduciveness of its environment; a condition that emerges through the interaction between the learner and the learning context.  And because both of these factors perpetually influence each other, we are most successful in our work to the degree that we know which educational ingredients to introduce, how to introduce them, and when to stir them into the mix.  The exact sequence of the student learning process is, by its very nature, ambiguous because it is unique to each individual learner.

In my mind, the act of educating is deeply satisfying precisely because of its unpredictability.  Knowing that we can make a profound difference in a young person’s life – a difference that will ripple forward and touch the lives of many more long after a student graduates – has driven many of us to extraordinary effort and sacrifice even as the ultimate outcome remains admittedly unknown.  What’s more, we look forward to that moment when our perseverance suddenly sparks a flicker of unexpected light that we know increases the likelihood – no matter how small – that this person will blossom into the life-long student we believe they can be.

The purpose of collecting educational data should be to propel us – the teacher and the student – through this unpredictability, to help us navigate the uncertainty that comes with a process that is so utterly dependent upon the perpetually reconstituted synergy between teacher and student.  The primary role of Institutional Research and Assessment is to help us figure out the very best ways to cultivate – and in just the right ways manipulate – this process.  The evidence of our success isn’t a result at the end of this process.  The evidence of our success is the process.  And if we pool our collective expertise and focus on cultivating the quality, depth, and inclusiveness of that process, it isn’t outlandish at all to believe that our efforts can put our students on a path that someday just might change the world.

To me, this is delicious ambiguity.

Make it a good day,

Mark