Designing semesters bit by bit – Look what we can do!

In the midst of all the inevitable end-of-spring-term craziness, the thought of contemplating one more semester design vote doesn’t seem all that appealing. Arguably, the question of whether or not to include advising within our calculus of faculty load is the most complicated of the many decisions we’ve made this year. I don’t fault anyone one bit for feeling overwhelmed, or even a little crabby, about this last vote – no matter what you think we ought to do.

But in the midst of all this, I think it is worthwhile to step back a little and have a look at what we’re on the verge of accomplishing. You might not be in the mood for hyperbole at the moment, but the truth is that we are about to complete something that almost no other institution has done. We’ve actually designed an entire semester calendar and curriculum framework out in the open, step by step, modeling the implications of all the competing issues from the very beginning and then remodeling the implications of each decision on the larger picture at each step along the way. That isn’t to say that we’ve done everything perfectly – after all, we are no more than a bunch of imperfect yokels trying to pull off something extraordinary, something that few schools have ever done and that most wouldn’t dare to try. Call me a Pollyanna, but if you zoom out and look back at what we’ve accomplished this year, I think you’d be hard pressed not to be impressed.

What have we done since September? Here are the decisions faculty have made that set each of the major elements of the new semester design in place.

  • Voted for an immersive term (140 to 26)
  • Voted for a 4-credit course base instead of a 3-credit course base (126 to 37)
  • Voted for the immersion term to occur in January instead of May (136 to 35)
  • Voted for 124 credits to graduate instead of 120 (92 to 71)
  • Approved the structure proposed for General Education (109 to 31)
  • Approved the second language requirement unanimously
  • Approved a framework for major design and footprint unanimously

Other than the vote about the total number of credits to graduate, each vote seems to reflect a clear sense among the community about the direction that is best for us.

In addition, two faculty votes have provided advisory recommendations to the Board of Trustees, the body that makes the final decision on these two specific issues.

  • Voted for a pre-Labor Day start to the academic year (67 to 59)
  • Voted that tuition should cover a relatively higher number of credits per year rather than a relatively lower number (98 to 62)

All in all, the amount of intellectual and emotional work that we have successfully sorted through to accomplish all of these decisions is truly extraordinary. It’s hard to imagine anyone NOT feeling at least a little bit more tired than normal these days.

So even if you’re feeling like you are running on fumes, try to take a second, breathe deeply, and look at how much we have accomplished. I, for one, am truly amazed and humbled. It’s an honor to be able to call myself a member of the Augustana community.

Make it a good day,

Mark

Improving our first-year advising: sometimes structure does matter

If you’ve been reading this blog for a while, you’ve almost certainly seen some of my posts about the data we’ve collected to assess and guide our advising practices at Augustana College (here, here, and here). However, those posts only get at part of the story. Since all of those posts drew from senior survey data, we can be almost sure that those findings primarily reflect our students’ advising experiences in their major(s). But we also know that first-year advising matters a lot. Many would argue it matters at least as much as major advising. So I’d like to dive into some of the advising data from our first-year students and see if there’s anything that we can learn from it.

In this post I’d like to focus on two items that we know are important for a successful first-year experience. First-year students answered these questions late in their fall term.

  1. My first-year adviser connected me with other campus offices, resources, or opportunities (offices like Student Activities, the Community Engagement Center, the Counseling Center) to help me succeed during my first year.
  2. My first-year adviser made me feel like I could succeed at Augustana.

The table below presents the average response scores to these items over the last four years. The response options were strongly disagree, disagree, neutral, agree, and strongly agree. These responses were converted to a 1-5 scale where 1 equals strongly disagree and 5 equals strongly agree.

Question 2013-14 2014-15 2015-16 2016-17
My first-year adviser connected me with other campus offices, resources, or opportunities to help me succeed during my first year.    3.55    3.62    3.83    3.89
My first-year adviser made me feel like I could succeed at Augustana.    4.07    4.20    4.25    4.21
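
For anyone curious about the mechanics, here is a minimal sketch of the scoring just described – mapping each text response to its 1-5 value and averaging – assuming the responses live in a pandas Series (the column name in the usage comment is made up):

```python
import pandas as pd

# Map each Likert response to the 1-5 scale described above.
LIKERT = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
          "agree": 4, "strongly agree": 5}

def mean_score(responses: pd.Series) -> float:
    """Convert text responses to 1-5 values and return their average."""
    return responses.str.lower().map(LIKERT).mean()

# Hypothetical usage: mean_score(survey["adviser_connected_me"])
```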

You can see that we’ve improved on both measures since 2013-14. I know that our first-year advising program has emphasized the importance of connecting students with the campus offices that can best help them, and it’s heartening to see that this effort may be producing results. With that said, it looks like we still have room to improve, since our average score hasn’t quite reached “agree” yet. By contrast, in each of the last four years our students, on average, “agree” that we made them feel like they could succeed at Augustana.

Interestingly, while the improvement in referring students to other campus resources seems fairly consistent, the improvement in making students feel like they could succeed seems to have plateaued over the last couple of years. But digging a little deeper, there is a wrinkle in our 2016-17 data that seems both to explain this plateau and to further emphasize the value of moving to the first-year advising structure that the faculty has approved for implementation next year.

This year (i.e., during the fall of 2016), about a third of our first-year student advising groups were enrolled in an FYI-100 course instead of merely meeting informally with their adviser throughout the term. For the students who were enrolled in this class, the average response score to the statement “My first-year adviser made me feel like I could succeed at Augustana” was 4.34. For the students who were not enrolled in this class (about two-thirds of the whole group), the average response score was 4.17.

Many longtime advisers said that the FYI-100 format helped them develop stronger relationships with their advisees. These advisers indicated that the stronger relationships allowed them to engage in more substantive conversations that, in turn, helped the students think more deeply about the nature of their college experience and the ways in which they could make the most of it.

As wonderful as it is to hear that we seem to be making improvements in our advising practices, it is even more exciting to see data confirming these bold strides toward even better first-year advising.

Make it a good day,

Mark

Wondering about fall-to-spring retention? Well, guess what?!

Even though most of us only wonder about retention in the fall, down here in the belly of the data beast we’ve been paying closer attention to our term-to-term retention rates for each cohort of students. Although those numbers can fluctuate more than fall-to-fall rates, they can also give us an early hint about whether we are trending in the wrong direction, trending in the right direction, or just holding steady. More specifically, in the context of last year’s excitement over a record-high fall-to-fall retention rate for first-year students, it makes some sense to have a look and see if our broader retention efforts are continuing to hold strong … or if all of last year’s hubbub was just that.

So now that we’ve locked in our enrollment numbers for the spring term we can calculate fall-to-spring retention rates for each cohort. Although we also record winter-to-spring retention rates, it seems like it’s a little easier to make sense of fall-to-spring numbers since winter-to-spring rates are, in essence, a percentage of a proportion (i.e., the number of students enrolled in winter term is only a percentage of those who enrolled in the fall, so a winter-to-spring retention rate by itself can be deceiving).
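
To see why, here is a quick illustration with made-up numbers (not our actual rates): a healthy-looking winter-to-spring rate can sit on top of meaningful fall attrition, because it is a percentage of an already-reduced group.

```python
fall_enrolled = 1000                 # hypothetical fall headcount
fall_to_winter = 0.95                # 95% of fall students return in winter
winter_to_spring = 0.97              # 97% of *those* students return in spring

# The fall-to-spring rate compounds the two term-to-term rates.
fall_to_spring = fall_to_winter * winter_to_spring
print(f"{fall_to_spring:.1%}")       # -> roughly 92.2%, lower than either rate alone
```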

To put our present numbers in context, the table below shows last year’s (2015-16) fall-to-spring retention rates, the prior three-year average of fall-to-spring retention rates (2013-14, 2014-15, and 2015-16), and finally our current 2016-17 fall-to-spring retention rates.

Cohort      2015-16 Fall/Spring    Prior 3-Year Average (13/14, 14/15, 15/16)    2016-17 Fall/Spring
1st year           94.2%                            93.7%                               93.8%
2nd year           96.8%                            95.9%                               97.2%
3rd year           97.5%                            97.0%                               98.1%
4th year           92.9%                            93.4%                               95.2%

A couple of things jump out. First, our first-year fall-to-spring retention rate is down slightly from last year’s high. To put this difference in real terms, we would have needed to retain three additional students to match last year’s percentage. However, we did manage to beat the prior three-year average by a hair, which is often a good way to tell if we are headed in the right direction. It’s also good to remind ourselves that a few years ago we estimated that, if everything went perfectly with a first-year class, the best fall-to-fall retention rate we could hope for would be 90%. Last year we hit 88.9%. So we are already close to banging our heads on the proverbial ceiling. We will just have to wait to see how this translates into a fall-to-fall retention rate for the first-year cohort.
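
As a back-of-the-envelope check on that “three additional students” figure, assuming a hypothetical first-year cohort of roughly 750 (the post doesn’t give the actual headcount), the arithmetic works out like this:

```python
cohort_size = 750                    # assumed cohort size, for illustration only
last_year_rate = 0.942               # 2015-16 first-year fall-to-spring rate
this_year_rate = 0.938               # 2016-17 first-year fall-to-spring rate

# A 0.4 percentage-point gap on ~750 students is about 3 students.
shortfall = (last_year_rate - this_year_rate) * cohort_size
print(round(shortfall))              # -> 3
```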

Retention within the 2nd, 3rd, and 4th year cohorts is a different story. In each case, our fall-to-spring retention rate clearly beats both last year’s rate as well as the prior three-year average. Some folks have rightly suggested that we should be careful not to lose touch with the needs of upperclass students as we strive to bring up our first-year retention rate. These numbers seem to suggest that we might have managed to maintain that balance pretty well.

And if you are wondering if all of these increased retention rates translate into more students on campus this spring, indeed they do. Last year at this time, we had a student FTE of 2345 (FTE stands for “full-time equivalent” and is calculated by taking the number of full-time students and adding a third of the total number of part-time students – the idea being that three part-time students roughly equal one full-time student). Coincidentally, the prior three-year average spring FTE is also 2345.
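
If it helps to see the FTE arithmetic spelled out, here is a tiny sketch; the headcounts below are invented purely to reproduce last year’s figure of 2345:

```python
def spring_fte(full_time: int, part_time: int) -> float:
    """FTE = full-time headcount plus one-third of part-time headcount."""
    return full_time + part_time / 3

# Hypothetical headcounts chosen only to land on last year's FTE of 2345.
print(spring_fte(2300, 135))  # -> 2345.0
```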

But this spring, our FTE is 2399. That’s the largest spring FTE we’ve ever recorded.

Congratulations to everyone for your hard work on behalf of our students! In the face of all the budget pressures that we can’t control, it’s really heartening to see us so successful on one metric that we can influence.

Make it a good day,

Mark

Additional evidence that our first-generation students might need more explicit guidance

Sometimes social science researchers get too excited about testing new hypotheses and forget about the importance of retesting old ones. Although it’s understandable (why drive a used car when you could drive a new car?), this tendency is exceedingly detrimental to the body of knowledge we claim to have established. Because no matter how perfect the study design or how fantastic the results, one set of findings just doesn’t mean that much on its own – a reality that often gets lost in the hype.

In recent years, the tendency to overhype a single set of findings has become the subject of much hand-wringing. In 2010, the New Yorker published a long piece about a phenomenon called the decline effect, in which efforts to replicate prior studies are increasingly producing comparatively smaller and sometimes even insignificant results. Such results call into question the validity of many prior research findings. A 2013 article in the Economist outlined other research that produced similarly chilling reminders of the fallibility of science and scientists. And the conundrum gets really weird when you consider that a 2015 replication study appearing to challenge the validity of 100 well-known psychology findings was itself taken apart by a 2016 study that critiqued many of the 2015 study’s replication designs and summary conclusions.

I say all this to set up what might otherwise seem like a pretty mundane data point about first-generation students. But first, what do we think we know about first-gen students?

According to the current body of research on first-generation students, the existing evidence suggests that these students are more likely to lack basic knowledge about how college is supposed to work. In the absence of this knowledge, the fog is a little thicker, the path is less clear, and they are more susceptible to feeling lost and uncertain about their progress. All this sets up an increased vulnerability that heightens the potential for difficulty and early departure. Although we can see the gap in first-to-second-year retention rates between first-gen students and their peers, differences in retention rates don’t necessarily confirm the more granular elements of prior findings about the first-gen experience.

To find that kind of granular confirmation, we need to identify specific items in the first-year surveys that could suss out these differences, parse the array of data we gather from first-year students by first-generation status, and test for statistically significant differences.

One prime possibility is a survey item from the end of the first year that asks first-year students to respond (i.e., choosing from 5 options that range from strongly disagree to strongly agree) to the statement, “Reflecting on the past year, I can think of specific experiences or conversations that helped me clarify my life/career goals.” If the first-generation student experience involves a relatively higher frequency of feeling lost or unsure about how to connect all of one’s activities, classes, and experiences into a coherent narrative, then first-generation students’ responses, on average, should end up lower (and statistically significantly lower) than the overall average.

It turns out that this gap in average responses is profound. While the overall average score is 3.83 (which translates to just south of ‘agree’), the average score for first-gen students is 3.23 (just north of ‘neither agree nor disagree’), a gap that amounts to an “extremely” statistically significant difference (i.e., p<.001 for all you quant nerds out there). Since these two mean scores imply that the average response from non-first-gen students is a good bit higher than 3.83, it’s even more clear that whatever is going on isn’t merely a function of chance.
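
For the quant nerds, here is a minimal sketch of the kind of comparison behind that p-value, assuming the responses sit in a pandas DataFrame with a 1-5 “goal_clarity” column and a boolean “first_gen” flag (both names are illustrative, not the actual survey fields):

```python
import pandas as pd
from scipy import stats

def compare_first_gen(df: pd.DataFrame) -> None:
    first_gen = df.loc[df["first_gen"], "goal_clarity"]
    others = df.loc[~df["first_gen"], "goal_clarity"]
    # Welch's t-test: compares the group means without assuming equal variances.
    t, p = stats.ttest_ind(first_gen, others, equal_var=False)
    print(f"means {first_gen.mean():.2f} vs {others.mean():.2f}, p = {p:.4f}")
```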

It’s possible that this difference mirrors the degree to which first-generation students simply do not engage in as many potentially influential activities and experiences as other students. If this were the case, we’d likely see these differences emerge elsewhere in the data. However, every other measure of involvement and participation suggests that there are no differences in frequency of engagement between first-generation students and their peers.

So maybe this difference in recalling specific experiences or conversations that helped clarify life/career goals is exactly the kind of thing that we might expect based on our prior understanding of first-generation students’ experience. Maybe first-gen students are engaged in the same average number of experiences as other students, but they are less likely to recognize the potential value of these experiences. As a result, maybe not knowing to look for the potential value of an experience makes it less likely that these students would see a way to connect these experiences to a longer-term goal.

It seems that this finding fits with our prior understanding of first-generation students. It also has important implications for the way that we talk with first-gen students about what they are doing in college. More than simply suggesting what they might do, it appears that first-gen students might need even more explicit guidance about how to reflect on the impact of a given experience, how that reflective activity might help them decide what experiences to prioritize, and how to connect what they might have learned through one experience with the developmental purpose of a subsequent experience.

In future years it’s very likely that a healthy proportion (about a third) of our new students will continue to be first-generation students. Much of what they don’t know about college is stuff that they don’t know they need to know. So our job is not only to tell them what they could do, but to show them how to decide what to do and how to use what they learn through those experiences to guide their future choices.

Make it a good (snow) day,

Mark

Differences in our students’ major experiences by race/ethnicity; WARNING: messy data ahead

It’s great to see the campus bustling again. If you’ve been away during the two-week break, welcome back! And if you stuck around to keep the place intact, thanks a ton!

Just in case you’re under the impression that every nugget of data I write about comes pre-packaged with a statistically significant bow on top, today I’d like to share some data findings from our senior survey that aren’t so pretty. In this instance, I’ve focused on data from the nine questions that comprise the section called “Experiences in the Major.” For purposes of brevity, I’ve paraphrased each of the items in the table below, but if you want to see the full text of the question, here’s the link to the 2015-16 senior survey on the IR web page. The table below disaggregates the responses to each of these items by Hispanic, African-American, and Caucasian students. The response options are one through five, and range either from strongly disagree to strongly agree or from never to very often (noted with an *).

Item                                                       Hispanic   African-American   Caucasian
Courses allowed me to explore my interests                   3.86           3.82            4.09
Courses seemed to follow in a logical sequence               3.85           3.93            4.11
Senior inquiry brought out my best intellectual work         3.61           4.00            3.78
I received consistent feedback on my writing                 3.72           4.14            3.96
Frequency of analyzing in class *                            3.85           4.18            4.09
Frequency of applying in class *                             3.87           4.14            4.15
Frequency of evaluating in class *                           3.76           4.11            4.13
Faculty were accessible and responsive outside of class      4.10           4.21            4.37
Faculty knew how to prepare me for my post-grad plans        3.69           4.00            4.07

Clearly, there are some differences in average scores that jump out right away. The scores from Hispanic students are the lowest of the three groups on all but one item. Sometimes there is little discernible difference between African-American and Caucasian students’ scores, while in other instances the gap between those two groups seems large enough to be worth noting.

So what makes this data messy? After all, shouldn’t we jump to the conclusion that Hispanic students’ major experience needs substantial and urgent attention?

The problem, from the standpoint of quantitative analysis, is that none of the differences conveyed in the table meet the threshold for statistical significance. Typically, that means we would have to conclude that there are no differences between the three groups. But putting these findings in the context of the other things we already know about differences in student experiences and success across these three groups (i.e., differences in sense of belonging, retention, and graduation) makes a quick dismissal of the findings much more difficult. And a deeper dive into the data adds both useful insight and more mess.

The lack of statistical significance seems attributable to two factors. First, the number of students/majors in each category (570 responses from Caucasian students, 70 responses from Hispanic students, and 28 responses from African-American students) makes it a little hard to reach statistical significance. The interesting problem is that, in order to increase the number of responses from Hispanic and African-American students, we would need to enroll more students from those groups – something that might happen partly as a result of improving the quality of those students’ experience. But if we adhere to the statistical significance threshold, we have to conclude that there is no difference between the three groups, which makes us less likely to take the steps that might improve the experience, which would in turn improve the likelihood of enrolling more students from these two groups and ultimately get us to the place where a quantitative analysis would find statistical significance.

The other factor that seems to be getting in the way is that the standard deviations among Hispanic and African-American students are unusually large. In essence, this means that their responses (and therefore their experiences) are much more widely dispersed across the range of response options, while the responses from white students are more closely packed around the average score.
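
Here is a quick illustration of how these two factors interact, using made-up summary statistics rather than the actual survey numbers: the same quarter-point gap in means can fail or clear the p < .05 threshold depending on group size and dispersion.

```python
from scipy.stats import ttest_ind_from_stats

# Small group and a wide spread: a 0.25-point gap is not significant.
print(ttest_ind_from_stats(mean1=3.85, std1=1.30, nobs1=28,
                           mean2=4.10, std2=0.85, nobs2=570,
                           equal_var=False))

# The same gap with a larger group and a tighter spread is highly significant.
print(ttest_ind_from_stats(mean1=3.85, std1=0.90, nobs1=200,
                           mean2=4.10, std2=0.85, nobs2=570,
                           equal_var=False))
```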

So we have a small number of non-white students relative to the number of white students, and the range of experiences for Hispanic and African-American students seems unusually varied. Both of these findings make it even harder to conclude that “there’s nothing to see here.”

Just in case, I checked to see if the distribution of majors differed across the three groups. It did not. I also checked to see if there were any other strange differences between these student groups that might somehow affect these data. Although average incoming test score, the proportion of first-generation students, and the proportion of Pell Grant qualifiers differed, these differences weren’t stark enough to explain all of the variation in the table.

So the challenge I’m struggling with in this case of messy data is this:

We know that non-Caucasian students on average indicate a lower sense of belonging than their Caucasian peers. We know that our retention and graduation rates for non-white students are consistently lower than for white students. We also know that absolute differences between two groups of .20-.30 are often statistically significant when the groups are closer in size and the standard deviation (aka dispersion) falls in an expected range.

As a result, I can’t help thinking that just because a particular analytic finding doesn’t meet the threshold for statistical significance doesn’t necessarily mean that we should discard it outright. At the same time, I’m not comfortable arguing that these findings are rock solid.

In cases like these, one way to inform the inquiry is to look for other data sources with which we might triangulate our findings. So I ask all of you, do any of these findings match with anything you’ve observed or heard from students?

Make it a good day,

Mark

What experiences most effectively improve our students’ intercultural competence?

Looking back, we must have been a little bit clairvoyant to start a four-year study like this in 2012. After all, how else does one explain the serendipity of having such robust data on our students’ intercultural competence growth precisely when current events on campus and across the country seem to epitomize our societal need for more substantive intercultural skills?

On Friday afternoon, I presented the second of three Friday Conversations focused on our examination of the four-year study of our students that concluded in the spring of 2016. If you missed the first presentation, you can see the PowerPoint slides and get a sense of our conversation in the subsequent Delicious Ambiguity post from last fall. In essence, last fall we focused only on the nature of our students’ change and how it might differ across various types of students.

Last week, I shared the variables that we found correlate with, and might therefore predict, the change that we see in our students. Again, if you missed it, you can click on the following hyperlinks to see the PowerPoint slides and look over the final table of regression results.

In essence, what we found mirrors what researchers who examine the impact of college experiences on student learning outcomes consistently find. Mere participation in various experiences isn’t enough. Instead, it is the nature of what happens within those experiences, and the degree to which those experiences are designed to address specific learning goals, that matters most. In this case, the degree to which students’ out-of-class experiences helped them develop a deeper understanding of how to interact with someone who might disagree with them turned out to be the largest and most pervasive factor in driving our students’ intercultural competence growth. Importantly, once we accounted for the quality of those experiences (i.e., what happened within them), whether or not a student participated in a particular experience didn’t matter. Instead, almost every bit of our findings pointed toward the nature of students’ experiences across their college careers.

Take the time to scroll through the linked slides and scan the final table of results above. I think you’ll see that there are clearly ways that we can implement educational design elements across a variety of experiences that will improve our students’ intercultural competence growth.

At our third Friday Conversation focused on this topic (April 7th), we will tackle the biggest challenge: what changes are we willing to implement based on our findings that should help us improve what we do? It’s all well and good to stroke our chins and puzzle over the data. But the mark of a great college is the willingness and ability to jump in, make a change, and commit to it. I hope you’ll join us in April. In the meantime, if you have questions about the findings, the study, or the implications we’ve noticed, don’t hesitate to post them below.

Make it a good day,

Mark

“We all want to belong, yeah …”

I just watched a wonderful TEDx talk by Terrell Strayhorn, Professor of Higher Education at (the) Ohio State University, called “Inalienable Rights: Life, Liberty, and the Pursuit of Belonging.” With enviable ease, Dr. Strayhorn walks his audience through the various factors that impede college persistence and demonstrates why a sense of belonging is so important for student success. He concludes his talk with his remarkably smooth singing voice, crooning, “We all want to belong, yeah . . .”

If you’ve been following my blog over the last year you’ve seen me return to our student data that reveals troubling differences in sense of belonging on campus across various racial and ethnic groups. The growing body of research on belongingness and social identity theory continues to demonstrate that the factors that shape a sense of belonging are extensive. While these complicated findings might gratify the social scientist in me, the optimistic activist part of me has continued to beg for more concrete solutions; things that individuals within a community can do right away to strengthen a sense of membership for anyone in the group who might not be so sure that they belong.

So here are a couple of ideas that poured some of the best kind of fuel onto my fire over the weekend: Micro-Kindness and Micro-Affirmations. Both terms refer to a wonderfully simple yet powerful idea. In essence, both concepts recognize that we live in an imperfect world rife with imperfect interactions and, if we want the community in which we exist to be better than it is (no matter how good or bad it is at present), then individual members of that community have to take action to change it. Applied to the ongoing discussion of microaggressions and their potential impact on individuals within a community (particularly those from traditionally marginalized groups), both ideas assert that there are things we can do to emphasize to others that we welcome them into our community and to reduce the occurrence of microaggressions. These actions can be as simple as opening a door for someone and smiling at them, making eye contact and saying hello, or engaging in brief but inclusive conversation. Instructors can have a powerful micro-affirmative impact simply by taking the time to tell a student who might be hesitant or struggling that they know the student can succeed in the class.

Researchers at the Higher Education Research Institute at UCLA have found that validating experiences, much like the micro-kindnesses and micro-affirmations described above, appear to have a significant impact in reducing perceptions of discrimination and bias. In fact, after accounting for the negative impact of discrimination and bias on a sense of belonging, interpersonal validations generated by far the largest positive effect on a sense of belonging.

Research on the biggest mistakes that people can make in trying to change behavior has found that trying to eliminate bad behaviors is much less effective than instituting new behaviors. Since individuals often perceive microaggressions to come in situations where a slight was not intended, eradicating everything that might be perceived as a slight or snub seems almost impossible. But if each of us were to make the effort to enact a micro-kindness or a micro-affirmation several times each day, we might set in motion a change in which we

  1. substantially improve upon the community norms within which microaggressions might occur, and
  2. significantly increase a sense of belonging among those most likely to feel like outsiders.

Make it a good day,

Mark


Rethinking our “competition” for future students

Welcome back! I hope you found a way to carve out at least a few moments of relaxation and rejuvenation during the holiday break. Of course, the phrase “holiday break” doesn’t mean nearly the same thing for everyone, especially this time of year. For example, the folks in admissions are in the midst of working their tails off. Nowadays, the mayhem of recruiting high school students to a private liberal arts college doesn’t take a holiday, ever.

Over the last few years, we’ve learned a lot about the nature of our “competition” for prospective students. Not so long ago, many of us might have assumed that a high school senior considering Augustana College would have already limited their list of potential colleges to a set of small liberal arts colleges, mostly located in the Midwest. Several decades ago this assumption was almost always correct. These days, however, we know that the majority of prospective students who consider Augustana tend to look hardest at Midwestern public or larger urban private institutions as they narrow toward their final choice, not at other small liberal arts colleges. This knowledge has clearly helped us, since knowing which institutions we are competing against lets us make the case for choosing Augustana more precisely and concretely.

Over the last few years, we’ve heard rumblings about other looming competitors, mostly in the form of online colleges or MOOCs (massive open online courses). Fortunately, most of those up-and-comers have blown themselves up on their own launch pads. But the underlying assumptions that justify the continued quest to build similar launch pads might be the real “competition” that we need to understand most of all.

During the holiday break I stumbled upon an opinion piece that lays bare those assumptions in a way that is as explicit as it is cocky. Neil Patel, a bigwig in the online start-up and entrepreneur world (exemplifying his marketing chops with the hyperbolic clickbait headline “My Biggest Regret in Life: Going to College”) asserts that going to college was a waste of time and money because it didn’t teach him any of the things he needed to learn in order to succeed as an entrepreneur. He argues that his college classes were little more than instances of learning isolated facts, theories, and concepts solely to regurgitate them on a test or in a paper before the end of that academic term (sort of the academic equivalent of “lather, rinse, repeat”). He argues that the entire exercise fails an ROI (return on investment) analysis because he could have learned much more useful information, grown in more substantive ways, and ultimately made more money by diving into the real world right out of high school.

I am not sharing this article to suggest that Patel is right, although my own experience at big public universities, as both a student and an employee, doesn’t do much to squash his argument. Rather, I share this article to lay bare the nature of our real competition. Because whether it is less expensive public institutions (2-year or 4-year schools), online institutions, some combination of MOOCs and competency-based education, or merely the simplification of a college choice to the largest financial aid package, in most cases our real competition isn’t other institutions. Instead, it is embedded in a series of assumptions that set up an entirely reasonable conclusion . . . IF those assumptions are, or appear to be, true. The logic stream goes something like this:

  1. College is primarily composed of a series of discrete experiences (AKA classes) that require regurgitating information that has been recently memorized.
  2. The information that is to be regurgitated exists in isolation (AKA is rarely transferable to other college experiences or to life after college).
  3. Accumulating completion approval (AKA at least a passing grade) for a set number of classes across a set of categories earns a credential of completion (AKA a bachelor’s degree).
  4. Therefore, find the least expensive way to ensure a reasonable likelihood that one earns this credential.

The hardest part of facing the real world implications of this rationale is that we aren’t talking about our truth. We are talking about prospective students’ truth – the conclusions they draw as they take in what we tell them online, in print, and in person. This is the “truth” that drives real behavior. So as much as we might want to passionately argue that college transforms or that students just can’t know how what they learn will be useful until long after they’ve learned it, if the information that prospective students gather as they look at Augustana College doesn’t emphatically dispel the assumptions that undergird the logic stream spelled out above, all of our hot air (hot print, hot pixels, etc.) will likely end up sounding like a lone coyote howling at the moon.

The other hard part of facing this reality is realizing that prospective students apply this logic (fairly or not) in real time. So we help ourselves a whole lot when we show concrete evidence, from the very beginning of our interactions with each prospective student, that the experience we provide is not focused on memorizing and regurgitating information. And we help ourselves even more when we can show concrete evidence that the things students learn in one setting are directly applied during college and after college. Unfortunately, the lens through which prospective students increasingly evaluate potential colleges is not an unbiased lens. Rather, it is pre-tinted with the aforementioned assumptions, making it critical that every student sees in the most explicit and obvious ways that our understanding of a college education blows those pre-existing assumptions to bits.

All this leads to a pretty important question. If someone holding the assumptions described above were to look at any of the documents or webpages that describe a given educational experience at Augustana (a syllabus, a program description, etc.), how would they respond? Is there a chance that the document or webpage in question would leave those assumptions unchallenged? Worse, would a review of those documents or webpages confirm those assumptions? Or would that document or webpage shatter those assumptions and open the door for a conversation about how an Augustana education might be completely different from anywhere else?

For those of us who aren’t on the front lines of recruiting students every day, this post might seem overblown. For the folks who are slogging it out in the trenches, this post might not seem urgent enough. But it seems pretty clear that these assumptions are driving the way that many prospective students and their parents start the college search process. If we don’t actively shatter those assumptions early and often, we leave ourselves susceptible to ending up on the short end of a flawed ROI argument. And to rub salt into the wound, if we end up on the losing end of this argument, we won’t even get the chance to challenge the flawed nature of their ROI analysis, because by then the prospective student has likely already crossed us off their list.

Sorry for the sobering post to start the new year. But sometimes sobering isn’t such a bad thing. In this case, we have the winning argument and the evidence to back it up. So knowing the nature of the “competition” gives us one more advantage that we ought to use every chance we get.

Make it a good day,

Mark

How do we improve a student’s sense of belonging?

For the last few years we’ve been talking a lot about our students’ “sense of belonging” after seeing some troubling differences between various student types. Although the overall scores might seem pretty good, stark differences between black and white students suggest a disturbing problem. Looking deeper, we found that Hispanic male students also indicate a notably lower sense of belonging. We’ve since found indications that low-income students, first-generation students, and less academically prepared students can exhibit signs of a lower sense of belonging as well.

Although this news has been tough to swallow, I’ve been really proud of the way that our whole community has committed to making Augustana a more inclusive place. This is a critical first step that shouldn’t go unnoticed, since there are lots of examples of places that have responded to this kind of sobering news by sticking their proverbial head in the sand (or snow, as the case may be). But finding answers to this challenge is complicated. None of our students fit into neat little exclusive categories like Hispanic or low-income or first-generation or male. Instead, every student possesses some mix of characteristics that, taken together, uniquely affect the way that they experience Augustana. So improving any student’s sense of belonging means that we need to know a lot more about the perceptions that lie beneath this more general malaise.

Last spring several of my students and I decided to see if we could figure out a bit more about those underlying perceptions. Although there are probably lots of ways to tackle this challenge, after digging into the relevant research my student-workers and I decided to build a set of survey items derived from research on a concept called microaggressions. In short, microaggressions are expressions that communicate animus, aversion, or disregard toward someone specifically because of that person’s membership in a marginalized group. They can be verbal or nonverbal and are sometimes intentional and sometimes not. Although there are some legitimate critiques of the applications of the microaggression construct, this taxonomy of microaggressions provided a useful framework that aptly applied to our project. After testing and tweaking these items with a small group of students, we plugged them into the freshman survey that went out at the end of last year’s spring term. Each item was accompanied by five response options ranging from strongly disagree to strongly agree. The final list of survey items was:

  • I can learn anything if I set my mind to it.
  • I have to work harder to fit in at Augustana than most students.
  • People on this campus believe that I am just as capable as everyone else.
  • People on this campus believe that everyone has the same chance of making the most of their college career as long as they work hard.
  • I’ve gotten better at bouncing back after facing disappointment or failure.
  • Augustana students recognize discrimination when it happens on campus.
  • People on this campus seem to feel uneasy or nervous around me.
  • People on this campus do not seem to acknowledge the characteristics that make me different.
  • People at Augustana tend to assume that I come from a different culture.
  • More than once students on campus have made inappropriate comments or demeaning jokes about me or the group to which I belong.
  • Students at Augustana often make assumptions about me based on the way I look and dress.
  • More than once I have felt overlooked when trying to interact with faculty or staff.

Our first clue that we might be on to something came when we tested the correlations between each of these items and sense of belonging. In all but one case (“I can learn anything if I set my mind to it.”), the correlations were statistically significant and in several cases intriguingly large. (For all you stats nerds out there, by “intriguingly large” I mean roughly .3 to .4 in absolute value, with the sign depending on the phrasing of the item.) Then, when we ran more elaborate regression equations that took into account race, gender, incoming ACT score, socioeconomic status, and first-generation status, we found that 10 of the 12 hypothesized sense-of-belonging predictors (all of the above items except “I can learn anything if I set my mind to it” and “People at Augustana tend to assume that I come from a different culture”) produced statistically significant results in the direction that we would expect. In other words, most of these items appear to capture some of the perceptions that underlie a reduced sense of belonging and, consequently, might also give us some hints about the ways that we could bolster sense of belonging among students who lack it.
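
For those who want to see the shape of those equations, here is a minimal sketch of one such regression using statsmodels; every column name below is illustrative rather than an actual survey field name.

```python
import pandas as pd
import statsmodels.formula.api as smf

def belonging_model(df: pd.DataFrame):
    """Regress sense of belonging on one survey item plus the controls above."""
    return smf.ols(
        "belonging ~ item_score + C(race) + C(gender) + act_score"
        " + C(pell) + C(first_gen)",
        data=df,
    ).fit()

# Hypothetical usage: print(belonging_model(survey).summary())
```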

Lastly, we noticed a curious pattern in our regression equations. In 7 of the 12 equations, race (coded as white/non-white) produced a statistically significant effect, and in all 12 equations socioeconomic status (coded as receiving a Pell grant or not) produced a statistically significant effect. In other words, race and socioeconomic status consistently play a critical role in shaping a student’s sense of belonging even after accounting for each individual predictor above. So we conducted one more set of analyses to identify the items that might be most prominent in shaping sense of belonging for different types of students.

Although I’ll summarize what we found below, I’ve added a link to the full table of results testing differences by gender, race, socioeconomic status, first-generation status, and incoming ACT score (for clarity’s sake we compared the bottom third against the top third of incoming ACT scores). We’ve included the items where the difference between the two groups was statistically significant as well as the two instances where the difference was just a hair above the p=.05 threshold.

In essence, we found that differences on various items appear between groups across all of the pairings that we tested. In many cases, the differences played out as we would expect. Students of color exhibited disadvantaging self-perceptions on numerous items, particularly items addressing the assumptions (be they perceived or real) that others make about them. Students with lower incoming ACT scores also exhibited a number of disadvantaging self-perceptions. Moreover, “I have to work harder to fit in at Augustana than most students” and “People on this campus seem to feel uneasy or nervous around me” produced statistically significant differences across multiple pairings.

Interestingly, some results challenged prior applications of microaggression theory. For example, the differences between men and women clearly suggested that men potentially suffer from several disadvantaging perceptions. Contrary to the prevailing assertion that women would be the ones to exhibit lower self-perceptions, men scored lower on four items, most notably “People on this campus believe that I am just as capable as everyone else” and “People on this campus believe that everyone has the same chance of making the most of their college career as long as they work hard.” And although students of color scored lower on a host of items than their white counterparts, they did score higher on the item “I can learn anything if I set my mind to it.”

So what are we to do with all of this? Clearly, this analysis seems to suggest some useful hints about the types of students who might be susceptible to a lesser sense of belonging, as well as some hints about ways that we could validate their membership in our community. For example, for students who feel like they have to work harder to fit in academically, we can take the time to explain that developing a robust work rate is a vital precursor to a successful life, and that a student who already finds themselves working hard may well be further along than many of their peers. Conversely, if their sense of working harder to fit in relates to their social integration, then we might just have carved out an opening to the kind of conversation or referral that could address this concern. I suspect that some reflection on each of the items noted in the full table might generate additional ideas about how to help students who find themselves wondering if they really belong.

One other implication of these findings seems worth noting. Much of the research on microaggressions has argued that evidence of differences in self-perceptions on items like these is likely, or even necessarily, evidence of discriminatory behaviors or beliefs on the part of the group that scored higher on a given item. In some cases, maybe. But the pervasiveness of the differences that we found across all of these pairings suggests that the factors contributing to a lack of belonging can’t be solely attributed to verbal and nonverbal, intentional and inadvertent slights, snubs, or insults. It’s likely much more complicated than that. It seems to me that this taxonomy of microaggressions is more useful in guiding the way that we might build up someone’s sagging sense of belonging than it is in forcing an interaction to be perpetually framed within the confines of a target/victim label. Intent is a dicey thing to presume, and although we certainly want to help our students understand the implications of their words, arguing about the intentions of another seems likely to become an unresolvable errand after which there is little chance of learning the greater lesson.

As educators, we are always striving for two simultaneous results:

  • to foster an ideal learning environment in the present, and
  • to prepare our students to succeed no matter what life throws at them in the future.

While we absolutely want every student to feel a similarly robust sense of belonging, and while we certainly want every student to feel a similarly minimal set of inferiorities and anxieties, I wonder whether we could ever achieve an ideal learning environment without moments of interactive difficulty that spawn feelings of self-doubt and uncertainty. In the Wabash National Study of Liberal Arts Education, we saw that institutions where students’ intercultural skills grew the most also reported higher frequencies of both positive and negative diverse interactions. Certainly we will always need to teach students to think carefully about the import of their words, but I hope we can remember to balance our efforts to support our students in the midst of their hurt or offense with equal efforts to push, prod, and persuade our students to grow in the presence of difficulty. It is in those moments, when we both support and challenge, that we most fully accomplish our educational mission.

Make it a good day . . . and a good holiday break,

Mark

Careful planning of course offerings seems to be paying off

Have you ever had one of those moments where you put a lot of time into something only to discover that you really just need to start over? Well, that has been my experience this week in trying to write about some data that we collected from our freshmen last spring regarding the kinds of perceptions that are often attributed to microaggressions. So instead of dumping a post on you that isn’t up to snuff (what does that phrase mean, anyway?), I’m going to take it back to the drawing board and post it next week.

In the meantime (cue the Jeopardy music), since many of you are working through course master planning for next year, here is a set of data points that ought to make you smile.

One of the more practical predictors of our seniors’ sense that they would choose Augustana again is the degree to which they found that the courses they needed to take were available in the order in which they needed to take them. Even though there might be a myriad of paths to complete one’s degree, it’s not too difficult to tell the difference between a student who can talk through how their classes fit together and a student who seems to have scrambled through their four years with little more than a grab bag of credits to show for it.

Over the last three years, our seniors’ average response to this item has gone from a 3.06 in 2014 to a 3.50 in 2016. I suspect that this improvement can be credited to improved course planning as well as improved advising. I don’t have a good sense of how that balance breaks down, but I think anyone who’s played a role in either aspect of helping students move through their four years in a more deliberate way deserves a small pat on the back.

So if you are slogging through course master planning ’til late at night and you wonder if it’s worth it . . . it might just be. And if you find yourself wondering if all the time you spend advising students is worth it . . . it might just be.

Make it a good day,

Mark