When presumptions about going to college while working a job collide

A recent large-scale study of college students found that, on average, college students spend more time during college working paid jobs than they spend going to class and studying (see one of many news reports about these findings here).  Depending on the news outlet, reports of these findings are followed by one of two reactions:

  1. These findings are further proof that the cost of college is so high that students have to work most of the time just to afford it.  Tuition is too damn high . . . blah blah blah . . .
  2. These findings are further proof that college’s academic requirements have gone horribly soft.  Back in my day . . . blah blah blah . . .

For the sake of argument, let’s say that both points are true.  I think there is a third point to be made that might be more important than all the rest.  The narrative about college graduates that we keep hearing argues that colleges don’t teach enough of the skills required to succeed in the world of work (have a look at one such news story here).  But if college students are spending more than half of their time working in paid jobs, then maybe the alleged skills gap (some folks make at least a partially reasonable argument that the whole claim is crap, like this opinion piece here) shouldn’t be laid at the feet of the colleges at all.

Maybe those who hire college students for all those paying jobs ought to shoulder some of the blame.  If the majority of working college students are employed in the retail, restaurant, or hospitality sectors (a reasonable supposition, I think), then those students are actually working for much larger corporations, the very same corporations that are certainly hiring many of those college graduates.

It seems that maybe the employers who blame colleges for a perceived skills gap ought to take a look in the collective mirror.  And the pundits who use these findings to drive home a pre-determined agenda that college is supposed to produce young adults perfectly ready for everything that the world of work might throw at them . . . you might reconsider your premise.

Revisiting the Value of Early Feedback

It is becoming increasingly apparent in the world of quantitative research that producing a single statistically significant finding shouldn’t carry too much weight (whether or not the value of a single statistically significant finding should have ever been allowed to rise to such a level of deference in the first place is a thoroughly valid discussion for a more technical blog than mine here or here).  In recent years, scholars in medicine, psychology, and economics (among others) have increasingly failed in their attempts to reproduce the statistically significant findings of an earlier study, creating what has been labeled “the replication crisis” across a host of disciplines and splattering egg on the faces of many well-known scholars.

So in the interest of making sure that our own studies of Augustana student data don’t fall prey to such an embarrassing fate (although I love a vaudevillian cracked egg on a buffoon’s head as much as the next guy), I thought it would be worth digging into the archives to rerun a prior Delicious Ambiguity analysis and see if the findings can be replicated when applied to a different dataset.

In February 2014, I posted some analysis under the provocative heading, “What if early feedback made your students work harder?”  Looking back, I’m a little embarrassed by the causal language that I used in the headline (my apologies to the “correlation ≠ causation” gods, but a humble blogger needs some clickbait!).  We had introduced a new item into our first-year survey that asked students to indicate their level of agreement (or disagreement) with the statement, “I had access to my grades and other feedback early enough in the term to adjust my study habits or seek help as necessary.”  The response set included the usual suspects: five options ranging from strongly disagree to strongly agree.

While we found this item to significantly predict (in a statistical sense) several important aspects of positive student interactions with faculty, the primary focus of the February 2014 post turned to another potentially important finding. Even after accounting for several important pre-college demographic traits (race, sex, economic background, and pre-college academic performance) and dispositions (academic habits, academic confidence, and persistence and grit), the degree to which students agreed that they had access to grades and other feedback early in the term significantly predicted students’ responses to this item: “How often did you work harder than you have in the past in order to meet your instructor’s expectations?”  In essence, it appeared that students who felt that they got more substantive feedback early in the term also tended to work harder to meet their instructor’s expectations more often.

Replication is risky business. Although the technical details that need to be reconstructed can make for a dizzying free-fall into the minutiae, committing to reproduce a prior study and report the results publicly sort of feels like playing Russian Roulette with my integrity.  Nonetheless, into the breach rides . . . me and my trusty data-wrangling steed.

Although it would have been nice if none of the elements of the analysis had changed, that turned out not to be the case – albeit for good reason.  We tend to review the usefulness of various survey items every couple of years just to make sure that we aren’t wasting everyone’s time by asking questions that really don’t tell us anything.  This turned out to be a possibility with the original wording of the item we were predicting (what stats nerds would call the dependent variable). When we put the statement, “How often did you work harder than you have in the past in order to meet your instructor’s expectations?” under the microscope, we saw what appeared to be some pockets of noise (stats nerd parlance for unexplainable chaos) across the array of responses. Upon further evaluation, we decided that maybe the wording of the question was a little soft.  After all, what college freshman would plausibly say “never” or “rarely” in response?  I think it’s safe to assume that most students would expect college to make them work harder than they had in the past (i.e., in high school) to meet the college instructor’s expectations. If we were a college where students regularly found the curricular work easier than what they experienced in high school . . . we’d have much bigger problems.

Since the purpose of this item was to act as a reasonable proxy for an intrinsically driven effort to learn, in 2016 we altered the wording of this item to, “How often did you push yourself to work harder on an assignment even though the extra effort wouldn’t necessarily improve your grade?” and added it to the end-of-the-first-year survey.  Although this wording has proven to increase the validity of the item (subsequent analyses suggest that we’ve reduced some of the previous noise in the data), it’s important to note at the outset that this change in wording and relocation of the item to the end of the year alters the degree to which we can precisely reproduce our previous study.  On the other hand, if the degree to which students get early feedback (an item that is asked at the end of the fall term) significantly predicts the degree to which students push themselves to work harder on their homework regardless of their grade (now asked at the end of the spring term) in the current replication study, it strikes me that this finding might be even more important than the 2014 finding.

Thankfully, all of the other variables in the 2016-17 data remained the same as the 2014-15 first-year data. So . . . what did we find?

I’ve provided the vital statistics in the table below.  In a word – bingo!  Even after taking into account sex, race/ethnicity, socioeconomic status (i.e., Pell grant status), pre-college academic performance (i.e., ACT score), academic habits, academic confidence, and persistence and grit, the degree to which students receive early feedback appears to significantly predict the frequency of pushing oneself to work harder on an assignment regardless of whether or not the extra effort might improve one’s grade.

Variable                           Standardized coefficient   Standard error   P-value
Sex (female = 1)                          0.022                   0.143         0.738
Race/ethnicity (white = 1)                0.001                   0.169         0.089
Socioeconomic status (Pell = 1)          -0.088                   0.149         0.161
Pre-college academic performance         -0.048                   0.010         0.455
Academic habits scale                     0.317 ***               0.149         0.000
Academic confidence scale                -0.065                   0.167         0.374
Persistence and grit scale                0.215 **                0.165         0.010
Received early feedback                   0.182 **                0.056         0.005

(** p < .01; *** p < .001)
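
For readers who like to tinker, here is a minimal sketch of the kind of standardized regression behind the table above.  The data are simulated and the variable names are my own stand-ins (this is not the actual Augustana dataset, and I’ve collapsed the full set of covariates down to two), but it shows the basic move: z-score everything, fit ordinary least squares, and read off the standardized coefficients.

```python
import numpy as np

# Simulated stand-ins for the survey variables; the effect sizes are invented
# for illustration, loosely echoing the magnitudes in the table above.
rng = np.random.default_rng(0)
n = 500

habits = rng.normal(size=n)      # stand-in for the academic habits scale
grit = rng.normal(size=n)        # stand-in for persistence and grit
feedback = rng.normal(size=n)    # agreement with the early-feedback item

# Simulated outcome: effort depends on habits, grit, and early feedback
effort = 0.3 * habits + 0.2 * grit + 0.18 * feedback + rng.normal(size=n)

def standardized_ols(y, *predictors):
    """Fit OLS on z-scored variables; return standardized coefficients."""
    z = lambda v: (v - v.mean()) / v.std()
    X = np.column_stack([np.ones(len(y))] + [z(p) for p in predictors])
    beta, *_ = np.linalg.lstsq(X, z(y), rcond=None)
    return beta[1:]  # drop the intercept

coefs = standardized_ols(effort, habits, grit, feedback)
print(dict(zip(["habits", "grit", "feedback"], np.round(coefs, 3))))
```

The point of the exercise: even with habits and grit in the equation, the feedback coefficient comes out positive, which is the same pattern the table above describes.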

It is particularly intriguing to me that the statistically significant effect of receiving early feedback in the fall term appears when the outcome item is asked at the end of the spring term – a full six months later. Furthermore, it seems important that receiving early feedback produces a unique effect even in the presence of measures of academic habits (e.g., establishing a plan before starting a paper, starting homework early, etc.) and persistence and grit (e.g., continuing toward a goal despite experiencing disappointment, sticking with a plan to reach a goal over a longer period of time, etc.), both of which produce unique effects of their own.

The implications of these findings seem pretty important. In essence, no matter the student’s pre-college academic performance, the degree of positive academic habits, or the depth of persistence and grit when arriving at Augustana, receiving early feedback in the fall term appears to improve a student’s likelihood of working harder on their schoolwork no matter how that effort might impact their grade.

Whew!  I guess my integrity survives for another day.  More importantly (MUCH more importantly), it seems even clearer now, after replicating the 2014-15 finding with 2016-17 data, that creating ways to provide early feedback to students so that they can recalibrate their study habits as necessary appears to be a critical element of effective course design.

Make it a good day,


Data, Analysis, ACTION (now the camera’s bright lights shine on you!)

A couple of weeks ago, the Assessment for Improvement Committee (AIC) and Institutional Research and Assessment (IR&A) hosted the third of three Friday Conversations focused on improving our students’ cognitive sophistication. Unless you’ve been living under a rock (or a pile of semester transition documents!), you know by now that one of the primary functions of AIC and IR&A is to foster an organizational culture of perpetual improvement. To that end, we run a perpetual cycle of data collection, analysis, and communication about the relationships between student learning and the student experience to shine a light on the ways in which we can improve what we do as a college.

The cycle that culminated this year (the entire process takes 5-6 years) focused on the category of learning outcomes we have called “cognitive sophistication.” In particular, we explored data gathered from the cohort of students who entered Augustana in the fall of 2013 and graduated in the spring of 2017 to examine the development of our students’ inclination to, and interest in, thinking about complex or complicated issues or ideas. Just in case you need to catch yourself up, have a quick look at the three previous posts about this process:

  1. Does our Students’ Interest in Complex Thinking Change over Four Years
  2. What Experiences Improve our Student’s Inclination Toward Complex Thinking
  3. Doing Something with What We Now Know

In the fall term, we presented what we had found about the nature of our students’ growth and collected your suggestions about student experiences and characteristics that might influence this growth. In the winter term, we presented the results of testing your suggestions to identify the student experiences that appear to be statistically significant predictors (i.e., particularly influential experiences) of our students’ growth. By contrast, during the spring term Friday Conversation, AIC and IR&A changed it up a bit and turned the session over to whoever showed up. Because if we – meaning the Augustana community – are going to convert our findings into demonstrable improvements, then we – meaning AIC and IR&A – need to hand these findings over to you and let you shape the way that we translate evidence into improvement.

If you clicked on the third post linked above, you didn’t find the results of the third Friday Conversation, but rather a plug and a plea for attendees. Fortunately, a healthy number of faculty and staff showed up ready to put their brains to work. Folks broke into three groups and narrowed a range of ideas into one or two practical ways that the college could put our findings to use. So without further ado, here are the focal points of the conversation from the last Friday Conversation.

Learning in Context

The first set of findings from our data suggested that when students engage in hands-on or experiential learning, their inclination toward complex thinking seems to increase. This may be because learning in real-world or hands-on settings inevitably adds a context that often complicates what might have seemed simpler when discussed in the sanitary safety of a classroom. Research on experiential learning also suggests that, as students get accustomed to learning or applying prior learning in these real-world settings, they find this learning more interesting and sometimes even invigorating.

Even though Augustana offers all sorts of hands-on learning experiences (e.g., internships, research with faculty, community involvement, etc.), it seems that the distribution of these opportunities across majors is uneven. As a result, students in some programs have a much higher chance of gaining access to these kinds of experiences than other students. The faculty and staff focused on this topic considered policy or practice ideas that could bring more of these kinds of opportunities to programs where they have not traditionally thrived. At the same time, the faculty and staff who joined this part of the conversation emphasized the need to offer professional development in order to help faculty in these programs imagine or craft an expanded range of hands-on learning opportunities, especially in disciplines where faculty research tends to be a solo endeavor or the nature of that research tends to explore far beyond an undergraduate’s scope of understanding.

Integrative Advising

This discussion focused on the “integrative” part of integrative advising. Our findings suggested that the more students engage in the integrative aspects of advising conversations (i.e. when faculty or staff prod students to weave together the variety of things they’ve done in college – AKA that long list at the bottom of the email signature – into a coherent narrative), the more they tend to develop an inclination toward complex thinking. This may be because asking students to turn their own raw data (after all, a list of disparate activities is very much like a set of raw data) into a story requires them to engage in complex thinking about uncertainty from two directions: 1) what themes are already present throughout my various activities that could form the basis of a compelling narrative and 2) given where I want to end up after college, how should I alter my list of activities to better prepare for success in that setting?

Participants in this discussion homed in on three ideas that are either already in development or could be introduced. First, they talked about the existing FYI proposal that includes a portfolio. This portfolio might be an especially good way to get first-year students to map out their college experience with the end (i.e. who they want to be when they receive their diploma) explicitly in mind. Second, the participants talked about the need for a way to continue this way of thinking beyond the first year portfolio and landed on a common assignment within the Reasoned Examination of Faith course (formerly Christian Traditions) that would focus on vocation-seeking and purpose. Third, they identified a continuing need for faculty development that would help individuals apply holistic/integrative advising practices no matter the advising context.

Interdisciplinary Discussions

The third group of faculty and staff tackled the challenge of increasing student participation in interdisciplinary discussions. It shouldn’t be much of a surprise by now that the experiences that we found to predict greater gains in cognitive sophistication were those that required students to apply one set of perspectives or ideas within a different, and often more tangible, context or framework. Augustana already offers several avenues for these kinds of conversations (e.g., Salon, Symposium Day, etc.), and there is a certain subset of students who continually participate with enthusiasm. But increasing student participation in these events means focusing on the subset of students who don’t jump at these opportunities. One possibility included finding ways for students to attend conferences in the region when they aren’t presenting research. Another possibility included fostering more interdisciplinary student groups. A third intriguing idea involved the conversations about a Creativity Center on campus and the idea that this initiative might be an ideal vehicle to bring together students from disciplines that might not normally intersect.

Now comes the hardest part of this process. There isn’t a lot of reason to collect student learning data and identify the experiences that shape that learning if we don’t do anything with what we find out.  AIC and IR&A will continue to encourage the campus to plug these findings into policy, program, or curricular design. But we need you to take these findings and discussion points and champion them within your own work.

When you (notice the “when” rather than “if”?) have implemented something cool and creative, can you send me an email and tell me about it?  I’ll be sure to share it with the rest of the college and celebrate your work!

Make it a good day,


What good are those Starfish flags anyway?

Now that we’ve been using the Starfish tool for a couple of years to foster a network of early alerts and real-time guidance for our students, I suppose it makes sense to dig into this data and see if there are any nifty nuggets of knowledge worth knowing. Kristin Douglas (the veritable Poseidon of our local Starfish armada) and I have started combing through this data to look for useful insights. Although there is a lot more combing to be done (no balding jokes, please), I thought I’d share just a few things that seem like they might matter.

Starfish is an online tool that allows us to provide something close to real-time feedback (positive, negative, or informational) to students. The same information also goes to faculty and staff who work closely with that student, in an effort to provide early feedback that influences future behavior. Positive feedback should beget more of the same behavior. Negative feedback hopefully spurs the student to do something differently.

In general, there are two ways to raise a Starfish flag for a student. The first is pretty simple: you see something worth noting to a student, you raise a flag. These flags can come from anyone who works with students and has access to Starfish. The second is through one of two surveys that are sent to faculty during the academic term. This data is particularly interesting because it is tied to performance in a specific class and, therefore, can be connected to the final grade the student received in that class. The data I’m going to share today comes from these surveys.
We send a Starfish survey to faculty twice per term.  The first goes out in week 3 and asks faculty to raise flags on any student who has inspired one (or more) of four different concerns:
  • Not engaged in class
  • Unprepared for class
  • Missing/late assignments
  • Attendance concern
The second Starfish survey goes out in week 6 and asks faculty to raise flags that address two potential concerns:
  • Performing at a D level
  • Performing at an F level
We now have a dataset of almost six thousand flags from winter, 2015/16 through winter, 2017/18. Do any of these flags appear to suggest a greater likelihood of success or failure? Given that we are starting with the end in mind, let’s first look at the flags that come from the week 6 academic concerns survey.

There are 1,947 flags raised for performing at a D level and 940 flags raised for performing at an F level. What proportion of those students (represented by a single flag each) ultimately earned a passing grade in the class in which a flag was raised?
The proportion that finished with a C or higher final grade
  • Performing at a D level (1059 out of 1947)   –   54%
  • Performing at an F level (232 out of 940)    –    25%

On first glance, these findings aren’t much of a surprise. Performing at an F level is pretty hard to recover from with only three weeks left in a term. At the same time, over half of the students receiving the “D” flag finished that course with a C grade or higher. This information seems useful for those advising conversations where you need to have a frank discussion with a student about what it will take to salvage a course or drop it late in the term.

The second set of flags comes from the third week of the term and represents behaviors instead of performance. Are any of these raised flags – not engaged in class (278), unprepared for class (747), missing/late assignments (1126), and attendance concern (904) – more or less indicative of final performance?

The proportion that finished with a C or higher final grade
  • Not engaged in class (202/278)       –        73%
  • Unprepared for class (454/747)        –        61%
  • Missing/late assignments (571/1126)   –    51%
  • Attendance concern (387/904)        –        43%
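
For anyone who wants to check the arithmetic, each percentage is simply the count of students who finished with a C or higher divided by the total number of flags raised.  A quick Python sketch, using the counts from the list above:

```python
# Counts copied from the post: (finished with C or higher, total flags raised)
flags = {
    "Not engaged in class": (202, 278),
    "Unprepared for class": (454, 747),
    "Missing/late assignments": (571, 1126),
    "Attendance concern": (387, 904),
}

# Pass rate = passing count / flag count
rates = {name: passed / total for name, (passed, total) in flags.items()}
for name, rate in rates.items():
    print(f"{name}: {rate:.0%}")
```

Running this reproduces the 73%, 61%, 51%, and 43% figures above.
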
It appears that these four flags vary considerably in their correlation with the final grade. Attendance concern flags appear to be the most indicative of future trouble, while appearing unengaged in class seems relatively salvageable.

Without knowing exactly what happened after these flags were raised, it’s hard to know what (if anything) might have spurred a change in the behavior of those students who earned a final grade of C or higher. However, at the very least these findings add support to the old adage about just showing up.

What does this data suggest to you?

Make it a good day,

Beware of the Average!

It’s been a crazy couple of weeks, so I’m just going to put up a nifty little picture.  But since I generally try to write about 1000 words, this pic ought to do the trick . . .

In case you can’t make out the sign on the river bank, it says that the average depth of the water is 3 ft!


Sometimes an average is a useful number, but we get ourselves in some deep water if we assume that there is no variation across the range of data points from which that average emerged. Frequently, there is a lot of variation. And if that variation clusters according to another set of characteristics, then we can’t spend much time celebrating anything, no matter how good that average score might seem.
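
To make the point concrete, here is a toy example in Python (the depth numbers are invented): two rivers can post the very same 3-foot-average sign while posing very different risks to the wader.

```python
# Two hypothetical rivers, both with an "average depth: 3 ft" sign
calm  = [3, 3, 3, 3, 3]    # uniformly 3 ft: the average tells the whole story
risky = [1, 1, 1, 1, 11]   # mostly ankle-deep, with one 11-ft hole

mean = lambda depths: sum(depths) / len(depths)
spread = lambda depths: max(depths) - min(depths)

print(mean(calm), spread(calm))    # 3.0 with no variation
print(mean(risky), spread(risky))  # 3.0 with a 10-ft range
```

Same average, wildly different distributions, and only one of them is safe to cross on foot.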

Make it a good day,


Warming Perceptions across the Political Divide

Welcome back to campus for the headlining event – Spring Term! (about as likely a band name as anything else these days, right?).

At the very end of winter term, Inside Higher Ed published a short piece highlighting a study that suggested the first year of college might broaden students’ political views. The story reviewed findings (described in more depth here) from an ongoing national study of college students’ interfaith understanding development that goes by the acronym IDEALS (AKA, the Interfaith Diversity Experiences & Attitudes Longitudinal Survey). In essence, both politically conservative and politically liberal students (self-identified at the beginning of their first year in college) developed more positive perceptions of each other by the beginning of their second year. Since Augustana is one of the participating institutions in this study, I thought it might be interesting to see if our local data matches up with the national findings.

The IDEALS research project is designed to track change over four years, asking students to complete a set of survey questions at the beginning of the first year (fall, 2015), at the beginning of the second year (fall, 2016), and at the end of the fourth year (spring, 2019). Many of the survey questions ask individuals about their perceptions of people of different religions, races, ethnicities, and beliefs. For the purposes of this post, I’ll focus on the four pairs of statements listed below and zero in on the responses from conservative students about liberal students and the responses from liberal students about conservative students.

  • In general, I have a positive attitude toward people who are politically conservative
  • In general, I have a positive attitude toward people who are politically liberal
  • In general, individuals who are politically conservative are ethical people
  • In general, individuals who are politically liberal are ethical people
  • In general, people who are politically conservative make a positive contribution to society
  • In general, people who are politically liberal make a positive contribution to society
  • I have things in common with people who are politically conservative
  • I have things in common with people who are politically liberal

For each item, the five response options ranged from “disagree strongly” to “agree strongly.”

First, let’s look at the responses from politically conservative students. The table below provides the average response score for each item at the beginning of the first year and at the beginning of the second year.

Politically Conservative Students’ Perceptions of Politically Liberal Students

Item                      Fall, 2015   Fall, 2016
Positive Attitudes           3.71         3.46
Ethical People               3.21         3.50
Positive Contributors        3.64         3.92
Positive Commonalities       3.23         3.29
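
If it helps to see the movement on each item at a glance, here is a quick Python sketch that computes the fall-to-fall change scores from the averages in the table above (the numbers are copied from the table; nothing new is being estimated here):

```python
# Averages from the table: conservative students' perceptions of liberal students
fall_2015 = {"Positive Attitudes": 3.71, "Ethical People": 3.21,
             "Positive Contributors": 3.64, "Positive Commonalities": 3.23}
fall_2016 = {"Positive Attitudes": 3.46, "Ethical People": 3.50,
             "Positive Contributors": 3.92, "Positive Commonalities": 3.29}

# Change score = second-year average minus first-year average
change = {item: round(fall_2016[item] - fall_2015[item], 2) for item in fall_2015}
print(change)
```

The computed changes make the pattern discussed below easy to spot: two items moved up substantially, one barely budged, and one declined.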

Overall, it appears that conservative students’ perceptions of liberal students improved during the first year. Scores on two items (ethical people and positive contributors) increased substantially. Perceptions of commonalities remained essentially the same, and a self-assessment of positive attitudes toward liberal students declined. Normally, the drop in positive attitude would seem like a cause for concern, but conservative students’ positive attitudes toward other conservatives dropped as well, from 4.29 to 3.92. So maybe it’s just that the first year of college makes conservatives grouchy about everyone.

Second, let’s look at the responses from politically liberal students when asked to assess their perceptions of politically conservative students. Again, the table below provides the average response score for each item at the beginning of the first year and at the beginning of the second year.

Politically Liberal Students’ Perceptions of Politically Conservative Students

Item                      Fall, 2015   Fall, 2016
Positive Attitudes           3.61         3.65
Ethical People               3.58         3.78
Positive Contributors        3.33         3.76
Positive Commonalities       3.31         3.69

It appears that liberal students’ views of conservative students improved as well, maybe even more so. While positive attitudes about conservative students didn’t change, perceptions of conservatives as ethical people, positive contributors to society, and people with whom liberals might have things in common increased significantly.

Although the repeated gripe from conservative pundits is that colleges are a bastion of liberalism indoctrinating young minds, research (here and here) seems to contest this assertion. While the findings above don’t directly address students’ changing political beliefs, they do suggest that both politically conservative and politically liberal students’ perceptions of the other shift in a positive direction (i.e., they perceive each other more positively after the first year). This would seem to bode well for our students, our campus community, and for the communities in which they will reside after graduation. Because no matter how any of these students’ political views might change over four years in college, more positive perceptions of each other set the stage for better interactions across differing belief systems. And that is good for all of us.

If we situate these findings in the context of a four-year period of development, I think we ought to be encouraged by these findings, no matter if we lean to the left or to the right. Maybe, even in the midst of all the Sturm und Drang we’ve experienced in the past few years, we are slowly developing students who are more equipped to interact successfully despite political differences.

Make it a good day,



What experiences improve our student’s inclination toward complex thinking?

I’ve always been impressed by the degree to which the members of Augustana’s Board of Trustees want to understand the sometimes dizzying complexities that come with trying to nudge, guide, and redirect the motivations and behaviors of young people on the cusp of adulthood. Each board member that I talk to seems to genuinely enjoy thinking about these kinds of complicated, even convoluted, challenges and implications that they might hold for the college and our students.

This eagerness to wrestle with ambiguous, intractable problems exemplifies the intersection of two key Augustana learning outcomes that we aspire to develop in all of our students. We want our graduates to have developed incisive critical thinking skills and we want to have cultivated in them a temperament that enjoys applying those analytical skills to solve elusive problems.

Last spring Augustana completed a four-year study of one aspect of intellectual sophistication. We chose to measure the nature of our students’ growth by using a survey instrument called the Need for Cognition Scale, an instrument that assesses one’s inclination to engage in thinking about complex problems or ideas. Earlier in the fall, I presented our findings regarding our students’ growth between their initial matriculation in the fall of 2013 and their graduation in the spring of 2017 (summarized in a subsequent blog post). We found that:

  1. Our students developed a stronger inclination toward thinking about complex problems. The extent of our students’ growth mirrored the growth we saw in an earlier cohort of Augustana students while participating in the Wabash National Study between 2008 and 2012.
  2. Different types of students (defined by pre-college characteristics) grew similar amounts, although not all students started and finished with similar scores. Specifically, students with higher HS GPA or ACT/SAT scores started and finished with higher Need for Cognition scores than students with lower HS GPA or ACT/SAT scores.

But, as with any average change-over-time score, there are lots of individual cases scattered above and below that average. In many ways, that is often where the most useful information is hidden. Because if the individuals who produce change-over-time scores above, or below, the average are similar to each other in some other ways, teasing out the nature of that similarity can help us figure out what we could do more of (or less of) to help all students grow.

At the end of our first presentation, we asked for as many hypotheses as folks could generate involving experiences that they thought might help or hamper gains on the Need for Cognition Scale. Then we went to work testing every hypothesis we could possibly test. Taylor Ashby, a student working in the IR office, did an incredible job taking on this monstrous task. After several months of pulling datasets together, constructing new variables to approximate many of the hypotheses we were given, and running all kinds of statistical analyses, we made a couple of pretty interesting discoveries that could help Augustana get even better at developing our students’ inclination or interest in thinking about complex problems or ideas.

To make sense of all of the hypotheses that folks suggested, we sorted them into two categories: participation in particular structured activities (e.g., being in the choir or completing a specific major) and experiences that could occur across a range of situations (e.g., reflecting on the impact of one’s interactions across difference or talking with faculty about theories and ideas).

First, we tested all of the hypotheses about participation in particular structured activities. We found that five specific activities produced positive, statistically significant effects:

  • service learning
  • internships
  • research with faculty
  • completing multiple majors
  • volunteering when it was not required (as opposed to volunteering when obligated by membership in a specific group)

In other words, students who did one or more of these five activities tended to grow more than students who did not. This turned out to be true regardless of the student’s race/ethnicity, sex, socioeconomic status, or pre-college academic preparation. Furthermore, each of these experiences produced a unique, statistically significant effect when they were all included in the same equation. This suggests a cumulative effect: students who participated in all of these activities grew more than students who participated in only some of them.
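For readers curious what “a unique effect when all included in the same equation” means in practice, here is a minimal sketch using ordinary least squares on synthetic data. Everything here is made up for illustration — the sample size, effect sizes, and noise level are hypothetical, and this is not the actual Augustana analysis:

```python
import numpy as np

# Illustrative sketch only: synthetic data standing in for the real study.
# The effect sizes below are hypothetical, not Augustana's estimates.
rng = np.random.default_rng(42)
n = 500

# Binary participation indicators (1 = participated)
activities = ["service_learning", "internship", "faculty_research",
              "multiple_majors", "voluntary_service"]
X = rng.integers(0, 2, size=(n, len(activities)))

# Simulated change-over-time score: each activity adds its own bump
true_effects = np.array([0.15, 0.20, 0.25, 0.10, 0.12])
y = 0.3 + X @ true_effects + rng.normal(0, 0.5, n)

# Fit all predictors in one equation (OLS with an intercept), mirroring
# the idea of estimating each activity's unique effect simultaneously
design = np.column_stack([np.ones(n), X])
coefs, *_ = np.linalg.lstsq(design, y, rcond=None)

for name, b in zip(activities, coefs[1:]):
    print(f"{name:>18}: {b:+.2f}")

# Cumulative effect: predicted extra growth for a student doing all
# five activities versus none of them
print("all five vs none:", round(float(coefs[1:].sum()), 2))
```

Because each activity enters the model alongside the others, each coefficient reflects that activity’s contribution over and above the rest — which is why the effects can stack for students who do several of them.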

Second, we tested all of the hypotheses that focused on more general experiences that could occur in a variety of settings. Four experiences appeared to produce positive, statistically significant effects:

  • The frequency of discussing ideas from non-major courses with faculty members outside of class.
  • Faculty knowledge, within a student’s major, of how to prepare students to achieve their post-graduate plans.
  • Faculty interest in helping students grow in more than just academic areas.
  • The frequency of one-on-one interactions with faculty.

In addition, we found one effect that sort of falls in between the two categories described above. Remember that having a second major appeared to produce a positive effect on the inclination to think about complex problems or ideas? Well, within that finding, Taylor discovered that students who said that faculty in their second major emphasized applying theories or concepts to practical problems or new situations “often” or “very often” grew even more than students who simply reported a second major.

So what should we make of all these findings? And equally important, how do we incorporate these findings into the way we do what we do to ensure that we use assessment data to improve?

That will be the focus of the spring term Friday Conversation with the Assessment for Improvement Committee.

Make it a good day,


Should the male and female college experience differ?

The gap between males and females at all levels of educational attainment paints a pretty clear picture. Males complete high school at lower rates than females. Of those who finish high school, males enroll in college at lower rates than females. This pattern continues in college, where men complete college at lower rates than women. Of course, some part of the gap in college enrollment is a function of the gap in high school completion, and some part of the gap in college completion is a function of the gap in college enrollment. But overall, it still seems apparent that something troubling is going on with boys and young men in terms of educational attainment. Yet, looking solely at these outcome snapshots does very little to help us figure out what we might actually do to reverse these trends.

A few weeks ago, I dug into some interesting aspects of the differences in our own male and female enrollment patterns at Augustana, because understanding the complexity of the problem is a necessary precursor to actually solving it. In addition, last year I explored some differences between men and women in their interest in social responsibility and volunteering behaviors. Today, I’d like to share a few more differences that we see between male and female seniors in their responses to senior survey questions about their experience during college.

Below I’ve listed four of the six senior survey questions that specifically address aspects of our students’ co-curricular experience. In each case, there are five response options ranging from strongly disagree (1) to strongly agree (5). Each of the differences shown below between male and female responses is statistically significant.

  • My out-of-class experiences have helped me connect what I learned in the classroom with real-life events.
    • Men – 3.86
    • Women – 4.17
  • My out-of-class experiences have helped me develop a deeper understanding of myself.
    • Men – 4.10
    • Women – 4.34
  • My out-of-class experiences have helped me develop a deeper understanding of how I interact with someone who might disagree with me.
    • Men – 4.00
    • Women – 4.28
  • My co-curricular involvement helped me develop a better understanding of my leadership skills.
    • Men – 4.14
    • Women – 4.35

On one hand, we can take some comfort in noting that the average responses in all but one case equate with “agree.” However, when we find a difference across an entire graduating class that is large enough to be statistically significant, we need to take, at the very least, a second look.

Why do you think these differences are appearing in our senior survey data? Is it just a function of the imprecision that comes with survey data? Maybe women tend to respond in rosier terms right before graduation than men do? Or maybe there really is something going on here that we need to address. One way to test that possibility is to ask whether there might be other evidence, anecdotal or otherwise qualitative, that corroborates these findings. Certainly, the prior evidence I’ve noted and linked above should count for something, but it, too, comes from senior survey data.

Recent research on boys and young men suggests that these differences in our data should not be a surprise (check out the books Guyland (I found a free pdf of the book!) and Angry White Men, or a TED Talk by Philip Zimbardo, for a small sample of the scholarship on men’s issues). This growing body of scholarship suggests that the differences we see between males and females begin to emerge long before college, but also that we are not powerless to reverse some of the disparity.

At the board meetings this weekend, we will be talking about some of these issues. In the meantime, what do you think? And if you think that these differences in our data ought to be taken seriously, does it mean that we ought to construct educationally appropriate variations in the college experience for men and women?

I’d love to read what you think as you chew on this.

Make it a good day,


Sometimes you find a nugget where you least expect it

As many of you already know, data from the vast majority of the college ranking services is not particularly applicable to improving the day-to-day student experience. In many cases, this is because those who construct these rankings rely on “inputs” (i.e., information about the resources and students that come to the institution) and “outputs” (i.e., graduation rates and post-graduate salaries) rather than any data that captures what happens while students are actually enrolled in college.

But just recently I came across some of the data from the Wall Street Journal/Times Higher Education College Rankings that surprised me. Although this ranking is still (in my opinion) far too dependent on inputs and outputs, 20% of their underlying formula comes from a survey of current students. In this survey, they ask some surprisingly reasonable questions about the college experience, the responses to which might provide some useful information for us.

Here is a list of those questions; the shortened label that I’ll use in the table below comes from the key phrase in each question.

  • To what extent does your college or university provide opportunities for collaborative learning?
  • To what extent does the teaching at your university or college support critical thinking?
  • To what extent does the teaching at your university or college support reflection upon, and making connections among, things you have learned?
  • To what extent does the teaching at your university or college support applying your learning to the real world?
  • To what extent did the classes you took in your college or university so far challenge you?
  • If a friend or family member were considering going to university, based on your experience, how likely or unlikely are you to recommend your college or university to them?
  • Do you think your college is effective in helping you to secure valuable internships that prepare you for your chosen career?
  • To what extent does your college or university provide opportunities for social engagement?
  • Do you think your college provides an environment where you feel you are surrounded by exceptional students who inspire and motivate you?
  • To what extent do you have the opportunity to interact with the faculty and teachers at your college or university as part of your learning experience?

Below is a table comparing the average responses of Augustana students with those of students at other US institutions. Although I haven’t been able to confirm it by checking the actual survey, it appears that participants respond to each item on a 1-10 scale.

| Question | Augustana Average Response | Top US Institution | 75th Percentile US Institution | Median US Institution | 25th Percentile US Institution | Bottom US Institution |
| --- | --- | --- | --- | --- | --- | --- |
| Collaborative Learning | 8.5 | 9.5 | 8.4 | 8.1 | 7.7 | 6.7 |
| Critical Thinking | 8.8 | 9.6 | 8.7 | 8.3 | 8.0 | 7.1 |
| Connections | 8.5 | 9.4 | 8.5 | 8.2 | 7.9 | 7.0 |
| Applying Learning | 8.4 | 9.4 | 8.5 | 8.1 | 7.8 | 6.8 |
| Challenge | 8.2 | 9.4 | 8.6 | 8.3 | 8.0 | 7.2 |
| Recommend | 8.6 | 9.8 | 8.7 | 8.3 | 7.8 | 6.7 |
| Prepare | 8.3 | 9.4 | 8.3 | 7.8 | 7.4 | 6.2 |
| Social | 8.9 | 9.7 | 8.7 | 8.5 | 8.1 | 7.2 |
| Inspire | 8.0 | 9.3 | 8.1 | 7.7 | 7.2 | 6.0 |
| Interact | 9.3 | 10.0 | 9.2 | 8.9 | 8.4 | 7.3 |

Two things stand out to me in the table above. First, our students’ average responses compare quite favorably to the average responses from students at other institutions.  On six of the ten items, Augustana’s average student response equaled or exceeded the 75th percentile of all US institutions. On three of the remaining four items, Augustana students’ average response fell just short of the 75th percentile by a tenth of a point.

Second, our students’ response to one question – the degree to which they felt challenged by the classes they have taken so far – stands out like a sore thumb. Unlike the rest of the data points, Augustana’s average response falls a tenth of a point below the median of all US institutions. Compared to the relative strength of all our other average scores, the “challenge” score seems . . . curious.

Before going any further, it’s important to consider the quality of the data used to generate these averages. The Wall Street Journal/Times Higher Education says they got responses from over 200,000 students, so if they wanted to make claims about overall average responses they’d be standing on pretty solid ground. However, they are trying to compare individual institutions against one another, so what matters is how many responses they received from students at each institution and the degree to which those responses represent all students there. Somewhere in the smaller print farther down the page that explains their methodology, they state that in most cases they received between 50 and 100 responses from students at each institution (institutions with fewer than 50 responses were not included in their rankings). Wait, what? Given the total enrollments at most of the colleges and universities included in these rankings, 100 responses would represent less than 10% of all students at most of these institutions – in many cases far less than 10%. So we ought to approach the comparative results with a generous dose of skepticism.
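A quick back-of-envelope calculation shows why 50-100 responses per institution warrants that skepticism. Assuming the spread of responses is around 1.5 points on the 1-10 scale (an assumption; the rankings don’t publish this), the 95% margin of error on an institutional average looks like this:

```python
from math import sqrt

def margin_of_error(sd, n, z=1.96):
    """Approximate 95% margin of error for a sample mean."""
    return z * sd / sqrt(n)

# sd = 1.5 is an assumed spread on the 1-10 response scale
for n in (50, 87, 100, 200000):
    print(f"n = {n:>6}: +/- {margin_of_error(1.5, n):.2f} points")
```

Under these assumptions, an average built from 87 responses carries a margin of error of roughly ±0.3 points — about the same size as many of the institution-to-institution gaps in the table above, which is exactly why single-school comparisons on this survey deserve caution.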

However, that doesn’t mean we should dismiss this data outright. In my mind, the findings from our own students ought to make us very curious. Why would data from a set of about 100 Augustana students (we received responses from 87 students who, upon further examination, turned out to be mostly first-year, female, pretty evenly scattered across different intended majors, and almost all from the state of Illinois) produce such a noticeable gap between all of the other items on this survey and the degree to which our students feel challenged by their courses?

This is exactly why I named this blog “Delicious Ambiguity.” This is messy data. It definitely doesn’t come with a pre-packaged answer. One could point out several flaws in the Augustana data set (not to mention the entirety of this ranking system) and make a reasonable case to dismiss the whole thing. Yet, it seems like there is something here that isn’t nothing. So the question I’d ask you is this: are there other things going on at Augustana that might increase the possibility that some first-year students would not feel as challenged as they should? Remember, we aren’t talking about a dichotomy of challenged or not challenged. We are talking about degrees of quality and nuance that are the lifeblood of improving an already solid institution.

Make it a good day,


Two numbers going in the right direction. Are they related?

It always seems like it takes way too long to get the 10th-day enrollment and retention numbers for the winter term. Of course, that is because the Thanksgiving holiday pushes the whole counting of days into the third week of the term and . . . you get the picture.  But now that we’ve got those numbers processed and verified, we’ve got some good news to share.

Have a look at the last four years of fall-to-winter term retention rates for students in the first-year cohort –

  • 14/15 – 95.9%
  • 15/16 – 96.8%
  • 16/17 – 96.7%
  • 17/18 – 97.4%

What do those numbers look like to you? Whatever you want to call it, it looks to me like something good. Right away, this improvement in the proportion of first-year students returning for the winter term equates to about $70,000 in net tuition revenue that we wouldn’t have seen had this retention rate remained the same over the last four years.

Although stumbling onto a positive outcome (albeit an intermediate one) in the midst of producing a regular campus report makes for a good day in the IR office, it gets a lot better when we can find a similar sequence of results in our student survey data. Because that is how we start to figure out which of the things we do to help our students correlate with evidence of increased student success.

About six weeks into the fall term, first-year students are asked to complete a relatively short survey about their experiences so far. Since this survey is embedded into the training session that prepares these students to register for winter classes, the response rate is pretty high. The questions in the survey focus on the academic and social experiences that would help a student acclimate successfully. One of those items, added in 2013, asks about the degree to which students had access to grades or other feedback that allowed them to adjust their study habits or seek help as necessary. In previous years, we’ve found this item to correlate with students’ sense of how hard they work to meet academic expectations.

Below I’ve listed the proportion of first-year students who agree or strongly agree that they had access to sufficient grades or feedback during their first term. Compare the way this data point changes over the last four years to the fall-to-winter retention rates I listed earlier.

  • 14/15 – 39.6%
  • 15/16 – 53.3%
  • 16/17 – 56.4%
  • 17/18 – 75.0%

Obviously, both of these data points trend in the same direction over the past four years. Moreover, both of these trends look similar in that they jump a lot between the 1st and 2nd year, remain relatively flat between the 2nd and 3rd year, and jump again between the 3rd and 4th year.
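The visual similarity of the two trends can be put into a single number with a Pearson correlation, using the four pairs of figures reported above:

```python
import numpy as np

# The two four-year trends from this post: fall-to-winter retention (%)
# and the share of first-years reporting sufficient early feedback (%)
retention = np.array([95.9, 96.8, 96.7, 97.4])
feedback = np.array([39.6, 53.3, 56.4, 75.0])

# Pearson correlation between the two series
r = np.corrcoef(retention, feedback)[0, 1]
print(f"r = {r:.2f}")
```

With these four points, r comes out around 0.97 — the two series move together almost in lockstep. Of course, with only four observations a correlation like this is suggestive at best, not evidence of causation.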

I can’t prove that improved early academic feedback is producing improved fall-to-winter term retention. The evidence that we have is correlational, not causal. But we know enough to know that an absence of feedback early in the term hurts those students who either need to be referred for additional academic work or need to be shocked into more accurately aligning their perceived academic ability with their actual academic ability. We began to emphasize this element of course design (i.e., creating mechanisms for providing early term feedback about academic performance) because other research on student success (as well as our own data) suggested that this might be a way to improve student persistence.

Ultimately, I think it’s fair to suggest that something we are doing more often may well be influencing our students’ experience. At the very least, it’s worth taking a moment to feel good about both of these trends. Both data points suggest that we are getting better at what we do.

Make it a good day,