Ideals, Metrics, and Myths (oh no!)

Educators have always been idealists. We choose to believe what we hope is possible, and that belief often keeps us going when things aren’t going our way. It’s probably what drove many of us to finish a graduate degree and what drives us to put our hearts into our work despite all the discouraging news about higher ed these days.

But an abundance of unchecked idealism can also be a dangerous thing. The very same passion that drives one to achieve can also lead one to believe in something just because it seems like it ought to be so. Caught up in a belief that feels so right, we are less likely to scrutinize the metrics we choose to measure ourselves by or compare ourselves to others with. Worse still, our repeated use of these unexamined metrics can become etched into institutional decision-making. Ultimately, the power of belief that once drove us to overcome imposing challenges can become our Achilles heel, because we are absolutely certain of things that may, in fact, not be so.

For decades, colleges have tracked the distribution of their class sizes (i.e., the number of classes enrolling 2-9, 10-19, 20-29, 30-39, 40-49, 50-99, and 100 or more students, respectively) as a part of something called the Common Data Set. The implication behind tracking this data point is that a higher proportion of smaller classes ought to correlate with a better learning environment. Since the mid-1980s, the U.S. News & World Report rankings of colleges and universities have included this metric in their formula, distilling it down to two numbers: the proportion of classes at an institution with 19 or fewer students (more is better) and the proportion of classes with 50 or more students (less is better). Two years ago, U.S. News added a twist by creating a sliding scale so that classes of 19 or fewer received the most credit, classes of 20-29, 30-39, and 40-49 received proportionally less credit, and classes of 50 or more received no credit. Over time these formulations have produced a powerful mythology across many postsecondary institutions: classes with 19 or fewer students are better than classes with 20 or more.

This raises a pretty important question: are those cut points (19/20, 29/30, etc.) grounded in anything other than an arbitrary artifact of our base-10 number system?

Our own fall term IDEA course feedback data provides an opportunity to test the validity of this metric. The overall distribution of class sizes is almost perfect (a nicely shaped bell curve), with almost 80% of courses receiving a robust response rate. Moreover, IDEA’s aggregate dataset allows us to compare three useful measures of the student learning experience across all courses: a student-reported proxy of learning gains called the “progress on relevant objectives” (PRO) score (for a short explanation of the PRO score with additional links for further information, click here), the student perception of the instructor, and the student perception of the course. The table below spells out the average response scores for each measure across eight categories of class size. Each average score comes from a five-point response scale (scored 1 to 5). The PRO score response options range from “no progress” to “exceptional progress,” and the perception of instructor and course excellence response options range from “definitely false” to “definitely true” (to see the actual items on the survey, click here). For this analysis, I’ve only included courses that exceeded a two-thirds (66.67%) response rate.

Class Size PRO Score Excellent Teacher Excellent Course
6-10 students (35 classes) 4.24 4.56 4.38
11-15 students (85 classes) 4.12 4.38 4.13
16-20 students (125 classes) 4.08 4.29 4.01
21-25 students (71 classes) 4.18 4.40 4.27
26-30 students (37 classes) 4.09 4.31 4.18
31-35 students (9 classes) 3.90 4.13 3.81
36-40 students (11 classes) 3.64 3.84 3.77
41 or more students (8 classes) 3.90 4.04 3.89

First, classes enrolling 6-10 students appear to produce notably higher scores on all three measures than any other category. Second, there doesn’t appear to be much difference between subsequent categories until we get to classes enrolling 31 or more students (further statistical testing supports this observation). Based on our own data, and assuming that the fall 2017 term does not differ significantly from other academic terms, if we were going to replicate the notion that class size distribution correlates with the quality of the overall learning environment, we might be inclined to choose only two cut points, creating three categories of class size: classes with 10 or fewer students, classes with 11 to 30 students, and classes with more than 30 students.
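As a rough check on those proposed cut points, here is a short Python sketch that collapses the published table into the three broad categories using class-count-weighted averages of the PRO score. The bin counts and averages come straight from the table above; since we only have bin-level means rather than course-level records, the collapsed averages are approximations.

```python
# Collapse the eight published class-size bins into the three proposed
# categories, weighting each bin's mean PRO score by its class count.
# (Approximate: bin-level means stand in for course-level data.)

# (bin label, number of classes, mean PRO score) from the fall table
bins = [
    ("6-10", 35, 4.24),
    ("11-15", 85, 4.12),
    ("16-20", 125, 4.08),
    ("21-25", 71, 4.18),
    ("26-30", 37, 4.09),
    ("31-35", 9, 3.90),
    ("36-40", 11, 3.64),
    ("41+", 8, 3.90),
]

def weighted_mean(rows):
    """Mean PRO score weighted by the number of classes in each bin."""
    total = sum(n for _, n, _ in rows)
    return sum(n * score for _, n, score in rows) / total

small = weighted_mean(bins[:1])    # 10 or fewer students
middle = weighted_mean(bins[1:5])  # 11-30 students
large = weighted_mean(bins[5:])    # more than 30 students

print(f"10 or fewer: {small:.2f}")   # 4.24
print(f"11-30:       {middle:.2f}")  # 4.11
print(f"31 or more:  {large:.2f}")   # 3.80
```

The collapsed averages tell the same three-category story: the smallest classes stand apart, the broad middle is essentially flat, and scores drop once enrollment passes 30.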

However, further examination of the smallest category of classes indicates that these courses are almost entirely upper-level major courses. Since we know that all three metrics tend to score higher for upper-level major courses because the students in them are more intrinsically interested in the subject matter than students in lower-level courses (classes that often also meet general education requirements), we can’t attribute the higher scores for this group to class size per se. This leaves us with two general categories: classes with 30 or fewer students, and classes with more than 30 students.

How does this comport with the existing research on class size? Although there isn’t much out there, two brief overviews (here and here) don’t find much of a consensus. Some studies suggest that class size is not relevant, others find a positive effect on the learning experience as classes get smaller, and a few others indicate a slight positive effect as classes get larger(!). A 2013 essay that spells out some findings from IDEA’s extensive dataset suggests that, especially in light of developments in pedagogy and technology over the past two decades, other factors almost certainly complicate the relationship between class size and student learning.

So what do we do with all this? Certainly, mandating that all class enrollments sit just below 30 would be, um, stupid. There is a lot more to examine before anyone should march out onto the quad and declare a “class size” policy. One finding from researchers at IDEA that might be worth exploring on our own campus is the variation of learning objectives selected and achieved by class size. IDEA found that smaller classes might be more conducive to more complex (sometimes called “deeper”) learning objectives, while larger classes might be better suited for learning factual knowledge, general principles, or theories. If class size does, in fact, set the stage for different learning objectives, it might be worth assessing the relationship between learning objectives and class size at Augustana to see if we are taking full advantage of the learning environment that a smaller class size provides.

And what should we do about the categories of class sizes that U.S. News uses in its college rankings formula? As family incomes remain stagnant, tuition revenue continues to lag behind institutional budget projections, and additional resources seem harder to come by, that becomes an increasingly valid question. Indeed, there might be a circumstance where an institution ought to continue using the Common Data Set class size index to guide the way that it fosters an ideal classroom learning environment. And it is certainly reasonable to take other considerations (e.g., faculty workload, available classroom space, intended learning outcomes of a course) into account when determining an institution’s ideal distribution of class enrollments. But if institutional data suggests that there is little difference in the student learning experience between classes with 16-20 students and classes with 21-25 students, it might be worth revisiting the rationale that an institution uses to determine its class size distribution. No matter what an institution chooses to do, it seems like we ought to be able to justify our choices based on the most effective learning environment that we can construct rather than an arbitrarily defined and externally imposed metric.

Make it a good day,


“Not so fast!” said the data . . .

I’ve been planning to write about retaining men for several weeks. I had it all planned out. I’d chart the number of times in the past five years that male retention rates have lagged behind female retention rates, suggest that this might be an issue for us to address, clap my hands together, and publish the post. Then I looked closer at the numbers behind those pesky percentages and thought, “Now this will make for an interesting conversation.”

But first, let’s get the simple stuff out of the way. Here are the differences in retention rates for men and women over the last five years.

Cohort Year Men Women
2016 83.2% 89.1%
2015 85.6% 91.3%
2014 85.0% 86.8%
2013 83.2% 82.7%
2012 78.6% 90.1%

It looks like a gap has emerged in the last four years, right? Just in case you’re wondering (especially if you looked more carefully at all five years listed in the table), “emerged” isn’t really the most accurate word choice. It looks like the 2013 cohort was more of an anomaly than anything else, since the 2012 cohort experienced the starkest gap in male vs. female retention of any in the past five years. Looking back over the three years prior to the start of this table, the same gap appears within the 2011, 2010, and 2009 cohorts.

But in looking more closely at the number of men and women who enrolled at Augustana in each of those classes, an interesting pattern appears that adds at least one layer of complexity to this conversation. Here are the numbers of enrolled and retained men and women in each of the last five years.

Cohort Year   Men Enrolled   Men Retained   Women Enrolled   Women Retained
2016 304 253 393 350
2015 285 244 392 358
2014 294 250 432 375
2013 291 242 336 278
2012 295 232 362 326

Do you see what I see?  Look at the largest and smallest numbers of men enrolled and the largest and smallest numbers of men retained. In both cases, we are talking about a difference of about 20 male students (for enrolled men: 304 in 2016 for a high and 285 in 2015 for a low; for retained men, 253 in 2016 for a high and 232 in 2012 for a low). No matter the total enrollment in a given first-year class, these numbers seem pretty consistent. By contrast, look at the largest and smallest numbers of women enrolled and retained. The differences between the high and the low of either enrolled or retained women are much greater – by almost a factor of five.
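For anyone who wants to check the arithmetic, here is a short Python sketch that recomputes the retention rates from the enrolled/retained counts in the table above, along with the high-to-low spread in each group's yearly counts.

```python
# Recompute retention rates and high/low count spreads from the table.
counts = {  # cohort year -> {group: (enrolled, retained)}
    2016: {"men": (304, 253), "women": (393, 350)},
    2015: {"men": (285, 244), "women": (392, 358)},
    2014: {"men": (294, 250), "women": (432, 375)},
    2013: {"men": (291, 242), "women": (336, 278)},
    2012: {"men": (295, 232), "women": (362, 326)},
}

# Retention rate = retained / enrolled for each group in each cohort
rates = {
    year: {g: retained / enrolled for g, (enrolled, retained) in groups.items()}
    for year, groups in counts.items()
}
for year in sorted(rates, reverse=True):
    print(year, {g: f"{r:.1%}" for g, r in rates[year].items()})

# Spread between the largest and smallest yearly counts for each group
spread = {}
for group in ("men", "women"):
    enrolled = [counts[y][group][0] for y in counts]
    retained = [counts[y][group][1] for y in counts]
    spread[group] = (max(enrolled) - min(enrolled), max(retained) - min(retained))
print(spread)  # men vary by about 20 students; women by nearly 100
```

The spreads make the contrast concrete: men's enrolled and retained counts vary by only 19 and 21 students across five cohorts, while women's vary by 96 and 97.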

So what does it mean when we put these retention rate gaps and the actual numbers of men and women enrolled/retained into the same conversation? For me, this exercise is an almost perfect example of how quantitative data that is supposed to reveal deep and incontrovertible truth can actually do exactly the opposite. Data just isn’t clean, ever.

Situating these data within the larger conversation about male and female rates of educational attainment, our own findings begin to make some sense. Nationally, the educational attainment gap between men and women starts long before college. Men (boys) finish high school at lower rates than women. Men go to college at lower rates than women. Men stay in college at lower rates than women. And men graduate from college at lower rates than women. So when the size of our first-year class goes up, it shouldn’t be all that surprising that the increase in numbers is explained by a disproportionate increase in women.

Finally, we have long known (and should also regularly remind ourselves) that retention rates are a proxy for something more important: student success. And student success is an outcome of student engagement in the parts of the college experience that we know help students grow and learn. On this score, we have plenty of evidence to suggest that we ought to focus more of our effort on male students. I wrote about one such example last fall when we examined some differences between men and women in their approaches toward social responsibility and volunteering rates. A few years back, I wrote about another troubling finding involving a sense of belonging on campus among Black and Hispanic men.

I hope we can dig deeper into this question over the next several weeks.  I’ll do some more digging into our own student data and share what I find. Maybe you’ve got some suggestions about where I might look?

Make it a good day,




Improving Interfaith Understanding at Augustana

This is a massively busy week at Augie. We had a packed house of high school students visiting on Monday (I’ve never seen the cafeteria so full of people ever!), the Board of Trustees will gather on campus for meetings on Thursday and Friday, and hundreds of alumni and family will arrive for Homecoming over the weekend. With all of this hustle and bustle, you probably wouldn’t have noticed three unassuming researchers from the Interfaith Diversity Experiences and Attitudes Longitudinal Survey (IDEALS) quietly talking to faculty, staff, and students on Monday and Tuesday. They were on campus to find out more about our interfaith programs, experiences, and emphasis over the past several years.

Apparently, we are doing something right when it comes to improving interfaith understanding at Augustana. Back in the fall of 2015, our first-year cohort joined college freshmen from 122 colleges and universities around the country to participate in a 4-year study of interfaith understanding development. The study was designed to collect data from those students at the beginning of the first year, during the fall of the second year, and in the spring of the fourth year. In addition to charting the ways in which these students changed during college, the study was also constructed to identify the experiences and environments that influence this change.

As the research team examined the differences between the first-year and second-year data, an intriguing pattern began to emerge. Across the entire study, students didn’t change very much. This wasn’t so much of a surprise, really, since the Wabash National Study of Liberal Arts Education had found the same thing. However, unlike students across the entire study, Augustana students consistently demonstrated improvement on most of the measures in the study. This growth was particularly noticeable in areas like appreciative knowledge of different worldviews, appreciative attitudes toward different belief systems, and global citizenship. Although the effect sizes weren’t huge, a consistent pattern of subtle but noticeable growth suggested that something good might be happening at Augustana.

However, using some fancy statistical tricks to generate an asterisk or two (denoting statistical significance) doesn’t necessarily help us much in practical terms. Knowing that something happened doesn’t tell us how we might replicate it or how we might do it even better. This is where the qualitative ninjas need to go to work and talk to people (something we quant nerds haven’t quite figured out how to do yet). Guided by the number-crunching, researchers can use focus groups and interviews to delve deep into the experiences and observations of folks on the ground, where the real gems of knowledge are more likely to be unearthed.

So what did our visiting team of researchers find? They hope to have a report of their findings for us in several months. So far all I could glean from them is that Augustana is a pretty campus with A LOT of steps.

But there is a set of responses from the second-year survey data that might point in a direction worth contemplating. There is a wonderfully titled grouping of items called “Provocative Encounters with Worldview Diversity,” from which the responses to three statements seem to set our students’ experience apart from students across the entire study as well as students at institutions with a similar Carnegie Classification (baccalaureate institutions focused on the arts and sciences). In each case, we see a difference in the proportion of students who responded “all the time” or “frequently.”

  1. In the past year, how often have you had class discussions that challenged you to rethink your assumptions about another worldview?
    1. Augustana students: 51%
    2. Baccalaureate institutions: 43%
    3. All institutions in the study: 33%
  2. In the past year, how often have you felt challenged to rethink your assumptions about another worldview after someone explained their worldview to you?
    1. Augustana students: 44%
    2. Baccalaureate institutions: 34%
    3. All institutions in the study: 27%
  3. In the past year, how often have you had a discussion with someone of another worldview that had a positive influence on your perceptions of that worldview?
    1. Augustana students: 48%
    2. Baccalaureate institutions: 45%
    3. All institutions in the study: 38%

In the past several years, there is no question that we have been trying to create these kinds of interactions through Symposium Day, Sustained Dialogue, course offerings, a variety of co-curricular programs, and increased diversity among our student body. Some of the thinking behind these efforts dates back six or seven years, when we could see from our Wabash National Study data and our prior NSSE data that our students reported relatively fewer serious conversations with people who differed from them in race/ethnicity and/or beliefs/values. Since a host of prior research has found that these kinds of serious conversations across difference are key to developing intercultural competence (a skill that certainly includes interfaith understanding), it made a lot of sense for us to refine what we do so that we might improve our students’ gains on the college’s learning outcomes.

The response to the items above suggests to me that the conditions we are trying to create are indeed coming together. Maybe, just maybe, we have successfully designed elements of the Augustana experience that are producing the learning that we aspire to produce.

It will be very interesting to see what the research team ultimately reports back to us. But for now, I think it’s worth noting that there seems to be early evidence that we have implemented intentionally designed experiences that very well might be significantly impacting our students’ growth.

How about that?!

Make it a good day,



Does Our Students’ Interest in Complex Thinking Change over Four Years?

One of the best parts of my job is teaming up with others on campus to help us all get better at doing what we do. Over the past seven years, I’ve been lucky enough to work with almost every academic department or student life office on projects that have genuinely improved the student experience. But if I had to choose, I think my favorite partnership is the annual student learning assessment initiative that combines the thoughtfulness (and sheer intellectual muscle) of the Assessment for Improvement Committee with the longitudinal outcome data (and nerdy statistical meticulousness) from the Office of Institutional Research and Assessment.

For those of you who don’t know about this project already, the annual student learning assessment initiative starts anew every summer – although it takes about four years before any of you see the results. First, the IR office chooses a previously validated survey instrument that aligns with one of Augustana’s three broad categories of learning outcomes. Second, we give this survey to the incoming first-year class just before the fall term starts. Third, when these students finish their senior year we include the same set of questions in the senior survey, giving us a before-and-after set of data for the whole cohort. Fourth, after linking all of that data with freshman and senior survey data, admissions data, course-taking data, and student readiness survey results, we explore both the nature of that cohort’s change on the chosen outcome as well as the experiences or other characteristics that might predict positive or negative change on that outcome.

The most recent graduating cohort (spring 2017) provided their first round of data in the fall of 2013. Since we had already started assessment cycles of intrapersonal conviction growth (the 2011 cohort) and interpersonal maturity growth (the 2012 cohort), it was time to turn our attention to intellectual sophistication (the category that includes disciplinary knowledge, critical thinking and information literacy, and quantitative literacy). After exploring several possible assessment instruments, we selected an 18-item survey called the Need for Cognition Scale. This instrument tries to get at the degree to which the respondent is interested in thinking about complicated or difficult problems or ideas. Since the Need for Cognition Scale had been utilized by the Wabash National Study of Liberal Arts Education, that study’s researchers had already produced an extensive review of the ways in which this instrument correlated with aspects of intellectual sophistication as we had defined it. And since this instrument is short (18 questions) and cheap (free), we felt very comfortable putting it to work for us.

Fast forward four years and, after some serious number crunching, we have some interesting findings to share!

Below I’ve included the average scores from the 2013 cohort when they took the Need for Cognition Scale in the fall of their first year and in the spring of their fourth year. Keep in mind that scores on this scale range from 1 to 5.

Fall, 2013 3.43
Spring, 2017 3.65

The difference between the two scores is statistically significant, meaning that we can confidently claim that our students are becoming more interested in thinking about complicated or difficult problems or ideas.

For comparison purposes, it’s useful to triangulate these results against Augustana’s participation in the Wabash National Study between 2008 and 2012. Amazingly, that sample of students produced remarkably similar scores. In the fall of 2008, they logged a pre-test mean score of 3.43. Four years later, they registered a post-test mean score of 3.63. Furthermore, the Wabash National Study overall results suggest that students at other small liberal arts colleges made similar gains over the course of four years.
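For readers curious about what “statistically significant” means mechanically here: the standard approach for before-and-after scores from the same students is a paired t-test on the per-student differences. We can’t reproduce the actual test without the student-level responses, so the sketch below simulates hypothetical pre/post pairs centered near the reported means (3.43 and 3.65); the sample size and standard deviations are invented for illustration.

```python
import math
import random

random.seed(42)

# We only have the cohort means (3.43 pre, 3.65 post), so these
# simulated per-student scores are purely illustrative; the sample
# size and spreads below are invented assumptions.
n = 400
pre = [min(5, max(1, random.gauss(3.43, 0.55))) for _ in range(n)]
post = [min(5, max(1, p + random.gauss(0.22, 0.45))) for p in pre]

# Paired t-test: is the mean of the per-student gains distinguishable
# from zero?
diffs = [b - a for a, b in zip(pre, post)]
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
t_stat = mean_d / (sd_d / math.sqrt(n))

print(f"mean gain = {mean_d:.2f}, t = {t_stat:.1f}")
# With several hundred students and an average gain of roughly 0.2
# points, the t statistic lands far beyond the conventional p < .05
# cutoff (|t| > 1.97 for this sample size).
```

The point of the sketch is simply that with a cohort of this size, a gain of about two-tenths of a point on a five-point scale is very unlikely to be an artifact of chance.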

It’s one thing to look at the overall scores, but the proverbial devil is always in the details. So we’ve made it a standard practice to test for differences on any outcome (e.g., critical thinking, intercultural competence) or perception of experience (e.g., sense of belonging on campus, quality of advising guidance, etc.) by race/ethnicity, sex, socioeconomic status, first-generation status, and pre-college academic preparation. This is where we’ve often found the real nuggets that have helped us identify paths to improvement.

Unlike last year’s study of intercultural competence, we found no statistically significant differences by race/ethnicity, sex, socioeconomic status, or first-generation status in either where these different types of students scored when they started college or how much they had grown by the time they graduated. This was an encouraging finding because it suggests that the Augustana learning experience is equally influential for a variety of student types.

However, we did find some interesting differences among students who come to Augustana with different levels of pre-college preparation. These differences were almost identical whether we used our students’ ACT scores or high school GPA to measure pre-college academic preparation. Below you can see how those differences played out based upon incoming ACT score.

ACT Score            Fall, 2013   Spring, 2017
Bottom 3rd (< 24)    3.33         3.54
Middle 3rd (24-28)   3.42         3.63
Top 3rd (> 28)       3.59         3.77

As you can see, all three groups of students grew similarly over four years. But the students entering with a bottom-third ACT score started well behind the students who entered with a top-third ACT score. Moreover, by the time this cohort graduated, the bottom-third ACT students had not yet reached the entering score of the top-third ACT students (3.54 compared with 3.59).

So what should we make of these findings? First, I think it’s worth noting that once again we have evidence that on average our students grow on a key aspect of intellectual sophistication. This is worth celebrating. Furthermore, our student growth doesn’t appear to vary across several important demographic characteristics, suggesting that, on at least one learning metric, we seem to have achieved some outcome equity. And although there appear to be differences by pre-college academic preparation in where those students end up, the change from first-year to fourth-year across all three groups is almost identical. This suggests something that we might gloss over at first, namely that we seem to be accomplishing some degree of change equity. In other words, no matter where a student is when they arrive on campus, we are able to help them grow while they are here.

At the end of our presentation of this data last Friday afternoon, we asked everyone in attendance to hypothesize about the kinds of student experiences that might impact change on this outcome. Everyone wrote their hypotheses (some suggested only one idea while others, who shall yet remain nameless, suggested more than ten!) on a 4×6 card that we collected. Over the next several months, we will do everything we can to test each hypothesis and report back to the Augustana community what we found at our winter term presentation.

Oh, you say through teary eyes that you missed our presentation? Well, lucky for you (and us) we are still taking suggestions. So if you have any hypotheses, speculations, intuition, or just outright challenges that you want to suggest, bring it! You can post your ideas in the comments below or email me directly.

I can’t wait to start digging into the data to find what mysteries we might uncover! And look for our presentation of these tests as an upcoming winter term Friday Conversation.

Make it a good day,


Just when you think you’ve got everything figured out . . .

This post started out as nothing more than a humble pie correction; something similar to what you might find at the bottom of the first page of your local newspaper (if you are lucky enough to still have a local newspaper). But as I continued to wrestle with what I was trying to say, I realized that this post wasn’t really about a correction at all. Instead, this post is about what happens when a changing student population simply outgrows the limits of the old labels you’ve been using to categorize them.

Last week, I told you about a stunningly high 89.8% retention rate for Augustana’s students of color, more than five percentage points higher than our retention rate for white students. During a meeting later in the week, one of my colleagues pointed out that the total number of students of color from which we had calculated this retention rate seemed high. Since this colleague happens to be in charge of our Admissions team, it seemed likely that he would know a thing or two about last year’s incoming class. At the same time, we’ve been calculating this retention rate in the same way for years, so it didn’t seem possible that we suddenly forgot how to run a pretty simple equation.

Before I go any further, let’s get the “correction,” or maybe more precisely “clarification,” out of the way. Augustana’s first-to-second year retention rate for domestic students of color this year is 87.3%, about a point higher than the retention rate for domestic white students (86%). Still impressive, just not quite as flashy. Furthermore, our first-to-second year retention rate for international students is 88.4%, almost two percentage points higher than our overall first-to-second year retention rate of 86.5%. Again, this is an impressive retention rate among students who, in most cases, are also dealing with the extra hurdle of adapting to (if not learning outright) Midwestern English.

So what happened?

For a long time, Augustana has used the term “multicultural students” as a way of categorizing all students who aren’t white American citizens raised in the United States. The term is dangerously vague, but when white domestic students made up almost 95% of Augustana’s enrollment (less than two decades ago), there was a reasonable logic to constructing this category. Just as categories can become too large to be useful, so too can categories become so small that they dissipate into a handful of individuals. And even the most caring organization finds it pretty difficult to explicitly focus itself on the difficulties of a few individuals.

Moreover, this categorization allowed us to construct a group large enough to quantify in the context of other larger demographic groups. For example, take one group of students from which 10 of 13 return for a second year and compare it with another group of students from which 200 of 260 return for a second year. Calculated as a proportion, both groups share the same retention rate. But in practice, the retention success for each group seems very different; one group lost very few students (3) while the other group lost a whole bunch (60). In the not so distant past when an Augustana first-year class would include maybe 20 domestic students of color and 5 international students (out of a class of about 600), grouping these students into the most precise race, ethnicity, and citizenship categories would almost guarantee that these individuals would appear as intriguing rarities or, worse yet, quaint novelties. Under these circumstances, it made a lot of sense to combine several smaller minority groups into one category large enough to 1) conceptualize as a group with broadly similar needs and challenges and 2) quantify in comparable terms to other distinct groups of Augustana students.
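The 10-of-13 versus 200-of-260 example is easy to verify with a few lines of Python: the proportions are identical, but the absolute losses are worlds apart.

```python
# Two groups with the same retention rate but very different numbers
# of students lost (the 10-of-13 vs. 200-of-260 example above).
groups = {"small": (13, 10), "large": (260, 200)}

rate = {name: retained / enrolled for name, (enrolled, retained) in groups.items()}
lost = {name: enrolled - retained for name, (enrolled, retained) in groups.items()}

for name in groups:
    print(f"{name}: rate = {rate[name]:.1%}, students lost = {lost[name]}")
# Both rates print as 76.9%, but the small group lost 3 students
# while the large group lost 60.
```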

In no way am I arguing that the term “multicultural” was a perfect label. As our numbers of domestic students of color increased, the term grew uncomfortably vague. Equally problematic, some inferred that we considered the totality of white students to be a monoculture or that we considered all multicultural students to be overflowing with culture and heritage. Neither inference was necessarily accurate, but as with all labels, seemingly small imperfections can morph into glaring weaknesses when the landscape changes.

Fast forward fifteen years. Entering the 2017-18 academic year, our proportion of “multicultural students” has increased by over 400%, and the combination of domestic students of color and international students makes up about 27% of our total enrollment, roughly 710 students. Specifically, we now enroll enough African-American students, Hispanic students, and international students to quantitatively analyze their experiences separately. To be clear, I’m not suggesting that we should prioritize one method of research (quantitative) over another (qualitative). I am arguing, however, that we are better able to gather evidence that will inform genuine improvement when we have both methods at our disposal.

After a conversation with another colleague who is a staunch advocate for African-American students, I resolved to begin using the term “students of color” instead of multicultural students. Although it’s taken some work, I’m making slow progress. I was proud of myself last week when I used the term “students of color” in this blog without skipping a beat.

Alas, you know the story from there. Although I had used an arguably more appropriate term for domestic students of color in last week’s post, I had not thought through the full implications of shifting away from an older framework for conceptualizing difference within our student body. Clearly, one cannot simply replace the term “multicultural students” with “students of color” and expect the new term to adequately include international students. At the same time, although the term “multicultural” implies an air of globalism, it could understandably be perceived to gloss over important domestic issues of race and inequality. If we are going to continue to enroll larger numbers across multiple dimensions of difference, we will have to adopt a more complex way of articulating the totality of that difference.

Mind you, I’m not just talking about counting students more precisely from each specific racial and ethnic category – we’ve been doing that as long as we’ve been reporting institutional census data to the federal government. I guess I’m thinking about finding new ways to conceptualize difference across all of its variations so that we can adopt language that better matches our reality.

I’d like to propose that all of us help each other shift to a terminology that better represents the array of diversity that we’ve worked so hard to achieve, and continue to work so hard to sustain. I know I’ve got plenty to learn (e.g., when do I use the term “Hispanic” and when do I use the term “Latinx”?), and I’m looking forward to learning with you.

And yes, I’ll be sure to reconfigure our calculations in the future. Frankly, that is the easy part. Moreover, I’ll be sure to reconceptualize the way I think about student demographics. We’ve crossed a threshold into a new dimension of diversity within our own student body. Now it’s time for the ways that we quantify, convey, and conceptualize that diversity to catch up.

Make it a good day,


Retention, Realistic Goals, and a Reason to be Proud

When we included metrics and target numbers in the Augustana 2020 strategic plan, we made it clear to the world how we would measure our progress and our success. As we have posted subsequent updates about our efforts to implement this strategic plan, a closer look into those documents exposes some of the organizational challenges that can emerge when a goal that seemed little more than a pipe dream suddenly looks like it might just be within our grasp.

Last week we calculated our first-to-second year retention numbers for the cohort that entered in the fall of 2016. As many of you know, Augustana 2020 set a first-to-second year retention rate goal of 90%, a number that we’d never come close to before. In fact, colleges enrolling students similar to ours top out at retention rates in the upper 80s. But we decided to set a goal that would stretch us outside of this range. To come up with this goal, we asked ourselves, “What if the stars aligned with the sun and the moon and we retained every single student that finished the year in good academic standing?” Under those conditions, we might hit a 90% retention rate. A pipe dream? Maybe. But why set a goal if it doesn’t stretch us a little bit? So we stuck that number in the document and thought to ourselves, “This will be a good number to shoot for over the next five years.”

Last year (fall of 2016), we were a little stunned to find that we’d produced an overall first-to-second year retention rate of 88.9%. Sure, we had instituted a number of new initiatives, tweaked a few existing programs, and cranked up the volume on our prioritizing retention megaphone to eleven. But we weren’t supposed to have had so much success right away. To put this surprise in the context of real people, an 88.9% retention rate meant that we fell short of our 90% goal by a mere seven students! SEVEN!!! Even if we had every retention trick in the book memorized, out of a class of 697, an 88.9% retention rate is awfully close to perfect.
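That seven-student figure follows directly from the arithmetic; here’s a quick sketch (assuming the cohort size of 697 mentioned above and simple rounding):

```python
# How many more retained students would have pushed the cohort
# from its actual 88.9% retention rate to the 90% goal?
cohort_size = 697

retained_actual = round(cohort_size * 0.889)  # ~620 students retained
retained_goal = round(cohort_size * 0.90)     # ~627 students needed for 90%

gap = retained_goal - retained_actual
print(gap)  # 7
```

In other words, retaining just seven additional students would have closed the entire distance between 88.9% and the Augustana 2020 goal.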

So what do you do when you find yourself close to banging your head on a ceiling that you didn’t really ever expect to see up close? The fly-by-night thought leader tripe would probably answer with an emphatic, “Break through that ceiling!” (complete with an infomercial and a special deal on a book and a DVD). Thankfully, we’ve got enough good sense to be smarter than that. In reality, a situation like this can set the stage for a host of delicately dangerous delusions. For example, if we were to exceed that retention rate in the very next year we could foolishly convince ourselves that we’ve discovered the secret to perfect retention. Conversely, if our retention rate were to slip in the very next year we could start to think that our aspiration was always beyond our grasp and that we really ought to just stop trying to be something that we are not (cue the Disney movie theme song).

The way we’ve chosen to approach this potential challenge is to start with a clear understanding of all of the various retention rates of the student subpopulations that make up the overall number. By examining and tracking these subgroups, we can make a lot more sense of whatever the next year’s retention rate turns out to be. We also need to remind ourselves that an overall retention rate is an enrollment-weighted average of its parts: for it to hit our aspired goal, the subpopulation retention rates have to balance out around that final number. And that has always been where the really tough challenges lie, because retention rates for some student groups (e.g., low income, students of color, lower academic ability) have languished below other groups (e.g., more affluent, white students, higher academic ability) for a very long time.
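To make that point concrete, an overall retention rate is simply the enrollment-weighted average of its subgroup rates. A minimal sketch, with entirely hypothetical subgroup names and counts (not our actual numbers):

```python
# An overall retention rate is the enrollment-weighted average of its
# subgroup rates. The subgroups and counts below are hypothetical,
# purely to illustrate how the pieces combine.
subgroups = {
    # group: (students in cohort, students retained)
    "group_a": (500, 450),
    "group_b": (150, 123),
    "group_c": (47, 39),
}

total_students = sum(n for n, _ in subgroups.values())
total_retained = sum(r for _, r in subgroups.values())
overall_rate = total_retained / total_students

print(f"{overall_rate:.1%}")  # 87.8%
```

Notice that the hypothetical overall rate (87.8%) sits well below the strongest subgroup’s rate (90%), because the smaller subgroups pull the weighted average down. That’s exactly why tracking the subgroup rates matters.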

With all that as a prelude, let’s dive into the details that make up the retention rate of our 2016 cohort.

Overall, our first-to-second year retention rate this fall is 86.5%. No, it’s not as strong as last year’s 88.9% retention rate. Even though our rolling three-year retention average continues to improve (84.7%, 86.0%, and 87.2% most recently), I would be lying if I said that I wasn’t just a little disappointed by the overall number. Yet, this sets up the perfect opportunity to examine our data more closely for evidence that might confirm or counter the narrative I described above.

As always, this is where things get genuinely interesting. Over the last few years, we’ve put additional effort into the quality of the experience we provide for our students of color. So we should expect to see improvement in our first-to-second year retention rate for these students. And in fact, we have seen improvement over the past four years as retention rates for these students have risen from 78.4% four years ago to 86.1% last year.

So it is particularly gratifying to report that the retention rate for students of color among the 2016 cohort increased again, this time to an impressive 89.8%! Amazingly, these students persisted at a rate more than four percentage points higher than white students.

That’s right.  Retention of first-year students of color from fall of 2016 to fall of 2017 was 89.8%, while the retention of white students over the same period was 85.6%.

Of course, there is clearly plenty of work for us yet to do in creating an ideal learning environment in which the maximum number of students succeed. And I don’t for a second think that everything is going to be unicorns and rainbows from here on out. But I think it’s worth taking just a moment to be proud of our success in improving the retention rate of our students of color.

Make it a good day,


For the Want of a Nail: Maybe another vital predictive clue?

I’ve always loved the Todd Rundgren song “The Want of a Nail.”  In addition to a jumpin’ soul groove that could levitate a gospel choir, syncopated piano, punchy horn arrangements, and the legendary Bobby Womack singing call-and-response with Rundgren’s lead vocals (as if that weren’t enough!), the lyrics relay the old proverb “For Want of a Nail” that laments the loss of an entire kingdom due to the seemingly minor detail of a missing nail that would have kept a horse’s shoe attached to a lone steed during a fleeting cavalry charge.

After I wrote last week’s post comparing a couple of major pre-college predictors of first year success, I wondered again whether there might be other key predictors of first year success that tend to fly just under the radar. I’m thinking of the kinds of traits and attributes that, although they might not dominate a first impression, might just tip the balance for a student teetering on the edge of academic survival. In addition, by “under the radar” I’m referring to the type of variables that normally get overshadowed in large-scale statistical analyses of college student success. These almost imperceptible influencers are often (for lack of a better term) predictors of predictors: the way that a regular sleep schedule might increase the ability to focus when studying, which in turn increases one’s cognitive stamina during extended test-taking and ultimately improves the likelihood of a higher standardized test score. Whether they are called “soft skills” in the popular press or “non-cognitive factors” in the academic press, it has become increasingly clear that these attributes matter for college success. So if we could figure out whether any of these attributes play a similar role among Augustana students who fell into that muddy middle of maybe when we were considering their applications, we might know a little bit more about teasing out just the right details when deciding whether a person is a good fit for Augustana.

Fortunately, we already collect some of this kind of data through the Student Readiness Survey that incoming freshmen take before attending summer registration. Although the Student Readiness Survey was primarily designed to better inform early conversations between advisors and new students, this data mirrors several of the soft skill traits and is ripe for testing as a predictor of first year success. To make sure we avoid the brute force effect (AKA when a typically powerful predictor like test score or high school GPA drowns out the more subtle impact of other potentially important factors), we took into account high school GPA throughout our analyses so that we could focus on the influence of these attributes and traits.
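For readers curious what “taking into account high school GPA” looks like in practice, one standard approach is to examine the relationship between a trait and first-year GPA after the linear influence of high school GPA has been removed from both (a partial correlation). The sketch below uses made-up numbers and plain Python; it is not the office’s actual model or data:

```python
# "Taking into account high school GPA" can be illustrated by correlating
# the residuals of a trait and of first-year GPA after each has been
# regressed on HS GPA. All numbers below are hypothetical.

def mean(xs):
    return sum(xs) / len(xs)

def slope_intercept(x, y):
    """Ordinary least-squares fit of y = a + b*x."""
    mx, my = mean(x), mean(y)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def residuals(x, y):
    """What's left of y after removing the linear effect of x."""
    a, b = slope_intercept(x, y)
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

def correlation(x, y):
    mx, my = mean(x), mean(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = (sum((xi - mx) ** 2 for xi in x) *
           sum((yi - my) ** 2 for yi in y)) ** 0.5
    return num / den

# Hypothetical students: HS GPA, academic-habits score, first-year GPA.
hs_gpa = [3.2, 3.8, 3.5, 2.9, 3.6, 3.1, 3.9, 3.3]
habits = [4.0, 3.5, 4.5, 2.5, 4.2, 3.0, 4.8, 3.8]
fy_gpa = [3.0, 3.4, 3.6, 2.4, 3.5, 2.7, 3.9, 3.1]

# Correlate the parts of habits and first-year GPA not explained by HS GPA.
partial_r = correlation(residuals(hs_gpa, habits), residuals(hs_gpa, fy_gpa))
print(round(partial_r, 2))
```

If a trait still correlates with first-year GPA after this adjustment, it is telling us something that high school GPA alone does not.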

As a quick reminder, the Student Readiness Survey collects data on six non-cognitive factors or soft skills:

  • Academic Habits (e.g., using a highlighter, planning ahead to do homework)
  • Academic Confidence (e.g., belief in one’s ability to learn)
  • Persistence and Grit (e.g., tendency to fight through difficulty to achieve a goal)
  • Interpersonal Skills (e.g., tendency to consider multiple views in navigating conflict)
  • Stress Management (e.g., perception of one’s temper or ability to be patient)
  • Comfort with Social Interaction (e.g., ability to make new friends)

Sure enough, we found that two of these traits stood out as predictors of cumulative first-year GPA even after taking into account high school GPA.  Can you guess which concepts rose to the top?

Although it might seem obvious, academic habits held up as a significant predictor of first year success. In other words, if you took two prospective students with similar high school GPAs, the one more likely to succeed would be the one who takes a more deliberate approach to academic habits, plans ahead, and studies actively rather than passively.

The second significant predictive trait turned out to be interpersonal skills. Specifically, we found that the more a student was inclined to consider another’s perspective as well as their own when negotiating a disagreement, the more likely they were to succeed as a first year student. Again, this finding took into account high school GPA. So if you were to consider two students with similar high school GPAs for admission, the one who seems to exhibit more sophisticated interpersonal skills may well be the one who is more likely to succeed. It’s important to note that this is not the same as leaning toward the student who seems more comfortable socially. In fact, our data suggests that there may be a small negative relationship between comfort with social interaction and first year success after taking into account high school GPA.

So what should we do with these findings? First of all, tread carefully. Applying these findings is sort of like fixing watches – one wrong move and you’ve turned a nifty timepiece into an expensive paperweight. Moreover, these findings will only be useful when you put them in context with everything else you might know about a prospective student. Yet, when straining to find some whisper of evidence about a student’s potential for success, these may well be exactly the veins that you ought to mine. That doesn’t mean that you’ll necessarily find what you’re looking for, but it does mean that you know just a little bit more about where to look.

Although I’ve harped on this point in previous posts, I think it is worth repeating – a mountain of research demonstrates that the experiences during the first year are critical to student success, notwithstanding all sorts of pre-college characteristics. So nothing we might know about a prospective student before they start college rises to the level of slam-dunk proof of their future success or failure. But knowing just a little bit more about traits and attributes under the radar might help us make better decisions about students who seem to cluster at the margins.

Make it a good day,


Standardized test score vs. high school GPA: The battle of the predictors!!

No matter how solid the overall academic characteristics of a first year class, by the time spring rolls around we always seem to wish that we could be just a bit more precise in identifying students who can succeed at Augustana. Yes, there are some applicants who are almost guaranteed to succeed and some who are obviously nowhere near ready for the Augustana experience. But most applications fall somewhere in between those two poles. And even though this is an exceedingly complicated and imperfect exercise, every bit of information we can tease from our own data can help us refine our efforts to pick out those diamonds in the rough.

Standardized test score (i.e., ACT or SAT) and high school GPA have always been the big dogs in predicting college success. In the not too distant past, these two metrics were often the only numbers that a college used to make admissions decisions. But (at least) two problems have emerged that make it critically important to test the validity of both metrics among our own students. First, the test prep business has become an almost ubiquitous partner to the tests themselves. ACT or SAT preparation resources (be they online or in person) are often strongly encouraged and sometimes are even offered as a part of the high school curriculum. As a result, one could argue that standardized test scores increasingly predict test-taking skills rather than academic preparation. (Given that the average ACT composite score has remained the same from 1997 to 2016, I’m not exactly sure what the growth in the test prep business says about the industry or the people who pay for those services . . . but that is another story). When we add to the mix the correlation between socioeconomic status and available educational resources, test score becomes an even more suspect measure of academic potential.

Second, high school GPA has become an increasingly “flexible” number as high schools have added more and more weighted courses and varying academic “tracks” for different types of students. As a result, high school GPAs sometimes appear much more tightly clustered for certain types of students, making it more difficult to claim that a moderate difference in high school GPA between two applicants represents an actual difference in college readiness. In addition, these patterns of clustering quickly become specific to an individual school or district, making it even harder to compare applicants between schools or districts. For these reasons, Augustana decided a number of years back to generate for each applicant a recalculated GPA that removes much of the peculiarity of the GPA initially provided by the high school.

Given the increasing murkiness of these two metrics, it makes sense to test the predictive validity (i.e., trustworthiness) of each among our own students. In addition, since students almost always submit both a test score and a high school GPA, it would help us a lot to know more about each metric in the context of the other. For example, what if a student’s test score seems much stronger than their high school GPA, or the other way around? Should we place more value on one over the other? Should we put our trust in the more favorable of the two metrics?

To conduct this inquiry thoroughly, we tested the effect of high school GPA and standardized test score on three different measures of first year success: cumulative GPA at the end of the first year, retention to the second year, and the number of credits completed during the first year.

In short, it isn’t much of a contest. The recalculated high school GPA significantly predicts first year cumulative GPA, retention, and number of credits completed. Standardized test score only predicts first year cumulative GPA, while producing no statistically significant effect on retention or number of credits completed. In addition, when high school GPA and test score were analyzed head-to-head (i.e., both variables were included in the same statistical analysis predicting first year cumulative GPA), the size of the high school GPA effect was two and a half times larger than the effect of the standardized test score.
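To illustrate the kind of head-to-head comparison described above: when both predictors are standardized (z-scored) before being entered into a single regression, their coefficients share a scale and can be compared directly. The data and model below are illustrative assumptions, not the analysis we actually ran:

```python
# Comparing the relative "size" of two predictors' effects in one model:
# z-score both predictors so their coefficients are on a common scale,
# then fit a two-predictor OLS regression. All numbers are hypothetical.

def zscore(xs):
    m = sum(xs) / len(xs)
    sd = (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
    return [(x - m) / sd for x in xs]

def solve3(a, b):
    """Solve a 3x3 linear system a @ x = b by Gauss-Jordan elimination."""
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        for r in range(3):
            if r != i:
                f = m[r][i] / m[i][i]
                m[r] = [v - f * w for v, w in zip(m[r], m[i])]
    return [m[i][3] / m[i][i] for i in range(3)]

def ols_two_predictors(x1, x2, y):
    """Coefficients (intercept, b1, b2) for y = b0 + b1*x1 + b2*x2."""
    n = len(y)
    cols = [[1.0] * n, x1, x2]
    xtx = [[sum(a * b for a, b in zip(ci, cj)) for cj in cols] for ci in cols]
    xty = [sum(c * yi for c, yi in zip(ci, y)) for ci in cols]
    return solve3(xtx, xty)

# Hypothetical applicants: recalculated HS GPA, ACT score, first-year GPA.
hs_gpa = [3.9, 3.1, 3.5, 2.8, 3.7, 3.2, 3.6, 2.9, 3.4, 3.8]
act    = [28, 24, 30, 22, 26, 29, 25, 23, 27, 31]
fy_gpa = [3.7, 2.9, 3.4, 2.5, 3.5, 3.0, 3.4, 2.7, 3.2, 3.6]

b0, b_gpa, b_act = ols_two_predictors(zscore(hs_gpa), zscore(act), fy_gpa)
print(f"standardized HS GPA effect: {b_gpa:.2f}, ACT effect: {b_act:.2f}")
```

The “two and a half times larger” comparison in our actual analysis is this kind of comparison: the ratio of the two standardized coefficients.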

This would suggest that one ought to prioritize the recalculated high school GPA over the standardized test score when evaluating a prospective student. But remember the questions I posed a few paragraphs back about an applicant whose test score and high school GPA don’t seem to match up? We felt like we needed to run one more test just in case.

To test the phenomenon of a test score and a high school GPA that appear to “disagree” with each other, we created a variable that reflected the relative gap between test score and high school GPA and tested the relationship between this variable and the first year cumulative GPA (I’ll gladly explain in more detail the steps we took to build this variable offline. Suffice it to say that, “We got our stats nerd on, and it was awesome.”) Interestingly, our findings closely mirrored our prior results. As the test score exceeded the high school GPA (i.e., as the test score represented an increasingly higher academic potential than the GPA), the first year cumulative GPA tended to drop. Conversely, when the high school GPA exceeded the test score, first year cumulative GPA tended to rise. Although there is a point at which this variable is no longer useful (e.g., if the test score is 35 and the high school GPA is 1.5, one starts to suspect something more nefarious might be at play), these findings corroborate our earlier tests. For Augustana applicants, recalculated high school GPA is a more accurate predictor of first year success than the standardized test score.
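Since the details of the variable’s construction are saved for offline conversation, the sketch below shows only one common approach (an assumption on my part, not necessarily the steps we took): z-score both metrics and take the difference, so a positive value flags an applicant whose test score looks stronger than their high school GPA, and a negative value flags the reverse.

```python
# One common way to build a "disagreement" variable: z-score both metrics
# and subtract. This is an illustrative assumption, not necessarily the
# exact construction used in the actual analysis. Numbers are hypothetical.

def zscore(xs):
    m = sum(xs) / len(xs)
    sd = (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
    return [(x - m) / sd for x in xs]

# Hypothetical applicants' ACT scores and recalculated HS GPAs.
act = [22, 25, 28, 31, 24, 27]
hs_gpa = [3.6, 3.2, 2.9, 2.7, 3.4, 3.1]

gap = [za - zg for za, zg in zip(zscore(act), zscore(hs_gpa))]
# gap[i] > 0: the test score "exceeds" the GPA in relative terms;
# gap[i] < 0: the GPA exceeds the test score.
```

A variable like this can then be entered into a regression predicting first-year cumulative GPA, which is the shape of the test described above.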

So if you are advising first year students, be careful about making assumptions about your students’ ability based on their test scores. Likewise, when there appears to be a gap between what the test score and the high school GPA suggest, noting which metric exceeds the other might tell very different stories about the needs of a given student.

Make it a good day,


Strap yourself in. It’s gonna be an awesome year!

Although every weekend before the start of a fall term seems to bring a surge of energy to campus, this fall feels just a little bit different. I’m not sure I can put my finger on it just yet, but what usually seems like a low hum of electrical current has been elevated to a palpable buzz throughout the quad. It’s almost as if we’ve crossed some sort of invisible threshold where a confluence of undercurrents has achieved a synergy that is about to burst into a kaleidoscopic firework display of sparkle and color.

Over the last 10 years, the Augustana student body has grown more diverse on a variety of dimensions. The graph below shows the growing proportions of students from five different demographic groups since 2006. The race/ethnic line (blue) represents the proportion of non-white students. The socioeconomic line (red) represents the proportion of Pell Grant recipients. The religious line (green) represents the proportion of students who identify with a belief system that is not Protestant or Catholic. The out-of-state line (purple) represents the proportion of students who are from states other than Illinois. And the international line (light blue) represents the students who are coming to Augustana from a country outside of the United States.


These demographic shifts don’t happen by chance, and we ought to recognize and applaud the policy decisions that helped to bolster these trends.

But without a concerted and sustained commitment to make the most of this blossoming tapestry of difference, we leave ourselves vulnerable to a devastating missed opportunity. It would be terribly disappointing if all of this effort to bring such a wealth of diversity to campus produced nothing more than larger groups of homogeneity sitting in separate corners of the Gerber dining hall.

Weaving our varied demographics into a single diverse tapestry will take a continual effort to create, cultivate, and sometimes even coerce meaningful interactions across difference. This will mean that all of us will have to recognize our very human tendency to find comfort in familiarity. Although I’m not suggesting that we shun the familiar, we must humbly recognize our frailty and make the choice to open ourselves to new relationships, new ideas, and new experiences.

I know that helping young people navigate such a complex challenge is not simple, and it certainly doesn’t happen overnight. And, to be clear, I’m definitely not suggesting that we should shove our students into situations that are beyond their capability or maturity. But if we are to weave the diverse tapestry that we imagine, then we will need to put our collective shoulder to the wheel and push, steadily, kindly, and purposefully; especially when the conversations turn difficult and our vulnerabilities and insecurities feel exposed.

We have gathered the resources together on this campus to make difference one of our most powerful assets. Now let’s get to work.

Make it a good day,


What kind of work goes into recruiting a freshman class?

As you have almost certainly seen by now, the number of tuition deposits received by the end of last week for next year’s incoming class had climbed to nearly 740; well above the number we had cautiously hoped to reach at the beginning of this recruiting cycle. In the context of a shrinking population of high school graduates in the Midwest, this is a genuinely impressive feat.

So, for the last Delicious Ambiguity post of the year, I thought I’d share some numbers that spell out the magnitude of this effort.

From June 1, 2016 through April 29, 2017, 3,123 prospective seniors (i.e., students who would start college in August, 2017) visited our campus. Averaged over that 47-week period, this works out to about 66 visits per week. Given the extended planning required to host any of the several Saturday admissions events, and given that prospective students visit campus even when classes aren’t in session or many of us might be enjoying a holiday week, it seems pretty clear that this office is running at a high clip all year long.

Moreover, the nature of recruiting students to a campus seems to have slowly shifted in recent decades so that relatively more time is required to recruit students after they have already been accepted for admission. This is where I found two numbers to be pretty astounding.

Between December 1, 2016 and April 29, 2017, the admissions counselors and student ambassadors sent 2,868 emails and made 6,145 phone calls to accepted applicants. That averages out to about 137 emails and about 293 phone calls per week. All of this communication happened on top of the multitude of in-person campus visits.

Moreover, these totals don’t include all of the emails and phone calls that faculty and coaches made to prospective students all year long.

So, at least for a few minutes this week, let’s lose the good ole Lutheran Midwestern reserve and congratulate ourselves unabashedly for a job well done. In addition, I think that the admissions staff and the crew of student ambassadors deserve a giant shout out. Well done, y’all!

Make it a good day, everybody . . . and have a wonderful summer,