Ideals, Metrics, and Myths (oh no!)

Educators have always been idealists. We choose to believe what we hope is possible, and that belief often keeps us going when things aren’t going our way. It’s probably what drove many of us to finish a graduate degree and what drives us to put our hearts into our work despite all the discouraging news about higher ed these days.

But an abundance of unchecked idealism can also be a dangerous thing, because the very same passion that can drive one to achieve can also blind one into believing something just because it seems like it ought to be so. Caught up in a belief that feels so right, we are often less likely to scrutinize the metrics we choose to measure ourselves by or compare ourselves with. Worse still, our repeated use of these unexamined metrics can become etched into institutional decision-making. Ultimately, the power of belief that once drove us to overcome imposing challenges can become our Achilles heel, because we are absolutely certain of things that may, in fact, not be so.

For decades, colleges have tracked the distribution of their class sizes (i.e., the number of classes enrolling 2-9, 10-19, 20-29, 30-39, 40-49, 50-99, and 100 or more students, respectively) as a part of something called the Common Data Set. The implication behind tracking this data point is that a higher proportion of smaller classes ought to correlate with a better learning environment. Since the mid-1980s, the U.S. News & World Report rankings of colleges and universities have included this metric in their formula, distilling it down to two numbers: the proportion of classes at an institution with 19 or fewer students (more is better) and the proportion of classes with 50 or more students (less is better). Two years ago, U.S. News added a twist by creating a sliding scale so that classes of 19 or fewer received the most credit, classes of 20-29, 30-39, and 40-49 received proportionally less credit, and classes of 50 or more received no credit. Over time these formulations have produced a powerful mythology across many postsecondary institutions: classes with 19 or fewer students are better than classes with 20 or more.
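To make the sliding-scale idea concrete, here is a minimal sketch of how such an index might be computed. The credit weights and band boundaries below are my own illustrative assumptions, not U.S. News’s published formula.

```python
# Hypothetical sliding-scale class-size index; the credit weights are
# illustrative assumptions, not U.S. News's actual formula.
CREDIT = {"<20": 1.00, "20-29": 0.75, "30-39": 0.50, "40-49": 0.25, "50+": 0.00}

def size_band(enrollment: int) -> str:
    """Map a section's enrollment to the class-size band it falls in."""
    if enrollment < 20:
        return "<20"
    if enrollment < 30:
        return "20-29"
    if enrollment < 40:
        return "30-39"
    if enrollment < 50:
        return "40-49"
    return "50+"

def class_size_index(enrollments: list[int]) -> float:
    """Average credit across all sections; higher means more small classes."""
    return sum(CREDIT[size_band(n)] for n in enrollments) / len(enrollments)

# Example: eight sections of varying sizes
print(class_size_index([12, 18, 22, 27, 35, 48, 60, 110]))  # 0.53125
```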

This raises a pretty important question: are those cut points (19/20, 29/30, etc.) grounded in anything other than an arbitrary application of our base-10 numbering system?

Our own fall term IDEA course feedback data provides an opportunity to test the validity of this metric. The overall distribution of class sizes is almost perfectly bell-shaped, with almost 80% of courses receiving a robust response rate. Moreover, IDEA’s aggregate dataset allows us to compare three useful measures of the student learning experience across all courses: a student-reported proxy of learning gains called the “progress on relevant objectives” (PRO) score (for a short explanation of the PRO score with additional links for further information, click here), the student perception of the instructor, and the student perception of the course. The table below spells out the average response scores for each measure across eight different categories of class size. Each average score comes from a 5-point response scale (converted to a range of 1-5). The PRO score response options range from “no progress” to “exceptional progress,” and the perception of instructor and course excellence response options range from “definitely false” to “definitely true” (to see the actual items on the survey, click here). For this analysis, I’ve only included courses that exceed a two-thirds (66.67%) response rate.

Class Size PRO Score Excellent Teacher Excellent Course
6-10 students (35 classes) 4.24 4.56 4.38
11-15 students (85 classes) 4.12 4.38 4.13
16-20 students (125 classes) 4.08 4.29 4.01
21-25 students (71 classes) 4.18 4.40 4.27
26-30 students (37 classes) 4.09 4.31 4.18
31-35 students (9 classes) 3.90 4.13 3.81
36-40 students (11 classes) 3.64 3.84 3.77
41 or more students (8 classes) 3.90 4.04 3.89

First, classes enrolling 6-10 students appear to produce notably higher scores on all three measures than any other category. Second, there doesn’t seem to be much difference between subsequent categories until we get to classes enrolling 31 or more students (further statistical testing supports this observation). Based on our own data (and assuming that the fall 2017 data does not differ significantly from other academic terms), if we were going to replicate the notion that class size distribution correlates with the quality of the overall learning environment, we might be inclined to choose only two cut points to create three categories of class size: classes with 10 or fewer students, classes with between 11 and 30 students, and classes with more than 30 students.
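For readers curious about the mechanics behind that statistical check, here is a minimal sketch of one way to run it. The file name, column names, and the ANOVA-first approach are assumptions for illustration, not the actual analysis or the IDEA export format.

```python
# Sketch of a class-size comparison on hypothetical course-level data.
import pandas as pd
from scipy import stats

df = pd.read_csv("idea_fall_courses.csv")  # hypothetical extract: one row per course

# Keep only courses exceeding a two-thirds response rate, as in the table above
df = df[df["response_rate"] > 2 / 3]

# Bin enrollments into the same categories used in the table
bins = [5, 10, 15, 20, 25, 30, 35, 40, float("inf")]
labels = ["6-10", "11-15", "16-20", "21-25", "26-30", "31-35", "36-40", "41+"]
df["size_band"] = pd.cut(df["enrollment"], bins=bins, labels=labels)

# Average PRO score (and counts) by class-size band
print(df.groupby("size_band")["pro_score"].agg(["count", "mean"]).round(2))

# One-way ANOVA across bands as a first pass at "are these means different?"
groups = [g["pro_score"].values for _, g in df.groupby("size_band") if len(g) > 0]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```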

However, further examination of the smallest category of classes indicates that these courses are almost entirely upper-level major courses. Since we know that all three metrics tend to score higher for upper-level major courses because the students in them are more intrinsically interested in the subject matter than students in lower-level courses (classes that often also meet general education requirements), we can’t attribute the higher scores for this group to class size per se. This leaves us with two general categories: classes with 30 or fewer students, and classes with more than 30 students.

How does this comport with the existing research on class size? Although there isn’t much out there, two brief overviews (here and here) don’t find much of a consensus. Some studies suggest that class size is not relevant, others find a positive effect on the learning experience as classes get smaller, and a few others indicate a slight positive effect as classes get larger(!). Especially in light of developments in pedagogy and technology over the past two decades, a 2013 essay that spells out some findings from IDEA’s extensive dataset suggests that other factors almost certainly complicate the relationship between class size and student learning.

So what do we do with all this? Certainly, mandating that all class enrollments sit just below 30 would be, um, stupid. There is a lot more to examine before anyone should march out onto the quad and declare a “class size” policy. One finding from researchers at IDEA that might be worth exploring on our own campus is the variation of learning objectives selected and achieved by class size. IDEA found that smaller classes might be more conducive to more complex (sometimes called “deeper”) learning objectives, while larger classes might be better suited for learning factual knowledge, general principles, or theories. If class size does, in fact, set the stage for different learning objectives, it might be worth assessing the relationship between learning objectives and class size at Augustana to see if we are taking full advantage of the learning environment that a smaller class size provides.

And what should we do about the categories of class sizes that U.S. News uses in their college rankings formula? As family incomes remain stagnant, tuition revenue continues to lag behind institutional budget projections, and additional resources seem harder to come by, that becomes an increasingly valid question. Indeed, there might be a circumstance where an institution ought to continue using the Common Data Set class size index to guide the way that it fosters an ideal classroom learning environment. And it is certainly reasonable to take other considerations (e.g., faculty workload, available classroom space, intended learning outcomes of a course, etc.) into account when determining an institution’s ideal distribution of class enrollments. But if institutional data suggests that there is little difference in the student learning experience between classes with 16-20 students and classes with 21-25 students, it might be worth revisiting the rationale that an institution uses to determine its class size distribution. No matter what an institution chooses to do, it seems like we ought to be able to justify our choices based on the most effective learning environment that we can construct rather than an arbitrarily defined and externally imposed metric.

Make it a good day,

Mark

Have a wonderful Thanksgiving!

A short post for a short week . . .

We talk a lot about the number of students at Augustana who have multiple talents and seem like they will succeed in life no matter what they choose to do.  So many of them seem to qualify as “MultiPotentialites”.

Although it makes sense that we would first see this phenomenon among our students, I think we might be missing another group of particularly gifted folks all around us. So many of you, the Augustana faculty and staff, have unique talents, insightful perspectives, and unparalleled interpersonal skills that make us good at what we do. Almost every day I see someone step into a gap and take care of something that just needs to get done. Maybe we are just Midwestern humble, or maybe we are just so busy scrambling to put out one fire after another that we never really get the chance to pause and see the talent we all bring to this community.

So I want to make sure that I thank all of you.  I know this might sound hokey.  Maybe it is.

So what.

Make it a good Thanksgiving weekend.

Mark

Some anecdotes and data snippets from our first experience with the IDEA online course feedback system

Welcome to Winter Term! Maybe some of you saw the big snowflakes that fell on Sunday morning. Even though I know I am in denial, it is starting to feel like fall might have slipped from our collective grasp over the past weekend.

But on the bright side (can we get some warmth with that light?), during the week-long break between fall and winter term, something happened that had never happened since we switched to the IDEA course feedback system. Last Wednesday morning, only 48 hours after you had entered your final grades, your IDEA course feedback was already processed and ready to view. All you had to do was log in to your faculty portal and check it out! (You can find the link to the IDEA Online Course Feedback Portal on your Arches faculty page.)

I’m sure I will share additional observations and data points from our first experience with the online system this week during one of the three “Navigating your Online IDEA Feedback Report” sessions on Monday, Tuesday, and Thursday, starting just after 4 PM in Olin 109. A not-so-subtle hint: come to Olin 109 on Monday, Tuesday, or Thursday of this week (Nov. 13, 14, and 16) at or just after 4 PM to walk through the online feedback reports and maybe learn one or two cool tricks with the data. Bring a laptop if you’ve got one, just in case we run out of computer terminals.

But in the meantime, I thought I’d share a couple of snippets that I found particularly interesting from our first online administration.

First, it seems that no news about problems logging in to the system turned out to be extremely good news. I was fully prepped to solve all kinds of connectivity issues and brainstorm all sorts of last-minute solutions. But I only heard from one person about one class having trouble getting on to the system . . . and that was when the internet was down all over campus for about 45 minutes. Otherwise, it appears that folks were able to administer the online course feedback forms in class or get their students to complete them outside of class with very little trouble. Even in the basement of Denkmann! This doesn’t mean that we won’t have some problems in the future, but at least with one term under our collective belt . . . maybe the connectivity issue isn’t nearly as big as we worried it might be.

Second, our overall student response rates were quite strong. Of the 467 course sections that could have administered IDEA online, about 74% achieved a response rate of 75% or higher. Furthermore, several instructors tested what might happen if they asked students to complete the IDEA form online outside of class (incentivized with an offer of extra credit to the class if the overall response rate reached a specific threshold). I don’t believe that any of these instructors’ classes failed to meet the established thresholds.
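As a minimal sketch of that response-rate summary (the file and column names here are assumptions, not the actual IDEA export):

```python
# Proportion of sections hitting a 75% response rate, on hypothetical data.
import pandas as pd

sections = pd.read_csv("idea_sections_fall2017.csv")  # hypothetical: one row per section

strong = (sections["response_rate"] >= 0.75).mean()
print(f"{strong:.0%} of {len(sections)} sections reached a 75% response rate or better")
```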

In addition, a preliminary examination of the comments that students provided suggests that students may actually have written more, and more detailed, comments than they previously provided on paper-and-pencil forms. This would seem to corroborate feedback from a few faculty members who indicated that their students were thankful that their comments would now be truly anonymous and no longer potentially identifiable through the instructor’s familiarity with their handwriting.

Finally, in response to faculty concerns that the extended student access to their IDEA forms (i.e., students were able to enter data into their response forms until the end of finals, no matter when they initially filled out their IDEA forms) might lead to students going back into the system and exacting revenge on instructors in response to a low grade on a final exam or paper, I did a little digging to see how likely this behavior might be. In talking to students about this option during week 10 of the term, I got two kinds of responses. Several international students said that they appreciated this flexibility because they had been unable to finish typing their comments in the time allotted in class; many international students (particularly first-year international students) find that it takes them much longer than domestic students to express complex thoughts in written English. I also got the chance to ask a class of 35(ish) students whether they were likely to go back into the IDEA online system and change a response several days after they had completed the form. After giving me a bewildered look for an uncomfortably long time, one student finally blurted out, “Why would we do that?” Upon further probing, the students said that they couldn’t imagine a situation where they would care enough to take the time to find the student portal and change their responses. When I asked, “Even if something happened at the end of the term, like a surprisingly bad grade on a test or a paper that you felt was unfair?” the students responded that by the end of the term they would already know what they thought of that instructor and that class. Even a surprisingly low grade on a final paper or test wouldn’t change an impression they had formed long before the final test or paper.

To see whether those students’ speculation about their own behavior matched IDEA’s own data, I talked to the CEO of IDEA to ask what proportion of students go back into the system and change their responses, and whether that was a question that faculty at other institutions had asked. He told me that he had heard that concern raised repeatedly since they introduced the online format. As a result, they have been watching that data point closely. Across all of the institutions that have used the online system over the last several years, only 0.6% of all students actually go back into the system and edit their responses. He did not know what proportion of that small minority altered their responses in a substantially negative direction.
Since the first of my three training sessions starts in about an hour, I’m going to stop now. But so far, it appears that moving to IDEA online has been a pretty positive thing for students and our data. Now I hope we can make the most of it for all of our instructors. So I’d better get to work prepping for this week!

Make it a good day,

Mark

“Not so fast!” said the data . . .

I’ve been planning to write about retaining men for several weeks. I had it all planned out. I’d chart the number of times in the past five years that male retention rates have lagged behind female retention rates, suggest that this might be an issue for us to address, clap my hands together, and publish the post. Then I looked closer at the numbers behind those pesky percentages and thought, “Now this will make for an interesting conversation.”

But first, let’s get the simple stuff out of the way. Here are the differences in retention rates for men and women over the last five years.

Cohort Year Men Women
2016 83.2% 89.1%
2015 85.6% 91.3%
2014 85.0% 86.8%
2013 83.2% 82.7%
2012 78.6% 90.1%

It looks like a gap has emerged in the last four years, right? Just in case you’re wondering (especially if you looked more carefully at all five years listed in the table), “emerged” isn’t really the most accurate word choice. It looks like the 2013 cohort was more of an anomaly than anything else since the 2012 cohort experienced the starkest gap in male vs. female retention of any in the past five years. Looking back over the three years prior to the start of this table, this gap reappears within the 2011, 2010, and 2009 cohorts.

But in looking more closely at the number of men and women who enrolled at Augustana in each of those classes, an interesting pattern appears that adds at least one layer of complexity to this conversation. Here are the numbers of enrolled and retained men and women in each of the last five years.

Cohort Year   Men Enrolled   Men Retained   Women Enrolled   Women Retained
2016          304            253            393              350
2015          285            244            392              358
2014          294            250            432              375
2013          291            242            336              278
2012          295            232            362              326

Do you see what I see?  Look at the largest and smallest numbers of men enrolled and the largest and smallest numbers of men retained. In both cases, we are talking about a difference of about 20 male students (for enrolled men: 304 in 2016 for a high and 285 in 2015 for a low; for retained men, 253 in 2016 for a high and 232 in 2012 for a low). No matter the total enrollment in a given first-year class, these numbers seem pretty consistent. By contrast, look at the largest and smallest numbers of women enrolled and retained. The differences between the high and the low of either enrolled or retained women are much greater – by almost a factor of five.
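If you want to check that arithmetic, here is a small sketch that computes the high/low spreads directly from the numbers in the table above.

```python
# High/low spreads for enrolled and retained students, by sex, from the table above.
enrolled = {"Men": [304, 285, 294, 291, 295], "Women": [393, 392, 432, 336, 362]}
retained = {"Men": [253, 244, 250, 242, 232], "Women": [350, 358, 375, 278, 326]}

for label, data in [("enrolled", enrolled), ("retained", retained)]:
    for sex, counts in data.items():
        print(f"{sex} {label}: high {max(counts)}, low {min(counts)}, "
              f"spread {max(counts) - min(counts)}")
# Men's spreads come out around 20; women's spreads come out near 100.
```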

So what does it mean when we put these retention rate gaps and the actual numbers of men and women enrolled/retained into the same conversation? For me, this exercise is an almost perfect example of how quantitative data that is supposed to reveal deep and incontrovertible truth can actually do exactly the opposite. Data just isn’t clean, ever.

Situating these data within the larger conversation about male and female rates of educational attainment, our own findings begin to make some sense. Nationally, the educational attainment gap between men and women starts long before college. Men (boys) finish high school at lower rates than women. Men go to college at lower rates than women. Men stay in college at lower rates than women. And men graduate from college at lower rates than women. So when the size of our first-year class goes up, it shouldn’t be all that surprising that the increase in numbers is explained by a disproportionate increase in women.

Finally, we have long known (and should also regularly remind ourselves) that retention rates are a proxy for something more important: student success. And student success is an outcome of student engagement in the parts of the college experience that we know help students grow and learn. On this score, we have plenty of evidence to suggest that we ought to focus more of our effort on male students. I wrote about one such example last fall when we examined some differences between men and women in their approaches toward social responsibility and volunteering rates. A few years back, I wrote about another troubling finding involving a sense of belonging on campus among Black and Hispanic men.

I hope we can dig deeper into this question over the next several weeks.  I’ll do some more digging into our own student data and share what I find. Maybe you’ve got some suggestions about where I might look?

Make it a good day,

Mark

Big Data, Blindspots, and Bad Statistics

As some of you know, last spring I wrote a contrarian piece for The Chronicle of Higher Education that raised some cautions about unabashedly embracing big data. Since then, I’ve found two TED Talks that add to the list of reasons to be suspicious of an overreliance on statistics and big data.

Tricia Wang outlines the dangers of relying on historical data at the expense of human insight when trying to anticipate the future.

Mona Chalabi describes three ways to spot a suspect statistic.

Both of these presenters reinforce the importance of triangulating information from quantitative data, individual or small-group expertise, and human observation. Even then, all of this information can’t eliminate ambiguity. Any assertion of certainty is almost always one more reason to be increasingly skeptical.

So if you think I’m falling victim to either of these criticisms, feel free to call me out!

Make it a good day,

Mark

Improving Interfaith Understanding at Augustana

This is a massively busy week at Augie. We had a packed house of high school students visiting on Monday (I’ve never seen the cafeteria so full of people ever!), the Board of Trustees will gather on campus for meetings on Thursday and Friday, and hundreds of alumni and family will arrive for Homecoming over the weekend. With all of this hustle and bustle, you probably wouldn’t have noticed three unassuming researchers from the Interfaith Diversity Experiences and Attitudes Longitudinal Survey (IDEALS) quietly talking to faculty, staff, and students on Monday and Tuesday. They were on campus to find out more about our interfaith programs, experiences, and emphasis over the past several years.

Apparently, we are doing something right when it comes to improving interfaith understanding at Augustana. Back in the fall of 2015, our first-year cohort joined college freshmen from 122 colleges and universities around the country to participate in a 4-year study of interfaith understanding development. The study was designed to collect data from those students at the beginning of the first year, during the fall of the second year, and in the spring of the fourth year. In addition to charting the ways in which these students changed during college, the study was also constructed to identify the experiences and environments that influence this change.

As the research team examined the differences between the first-year and second-year data, an intriguing pattern began to emerge. Across the entire study, students didn’t change very much. This wasn’t so much of a surprise, really, since the Wabash National Study of Liberal Arts Education had found the same thing. However, unlike students across the entire study, Augustana students consistently demonstrated improvement on most of the measures in the study. This growth was particularly noticeable in areas like appreciative knowledge of different worldviews, appreciative attitudes toward different belief systems, and global citizenship. Although the effect sizes weren’t huge, a consistent pattern of subtle but noticeable growth suggested that something good might be happening at Augustana.

However, using some fancy statistical tricks to generate an asterisk or two (denoting statistical significance) doesn’t necessarily help us much in practical terms. Knowing that something happened doesn’t tell us how we might replicate it or how we might do it even better. This is where the qualitative ninjas need to go to work and talk to people (something us quant nerds haven’t quite figured out how to do yet). With the number-crunching as a guide, the real gems of knowledge are more likely to be unearthed through focus groups and interviews, where researchers can delve deeply into the experiences and observations of folks on the ground.

So what did our visiting team of researchers find? They hope to have a report of their findings for us in several months. So far all I could glean from them is that Augustana is a pretty campus with A LOT of steps.

But there is a set of responses from the second-year survey data that might point in a direction worth contemplating. There is a wonderfully titled grouping of items called “Provocative Encounters with Worldview Diversity,” from which the responses to three statements seem to set our students’ experience apart from students across the entire study as well as students at institutions with a similar Carnegie Classification (Baccalaureate institutions – arts and sciences). In each case, we see a difference in the proportion of students who responded “all the time” or “frequently.”

  1. In the past year, how often have you had class discussions that challenged you to rethink your assumptions about another worldview?
    1. Augustana students: 51%
    2. Baccalaureate institutions: 43%
    3. All institutions in the study: 33%
  2. In the past year, how often have you felt challenged to rethink your assumptions about another worldview after someone explained their worldview to you?
    1. Augustana students: 44%
    2. Baccalaureate institutions: 34%
    3. All institutions in the study: 27%
  3. In the past year, how often have you had a discussion with someone of another worldview that had a positive influence on your perceptions of that worldview?
    1. Augustana students: 48%
    2. Baccalaureate institutions: 45%
    3. All institutions in the study: 38%

In the past several years, there is no question that we have been trying to create these kinds of interactions through Symposium Day, Sustained Dialogue, course offerings, a variety of co-curricular programs, and increased diversity among our student body. Some of the thinking behind these efforts dates back six or seven years when we could see from our Wabash National Study Data and our prior NSSE data that our students reported relatively fewer serious conversations with people who differed from them in race/ethnicity and/or beliefs/values. Since a host of prior research has found that these kinds of serious conversations across difference are key to developing intercultural competence (a skill that certainly includes interfaith understanding), it made a lot of sense for us to refine what we do so that we might improve our students’ gains on the college’s learning outcomes.

The response to the items above suggests to me that the conditions we are trying to create are indeed coming together. Maybe, just maybe, we have successfully designed elements of the Augustana experience that are producing the learning that we aspire to produce.

It will be very interesting to see what the research team ultimately reports back to us. But for now, I think it’s worth noting that there seems to be early evidence that we have implemented intentionally designed experiences that very well might be significantly impacting our students’ growth.

How about that?!

Make it a good day,

Mark

Something to think about for the next Symposium Day

Symposium Day at Augustana College has grown into something truly impressive.  The concurrent sessions hosted by both students and faculty present an amazing array of interesting approaches to the theme of the day. The invited speakers continue to draw large crowds and capture the attention of the audience. And we continue to cultivate in the Augustana culture a belief in owning one’s learning experience by hosting a day in which students choose the sessions they attend and talk to each other about their reactions to those sessions.

Ever since its inception, we’ve emphasized the value of integrating Symposium Day participation into course assignments. Last year, we tested the impact of such curricular integration and found that Symposium Day mattered for first-year student growth in a clear and statistically significant way. We also know that graduating classes have increasingly found Symposium Day to be a valuable learning opportunity. Since 2013, the average response to the statement “Symposium Day activities influenced the way I now think about real-world issues,” has risen steadily. In 2017, 46% of seniors agreed or strongly agreed with that statement.

So what more could be written about an idea that has turned out to be so successful? Well, it turns out that when an organization values integration and autonomy, sometimes those values can collide and produce challenging, albeit resolvable, tensions. This year a number of first-year advisors encountered advisees who had assignments from different classes requiring them to be at different presentations simultaneously. Not surprisingly, these students were stressing about how they were going to pull this off and were coming up with all sorts of schemes to be in two places at once.

In some cases, the students didn’t know that they might be able to see a video recording of one of the conflicting presentations (although no one was sure whether that recording would be available before their assignment was due). But in other cases, there was simply no way for the student to attend both sessions.

This presents us all with a dilemma. How do we get the highest possible proportion of students into course assignments that integrate their courses with Symposium Day without creating a situation where students are required to be in two places at once or run around like chickens with their proverbial heads cut off?

One possibility might be some sort of common assignment that originates in the FYI course. Another possibility might reside in establishing some sort of guidelines for Symposium Day assignments so that students don’t end up required by two different classes to be in two different places at the same time. I don’t have a good answer, nor is it my place to come up with one (lucky me!).

But it appears that our success in making Symposium Day a meaningful educational experience for students has created a potential pitfall that we ought to avoid. One student told me that the worst part about the assignments she had to complete wasn’t the frustration of having homework. Instead, the worst part for her was that the session she really wanted to see, “just because it looked really interesting,” was at the same time as the two sessions she was required to attend.

It would be ironic if we managed to undercut the way that Symposium Day participation seems to foster our students’ intrinsic motivation as learners because we got so good at integrating class assignments with Symposium Day.

Something to think about before we start planning for our Winter Term event.

Make it a good day,

Mark

Does Our Students’ Interest in Complex Thinking Change over Four Years?

One of the best parts of my job is teaming up with others on campus to help us all get better at doing what we do. Over the past seven years, I’ve been lucky enough to work with almost every academic department or student life office on projects that have genuinely improved the student experience. But if I had to choose, I think my favorite partnership is the annual student learning assessment initiative that combines the thoughtfulness (and sheer intellectual muscle) of the Assessment for Improvement Committee with the longitudinal outcome data (and nerdy statistical meticulousness) from the Office of Institutional Research and Assessment.

For those of you who don’t know about this project already, the annual student learning assessment initiative starts anew every summer – although it takes about four years before any of you see the results. First, the IR office chooses a previously validated survey instrument that aligns with one of Augustana’s three broad categories of learning outcomes. Second, we give this survey to the incoming first-year class just before the fall term starts. Third, when these students finish their senior year, we include the same set of questions in the senior survey, giving us a before-and-after set of data for the whole cohort. Fourth, after linking all of that data with freshman and senior survey data, admissions data, course-taking data, and student readiness survey results, we explore both the nature of that cohort’s change on the chosen outcome and the experiences or other characteristics that might predict positive or negative change on that outcome.

The most recent graduating cohort (spring 2017) provided their first round of data in the fall of 2013. Since we had already started assessment cycles of intrapersonal conviction growth (the 2011 cohort) and interpersonal maturity growth (the 2012 cohort), it was time to turn our attention to intellectual sophistication (the category that includes disciplinary knowledge, critical thinking and information literacy, and quantitative literacy). After exploring several possible assessment instruments, we selected an 18-item survey called the Need for Cognition Scale. This instrument tries to get at the degree to which the respondent is interested in thinking about complicated or difficult problems or ideas. Since the Need for Cognition Scale had been utilized by the Wabash National Study of Liberal Arts Education, that study had already produced an extensive review of the ways in which this instrument correlated with aspects of intellectual sophistication as we had defined it. And since this instrument is short (18 questions) and cheap (free), we felt very comfortable putting it to work for us.

Fast forward four years and, after some serious number crunching, we have some interesting findings to share!

Below I’ve included the average scores from the 2013 cohort when they took the Need for Cognition Scale in the fall of their first year and in the spring of their fourth year. Keep in mind that scores on this scale range from 1 to 5.

Fall, 2013 3.43
Spring, 2017 3.65

The difference between the two scores is statistically significant, meaning that we can confidently claim that our students are becoming more interested in thinking about complicated or difficult problems or ideas.
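For the statistically curious, here is a minimal sketch of one way such a pre/post comparison could be run, assuming (hypothetically) that each graduate’s first-year and senior responses can be matched by student. The file and column names are illustrative, and the actual analysis may have used a different test.

```python
# Paired pre/post comparison of Need for Cognition scores on hypothetical matched data.
import pandas as pd
from scipy import stats

scores = pd.read_csv("nfc_2013_cohort.csv")  # hypothetical: one row per student

paired = scores.dropna(subset=["nfc_fall_2013", "nfc_spring_2017"])
t_stat, p_value = stats.ttest_rel(paired["nfc_spring_2017"], paired["nfc_fall_2013"])

print(f"Fall 2013 mean:   {paired['nfc_fall_2013'].mean():.2f}")
print(f"Spring 2017 mean: {paired['nfc_spring_2017'].mean():.2f}")
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```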

For comparison purposes, it’s useful to triangulate these results against Augustana’s participation in the Wabash National Study between 2008 and 2012. Amazingly, that sample of students produced remarkably similar scores. In the fall of 2008, they logged a pre-test mean score of 3.43. Four years later, they registered a post-test mean score of 3.63. Furthermore, the Wabash National Study overall results suggest that students at other small liberal arts colleges made similar gains over the course of four years.

It’s one thing to look at the overall scores, but the proverbial devil is always in the details. So we’ve made it a standard practice to test for differences on any outcome (e.g., critical thinking, intercultural competence) or perception of experience (e.g., sense of belonging on campus, quality of advising guidance, etc.) by race/ethnicity, sex, socioeconomic status, first-generation status, and pre-college academic preparation. This is where we’ve often found the real nuggets that have helped us identify paths to improvement.

Unlike last year’s study of intercultural competence, we found no statistically significant differences by race/ethnicity, sex, socioeconomic status, or first-generation status in either where these different types of students scored when they started college or how much they had grown by the time they graduated. This was an encouraging finding because it suggests that the Augustana learning experience is equally influential for a variety of student types.

However, we did find some interesting differences among students who come to Augustana with different levels of pre-college preparation. These differences were almost identical whether we used our students’ ACT scores or high school GPAs to measure pre-college academic preparation. Below you can see how those differences played out based upon incoming test score.

ACT Score Fall, 2013 Spring, 2017
Bottom 3rd ( < 24) 3.33 3.54
Middle 3rd   (24-28) 3.42 3.63
Top 3rd ( > 28) 3.59 3.77

As you can see, all three groups of students grew similarly over four years. But the students entering with a bottom-third ACT score started well behind the students who entered with a top-third ACT score. Moreover, by the time this cohort graduated, the bottom-third ACT students had not yet reached the entering score of the top-third ACT students (3.54 compared with 3.59).
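As a quick check, the gains implied by the table can be computed directly from the reported means.

```python
# First-year to fourth-year gains for each ACT tercile, from the table above.
means = {
    "Bottom 3rd (<24)": (3.33, 3.54),
    "Middle 3rd (24-28)": (3.42, 3.63),
    "Top 3rd (>28)": (3.59, 3.77),
}
for group, (pre, post) in means.items():
    print(f"{group}: gain = {post - pre:+.2f}")
# All three gains land at roughly +0.2, which is why the growth looks so similar.
```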

So what should we make of these findings? First, I think it’s worth noting that once again we have evidence that on average our students grow on a key aspect of intellectual sophistication. This is worth celebrating. Furthermore, our student growth doesn’t appear to vary across several important demographic characteristics, suggesting that, on at least one learning metric, we seem to have achieved some outcome equity. And although there appear to be differences by pre-college academic preparation in where those students end up, the change from first-year to fourth-year across all three groups is almost identical. This suggests something that we might gloss over at first, namely that we seem to be accomplishing some degree of change equity. In other words, no matter where a student is when they arrive on campus, we are able to help them grow while they are here.

At the end of our presentation of this data last Friday afternoon, we asked everyone in attendance to hypothesize about the kinds of student experiences that might impact change on this outcome. Everyone wrote their hypotheses (some suggested only one idea while others, who shall yet remain nameless, suggested more than ten!) on a 4×6 card that we collected. Over the next several months, we will do everything we can to test each hypothesis and report back to the Augustana community what we found at our winter term presentation.

Oh, you say through teary eyes that you missed our presentation? Well, lucky for you (and us) we are still taking suggestions. So if you have any hypotheses, speculations, intuition, or just outright challenges that you want to suggest, bring it! You can post your ideas in the comments below or email me directly.

I can’t wait to start digging into the data to find what mysteries we might uncover! And look for our presentation of these tests as an upcoming winter term Friday Conversation.

Make it a good day,

Mark

Just when you think you’ve got everything figured out . . .

This post started out as nothing more than a humble pie correction; something similar to what you might find at the bottom of the first page of your local newspaper (if you are lucky enough to still have a local newspaper). But as I continued to wrestle with what I was trying to say, I realized that this post wasn’t really about a correction at all. Instead, this post is about what happens when a changing student population simply outgrows the limits of the old labels you’ve been using to categorize them.

Last week, I told you about a stunningly high 89.8% retention rate for Augustana’s students of color, more than five percentage points higher than our retention rate for white students. During a meeting later in the week, one of my colleagues pointed out that the total number of students of color from which we had calculated this retention rate seemed high. Since this colleague happens to be in charge of our Admissions team, it seemed likely that he would know a thing or two about last year’s incoming class. At the same time, we’ve been calculating this retention rate in the same way for years, so it didn’t seem possible that we suddenly forgot how to run a pretty simple equation.

Before I go any further, let’s get the “correction,” or maybe more precisely “clarification,” out of the way. Augustana’s first-to-second year retention rate for domestic students of color this year is 87.3%, about a point higher than the retention rate for domestic white students (86%). Still impressive, just not quite as flashy. Furthermore, our first-to-second year retention rate for international students is 88.4%, almost two percentage points higher than our overall first-to-second year retention rate of 86.5%. Again, this is an impressive retention rate among students who, in most cases, are also dealing with the extra hurdle of adapting to (if not learning outright) Midwestern English.

So what happened?

For a long time, Augustana has used the term “multicultural students” as a way of categorizing all students who aren’t white American citizens raised in the United States. Even though the term is dangerously vague, when almost 95% of Augustana’s enrollment was white domestic students (less than two decades ago) there was a reasonable logic to constructing this category. Just as categories can become too large to be useful, so too can categories become so small that they dissipate into a handful of individuals. And even the most caring organization finds it pretty difficult to explicitly focus itself on the difficulties of a few individuals.

Moreover, this categorization allowed us to construct a group large enough to quantify in the context of other larger demographic groups. For example, take one group of students from which 10 of 13 return for a second year and compare it with another group of students from which 200 of 260 return for a second year. Calculated as a proportion, both groups share the same retention rate. But in practice, the retention success for each group seems very different; one group lost very few students (3) while the other group lost a whole bunch (60). In the not so distant past when an Augustana first-year class would include maybe 20 domestic students of color and 5 international students (out of a class of about 600), grouping these students into the most precise race, ethnicity, and citizenship categories would almost guarantee that these individuals would appear as intriguing rarities or, worse yet, quaint novelties. Under these circumstances, it made a lot of sense to combine several smaller minority groups into one category large enough to 1) conceptualize as a group with broadly similar needs and challenges and 2) quantify in comparable terms to other distinct groups of Augustana students.

In no way am I arguing that the term “multicultural” was a perfect label. As our numbers of domestic students of color increased, the term grew uncomfortably vague. Equally problematic, some inferred that we considered the totality of white students to be a monoculture or that we considered all multicultural students to be overflowing with culture and heritage. Both of these inferences weren’t necessarily accurate, but as with all labels, seemingly small imperfections can morph into glaring weaknesses when the landscape changes.

Fast forward fifteen years. Entering the 2017-18 academic year, our proportion of “multicultural students” has increased by over 400%, and the combination of domestic students of color and international students makes up about 27% of our total enrollment – roughly 710 students. Specifically, we now enroll enough African-American students, Hispanic students, and international students to quantitatively analyze their experiences separately. To be clear, I’m not suggesting that we should prioritize one method of research (quantitative) over another (qualitative). I am arguing, however, that we are better able to gather evidence that will inform genuine improvement when we have both methods at our disposal.

After a conversation with another colleague who is a staunch advocate for African-American students, I resolved to begin using the term “students of color” instead of multicultural students. Although it’s taken some work, I’m making slow progress. I was proud of myself last week when I used the term “students of color” in this blog without skipping a beat.

Alas, you know the story from there. Although I had used an arguably more appropriate term for domestic students of color in last week’s post, I had not thought through the full implications of shifting away from an older framework for conceptualizing difference within our student body. Clearly, one cannot simply replace the term “multicultural students” with “students of color” and expect the new term to adequately include international students. At the same time, although the term “multicultural” implies an air of globalism, it could understandably be perceived to gloss over important domestic issues of race and inequality. If we are going to continue to enroll larger numbers across multiple dimensions of difference, we will have to adopt a more complex way of articulating the totality of that difference.

Mind you, I’m not just talking about counting students more precisely from each specific racial and ethnic category – we’ve been doing that as long as we’ve been reporting institutional census data to the federal government. I guess I’m thinking about finding new ways to conceptualize difference across all of its variations so that we can adopt language that better matches our reality.

I’d like to propose that all of us help each other shift to a terminology that better represents the array of diversity that we’ve worked so hard to achieve, and continue to work so hard to sustain. I know I’ve got plenty to learn (e.g., when do I use the term “Hispanic” and when do I use the term “Latinx”?), and I’m looking forward to learning with you.

And yes, I’ll be sure to reconfigure our calculations in the future. Frankly, that is the easy part. Moreover, I’ll be sure to reconceptualize the way I think about student demographics. We’ve crossed a threshold into a new dimension of diversity within our own student body. Now it’s time for the ways that we quantify, convey, and conceptualize that diversity to catch up.

Make it a good day,

Mark

Retention, Realistic Goals, and a Reason to be Proud

When we included metrics and target numbers in the Augustana 2020 strategic plan, we made it clear to the world how we would measure our progress and our success. As we have posted subsequent updates about our efforts to implement this strategic plan, a closer look into those documents exposes some of the organizational challenges that can emerge when a goal that seemed little more than a pipe dream suddenly looks like it might just be within our grasp.

Last week we calculated our first-to-second year retention numbers for the cohort that entered in the fall of 2016. As many of you know, Augustana 2020 set a first-to-second year retention rate goal of 90%, a number that we’d never come close to before. In fact, colleges enrolling students similar to ours top out at retention rates in the upper 80s. But we decided to set a goal that would stretch us outside of this range. To come up with this goal, we asked ourselves, “What if the stars aligned with the sun and the moon and we retained every single student that finished the year in good academic standing?” Under those conditions, we might hit a 90% retention rate. A pipe dream? Maybe. But why set a goal if it doesn’t stretch us a little bit? So we stuck that number in the document and thought to ourselves, “This will be a good number to shoot for over the next five years.”

Last year (fall of 2016), we were a little stunned to find that we’d produced an overall first-to-second year retention rate of 88.9%. Sure, we had instituted a number of new initiatives, tweaked a few existing programs, and cranked up the volume on our prioritizing-retention megaphone to eleven. But we weren’t supposed to have had so much success right away. To put this surprise in the context of real people, an 88.9% retention rate meant that we missed our 90% goal by a whopping seven students! SEVEN!!! Even if we had every retention trick in the book memorized, out of a class of 697, an 88.9% retention rate is awfully close to perfect.
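For anyone checking the arithmetic, the “seven students” figure comes from comparing the 90% goal against the 88.9% result for a class of 697.

```python
# How far an 88.9% retention rate falls short of a 90% goal for a class of 697.
class_size = 697
retained = round(0.889 * class_size)   # roughly 620 students retained
goal = round(0.90 * class_size)        # roughly 627 students needed to hit 90%
print(f"Retained about {retained}; needed about {goal}; short by {goal - retained}")
```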

So what do you do when you find yourself close to banging your head on a ceiling that you didn’t really ever expect to see up close? The fly-by-night thought-leader tripe would probably answer with an emphatic, “Break through that ceiling!” (complete with an infomercial and a special deal on a book and a DVD). Thankfully, we’ve got enough good sense to be smarter than that. In reality, a situation like this can set the stage for a host of delicately dangerous delusions. For example, if we were to exceed that retention rate in the very next year, we could fool ourselves into thinking that we’ve discovered the secret to perfect retention. Conversely, if our retention rate were to slip in the very next year, we could start to think that our aspiration was always beyond our grasp and that we really ought to just stop trying to be something that we are not (cue the Disney movie theme song).

The way we’ve chosen to approach this potential challenge is to start with a clear understanding of the various retention rates of the student subpopulations that make up the overall number. By examining and tracking these subgroups, we can make a lot more sense of whatever the next year’s retention rate turns out to be. We also need to remind ourselves that for the overall retention rate to hit our aspired goal, the subpopulation retention rates all have to land close enough to that final number to average out to it. And that has always been where the really tough challenges lie, because retention rates for some student groups (e.g., low income, students of color, lower academic ability) have languished below those of other groups (e.g., more affluent, white, higher academic ability) for a very long time.

With all that as a prelude, let’s dive into the details that make up the retention rate of our 2016 cohort.

Overall, our first-to-second year retention rate this fall is 86.5%. No, it’s not as strong as last year’s 88.9% retention rate. Even though our rolling three-year retention rate average continues to improve (84.7%, 86.0%, and most recently 87.2%), I would be lying if I said that I wasn’t just a little disappointed by the overall number. Yet this sets up the perfect opportunity to examine our data more closely for evidence that might confirm or counter the narrative I described above.

As always, this is where things get genuinely interesting. Over the last few years, we’ve put additional effort into the quality of the experience we provide for our students of color. So we should expect to see improvement in our first-to-second year retention rate for these students. And in fact, we have seen improvement over the past four years as retention rates for these students have risen from 78.4% four years ago to 86.1% last year.

So it is particularly gratifying to report that the retention rate for students of color among the 2016 cohort increased again, this time to an impressive 89.8%! Amazingly, these students persisted at a rate more than four percentage points higher than white students.

That’s right.  Retention of first-year students of color from fall of 2016 to fall of 2017 was 89.8%, while the retention of white students over the same period was 85.6%.
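For those curious how these subgroup rates get computed, here is a minimal sketch assuming a hypothetical cohort file with one row per first-year student; the file and column names are my assumptions, not the actual extract.

```python
# First-to-second-year retention rates by student group, on hypothetical cohort data.
import pandas as pd

cohort = pd.read_csv("cohort_2016.csv")  # hypothetical: one row per first-year student

rates = (
    cohort.groupby("student_group")["retained_fall_2017"]  # retained flag is 0/1
    .agg(enrolled="count", retained="sum")
    .assign(rate=lambda d: d["retained"] / d["enrolled"])
)
print(rates.round(3))
```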

Of course, there is clearly plenty of work for us yet to do in creating the ideal learning environment in which the maximum number of students succeed. And I don’t for a second think that everything is going to be unicorns and rainbows from here on out. But I think it’s worth taking just a moment to be proud of our success in improving the retention rate of our students of color.

Make it a good day,

Mark