Anticipating what our students need to know is SO complicated!

Over the last few weeks, I’ve been wrestling with a couple of data trends and their accompanying narratives that seem pretty important for colleges like ours. However, unlike most posts in which I pretend to have some answers, this time I’m just struggling to figure out what it all means. So this week, I’m going to toss this discombobulated stew in your lap and hope you can help me sort it all out (or at least clean up some of the mess!).

First, the pressure on colleges to prepare their students to graduate with substantial “work readiness” appears to be at an all-time high. The Gallup Organization continues to argue that employers don’t think college graduates are well-prepared for success in the workplace. Even though there is something about the phrase “work readiness” that makes me feel like I just drank sour milk, we have to admit that preparing students to succeed in a job matters, especially when student loan debt is now such a large, and often frightening, part of the calculus that determines if, and where, a family can send their kids to college. Put all this together and it’s no wonder that students overwhelmingly say that the reason they want to go to college is to get a good-paying job.

Underneath all of this lies a pretty important assumption about what the world of work will be like when these students graduate. Student loans take, on average, 21 years to pay off, and the standard repayment agreement for a federal student loan is a 10-year plan. So it would seem reasonable that students, especially those who take out loans to pay for college, would anticipate that the job for which college prepares them should in most cases outlast the time it takes for them to pay off their loans. I’m not saying that everyone thinks this through completely, but I think most folks are assuming a degree of stability and income in the job they hope to obtain after earning a college degree, making the loans that they take out to pay for college a pretty safe bet.

But this is where it gets dicey. The world of work has been undergoing a seismic shift over the past several decades. The most recent report from the Bureau of Labor Statistics suggests that, on average, a person can expect to have 12 jobs between the ages of 18 and 50. What’s more, the majority of those job changes occur between the ages of 18 and 34 – the same period of time during which one would be expected to pay off a student loan. Moreover, between 2005 and 2015, almost all of the jobs added to the economy fit into a category called “alternative work.” This category of work includes contract labor, independent work, and any sort of temporary job (in addition to the usual suspects, think Turo, Lyft, or TaskRabbit). Essentially, these are jobs that are either spun as “providing wonderful flexibility” or depressingly described as depending on “the whim of the people.” As with so many other less-than-attractive realities, someone put a bow on it and labeled this whole movement “the gig economy” (sounds really cool except there’s no stage lighting or rock and roll glamor). It’s no surprise that the gig economy presents a rather stark set of downsides for individuals who choose it (or get sucked into it by circumstances beyond their control).

So what does all of this mean for colleges like ours that are (whether we like it or not) obligated to focus a lot of our attention on preparing students for a successful professional life?  I don’t have many great answers to this one. But a couple of questions seem pretty important:

  • To what degree are we responsible for ensuring that our students are financially literate and can manage through the unpredictability that seems likely for many early in their career?
  • What knowledge, skills, or dispositions should we prioritize to help our students thrive in a professional life that is almost certain to include instability, opportunity, and unexpected change?

Of all the possible options that an 18-year-old could sign up for, a small liberal arts college seems like it ought to be the ideal place for learning how to navigate, even transcend, the turbulent realities that seem more and more an unavoidable part of the world of work. But without designing what we do so that every student has to encounter this stuff, we leave that learning up to chance. And as usual, the students who most need to learn this stuff are the ones who are least likely to find it on their own. Looks like we’d better roll up our sleeves and get to work!

Make it a good day,

Mark

Measures, Targets, and Goodhart’s Law

Tis the season to be tardy, fa-la-la-la-la…la-la-la-la!

I’m reasonably jolly, too, but this week seems just a little bit rushed. Nonetheless, y’all deserve something decent from Delicious Ambiguity this week, so I’m going to put forth my best effort.

I stumbled across an older adage last weekend that seems remarkably apropos given my recent posts about retention rates at Augustana. This phrase is most often called “Goodhart’s Law,” although the concept has popped up in a number of different disciplines over the last century or so.

“When a measure becomes a target, it ceases to be a good measure.”

You can brush up on a quick summary of this little nugget on Wikipedia here, but if you want to have more fun I suggest that you take the time to plunge yourself into this academic paper on the origin of the idea and its subsequent applications here.

Although Goodhart’s Law emerged in the context of evaluating monetary policy, there are more than a few well-written examples of its application to higher ed. Jon Boeckenstedt at DePaul University lays out a couple of great examples here that we still see in the world of college admissions. In all of the instances where Goodhart’s Law has produced almost absurd results (hilarious if they weren’t so often true), the takeaway is the same. Choosing a metric (a simple outcome) to judge the performance (a complex process) of an organization sets in motion behaviors by individuals within that organization that will inevitably play to the outcome (the metric) rather than the performance (the process) and, as a result, corrupt the process that was supposed to lead to that outcome.

So when we talk about retention rates, let’s remember that retention rates are a proxy for the thing we are actually trying to achieve.  We are trying to achieve student success for all students who enroll at Augustana College, and we’ve chosen to believe that if students return for their second year, then they are succeeding.

But we know that life is a lot more complicated than that. And scholars of organizational effectiveness note that organizations are less likely to fall into the Goodhart’s Law trap if they identify measures that focus on underlying processes that lead to an outcome (one good paper on this idea is here). So, even though we shouldn’t toss retention rates onto the trash heap, we are much more likely to truly accomplish our institutional mission if we focus on tracking the processes that lead to student success; processes that are also, more often than not, likely to lead to student retention.

Make it a good holiday break,

Mark

Ideals, Metrics, and Myths (oh no!)

Educators have always been idealists. We choose to believe what we hope is possible, and that belief often keeps us going when things aren’t going our way. It’s probably what drove many of us to finish a graduate degree and what drives us to put our hearts into our work despite all the discouraging news about higher ed these days.

But an abundance of unchecked idealism can also be a dangerous thing, because the very same passion that can drive one to achieve can also lead one to believe in something just because it seems like it ought to be so. Caught up in a belief that feels so right, we are often less likely to scrutinize the metrics that we choose to measure ourselves by or to compare ourselves to others. Worse still, our repeated use of these unexamined metrics can become etched into institutional decision-making. Ultimately, the power of belief that once drove us to overcome imposing challenges can become our Achilles heel because we are absolutely certain of things that may, in fact, not be so.

For decades, colleges have tracked the distribution of their class sizes (i.e., the number of classes enrolling 2-9, 10-19, 20-29, 30-39, 40-49, 50-99, and more than 100 students, respectively) as a part of something called the Common Data Set. The implication behind tracking this data point is that a higher proportion of smaller classes ought to correlate with a better learning environment. Since the mid-1980s, the U.S. News and World Report rankings of colleges and universities have included this metric in their formula, distilling it down to two numbers – the proportion of classes at an institution with 19 or fewer students (higher is better) and the proportion of classes at an institution with 50 or more students (lower is better). Two years ago U.S. News added a twist by creating a sliding scale so that classes of 19 or fewer received the most credit, classes of 20-29, 30-39, and 40-49 received proportionally less credit, and classes of 50 or more received no credit. Over time these formulations have produced a powerful mythology across many postsecondary institutions: classes with 19 or fewer students are better than classes with 20 or more.

This raises a pretty important question: are those cut points (19/20, 29/30, etc.) grounded in anything other than an arbitrary preference for round, base-10 numbers?

Our own fall term IDEA course feedback data provides an opportunity to test the validity of this metric. The overall distribution of class sizes is almost perfect (a nicely shaped bell curve), with almost 80% of courses receiving a robust response rate. Moreover, IDEA’s aggregate dataset allows us to compare three useful measures of the student learning experience across all courses: a student-reported proxy of learning gains called the “progress on relevant objectives” (PRO) score (for a short explanation of the PRO score with additional links for further information, click here), the student perception of the instructor, and the student perception of the course. The table below spells out the average response scores for each measure across eight different categories of class size. Each average score comes from a 5-point response scale (scored 1 to 5). The PRO score response options range from “no progress” to “exceptional progress,” and the perception of instructor and course excellence response options range from “definitely false” to “definitely true” (to see the actual items on the survey, click here). For this analysis, I’ve only included courses that exceeded a two-thirds (66.67%) response rate.

Class Size                       PRO Score   Excellent Teacher   Excellent Course
6-10 students (35 classes)         4.24            4.56                4.38
11-15 students (85 classes)        4.12            4.38                4.13
16-20 students (125 classes)       4.08            4.29                4.01
21-25 students (71 classes)        4.18            4.40                4.27
26-30 students (37 classes)        4.09            4.31                4.18
31-35 students (9 classes)         3.90            4.13                3.81
36-40 students (11 classes)        3.64            3.84                3.77
41 or more students (8 classes)    3.90            4.04                3.89

First, classes enrolling 6-10 students appear to produce notably higher scores on all three measures than any other category. Second, it doesn’t look like there is much difference between subsequent categories until we get to classes enrolling 31 or more students (further statistical testing supports this observation). Based on our own data (and assuming that the fall 2017 data does not differ significantly from other academic terms), if we were going to replicate the notion that class size distribution correlates with the quality of the overall learning environment, we might be inclined to choose only two cut points, creating three categories of class size: those with 10 or fewer students, those with 11 to 30 students, and those with more than 30 students.
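For those curious about what that “further statistical testing” might look like, here is a minimal sketch (in Python) of one common approach: filter out low-response-rate courses, bin courses by enrollment, and run a one-way ANOVA across the bins. The file name and column names are hypothetical stand-ins rather than our actual dataset, and the real analysis could just as easily use a different test.

```python
# Minimal sketch of testing whether mean PRO scores differ across class-size bins.
# The file name and column names ("class_size", "response_rate", "pro_score")
# are hypothetical stand-ins for illustration only.
import pandas as pd
from scipy import stats

courses = pd.read_csv("idea_fall_2017_courses.csv")

# Keep only courses that exceeded the two-thirds response-rate cutoff used above
courses = courses[courses["response_rate"] > 2 / 3]

# Bin class sizes the same way the table does (6-10, 11-15, ..., 41+)
bins = [5, 10, 15, 20, 25, 30, 35, 40, float("inf")]
labels = ["6-10", "11-15", "16-20", "21-25", "26-30", "31-35", "36-40", "41+"]
courses["size_bin"] = pd.cut(courses["class_size"], bins=bins, labels=labels)

# One-way ANOVA: do average PRO scores differ across the size bins?
groups = [g["pro_score"].dropna() for _, g in courses.groupby("size_bin", observed=True)]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# A pairwise follow-up (e.g., Tukey's HSD) would show which bins actually differ.
```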

However, further examination of the smallest category of classes indicates that these courses are almost entirely upper-level major courses. Since we know that all three metrics tend to score higher for upper-level major courses because the students in them are more intrinsically interested in the subject matter than students in lower-level courses (classes that often also meet general education requirements), we can’t attribute the higher scores for this group to class size per se. This leaves us with two general categories: classes with 30 or fewer students, and classes with more than 30 students.

How does this comport with the existing research on class size? Although there isn’t much out there, two brief overviews (here and here) don’t find much of a consensus. Some studies suggest that class size is not relevant, others find a positive effect on the learning experience as classes get smaller, and a few others indicate a slight positive effect as classes get larger(!). And a 2013 essay that spells out some findings from IDEA’s extensive dataset suggests that, especially in light of developments in pedagogy and technology over the past two decades, other factors almost certainly complicate the relationship between class size and student learning.

So what do we do with all this? Certainly, mandating that all class enrollments sit just below 30 would be, um, stupid. There is a lot more to examine before anyone should march out onto the quad and declare a “class size” policy. One finding from researchers at IDEA that might be worth exploring on our own campus is the way that the learning objectives instructors select (and students achieve) vary by class size. IDEA found that smaller classes might be more conducive to more complex (sometimes called “deeper”) learning objectives, while larger classes might be better suited for learning factual knowledge, general principles, or theories. If class size does, in fact, set the stage for different learning objectives, it might be worth assessing the relationship between learning objectives and class size at Augustana to see if we are taking full advantage of the learning environment that a smaller class size provides.

And what should we do about the categories of class sizes that U.S. News uses in its college rankings formula? As family incomes remain stagnant, tuition revenue continues to lag behind institutional budget projections, and additional resources seem harder to come by, that becomes an increasingly valid question. Indeed, there might be a circumstance in which an institution ought to continue using the Common Data Set class size index to guide the way that it fosters an ideal classroom learning environment. And it is certainly reasonable to take other considerations (e.g., faculty workload, available classroom space, intended learning outcomes of a course, etc.) into account when determining an institution’s ideal distribution of class enrollments. But if institutional data suggests that there is little difference in the student learning experience between classes with 16-20 students and classes with 21-25 students, it might be worth revisiting the rationale that an institution uses to determine its class size distribution. No matter what an institution chooses to do, it seems like we ought to be able to justify our choices based on the most effective learning environment that we can construct rather than an arbitrarily defined and externally imposed metric.

Make it a good day,

Mark

Some anecdotes and data snippets from our first experience with the IDEA online course feedback system

Welcome to Winter Term! Maybe some of you saw the big snowflakes that fell on Sunday morning. Even though I know I am in denial, it is starting to feel like fall might have slipped from our collective grasp over the past weekend.

But on the bright side (can we get some warmth with that light?), during the week-long break between fall and winter term, something happened that had not happened since we switched to the IDEA course feedback system. Last Wednesday morning, only 48 hours after you had entered your final grades, your IDEA course feedback was already processed and ready to view. All you had to do was log in to your faculty portal and check it out! (You can find the link to the IDEA Online Course Feedback Portal on your Arches faculty page).

I’m sure I will share additional observations and data points from our first experience with the online system during one of the three “Navigating your Online IDEA Feedback Report” sessions this week. A not-so-subtle hint: come to Olin 109 on Monday, Tuesday, or Thursday (Nov. 13, 14, or 16) at or just after 4 PM to walk through the online feedback reports and maybe pick up one or two cool tricks with the data. Bring a laptop if you’ve got one, just in case we run out of computer terminals.

But in the meantime, I thought I’d share a couple of snippets that I found particularly interesting from our first online administration.

First, it seems that no news about problems logging in to the system turned out to be extremely good news. I was fully prepped to solve all kinds of connectivity issues and brainstorm all sorts of last-minute solutions. But I only heard from one person about one class having trouble getting on to the system . . . and that was when the internet was down all over campus for about 45 minutes. Otherwise, it appears that folks were able to administer the online course feedback forms in class or get their students to complete them outside of class with very little trouble. Even in the basement of Denkmann! This doesn’t mean that we won’t have some problems in the future, but at least with one term under our collective belt . . . maybe the connectivity issue isn’t nearly as big as we worried it might be.

Second, our overall student response rates were quite strong. Of the 467 course sections that could have administered IDEA online, about 74% achieved a response rate of 75% or higher. Furthermore, several instructors tested what might happen if they asked students to complete the IDEA form online outside of class (incentivized with an offer of extra credit to the class if the overall response rate reached a specific threshold). I don’t believe that any of these instructors’ classes failed to meet the established thresholds.

In addition, after a preliminary examination of comments that students provided, it appears that students actually may have written more comments with more detail than they previously provided on paper-and-pencil forms. This would seem to corroborate feedback from a few faculty members who indicated that their students were thankful that their comments would now be truly anonymous and no longer potentially identifiable given the instructor’s prior familiarity with the student’s handwriting.

Finally, in response to faculty concerns that the extended student access to their IDEA forms (i.e., students were able to enter data into their response forms until the end of finals no matter when they initially filled out their IDEA forms) might lead to students going back into the system and exacting revenge on instructors in response to a low grade on a final exam or paper, I did a little digging to see how likely this behavior might be. In talking to students about this option during week 10 of the term, I got two responses. Several international students said that they appreciated this flexibility because they had been unable to finish typing their comments in the time allotted in class; many international students (particularly first-year international students) find that it takes them much longer than domestic students to express complex thoughts in written English. I also got the chance to ask a class of 35(ish) students whether or not they were likely to go back into the IDEA online system and change a response several days after they had completed the form. After giving me a bewildered look for an uncomfortably long time, one student finally blurted out, “Why would we do that?” Upon further probing, the students said that they couldn’t imagine a situation where they would care enough to take the time to find the student portal and change their responses. When I asked, “Even if something happened at the end of the term, like a surprisingly bad grade on a test or a paper that you felt was unfair?”, they responded that by the end of the term they would already know what they thought of that instructor and that class. Even a surprisingly low grade on a final paper or test wouldn’t change an impression they had formed long before.

To see if those students’ speculation about their own behavior matches IDEA’s own data, I talked to the CEO of IDEA to ask what proportion of students go back into the system and change their responses and whether that was a question that faculty at other institutions had asked. He told me that he had heard that concern raised repeatedly since they introduced the online format. As a result, they have been watching that data point closely. Across all of the institutions that have used the online system over the last several years, only 0.6% of all students actually go back into the system and edit their responses. He did not know what proportion of that small minority altered their responses in a substantially negative direction.
Since the first of my three training sessions starts in about an hour, I’m going to stop now. But so far, it appears that moving to IDEA online has been a pretty positive thing for students and our data. Now I hope we can make the most of it for all of our instructors. So I’d better get to work prepping for this week!
Make it a good day,
Mark

Big Data, Blindspots, and Bad Statistics

As some of you know, last spring I wrote a contrarian piece for The Chronicle of Higher Education that posed some cautions about unabashedly embracing big data. Since then, I’ve found two TED Talks that add to the list of reasons to be suspicious of an overreliance on statistics and big data.

Tricia Wang outlines the dangers of relying on historical data at the expense of human insight when trying to anticipate the future.

Mona Chalabi describes three ways to spot a suspect statistic.

Both of these presenters reinforce the importance of triangulating information from quantitative data, individual or small-group expertise, and human observation. And even taken together, all of this information can’t eliminate ambiguity. Any assertion of certainty is almost always one more reason to be increasingly skeptical.

So if you think I’m falling victim to either of these criticisms, feel free to call me out!

Make it a good day,

Mark

Something to think about for the next Symposium Day

Symposium Day at Augustana College has grown into something truly impressive. The concurrent sessions hosted by both students and faculty present an amazing array of interesting approaches to the theme of the day. The invited speakers continue to draw large crowds and capture the attention of the audience. And we continue to cultivate in the Augustana culture a belief in owning one’s learning experience by hosting a day in which students choose the sessions they attend and talk to each other about their reactions to those sessions.

Ever since its inception, we’ve emphasized the value of integrating Symposium Day participation into course assignments. Last year, we tested the impact of such curricular integration and found that Symposium Day mattered for first-year student growth in a clear and statistically significant way. We also know that graduating classes have increasingly found Symposium Day to be a valuable learning opportunity. Since 2013, the average response to the statement “Symposium Day activities influenced the way I now think about real-world issues” has risen steadily. In 2017, 46% of seniors agreed or strongly agreed with that statement.

So what more could be written about an idea that has turned out to be so successful? Well, it turns out that when an organization values integration and autonomy, sometimes those values can collide and produce challenging, albeit resolvable, tensions. This year a number of first-year advisors encountered advisees who had assignments from different classes requiring them to be at different presentations simultaneously. Not surprisingly, these students were stressing about how they were going to pull this off and were coming up with all sorts of schemes to be in two places at once.

In some cases, the students didn’t know that they might be able to see a video recording of one of the conflicting presentations (although no one was sure whether that recording would be available before their assignment was due). But in other cases, there was simply no way for the student to attend both sessions.

This presents us all with a dilemma. How do we encourage the highest possible proportion of courses to integrate Symposium Day into their assignments without creating a situation where students are required to be in two places at once or run around like chickens with their proverbial heads cut off?

One possibility might be some sort of common assignment that originates in the FYI course. Another possibility might reside in establishing some sort of guidelines for Symposium Day assignments so that students don’t end up required by two different classes to be in two different places at the same time. I don’t have a good answer, nor is it my place to come up with one (lucky me!).

But it appears that our success in making Symposium Day a meaningful educational experience for students has created a potential obstacle that we ought to avoid. One student told me that the worst part about the assignments she had to complete wasn’t the homework itself. Instead, the worst part for her was that the session she really wanted to see, “just because it looked really interesting,” was scheduled at the same time as the two sessions she was required to attend.

It would be ironic if we managed to undercut the way that Symposium Day participation seems to foster our students’ intrinsic motivation as learners because we got so good at integrating class assignments with Symposium Day.

Something to think about before we start planning for our Winter Term event.

Make it a good day,

Mark

This March, it’s Survey Madness!

Even folks who are barely familiar with social science research know the term “survey fatigue.” It describes a phenomenon, empirically supported now by a solid body of research, in which people who are asked to take surveys seem to have only a finite amount of tolerance for it (shocking, I know). So as a survey gets longer, respondents tend to skip questions or take less time answering them carefully. When the term first emerged, it primarily referred to something that could happen within an individual survey. But now that solicitations to take surveys seem to appear almost everywhere, the concept is appropriately applied in reference to a sort of meta survey fatigue.

But if we want to get better at something, we need information to guide our choices.  We ought to know by now that “winging it” isn’t much of a strategy. So we need to collect data, and oftentimes survey research is the most efficient way to do that.

Therefore, in my never-ending quest to turn negatives into positives, I’m going to launch a new phrase into the pop culture ether. Instead of focusing on the detrimental potential of “survey fatigue,” I’m going to ask that we all dig down and build up our “survey fitness.”

Here’s why . . .

In the next couple of months, you are going to receive a few requests for survey data. Many of you have already received an invitation to participate in the “Great Colleges to Work For” survey. The questions in this survey try to capture a sense of the organization’s culture and employee engagement. For all of you who take pride in your curmudgeonly DNA, I can’t argue with your criticism of the name of that survey. But they didn’t ask me when they wrote it, so we’re stuck with it. Nonetheless, the findings actually prove useful. So please take the time to answer honestly if you get an email from them.

The second survey invitation you’ll receive is for a new instrument called The Campus Living, Learning, and Work Environment. It tries to tackle aspects of equity and inclusion across a campus community. One of the reasons I signed on for this study is that it is the first I know of to survey the entire community – faculty, staff, administration, and students. We have been talking a lot lately about the need for this kind of comprehensive data, and here is our chance to get some.

So if you find yourself getting annoyed at the increased number of survey requests this spring, you can blame it all on me. You are even welcome to complain to me about all the surveys I’ve sent out this term if that is what it takes to get you to complete them. And if you start to worry about survey fatigue in yourself or others during the next few months, think of it as an opportunity to develop your survey fitness! And thanks for putting up with a few more requests for data than usual. I guarantee that I won’t let the data just sit at the bottom of a hard drive.

Make it a good day,

Mark

Differences in our students’ major experiences by race/ethnicity; WARNING: messy data ahead

It’s great to see the campus bustling again.  If you’ve been away during the two-week break, welcome back!  And if you stuck around to keep the place intact, thanks a ton!

Just in case you’re under the impression that every nugget of data I write about comes pre-packaged with a statistically significant bow on top, today I’d like to share some data findings from our senior survey that aren’t so pretty. In this instance, I’ve focused on data from the nine questions that comprise the section called “Experiences in the Major.” For purposes of brevity, I’ve paraphrased each of the items in the table below, but if you want to see the full text of the question, here’s the link to the 2015-16 senior survey on the IR web page. The table below disaggregates the responses to each of these items by Hispanic, African-American, and Caucasian students. The response options are one through five, and range either from strongly disagree to strongly agree or from never to very often (noted with an *).

Item                                                      Hispanic   African-American   Caucasian
Courses allowed me to explore my interests                  3.86          3.82             4.09
Courses seemed to follow in a logical sequence              3.85          3.93             4.11
Senior inquiry brought out my best intellectual work        3.61          4.00             3.78
I received consistent feedback on my writing                3.72          4.14             3.96
Frequency of analyzing in class *                           3.85          4.18             4.09
Frequency of applying in class *                            3.87          4.14             4.15
Frequency of evaluating in class *                          3.76          4.11             4.13
Faculty were accessible and responsive outside of class     4.10          4.21             4.37
Faculty knew how to prepare me for my post-grad plans       3.69          4.00             4.07

Clearly, there are some differences in average scores that jump out right away. The scores from Hispanic students are the lowest among the three groups on all but one item. Sometimes there is little discernible difference between African-American and Caucasian students’ scores, while in other instances the gap between those two groups seems large enough to indicate something worth noting.

So what makes this data messy? After all, shouldn’t we jump to the conclusion that Hispanic students’ major experience needs substantial and urgent attention?

The problem, from the standpoint of quantitative analysis, is that none of the differences conveyed in the table meet the threshold for statistical significance. Typically, that means that we have to conclude that there are no differences between the three groups. But putting these findings in the context of the other things that we already know about differences in student experiences and success across these three groups (i.e., differences in sense of belonging, retention, and graduation) makes a quick dismissal of the findings much more difficult. And a deeper dive into the data adds both useful insight and more mess.

The lack of statistical significance seems attributable to two factors. First, the number of students/majors in each category (570 responses from Caucasian students, 70 responses from Hispanic students, and 28 responses from African-American students) makes it a little hard to reach statistical significance. The interesting problem is that, in order to increase the number of Hispanic and African-American respondents, we would need to enroll more students in those groups, which might in part happen as a result of improving the quality of those students’ experience. But if we adhere to the statistical significance threshold, we would have to conclude that there is no difference between the three groups and would then be less likely to take the steps that might help us improve the experience, which would in turn improve the likelihood of enrolling more students in these two groups and ultimately get us to the place where a quantitative analysis would find statistical significance.

The other factor that seems to be getting in the way is that the standard deviations among Hispanic and African-American students are unusually large. In essence, this means that their responses (and therefore their experiences) are much more widely dispersed across the range of response options, while the responses from white students are more closely packed around the average score.
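To make those two factors concrete, here is a minimal sketch in Python of how group size and dispersion interact with significance testing. The means echo one item from the table above, but the standard deviations and the “what if” group sizes are illustrative assumptions rather than our actual data, and Welch’s t-test is just one reasonable choice of test.

```python
# A minimal sketch of why both group size and dispersion matter for significance.
# The means echo one item from the table above; the standard deviations and the
# "what if" group sizes are illustrative assumptions, not our actual data.
from scipy import stats

def welch_p(mean1, sd1, n1, mean2, sd2, n2):
    """Two-sided p-value for Welch's t-test computed from summary statistics."""
    t, p = stats.ttest_ind_from_stats(mean1, sd1, n1, mean2, sd2, n2,
                                      equal_var=False)
    return p

caucasian_mean, hispanic_mean = 4.09, 3.86  # "Courses allowed me to explore my interests"

# Roughly our situation: a small Hispanic group with widely dispersed responses
print(welch_p(caucasian_mean, 0.9, 570, hispanic_mean, 1.3, 70))   # ~0.15, not significant

# The same 0.23 gap with groups closer in size and a more typical spread
print(welch_p(caucasian_mean, 0.9, 300, hispanic_mean, 0.9, 300))  # < 0.01, significant
```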

So we have a small number of non-white students relative to the number of white students, and the experiences of Hispanic and African-American students seem unusually varied. Both of these findings make it even harder to conclude that “there’s nothing to see here.”

Just in case, I checked to see if the distribution of majors among each group differed. It did not. I also checked to see if there were any other strange differences between these student groups that might somehow affect these data. Although average incoming test scores, the proportion of first-generation students, and the proportion of Pell Grant qualifiers differed, these differences weren’t stark enough to explain all of the variation in the table.

So the challenge I’m struggling with in this case of messy data is this:

We know that non-Caucasian students on average indicate a lower sense of belonging than their Caucasian peers. We know that our retention and graduation rates for non-white students are consistently lower than for white students. We also know that absolute differences between two groups of .20-.30 are often statistically significant if the two groups are closer in size and if the standard deviation (aka dispersion) is in an expected range.

As a result, I can’t help thinking that just because a particular analytic finding doesn’t meet the threshold for statistical significance doesn’t necessarily mean that we should discard it outright. At the same time, I’m not comfortable arguing that these findings are rock solid.

In cases like these, one way to inform the inquiry is to look for other data sources with which we might triangulate our findings. So I ask all of you, do any of these findings match with anything you’ve observed or heard from students?

Make it a good day,

Mark

Can I ask a delicate question?

Since this is a crazy week for everyone, I’m going to try to post something that you can contemplate when you get the chance to relax your heart rate and breathe. I hope that you will give me the benefit of the doubt when you read this post, because I can imagine that this question might be a delicate one. I raise it because I suspect it might help us navigate more authentically and more honestly through some obviously choppy waters as we make some key decisions about our new semester design.

Sometimes, when we advocate for the value of double majors and for similar, or even improved, access to double majors in the new semester system, it seems like the rationale for this argument is grounded in the belief that double-majoring is advantageous for Augustana graduates and, as a corollary, that relatively easy access to a double major is helpful in recruiting strong prospective students. In other instances, it sounds as if we advocate for ease of access to double-majoring because we are afraid that programs with smaller numbers of majors will not survive if we build a system that produces fewer double majors.

Without question, both rationales come from the best of places. Doing all that we can for the sake of our students’ potential future success or the possibility of attracting a stronger and larger pool of future students seems utterly reasonable. Likewise, ensuring the health of all current academic departments, especially those that currently enjoy a smaller number of majors, and therefore ensuring the employment stability of all current faculty, is also utterly reasonable.

Yet I wonder if our endeavor to design the best possible semester system would benefit from parsing these concerns more clearly, examining them as distinct issues, and addressing them separately as we proceed. It seems to me that prioritizing double-majoring because it benefits post-graduate success, prioritizing double-majoring because it improves recruiting, and prioritizing double-majoring because it ensures employment stability for faculty are not the same as more directly identifying the factors that maximize our students’ post-graduate success, optimizing our offerings (and the way we communicate them) to maximize our recruiting efforts, and designing a system that maintains employment stability and quality for all of our current faculty members. The first approach asserts a causal relationship and seems to narrow our attention toward a single means to an end. The second approach focuses our attention on a goal while broadening the potential ways by which we might achieve it.

Certainly we can empirically test the degree to which double-majoring increases our students’ post-graduate success or whether a double-major-friendly system strengthens our efforts to recruit strong students. We could triangulate our findings with other research on the impact of double-majoring on either post-graduate success or prospective student recruiting and design a system that situates double-majoring to hit that sweet spot for graduates and prospective students.

Likewise, we could (and I would argue, should) design a new semester system that ensures gratifying future employment for all current faculty (as opposed to asking someone with one set of expertise and interests to spend all of their time doing something that has little to do with that expertise and interest). However, it seems to me that we might be missing something important if we assume, or assert, that we are not likely to achieve that goal of employment stability if we do not maintain historically similar proportions of double-majors distributed in historically similar ways.

Those of you who have explored the concept of design thinking know that one of its key elements is an openness to genuinely consider the widest possible range of options before beginning the process of narrowing toward a final product or concept. At Augustana we are trying to build something new, and we are trying to do it in ways that very few institutions have done before. Moreover, we aren’t building it from an infinite array of puzzle pieces; we are building it with the puzzle pieces that we already have. So it seems that we ought not box ourselves in prematurely. Instead, we might genuinely help ourselves by opening our collective scope to every possibility that 1) gives our students the best chance for success, 2) gives us the best chance to recruit future students, AND 3) uses every current faculty member’s strengths to accomplish our mission in a new semester system.

Please don’t misunderstand me – I am NOT arguing against double majors (on the contrary, I am intrigued by the idea). I’m only suggesting that, especially as we start to tackle complicated issues that tie into very real and human worries about the future, we are probably best positioned to succeed, both in process and in final product, to the degree that we directly address the genuine and legitimate concerns that keep us up at night. We are only as good as our people and our relationships with each other. I know we are capable of taking all of this into account as we proceed into the spring. I hope every one of you takes some time to relax and enjoy the break between terms so that you can start the spring refreshed and fully able to tackle the complex decisions that we have before us.

Make it a good day,

Mark


Time to break out the nerves of steel

When I used to coach soccer, other coaches and I would sarcastically say that if you want to improve team chemistry, start winning. Of course we knew that petty disagreements and personal annoyances didn’t vanish just because your team got on a winning streak. But it was amazing to see how quickly those issues faded into the shadows when a team found themselves basking in a winner’s glow. Conversely, when that glow faded it was equally amazing to see how normally small things could almost instantaneously mushroom into team-wide drama that would suck the life out of the locker room.

Even though one might think that in order to win again we just needed to practice harder or find a little bit of luck, almost always the best way to get back to winning was to get the team chemistry right first. That meant deliberately refocusing everyone on being the best of teammates, despite the steamy magma of hot emotion that might be bubbling up on the inside. In the end, it always became about the choice to be the best of who we aspired to be while staring into the pale, heartless eyes of the persons we could so easily become.

You might think that I’m going to launch into a speech about American values, immigration, and refugees. But actually I’m thinking about the choices that face all of us at Augustana College as we start to sort through the more complicated parts of the design process in our conversion to semesters. Like a lot of complex organisms, a functioning educational environment (especially one that includes a residential component) is much more than a list of elements prioritized from most to least important. Instead, a functioning educational environment – especially one that maximizes its impact – is an ecosystem that thrives because of the relationships between elements rather than the elements themselves. It is the combination of relationships that maintains balance throughout the organism and gives it the ability to adapt, survive, adjust, recover, and thrive. If one element dominates the organism, the rest of the elements will eventually die off, ultimately taking that dominant element down with them. But if all the elements foster a robust set of relationships that hold the whole thing together, the organism really does become greater than the sum of its parts.

Likewise, we are designing a new organism that is devoted to exceptional student learning and growth. Moreover, we have to design this organism so that each of the elements can thrive while gaining strength from (and giving strength to) each other. We give ourselves the best chance of getting it right if we keep the image of an ecosystem fresh in our minds and strive to design an ecosystem in which all of the relationships between elements perpetuate resilience and energy.

But in order to collaboratively build something so complex, we have to be transparent and choose to trust. And this is where we need to break out the nerves of steel. Because we all feel the pressure, the anxiety, the unknown, and the fear of that unknown. The danger, of course, is that in the midst of that pressure it would be easy, even human, to grab on to one element that represents certainty in the near-term and lose sight of 1) the relationships that sustain any given element (including the one you might currently be squeezing the air out of), and 2) the critical role of all of those relationships in sustaining the entire organism.

As we embark on the most challenging parts of this semester conversion design, I hope we can find a way, especially when we feel the enormity of it all bearing down on us, to embody transparency and choose to trust. That will mean willingly deconstructing our deepest concerns, facing them openly, and straightforwardly solving them together.

Think about where we were a year ago and where we are now. We’ve done a lot of impressive work that can’t be overstated. (Based on the phone calls I’ve received from other institutions asking us how we are navigating the conversion to semesters, we might just be the golden child of organizational functionality!). Now, as the more complex challenges emerge and the pressure mounts, let’s remember what got us here, what will get us through this stretch of challenging decisions, and what will get us safely to the other side.

Make it a good day,

Mark