What’s in a name?

When I first floated the idea of a weekly column, everyone in the Dean’s office seemed to be on board.  But when I proposed calling it “Delicious Ambiguity,” I got more than a few funny looks.  Although these looks could have been a mere byproduct of the low-grade bewilderment that I normally inspire, let’s just say for the sake of argument that they were largely triggered by the apparent paradox of a column written by the measurement guy that seems to advocate winging it.  So let me tell you a little bit about the origins of the phrase “Delicious Ambiguity” and why I think it embodies the real purpose of Institutional Research and Assessment.

This particular phrase is part of a longer quote from Gilda Radner – a brilliant improvisational comedian and one of the early stars of Saturday Night Live.  The line goes like this:

“Life is about not knowing, having to change, taking the moment and making the best of it, without knowing what’s going to happen next.  Delicious Ambiguity.”

For those of you who chose a career in academia specifically to reduce ambiguity, this statement probably inspires a measure of discomfort.  And there is a part of me that admittedly finds some solace in the task of isolating statistically significant “truths.”  I suppose I could have named this column “Bland Certainty,”  but – in addition to single-handedly squelching reader interest – such a title would suggest that my only role at Augustana is to provide final answers – nuggets of fact that function like the period at the end of a sentence.

Radner’s view of life is even more intriguing because she wrote this sentence as her body succumbed to cancer.  For me, her words exemplify intentional – if not stubborn – optimism in the face of darkly discouraging odds.  I have seen this trait repeatedly demonstrated in many of you over the last several years as you have committed yourselves to helping a particular student even as that student seems entirely uninterested in learning.

Some have asserted that a college education is a black box: some good can happen, some good does happen – we just don’t know how it happens.  On the contrary, we actually know a lot about how student learning and development happen – it’s just that student learning doesn’t work like an assembly line.  Instead, student learning is like a budding organism that depends on the conduciveness of its environment – a condition that emerges through the interaction between the learner and the learning context.  And because both of these factors perpetually influence each other, we are most successful in our work to the degree that we know which educational ingredients to introduce, how to introduce them, and when to stir them into the mix.  The exact sequence of the student learning process is, by its very nature, ambiguous because it is unique to each individual learner.

In my mind, the act of educating is deeply satisfying precisely because of its unpredictability.  Knowing that we can make a profound difference in a young person’s life – a difference that will ripple forward and touch the lives of many more long after a student graduates – has driven many of us to extraordinary effort and sacrifice even as the ultimate outcome remains admittedly unknown.  What’s more, we look forward to that moment when our perseverance suddenly sparks a flicker of unexpected light that we know increases the likelihood – no matter how small – that this person will blossom into the life-long student we believe they can be.

The purpose of collecting educational data should be to propel us – the teacher and the student – through this unpredictability, to help us navigate the uncertainty that comes with a process that is so utterly dependent upon the perpetually reconstituted synergy between teacher and student.  The primary role of Institutional Research and Assessment is to help us figure out the very best ways to cultivate – and, in just the right ways, manipulate – this process.  The evidence of our success isn’t a result at the end of this process.  The evidence of our success is the process.  And if we pool our collective expertise and focus on cultivating the quality, depth, and inclusiveness of that process, it isn’t outlandish at all to believe that our efforts can put our students on a path that someday just might change the world.

To me, this is delicious ambiguity.

Make it a good day,

Mark


From “what we have” to “what we do with it”

We probably all have a good example of a time when we decided to make a change – maybe drastic, maybe minimal – only to realize later the full ramifications of that change (“Yikes! Now I remember why I grew a beard.”).  This is the problem with change – our own little lives aren’t as discretely organized as we’d like to think, and there are always unintended consequences and surprise effects.


When Augustana decided to move from measuring itself based on the quality of what we have (incoming student profile, endowment, number of faculty, etc.) to assessing our effectiveness based on what we do (student learning and development, educational improvement and efficiency, etc.), I don’t think we fully realized the ramifications of this shift.  Although there are numerous ways in which this shift is impacting our work, I’d like to talk specifically about the implications of this shift in terms of institutional data collection and reporting.


First, let’s clarify two terms.  When I say “outcomes” I mean the learning that results from educating.  When I say “experiences” I mean what students do and encounter over the course of their college career.  These could be described simply by participation in a particular activity (e.g., majoring in philosophy), or they could be described more ambiguously, as in the quality of a student’s interaction with faculty.  Either way, the idea is – and has always been – that student experiences should lead to gains on educational outcomes.


I remember an early meeting during my first few months at Augustana College where one senior administrator turned to me and said, “We need outcomes.  What have you got?”  At many institutions, the answer would be something like, “I’ll get back to you in four years,” because that is how long it takes to gather dependable data.  Surveying students at any given point only tells you where they are at that point – it doesn’t tell you how much they’ve changed as a result of our efforts.  Although we have some outcome data from several studies that we happened to join, we still have to gather outcome data on everything that we need to measure – and that will take time.


But the other problem is one of design.  Ideally, you choose what you want to measure, and then you start measuring it.  In our case, although we have measured some outcomes, we don’t have measures on other outcomes that are equally important.  And there isn’t a very strong centering framework for what we have measured, what we have not, and why.  This is why we are having the conversation about identifying college-wide outcomes.  The results of that conversation will tell us exactly what to measure.


The second issue is in some ways almost more important for our own purposes.  We need to know what we should do to improve student learning – not just whether our students are learning (or not).  As we should know by now, learning doesn’t happen by magic.  There are specific experiences that accelerate learning, and certain experiences that grind it to a halt.  Once we’ve identified the outcomes that define Augustana, then we can track the experiences that precede them.  It is amazing how many times we have found that, despite the substantial amount of data we have on our students, the precise data on a specific experience is nowhere to be found because we never knew we were going to need it.  This is the primary reason for the changes I made in the senior survey this year.


This move from measuring what we have to assessing what we do is not a simple one and it doesn’t happen overnight.  And that is just the data collection side of the shop.  Just wait until I start talking about what we do with the data once we get it! (Cue evil laughter soundtrack!)


Make it a good day!


Mark

Student learning as I see it

At a recent faculty forum, discussion of the curricular realignment proposal turned to the question of student learning.  As different people weighed in, it struck me that, even though many of us have been using the term “student learning” for years, some of us may have different concepts in mind.  So I thought it would be a good idea, since I think I say the phrase “student learning” at least once every hour, to explain what I mean and what I think most assessment folks mean when we say “student learning.”


Traditionally, “student learning” was a phrase that defined itself – it referred to what students learned.  However, the intent of college teaching was primarily to transmit content and disciplinary knowledge – the stuff that we normally think of when we think of an expert in a field or a Jeopardy champion.  So the measure of student learning was the amount of content that a student could regurgitate – both in the short term and the long term.


Fortunately or unfortunately, the world in which we live has completely changed since the era in which American colleges and universities hit their stride.  Today, every time you use your smart phone to get directions, look up a word, or find some other byte of arcane data, it becomes painfully clear that memorizing all of that information yourself would be sort of pointless and maybe even a little silly.  Today, the set of tools necessary to succeed in life and contribute to society goes far beyond the content itself.  Now, it’s what you can do with the content.  Can you negotiate circumstances to solve difficult problems?  Can you manage an organization in the midst of uncertainty?  Can you put together previously unrelated concepts to create totally new ideas?  Can you identify the weakness in an argument and how that weakness might be turned to your advantage?


It has become increasingly apparent that colleges and universities need to develop in their students the skills needed to answer “yes” to those questions.  So when people like me use the phrase “student learning,” we are referring to the development of the skill sets necessary to make magic out of content knowledge.  That has powerful implications for the way that we envision a general education or major curriculum.  It also holds powerful implications for how we think about integrating traditional classroom and out-of-class experiences in order to firmly develop those skills in students.


I would encourage all of us to reflect on what we think we mean when we say “student learning.”  First, let’s make sure we are all referring to the same thing when we talk about it.  Second, let’s move away from emphasizing content acquisition as the primary reflection of our educational effectiveness.  Yes, content is necessary, but it’s no longer sufficient.  Yes, content is foundational to substantive student learning, but very few people look at a completed functioning house and say, “Wow, what an amazing foundation.”  I’m just sayin’ . . .


Make it a good day!


Mark

Moving from satisfaction to experiences – a new senior survey

One of the “exciting” parts of my job is building surveys.  I’ve worked with many of you over the past two years to construct new surveys to answer all sorts of questions.  On the one hand, it’s a pretty interesting challenge to navigate all of the issues inherent in designing what amounts to a real-life “research study.”  At the same time, it can be an exhausting project because there are so many things you just can’t be sure of until you field test the survey a few times and find all of the unanticipated flaws.  But in the end, if we get good data from the new survey and learn things we didn’t know before that help us do what we do just a little bit better, it’s a pretty satisfying feeling.

As many of you already know, Augustana College has been engaged in a major change over the last several years in terms of how we assess ourselves.  Instead of determining our quality as an institution based on what we have (student incoming profile, endowment amount, etc.), we are trying to shift to determining our quality based on what we do with what we have.  Amazingly, this places us in a very different position than many higher education institutions.  Unfortunately, it also means that there aren’t many examples on which we might model our efforts.

One of the implications of this shift involves the nature of the set of institutional data points that we collect.  Although many of the numbers we have traditionally gathered continue to be important, the measure of ourselves that we are hoping to capture is what we do with those traditional numbers.  And while we have long maintained pretty robust ways of obtaining the numbers you would see in our traditional dashboard, our mechanisms for gathering data that would help us assess what we do with what we have are not yet robust enough.

So over the last few months, I have been working with the Assessment for Improvement Committee and my student assistants to build a new senior survey.  While the older version had served its purpose well over more than a decade, it was ready for an update, if not an overhaul.

The first thing we’ve done is move from a survey of satisfaction to a survey of experiences.  Satisfaction can sometimes give you a vague sense of customer happiness, but it often falls flat in trying to figure out how to make a change – not to mention the fact that good educating can produce customer dissatisfaction if that customer had unrealistic expectations or didn’t participate in their half of the educational relationship.

The second thing we’ve done is build the senior survey around the educational and developmental outcomes of the entire college.  If our goal is to develop students holistically, then our inquiry needs to be comprehensive.

Finally, the third thing we’ve done is “walk back” our thinking from the outcomes of various aspects of the college to the way that students would experience our efforts to produce those outcomes.  So, for example, if the outcome is intercultural competence, then the question we ask is how often students had serious conversations with people who differed from them in race/ethnicity, culture, social values, or political beliefs.  We know this is a good question to ask because we know from a host of previous research that the degree to which students engage in these experiences predicts their growth on intercultural competence.

If you want to see the new senior survey, please don’t hesitate to ask.  I am always interested in your feedback.  In the meantime . . .


Make it a good day!


Mark

Why should our seniors participate in the Wabash National Study?

When things get really hectic, I have a hard time remembering what month it is.  Judging by the snow falling outside as I write this first column of the spring term, it’s not just me.  Fortunately, we all have our anchoring mechanisms – our teddy bear or our safe space that keeps us grounded.  For me, it’s the Wabash National Study senior data collection that will occur in March and April.  At long last, it’s time to find out from our seniors how their Augustana experience impacted their development on many of the primary intended outcomes of a liberal arts education.  (I know.  Own it!)


I believe that the data we gather from the Wabash National Study could be the most important data that Augustana has collected in its 150+ year history.  I’d like to give you three reasons why I make this claim, and three ways that I need your help.


First, the Wabash National Study measures individual gains across a range of specific outcomes.  Instead of taking a snapshot of a group of freshmen and a snapshot of a different group of seniors and assuming that those two sets of findings represent change over time, in this study we will have actually followed the same group of students from the first year to the fourth year.  Furthermore, instead of tracking only one outcome, this study tracks 15 different outcomes, allowing us to examine how gains on one outcome might relate to gains on another outcome.


Second, the Wabash National Study is the first and only study that allows us to figure out which student experiences significantly impact our students’ change on each outcome measure.  In other words, from this data we can determine which experiences improve gains, which experiences inhibit gains, and which experiences seem to have little educational impact. Furthermore, this data allows us to determine whether the gains we identify on each outcome are a function of pre-college characteristics (like intellectual aptitude) or a function of an experience that happened during college (like meaningful student-faculty interaction).  This gives us the kind of information on which we can more confidently base decisions about program design, college policies, and the way we link student experiences to optimize learning.
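For those who like to see the mechanics, here is a minimal sketch of the kind of analysis this longitudinal design supports.  Everything in it is a hypothetical placeholder – toy data and made-up variable names, not the study’s actual measures or models – but it shows the core move: regressing a senior score on its freshman pretest plus a college experience, so the experience’s coefficient reflects change net of where students started.

```python
# A minimal sketch, assuming toy data and hypothetical variable names -
# not the Wabash National Study's actual measures or models.
import pandas as pd
import statsmodels.formula.api as smf

# The same six (hypothetical) students, measured as freshmen and again as
# seniors, plus a self-reported college experience on a 1-5 scale.
df = pd.DataFrame({
    "freshman_score":      [3.5, 3.2, 3.8, 3.0, 3.3, 3.6],
    "senior_score":        [3.9, 3.4, 4.2, 3.1, 3.8, 4.0],
    "faculty_interaction": [4.0, 2.5, 4.5, 2.0, 3.5, 4.2],
})

# Controlling for the freshman pretest means the experience coefficient
# reflects gains during college rather than pre-college differences.
model = smf.ols("senior_score ~ freshman_score + faculty_interaction", data=df).fit()
print(model.params)
```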


Third, as we continue to try to more fully embody a college that assesses itself based on what we do rather than what we have, this data can provide a foundation as we think about clearly articulating the kind of institution we want to be in the future and how we are best able to get there.  In the past decade, we have collected bits and pieces of this kind of data from NSSE, CLA, and various Teagle-sponsored studies – all important evidence on which we have made critical decisions that have improved the quality of the education we provide.  This time around, we will have all of that data in one study, allowing us to answer many of the questions that we need to answer now; questions that have previously been exceedingly difficult to answer because the applicable data was scattered across different, often incompatible, studies.


But just because we are going to try to collect this data from our seniors over the next two months doesn’t mean we automatically get to have our cake and eat it, too.  Our seniors have to volunteer to provide this data.  Although we have some pretty decent incentives ($25 gift cards to the book store and group incentives for some student groups), this thing could be a monumental belly flop if no one shows up to fill out our surveys.  This brings me to how you can help.


1)      Make it your mission to tell every senior with whom you interact to participate in the survey.  We are going to invite them by email, announce this study at various student venues, and hopefully have some articles in the Observer.  But the students need to be encouraged to participate at every turn.

2)      Tell them why they should participate!  It’s not enough to ask them to do it.  They need to know that this will fundamentally shape the way that we construct Augustana College for the next generation of students.  They can play a massive role in that effort just by showing up and filling out some surveys.  Oh, if only the rest of life were so easy!

3)      Remind them to participate.  We will have four different opportunities for seniors to provide data.  We will give $25 gift cards to the first 100 students at each session – so if they all wait to participate, most of them won’t get the incentives we would like to give them.  The dates, times, and locations of these sessions are:


  1. Monday, March 12, 6-8:30 PM in Science 102
  2. Monday, March 26, 6-8:30 PM in Olin Auditorium
  3. Thursday, March 29, 6-8:30 PM in Science 102
  4. Thursday, April 26, 10:30 AM – 12:30 PM in John Deere Lecture Hall


Thank you so much for your help.  Just to let you know ahead of time, I’m not going to shut up about this data collection effort until we give away all of the gift cards or we run out of data collection dates.  Yes, it’s that important.


Make it a good day,


Mark

A positivity distraction

As you slog your way through the snow and the grading and the (hypothetical) curriculum reconstruction this week, I hope you will take a moment to wire your brain for positive thoughts.  I don’t have much to say today – I’m feeling a little beat down myself – but I watched this TED talk last night and it was just the tidbit I needed to get my head straight.


Make it a good day (sometimes I really am talking to myself),


Mark

Understanding the “new” learning outcomes of a college education

At the Augustana Board Retreat a couple of weeks ago, Allen Bertsche (Director of International Programs) and I hosted a discussion with members of the Board, administrators, and faculty about a fundamental shift that has occurred in higher education over the past several decades.  While a college education used to be primarily about acquiring content knowledge, today the most important outcomes of a college education are a broad range of complex cognitive, psychosocial, and interpersonal skills and dispositions. These outcomes transcend a student’s major choice and are applicable in every facet of life.  In short, although content is still necessary, it is no longer sufficient.  In recent years Augustana has identified outcomes like critical thinking, collaborative leadership, and information literacy as fundamental skills that every student should develop before graduation.


During our conversation at the Board Retreat, Kent Barnds (Vice President of Enrollment, Communications, and Planning) pointed out that, while some of us might grasp the ramifications of this shift, prospective students and their families are still firmly entrenched in the belief that content acquisition is the primary goal of a college education.  In their minds, a college’s value is directly related to the amount of content knowledge it can deliver to its students.  As many of you know, when prospective students and families visit, they often ask about opportunities to obtain multiple majors while participating in a host of experiences.  By comparison, they rarely ask about the exact process by which we develop critical thinking or cross-cultural skills in students.


I think it would do us some good to consider what the current calendar discussion looks like to those who believe that the cost of tuition primarily buys access to content knowledge.  The students quoted in the most recent Observer about the 4-1-4 calendar discussion exemplify this perspective.  Their rationale for keeping the trimester system is clearly about maximizing content acquisition – more total courses required for graduation equals more total content acquired, and shorter trimesters allow students to minimize the time spent acquiring content that they don’t need, don’t like, or don’t want.  With tuition and fees set well over $40,000 next year, it’s not hard to see their concerns.


Now please don’t misunderstand me – I am much more interested in what we do within the calendar we choose than whether we continue on trimesters or move to semesters.  Nor am I suggesting that student opinions should or should not influence this discussion.  But if we’re trying to have a conversation about student learning – with or without students – and we don’t share a common definition of the term, then we are likely doomed to talk right past each other and miss a real opportunity to meaningfully improve what we do regardless of whether or not the faculty votes to alter the calendar.  On the other hand, if we can more clearly spell out for students, parents, (and ourselves) what we mean when we talk about “student learning” and why our focus on complex skills and outcomes is better suited to prepare students for life after graduation, not only might it temper the tensions that seem to be bubbling up among our students, it might also allow us to help them more intentionally calibrate the relationship between their current activities and obligations and their post-graduate aspirations.


So no matter where you sit on the semester/trimester debate, and no matter what you think about the shift in emphasis from content acquisition to the development of skills and outcomes, I would respectfully suggest that we need to better understand the presumptions that undergird each assertion in the context of the calendar discussion. In my humble opinion, as Desi used to say to Lucy, we still “got some ‘splainin’ to do.”


Make it a good day,


Mark

What does Finland have to teach us about assessment?

Welcome back!  During the break I hope you were able to enjoy some time with loved ones and (or) recharge your intellectual batteries.  I will admit that I spent part of the break embracing my inner geek, reading about the amazing improvements in Finland’s student achievement scores since they instituted a new national education policy in the 1970s.  Previously, Finland had been decidedly average.  Today, their scores are consistently among the best in the world – particularly in reading and science.  As a result, the U.S. and the U.K. – countries with substantially lower scores – are very interested in finding out what might be driving this educational success story.


The point of my column this week isn’t to delve into the details of Finland’s success, but rather to consider one aspect of Finland’s approach that I think is particularly applicable to our current conversation about educational outcomes and improved student learning.  Here are a few links if you are interested in reading more about Finland’s educational success or about the exam that is used to measure student achievement.  Instead of recounting all of that, I’d like to focus on a single statement:


“Accountability is something that is left when responsibility has been subtracted.”


If you’ve already read the Atlantic Monthly article I hyperlinked above, you know that this statement is attributed to Pasi Sahlberg, an individual deeply involved in Finland’s educational transformation.  The principle to which he refers asserts that unless an educational endeavor is intentionally designed to produce a specific outcome, it is difficult to argue that gains on that outcome are entirely attributable to the educational endeavor in question.  However, as society has increasingly demanded that education prove its worth, it is deceptively easy to start by testing for an educational effect without ever asking whether the experience is really designed to best produce it.  To make matters worse, we then mandate improvement without addressing the systemic dysfunction that created the problem in the first place.


My sense of Augustana’s evolution regarding student learning outcomes is that we are in the midst of a process to make explicit what we have long valued implicitly.  We are trying to be clearer about what we want our students to learn, be more transparent about those efforts, and maximize the educational quality we provide.  In this context, Sahlberg’s comment on accountability and responsibility struck me in two ways . . .


First, the process of identifying outcomes and designing an educational program to meet those outcomes requires us to take full responsibility for the design of the program we are delivering.  When something is repeatedly greater than the sum of its parts, it isn’t just a happy accident.  Designing a successful educational program is more than just making pieces fit together – it’s constructing the pieces so that they fit together.


Second, just because an outcome idea sounds like it might be valid doesn’t make it so.  But in the absence of anything else, accountability measures that mean very little can all too easily become drivers of institutional policy – sometimes to the detriment of student learning.  However, the inverse can also be true.  An institution that takes full responsibility for the design of its educational programs and the system within which they exist will likely far exceed typical accountability standards because such an institution can make coherent, empirically-grounded, and compelling arguments for why it does what it does; arguments that quickly evaporate when a pre-packaged accountability measure is hurriedly slapped onto the back end of an educational process.


So I’d like to close by suggesting that we consider the statement quoted above in this way: If we take explicit responsibility for student learning and the design of the educational programs we provide, demonstrating our accountability – to our students or our accreditors – will be relatively easy by comparison.


Make it a good day,


Mark

The dynamics of tracking diversity

Over the past few weeks I’ve been digging into an interesting conundrum regarding the gathering and reporting of “diversity” data – the percentage of Augustana students who do not identify as white or Caucasian.  What emerges is a great example of the frustratingly tangled web we weave when older paradigms of race/ethnicity classification get tied up in the limitations of survey measurement and then run headlong into the world in which we actually live and work.  To fully play out the metaphor (Sir Walter Scott’s famous line, “Oh what a tangled web we weave, when first we practice to deceive”), if we don’t understand the complexities of this issue, I would suggest that in the end we might well be the ones who get duped.

For decades, questions about race or ethnicity on college applications reflected an “all or nothing” conception of race/ethnic identification.  The response options included the usual suspects – Black/African-American, White/Caucasian, Hispanic, Asian/Pacific-Islander, and Native American, and sometimes a final category of “other” – with respondents only allowed to select one category.  More recently, an option simply called “two or more races” was added to account for individuals who might identify with multiple race/ethnic categories, suggesting something about our level of (dis)interest in the complexities of multi-race/ethnic heritage.

In 2007, the Department of Education required colleges to adopt a two-part question in gathering race/ethnicity data.  The DOE gave colleges several years to adopt this new system, which we implemented for the incoming class of 2010.  The first question asks whether or not the respondent identifies as Hispanic/Latino.  The second question asks respondents to indicate all of the other race/ethnicity categories that might also apply.  The response choices are American Indian, Asian, Black/African-American, Native Hawaiian/Pacific-Islander, and White, with parenthetical expansions of each category to more clearly define their intended scope.

While this change added some nuance to reporting race/ethnicity, it perpetuated some of the old problems while introducing some new ones as well.  First, the new DOE regulations only addressed incoming student data; they didn’t obligate institutions to convert previous student data to the new configuration – creating a 3-4 year period in which there was no clear way to determine a “diversity” profile.  Second, the terminology used in the new questions actually invited the possibility that individuals would classify themselves differently than they would have previously.  Third, since Augustana (like virtually every other college) receives prospective student data from many different sources that do not necessarily comport with the new two-part question, it increased the possibility of conflicting self-reported race/ethnicity data.  Similarly, the added complexity of the two-part question increased the likelihood that even the slightest variation in internal data gathering could exacerbate the extent of inconsistent responses.  Finally, over the past decade students have increasingly skipped race/ethnicity questions, as older paradigms of racial/ethnic identification have seemed increasingly less relevant to them.  This means that the effort to acquire more nuanced data could actually accelerate the increasing percentage of students who skip these questions altogether.

As a result of the new federal rules, we currently have race/ethnicity data for two groups of students (freshmen/sophomores who entered after the new rules were implemented and juniors/seniors who entered under the old rules) that reflect two different conceptions of race/ethnicity.  Although we developed a crosswalk in an attempt to create uniformity in the data, for each additional wrinkle that we resolve another one appears. Thus, we admittedly have more confidence in the “diversity” numbers that we reported this year (2011) than those we reported last year (2010).  Moreover, the change in questions has set up a domino effect across many colleges where, depending upon how an institution tried to deal with these changes, an individual institution could come up with vastly different “diversity” numbers, each supported by a reasonable analytic argument (See this recent article in Inside Higher Ed).
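To make the crosswalk problem concrete, here is a toy sketch of what such a mapping looks like and why it can never be fully faithful.  The category labels follow the old and new questions described above, but the mapping itself is a hypothetical illustration, not our actual crosswalk:

```python
# A hypothetical illustration of an old-to-new crosswalk - not our actual
# mapping.  Each old single-choice category becomes a (hispanic, races) pair
# under the new two-part question; the comments mark where information the
# old question never captured has to be guessed or left unknown.
OLD_TO_NEW = {
    "Black/African-American": (False, {"Black/African-American"}),
    "White/Caucasian":        (False, {"White"}),
    "Hispanic":               (True,  set()),  # the race portion was never asked
    "Native American":        (False, {"American Indian"}),
    # The old category merged two new ones, so any single assignment is a guess:
    "Asian/Pacific-Islander": (False, {"Asian"}),
}

def convert(old_category):
    # "Other" and skipped responses have no defensible new-scheme equivalent.
    return OLD_TO_NEW.get(old_category, (False, set()))
```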

Recognizing the enormity of these problems, IPEDS only requires that the percentage of students we report as “race unknown” be less than 100% during the transition years (in effect allowing institutions to convert all prior student race/ethnicity data to the unknown category).  And let’s not even get into the issues of actual counting.  For example, the new rule says that someone who indicates “yes” to the Hispanic/Latino question and selects “Asian” on the race question must be counted as Hispanic, but someone who indicates “no” to the Hispanic/Latino question and selects both “Asian” and “African-American” on the race question must be counted as multi-racial.  Anyone need an aspirin yet?
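If it helps to see that counting rule laid out, here is a minimal sketch of the logic (the function name and labels are my own shorthand; actual IPEDS reporting involves more categories and edge cases):

```python
# A minimal sketch of the counting rule described above; hypothetical
# function name, with the same two examples from the paragraph.
def reporting_category(is_hispanic, races):
    if is_hispanic:
        return "Hispanic/Latino"         # a "yes" trumps any race selection
    if len(races) > 1:
        return "Two or more races"
    if len(races) == 1:
        return next(iter(races))
    return "Race/ethnicity unknown"      # skipped both questions

print(reporting_category(True, {"Asian"}))                              # Hispanic/Latino
print(reporting_category(False, {"Asian", "Black/African-American"}))  # Two or more races
```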

But we do ourselves substantial harm if we get hung up on a quest for precision.  In reality, the problem originates not in the numbers themselves but in the relative value we place on those numbers and the decisions we make or the money we spend as a result.  Interestingly, if you ask our current students, they will tell you that they conceive of diversity in very different ways than those of us who came of age several decades ago (or more).  Increasingly, for example, socio-economic class is becoming a powerful marker of difference, and a growing body of research has made it even more apparent that the intersection of socio-economic class and race/ethnicity produces vastly different effects across diverse student types.

I am in no way suggesting that we should no longer care about race or ethnicity.  On the contrary, I am suggesting that if our conception of “diversity” is static and naively simplistic, we are less likely to recognize the emergence of powerfully influential dimensions on which difference also exists and opportunities are also shaped.  Thus, we put ourselves at substantial risk of treating our students not as they are, but as we have categorized them.  Worse still, we risk spending precious time and energy arguing over what we perceive to be the “right” number under the assumption that those numbers were objectively derived, when it is painfully clear that they are not.

Thanks for indulging me this week.  Next week will be short and sweet – I promise.

Make it a good day,

Mark

What is so delicious about ambiguity?

Welcome to a new academic year!  For those of you who have been away, it’s great to see you again.  For those of you who are new to Augustana, I look forward to getting to know you and supporting your educational efforts.

As many of you know by now, my primary focus at Augustana is to facilitate the continuous process of improving student learning. This means that, like most of us, I can be found wearing many different hats.  Sometimes you will find me designing and implementing on-campus studies to gather data that we need.  Sometimes you might find me discussing the findings from our data and the implications of those findings.  And other times you might find me collaborating with a wide range of individuals or groups to design policies, programs, or professional development to ensure that our efforts to improve student learning actually bear fruit.

So why am I also writing a weekly column in the Faculty Newsletter?

During the last year, I was struck by the degree to which many of us actually don’t know about the things we do really well.  As a result, it appeared to me that we often missed opportunities to take maximal advantage of these successes.  Sometimes these “successes” were happening only in isolated instances.  Other times these educational strengths were occurring repeatedly but without much connection to other similar and often complementary successes.  And sometimes these successes were totally coincidental.

With this in mind, I hope this column will help us learn a lot more about ourselves, our students, the relative impact of our educational efforts, and the ways that we might improve in this collective endeavor. Sometimes I’ll share a nugget of data or information that struck me as interesting.  Sometimes I’ll pose a question that I think might help us be more intentional in what we do. Finally, from time to time I will take a specific belief or claim about some aspect of student learning at Augustana College and put it to the test.

I am not promising truth, justice, or beauty.  However, I am promising that I will try to inspire you to think more deeply about our students, our efforts, and our collective investment in this work.  I also hope that these columns will inspire conversations that lead to additional questions and, ideally, to a deeper understanding of the work that we do.

So – what would you like to know?  What ‘myth’ or claim would you like to see me put to the test (be careful what you wish for!)?  And what should I call this column?

Make it a good day,

Mark