Evidently, the stuff we did worked!

So what’s this about the National Survey of Student Engagement coming to Augustana to study us?  Essentially, as Ellen described in her cover article, although many institutions have been using NSSE for over a decade to assess student experiences, far fewer have 1) used the findings from their data to construct broad changes in educational programming, and 2) documented improvement in educational quality on subsequent NSSE surveys as a result of those changes.  I thought it might be helpful to explore the areas where we have seen substantive change in NSSE scores and note the program and policy changes that we think contributed to it.

First, some background.  NSSE is a four-page survey that asks a series of demographic and college experience questions.  The experience questions are organized under five broad concepts that NSSE calls “benchmarks,” each representing something that we know makes a difference in the quality of a student’s education.  They are:

Level of Academic Challenge

Active and Collaborative Learning

Student-Faculty Interaction

Enriching Educational Experiences

Supportive Campus Environment

Augustana began using NSSE in 2002 and administered it again in 2003, 2006, and 2009.  In addition, NSSE is included in the Wabash National Study, so our freshman class of 2008 also completed NSSE.  We will administer NSSE again in the spring of 2012.

Since NSSE revised the way it calculates the benchmark scores in 2005, we can’t compare our 2003 benchmark scores with later ones, but we can see some impressive changes between 2006 and 2009.  Comparing freshman scores, we made statistically significant gains on the Student-Faculty Interaction and Supportive Campus Environment benchmarks.  Comparing senior scores, we made a statistically significant gain on Enriching Educational Experiences.

There are also changes on individual items that suggest some improvement in educational quality.  Changes in NSSE scores between 2003 and 2009 suggest that, on average, we have increased the extent to which our freshmen make class presentations, prepare two or more drafts of a paper before turning it in, and work with classmates outside of class to prepare assignments.  In addition, our 2009 freshmen believe that Augustana is making a larger contribution to their growth in speaking clearly and effectively, working effectively with others, understanding people of other racial and ethnic backgrounds, and contributing to the welfare of the community.

A close examination of our NSSE data indicates that, although we still can do more to improve the educational quality of an Augustana experience, 1) we do a lot of things well, and 2) we have made numerous substantive improvements during the past decade.

If you’d like to know more about any aspect of our NSSE data, please don’t hesitate to ask.

Make it a good day.

Mark

Plugging in to the process of learning

To those of you who were able to take some time away last week – welcome back!  And to those of you who never left – thanks for sticking around!

Although we are well into the conversation about improving student learning through curricular reform, the other half of the educational effectiveness equation remains a bit of a conundrum.  This “other half” to which I refer is our students’ motivation to plug in to the process of learning and growing.  We all know some students who don’t seem to care much at all about their education.  In addition, we all know of students who are tremendously motivated to get good grades but seem to care very little about learning.  So what do we know about our students’ motivation to learn and succeed in college?

Augustana has not traditionally collected much data that fully addresses student motivation.  Sometimes we have presumed that increased student satisfaction will lead to increased motivation.  Yet we know that motivation is more complicated – that there are different types of motivation that can produce vastly different results.  As a liberal arts college, we actually want our students to develop an intrinsic motivation to learn and be less concerned about extrinsically measured achievement.

Although the Wabash National Study didn’t really flesh out the idea of motivation, it did include two items that we can use to dig into the way that student motivation might change in college.  Both items are presented as agree/disagree statements with a response scale of 1 (strongly disagree) to 5 (strongly agree).  The first item states, “Getting the best grades I can is very important to me.”  The second item states, “I am willing to work hard in a course to learn the material even if it won’t lead to a higher grade.”

Our current seniors participated in the Wabash National Study as freshmen in 2008 and 2009.  So the data we have comes from the beginning and the end of their first year at Augustana.  Here are their average responses to both questions.


                                                   Fall of 2008   Spring of 2009
Importance of Grades                                   4.36            4.28
Willingness to Work Hard Regardless of Grades *        3.88            3.69

* The fall-to-spring change is statistically significant.

There are two observations I would like you to consider.  First, in both the fall and spring our students appear to rate getting the best grades they can as more important than working hard to learn regardless of that effort’s effect on grades.  Second, between fall and spring the change in the importance of getting the best possible grades is not large enough to be significant, suggesting that this value does not change on average.  However, the change in willingness to work hard regardless of grades between fall and spring is significant – suggesting that intrinsic motivation to learn may have actually dropped during the first year.
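For the methodologically curious, a change like this is typically tested with a paired-samples t-test, since the same students answered the item in both the fall and the spring.  Here is a minimal sketch in Python; the responses below are invented for illustration and are not the actual Wabash data.

```python
# Minimal sketch of a paired-samples t-test for a fall-to-spring change.
# Each position in the two lists is the same (hypothetical) student.
from scipy import stats

fall   = [4, 5, 4, 3, 4, 5, 4, 4, 3, 5]   # 1-5 responses in the fall
spring = [4, 4, 3, 3, 4, 4, 4, 3, 3, 4]   # the same students in the spring

t_stat, p_value = stats.ttest_rel(fall, spring)
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")

# By convention, p < .05 is the threshold behind the word "significant"
# in the paragraph above.
```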

I am going to revisit this topic in a couple of weeks because it cuts to the core of our efforts to effectively prepare students to succeed in their personal and professional lives.  Moreover, it appears that there are certain types of educational experiences that may increase intrinsic motivation.   How is that for a teaser?!

Make it a good day,

Mark

Complicating the “over-involvement” complaint

Last week I promised that my next column would be short and sweet.  And in the context of the time crunch that inevitably wells up during week ten of the term, I am all about short and sweet.  So consider this data nugget as you bounce from commitment to commitment this week.

I think many of us seem to accept the campus narrative that our students are too busy.  If we were portioning out blame for this phenomenon, I suspect that a large proportion of it would fall on co-curricular involvement.  This claim isn’t entirely without merit.  We have legitimate evidence from our National Survey of Student Engagement (NSSE) data that our students spend more hours per week involved in co-curricular activities than students at comparable institutions.

But rather than debunk this narrative, I’d like to complicate it.  Because I am not sure the real question should be whether or not our students are over-involved or under-involved in co-curricular activities.  Instead, maybe the question should be whether each of our students is involved in the right amount and array of experiences that best fit their developmental needs – a very different question than whether we should be managing our student body to an “average” amount of co-curricular involvement.

In addition to NSSE, our participation in the Wabash National Study (WNS) also provides insight into our first-year students’ behaviors and allows us to compare our first-year students to those at a number of comparable small liberal arts colleges.  While the WNS utilized the identical NSSE question regarding co-curricular involvement, it also asked students to report the number of student organizations in which they participated during the first year.  I wanted to know whether or not our high rank in co-curricular involvement would be replicated in our students’ organizational memberships.  Essentially, I wanted to know more about the nature of our students’ involvement.

Interestingly, the average number of organizations in which our first-year students participated ended up in the middle of the pack and did not mirror our high rank in amount of co-curricular involvement.  This suggests to me that our students are not bouncing around from meeting to meeting (as the “myth” might imply) without having the time to meaningfully immerse themselves in these experiences.

That is not to say that this contradicts the claim outright.  Instead, I would suggest that this finding might provide some insight into the nature of purpose – or lack of purpose – that drives our students’ co-curricular involvement.  I’ll let you chew on the implications of this possibility for our own work in between meetings, grading, teaching, and every other little thing you have to do this week.

Make it a good day – and a good end of the fall term!

Mark

The dynamics of tracking diversity

Over the past few weeks I’ve been digging into an interesting conundrum regarding the gathering and reporting of “diversity” data – the percentage of Augustana students who do not identify as white or Caucasian.  What emerges is a great example of the frustratingly tangled web we weave when older paradigms of race/ethnicity classification get tied up in the limitations of survey measurement and then run headlong into the world in which we actually live and work.  To fully play out the metaphor (Sir Walter Scott’s famous text, “Oh what a tangled web we weave, when first we practice to deceive”), if we don’t understand the complexities of this issue, I would suggest that in the end we might well be the ones who get duped.

For decades, questions about race or ethnicity on college applications reflected an “all or nothing” conception of race/ethnic identification.  The response options included the usual suspects – Black/African-American, White/Caucasian, Hispanic, Asian/Pacific-Islander, and Native American, and sometimes a final category of “other” – with respondents only allowed to select one category.  More recently, an option simply called “two or more races” was added to account for individuals who might identify with multiple race/ethnic categories, suggesting something about our level of (dis)interest in the complexities of multi-race/ethnic heritage.

In 2007, the Department of Education required colleges to adopt a two-part question in gathering race/ethnicity data.  The DOE gave colleges several years to adopt this new system, which we implemented for the incoming class of 2010.  The first question asks whether or not the respondent identifies as Hispanic/Latino.  The second question asks respondents to indicate all of the other race/ethnicity categories that might also apply.  The response choices are American Indian, Asian, Black/African-American, Native Hawaiian/Pacific-Islander, and White, with parenthetical expansions of each category to more clearly define their intended scope.

While this change added some nuance to reporting race/ethnicity, it perpetuated some of the old problems while introducing some new ones as well.  First, the new DOE regulations only addressed incoming student data; they didn’t obligate institutions to convert previous student data to the new configuration – creating a 3-4 year period where there was no clear way to determine a “diversity” profile.  Second, the terminology used in the new questions actually invited the possibility that individuals would classify themselves differently than they would have previously.  Third, since Augustana (like virtually every other college) receives prospective student data from many different sources that do not necessarily comport with the new two-part question, it increased the possibility of conflicting self-reported race/ethnicity data.  Similarly, the added complexity of the two-part question increased the likelihood that even the slightest variation in internal data gathering could exacerbate the extent of inconsistent responses.  Finally, over the past decade students have increasingly skipped race/ethnicity questions, as older paradigms of racial/ethnic identification have seemed increasingly less relevant to them.  This means that the effort to acquire more nuanced data could actually accelerate the increasing percentage of students who skip these questions altogether.

As a result of the new federal rules, we currently have race/ethnicity data for two groups of students (freshmen/sophomores who entered after the new rules were implemented and juniors/seniors who entered under the old rules) that reflect two different conceptions of race/ethnicity.  Although we developed a crosswalk in an attempt to create uniformity in the data, for each additional wrinkle that we resolve, another one appears.  Thus, we admittedly have more confidence in the “diversity” numbers that we reported this year (2011) than those we reported last year (2010).  Moreover, the change in questions has set up a domino effect across many colleges where, depending upon how an institution tried to deal with these changes, an individual institution could come up with vastly different “diversity” numbers, each supported by a reasonable analytic argument (see this recent article in Inside Higher Ed).

Recognizing the enormity of these problems, IPEDS only requires that the percentage of students we report as “race unknown” be less than 100% during the transition years (in effect allowing institutions to convert all prior student race/ethnicity data to the unknown category).  And let’s not even get into the issues of actual counting.  For example, the new rule says that someone who indicates “yes” to the Hispanic/Latino question and selects “Asian” on the race question must be counted as Hispanic, but someone who indicates “no” to the Hispanic/Latino question and selects both “Asian” and “African-American” on the race question must be counted as multi-racial.  Anyone need an aspirin yet?
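If it helps to see that counting rule laid out explicitly, here is a small sketch of the logic.  The function name and category labels are my own illustrative choices rather than an official IPEDS specification.

```python
# Sketch of the two-part counting rule described above: any "yes" on the
# Hispanic/Latino question trumps the race selections, and two or more
# race selections (absent Hispanic) roll up into a single category.
def ipeds_category(is_hispanic: bool, races: list[str]) -> str:
    if is_hispanic:
        return "Hispanic/Latino"
    if len(races) == 0:
        return "Race unknown"
    if len(races) >= 2:
        return "Two or more races"
    return races[0]

print(ipeds_category(True, ["Asian"]))  # -> Hispanic/Latino
print(ipeds_category(False, ["Asian", "Black/African-American"]))  # -> Two or more races
```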

But we do ourselves substantial harm if we get hung up on a quest for precision.  In reality, the problem originates not in the numbers themselves but in the relative value we place on those numbers and the decisions we make or the money we spend as a result.  Interestingly, if you ask our current students, they will tell you that they conceive of diversity in very different ways than those of us who came of age several decades ago (or more).  Increasingly, for example, socio-economic class is becoming a powerful marker of difference, and a growing body of research has made it even more apparent that the intersection of socio-economic class and race/ethnicity produces vastly different effects across diverse student types.

I am in no way suggesting that we should no longer care about race or ethnicity.  On the contrary, I am suggesting that if our conception of “diversity” is static and naively simplistic, we are less likely to recognize the emergence of powerfully influential dimensions on which difference also exists and opportunities are also shaped.  Thus, we put ourselves at substantial risk of treating our students not as they are, but as we have categorized them.  Worse still, we risk spending precious time and energy arguing over what we perceive to be the “right” number under the assumption that those numbers were objectively derived, when it is painfully clear that they are not.

Thanks for indulging me this week.  Next week will be short and sweet – I promise.

Make it a good day,

Mark

Student relationships with faculty and administrators

One of Augustana College’s fundamental values is the importance of high quality relationships between students and faculty.  Indeed, this is one of the most compelling arguments for attending a small liberal arts college.  I would argue that this assertion has always included the quality of relationships between students and everyone who might impact our students’ educational experience – not just those traditionally considered to be faculty.  In fact, students don’t differentiate between those of us who are “faculty” and those of us who are “staff” or “administrators” in the same way that we do.  Thus, all of us should fully expect to have a significant impact on our students’ education – no matter the context of those interactions.

There are two questions on the National Survey of Student Engagement (NSSE) that ask about the quality of students’ relationships with 1) faculty members and 2) administrative personnel and offices.  The response options are somewhat unusual – instead of a simple 5-item agree/disagree scale, the available responses sit on a spectrum from 1 to 7, with 1 representing “unfriendly, unhelpful, sense of alienation” and 7 representing “friendly, supportive, sense of belonging.”  This spectrum may require a little more thought, but the idea is that these choices more fully reflect the concept underlying the question.

In comparison with all of the schools that use NSSE (most of which are much larger than Augustana), both our freshmen and our seniors report a significantly higher quality of relationships with faculty.  Moreover, the quality of relationships appears to improve between the freshman and senior years.

Quality of Relationships w/ Faculty      Augustana   Overall NSSE Average
     Freshmen                               5.59            5.21
     Seniors                                5.90            5.42

However, something happens when our students are asked the same question about their quality of relationships with administrative personnel and offices.  While our freshmen scores are significantly higher than the overall NSSE average, our senior scores are significantly lower than the comparable overall NSSE average.

Quality of Relationships w/ Administrative Personnel and Offices
                                         Augustana   Overall NSSE Average
     Freshmen                               4.99            4.74
     Seniors                                4.32            4.60

Regardless of the possible reasons for lower ratings of relationships with administrators generally (one such reason might be that administrators tend to dispense fines more often than faculty, for example), it seems reasonable to strive for and expect an increase in the quality of those relationships that mirrors the change in student/faculty relationships.  Unfortunately, this does not appear to be the case.  While the reasons for these opposing trends are probably complex, I would humbly suggest that they are not beyond our control.

Usually I end my column with a simple “Make it a good day.”  This time, I’d like to end with something slightly different.

In your own way – big or small, make it a good day for a student.

Mark

Dusting under the retention furniture

A couple of weeks ago I highlighted our success in maintaining a historically high 1st-2nd year retention rate (87%) despite a substantial increase in the size of the freshman class between 2009 and 2010.  Although this is something that we should indeed celebrate, we need to be willing to look inside these numbers and explore whether our overall rate accurately reflects the behaviors of various student types.  This week I want to dig a little deeper and explore those variations.  To be precise, I’ll call it persistence when talking about the students’ decision and retention when talking about the number that we track as a proxy measure for student experience and success in the first year.

As you might expect, our retention rate isn’t the same for all student types.  Pre-college academic ability plays a big role.  Students with an ACT score of 23 or above persisted at 89%, while those with an ACT score below 23 persisted at 81%.  Likewise, there are two demographic characteristics – race/ethnicity and gender – that historically influence retention rates across the country as well as at Augustana.  In the 2010 cohort, white students persisted at 89%, while multicultural students persisted at 81%.  In addition, female students persisted at 89%, while male students persisted at 85%.

Before talking about what these differential rates might mean, it is important to remember that pre-college ability, race/ethnicity, and gender don’t exist independently – an individual student is necessarily categorized along all three dimensions.  So the question also becomes whether or not there is a subset of categories that, when combined, produce a starkly lower likelihood of persistence to the second year.

Not surprisingly, we have such a troublesome combination at Augustana.  Of the three categories listed above, it is the combination of being male and multicultural that produces the lowest retention rate of any combination – 77%.  Interestingly, the effect of being male also influences the retention rate of students with higher incoming ACT scores.  Females with ACT scores of 23 or above persisted at 92%, while males with similar ACT scores persisted at 86%.  By comparison, the retention rate of students with lower ACT scores (below 23) did not vary significantly by gender.
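For those who want to see the mechanics, here is a minimal sketch of how combined-category retention rates like these can be computed from a student-level file.  The column names and the tiny sample are hypothetical.

```python
# Sketch: retention rates for combinations of student characteristics.
# The mean of a 0/1 "retained" flag within each group is the retention rate.
import pandas as pd

students = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "M", "F"],
    "act_band": ["23+", "23+", "<23", "<23", "23+", "23+"],
    "retained": [1, 1, 0, 1, 0, 1],   # 1 = returned for the second year
})

rates = students.groupby(["gender", "act_band"])["retained"].mean()
print(rates)
# Adding race/ethnicity to the groupby list would produce the three-way
# combinations discussed above.
```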

Although these differences might suggest an array of programmatic interventions, solving a retention “problem” can be a bit like the old carnival game Whack-a-Mole.  A singular focus on one subset of students can become a frustratingly reactionary exercise over time. Yet, understanding the nature of these students’ challenges can be a critical first step in addressing retention issues. What common issues might be at the core of these differences across gender and race/ethnicity?  To help us take a first step in thinking about the issues that our male students face, I’d encourage you to attend Dr. Tracy Davis’ presentation at Friday Conversation this week.  I’ll talk more about the challenges facing multicultural students in a later column.

Make it a good day,

Mark

Taking advantage of a culture of student involvement

Some of you have expressed concerns about our students’ high level of involvement in extra-curricular activities and its potential effect on academic engagement as well as mental health.  There seems to be plenty of evidence to support this concern – some of the most poignant being the writings of our students themselves in the Observer and on the Augie Blog.

Yet there is one lesser known data point regarding our students’ social behaviors that reflects well on a cultural norm within the student community.  In addition, I’d like to suggest that we might learn something from our students’ social behaviors that could be a powerful lever in deepening academic engagement.

A question on the National Survey of Student Engagement asks students how often in the last year they “attended an art exhibit, play, dance, music, theatre, or other performance.”  The response options are 1 = Never, 2 = Sometimes, 3 = Often, 4 = Very Often.  I won’t focus on the absolute numbers here, mostly because there is no consensus ‘window’ within which we want our students’ responses to sit.  Rather, I want to present this data in the context of a comparison with other comparable private liberal arts colleges and what that might suggest.

It turns out that both our freshmen and seniors attend these kinds of performances substantially more often than students at comparable private liberal arts colleges.  In fact, the difference in average response between our students (freshmen – 2.70, seniors – 2.60) and students at comparable colleges (freshmen – 2.44, seniors – 2.36) is highly statistically significant, meaning that this difference is likely attributable to something happening here.
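As a companion to the paired-samples sketch earlier, a comparison like this one, where the two groups are different students, would typically use an independent-samples t-test.  Again, the values below are illustrative only, not the actual NSSE data.

```python
# Sketch of an independent-samples t-test comparing two groups of
# different students (illustrative values only).
from scipy import stats

augustana = [3, 3, 2, 3, 4, 2, 3, 3]
peers     = [2, 3, 2, 2, 3, 2, 2, 3]

t_stat, p_value = stats.ttest_ind(augustana, peers)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```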

This suggests to me that there is something in the student culture that encourages and values supporting the arts.  Our students place a relatively high value on attending and supporting friends involved in those performances.  This finding corroborates independently gathered anecdotal evidence from the Office of Student Services.

What does this have to do with deepening academic engagement?  If students are in the habit of supporting their friends’ co-curricular accomplishments, I would suggest that this apparent cultural norm provides a real opportunity to increase the relative value of and interest in students’ academic accomplishments.  Public presentations of student scholarship can serve as a spark to inspire informal conversations among students about intellectual ideas and their application to the world around them.  Although traditionally associated with the fine arts, there is plenty of evidence to suggest that placing a greater value on public presentations or exhibits of scholarship can deepen academic engagement outside of class across many disciplines.

By the way, our NSSE data also indicates that our students don’t talk about ideas from readings or classes with others outside of class at the same rate as students at comparable institutions. Hmmm . . .

Make it a good day,

Mark

Getting a handle on academic rigor

Like most colleges and universities, we believe that we should establish an educationally rigorous environment.  Unlike a lot of colleges and universities, we have a healthy body of quantitative and qualitative evidence from which we can explore 1) whether this is in fact the case, and 2) whether appropriate academic rigor is experienced by students across the board or only in certain situations.

As you may know, for well over a decade we have been using various assessment mechanisms to measure student learning and academic rigor.  It seems that our efforts to increase our educational effectiveness and academic rigor have borne some fruit – especially on our National Survey of Student Engagement (NSSE) Academic Challenge scores among freshmen.  Those numbers have jumped markedly since we first used NSSE back in 2003.

Yet one of the hallmarks of a college that is truly focused on continual improvement is a perpetual inclination to ask questions, to compare findings with what we might already suspect or know by different means, to face what we uncover, and to take action.

With that in mind, and in light of my ongoing effort to help us all embrace a formative spirit, I’d like to present two data points from the 2006 and 2009 NSSE surveys that seem especially worthy of further consideration.

In both 2006 and 2009, Augie students were asked how often they “come to class without completing readings or assignments.”  (The response options are 1 = Never, 2 = Sometimes, 3 = Often, and 4 = Very Often.)  I would propose that the one thing we would not want to see is seniors coming to class unprepared more often than freshmen.  Unfortunately, that is exactly what the data show.  And if you were wondering, the difference between the average freshman and senior responses is large enough to be significant.

NSSE Year   Freshmen   Seniors
2006          1.91       2.13
2009          1.84       2.20

To add insult to injury, the change from the freshman to the senior year looks worse when comparing our 2009 data to other small liberal arts colleges.  In this context, our freshmen actually come to class prepared significantly more often than freshmen at comparable institutions.  However, our seniors come to class prepared significantly less often than seniors at comparable institutions.

Does this match what we already suspect?  Are we ok with it?  How might we address this issue?

Make it a good day,

Mark

A reason to be proud of our efforts to improve student success

There was a time in higher education when an institution’s attrition rate was a point of pride and a supreme marker of academic rigor.  More recently, it sometimes seems as if retention and graduation rates have actually surpassed educational growth in ranking institutional quality.  In reality, these two markers are clearly intertwined.  Although educational growth is paramount, such growth seems a bit empty if an institution is also hemorrhaging students somewhere between matriculation and graduation.


So if a college could actually demonstrate substantial educational growth while simultaneously increasing retention rates, the faculty and staff at that institution would have a real reason to take great pride in their collective accomplishments.


It is becoming clear that Augustana College is one such institution.  We now have both direct and indirect evidence of educational growth.  Using the Collegiate Learning Assessment (CLA) to measure the growth of critical thinking skills between 2005 and 2009, our students improved by 28 percentile points – double the average of students participating in the Academically Adrift study.  In addition, during the last 10 years our NSSE Academic Challenge Benchmark scores have improved significantly among first year students – an accomplishment that was recently highlighted by the National Institute for Learning Outcomes Assessment.


By itself, this is well worth a hearty pat on the back.  However, it looks even better in the context of our increases in retaining students to the second year.  For a long time, Augustana’s first-to-second year retention rate hovered around 85%.  Three years ago, we retained the freshman class of 2008 at 82%, sparking some concern as the size of our incoming 2009 class also dropped.  After an increased focus on supporting struggling first-year students, our retention rate among the 2009 class jumped to about 87%.  But we weren’t sure if that was an anomaly or a true reflection of our efforts.


Now that we have locked in our 10th day enrollment data this fall, we are able to look at our first-to-second year retention rate for the incoming class of 2010.  Some of us had wondered aloud whether our retention rate with this class would take a hit, presupposing that it’s a lot easier to retain students from a class of 616 (the freshman class of 2009) than students from a class of 753 (the 2010 freshman class).


However, our retention rate for the 2010 freshman class remained steady at 87.5%.  Thus, despite an increase in class size of 137 students, we maintained Augustana’s highest retention rate on record.  Your efforts to help students succeed in the first year are bearing fruit.  We have a lot of reasons to be very proud of our community.


Make it a good day,


Mark

Is grade inflation just a bunch of hot air?

I suspect that almost everyone has heard the “it was better in the good ol’ days” claim – if we haven’t used it ourselves from time to time.  I would suggest that we have an academic version of this claim at Augustana.  The claim argues that there has been substantial grade inflation over the past several decades.  Apparently, this claim has carried some weight over the years, because we have created multiple mechanisms to prevent grade inflation – or at least stem the tide.

Luckily this is a claim we can test.  But before looking at the data, let’s make sure we share an understanding of this claim.  An assertion of grade inflation boils down to two points.

1) Grades have been creeping upward.

2) This is because faculty have shifted expectations for performance downward.

Grade inflation doesn’t just make an observation about changes in GPA; it also attributes the change to the failure of colleagues to hold the line on academic rigor.  In the context of a small college, it’s sort of a less physically damaging version of a circular firing squad.

So, testing this claim turns into two questions.  First, have grades gone up over time? And second, can we conclusively attribute this change in GPA to faculty grading practices?

Have grades gone up over time?    

Yes.

From about 1991 to the present, the average GPA of each class went up by about .15 of a grade point, whether you look at each entering cohort’s end-of-year grades from the first year to the fourth year or you look at each subsequent cohort’s end-of-year grades from 1991 to 2010.

Can we conclusively attribute this change in GPA to faculty grading practices?

No.

First, the increase in average GPA for each cohort from first to fourth year is predominantly explained by the departure – voluntary or otherwise – of students who struggled academically.  If you slice that group off the bottom of a class at the end of each year, and you recognize the likely influence of maturity and motivation for the students who remain, we would fully expect that the average GPA of a particular cohort of students would go up over time.

Second, from 1991 until 2010 the average ACT score of our incoming students improved by a full point – from 24.5 to 25.5.  Since the ACT itself remained constant during that period, we can test whether the increase in GPA might be explained by the increase in students’ incoming academic ability.  It turns out that this increase in average test score explains virtually all of the change in GPA over the twenty-year period in question.
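Here is a rough sketch of that kind of check.  The data frame is fabricated for illustration; the logic is simply that if the upward trend in GPA shrinks toward zero once incoming ACT is added to the model, rising student ability is the better explanation.

```python
# Sketch: does the year-over-year GPA trend survive a control for
# incoming ACT scores? (Fabricated cohort-level data for illustration.)
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "gpa":  [3.05, 3.08, 3.12, 3.15, 3.18, 3.20],
    "year": [1991, 1995, 1999, 2003, 2007, 2010],
    "act":  [24.5, 24.7, 24.9, 25.1, 25.3, 25.5],
})

trend_only = smf.ols("gpa ~ year", data=df).fit()
with_act   = smf.ols("gpa ~ year + act", data=df).fit()

print("year coefficient, trend only:    ", round(trend_only.params["year"], 4))
print("year coefficient, ACT controlled:", round(with_act.params["year"], 4))
```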

The Verdict:

Faculty grading behaviors may well have changed over time – maybe for worse, maybe for better.  But we have little evidence to suggest a relationship between those behaviors and an increase in overall GPA.  In addition, we have better evidence to suggest that a change in our students’ pre-college academic ability might have influenced this change in GPA.  Interestingly, if faculty grading behaviors had changed in the way that the grade inflation claim suggests, ACT scores would likely not have been as powerful a predictor as they turned out to be.

So the next time you hear someone mention the good ol’ days in the context of academic standards and grades, you might remind them that there are other – and maybe better – explanations for this phenomenon.  You might also remind them of the relative trade-offs of a circular firing squad.


Make it a good day,

Mark