Wrestling with Creativity as a Student Learning Outcome

Before the holiday break, I described the evidence from our overall IDEA scores that our students’ Progress on Relevant Objectives (PRO) scores had increased substantially in the past year.  It is clear from looking at our data that this didn’t happen by accident, and I hope you have taken a moment or two to take pride in your colleagues.  Admittedly, it is gratifying to see all of the effort we have put toward maximizing our use of the new IDEA course feedback forms pay off.  So in the spirit of that effort, I want to highlight one other piece of data from our most recent overall report – the low proportion of courses that selected “Developing Creative Capacities” as an essential or important learning objective – and to advocate for more emphasis on that objective.

Of the 12 different learning objectives on the IDEA faculty forms, “Developing Creative Capacities” was selected by only 16% of the courses offered during the fall term – the least common selection (by comparison, 69% of courses indicated “gaining factual knowledge” as an essential or important learning objective).  As you might expect, “developing creative capacities” was chosen almost exclusively by fine arts courses, seemingly reflecting a traditional conception of creative capacities as something reserved for artistic expression.

Yet, as a liberal arts college, it seems that “developing creative capacities” should represent a central element of our educational goals and the culmination of a liberal arts education.  The parenthetical description of “creative capacities” in that objective includes “writing,” “inventing,” and “designing.”  Of course, these skills transcend any specific discipline.  Every time a student tries to make an argument with language, portray a concept visually, solve a problem that doesn’t have a singular solution, or articulate the implications of multiple sources of information on a particular point, their ability to do so hinges on these skills.

Moreover, in the updated version of Bloom’s Taxonomy, “creating” is the highest cognitive domain.  Not unlike synthesizing, creating requires each of the skills listed in the preceding levels of the taxonomy (remembering, understanding, applying, analyzing, and evaluating).  It strikes me that this broadened definition of creating could apply to virtually all senior inquiry projects or other student work expected of a culminating experience.  For a more detailed discussion of creating as a higher-order skill, I’d suggest the IDEA paper that examines Objective #6.

So how do we infuse “developing creative capacities” more fully into our students’ educational experience?  I regularly hear faculty talk about the difficulty that many students exhibit when trying to synthesize disparate ideas and create new knowledge.  It’s complicated work, and I’ll bet that if we were to look back on even the best of our own undergraduate work, we would likely cringe in most cases at what we might have thought at the time was the cutting edge of genius.  Thankfully, this objective doesn’t say, “Mastering Creative Capacities.”  This learning outcome is developmental and will likely be something that most students miss at least as often as they hit.  But three ideas come to mind that I’d like to propose for your consideration . . .

  1. Students need practice.  This starts with simple experiences connecting ideas and deriving insights from those connections.  Students will surely be less capable of successfully wielding this key skill when it is needed if they haven’t explicitly been asked to develop it through previous courses and experiences.
  2. Students won’t take risks if they don’t trust those who ask them to do it.  Developing creative capacities requires learning from all manner of failure.  Students won’t take the kinds of risk necessary to make real progress if there isn’t space for them to fall down and get back up – and a professor who will help them to their feet.
  3. Eventually, you just have to jump.  If nothing else, we are experts at paralysis by analysis.  Although there is always a critical mass of information or content knowledge that students must know before they can begin to effectively connect ideas or form new ones, we sometimes get caught trying to cover more material at the expense of developing thinking skills in students.  Often, it is the act of trying to integrate and connect ideas without having all of the pieces that teaches students the importance of seeking new knowledge and the awareness that there might be details critical to the development of an idea that we don’t yet know.

As you look at the role of your courses in the collective scheme of our students’ growth, I hope you’ll consider the possibility of adding this learning objective.  You may find that you are already doing many of the things in your course that make this happen.  You may find that you need to take a few risks yourself in the design of your course.  Whatever you decide, I hope you will consider the ways that you help students develop creative capacities as complex, higher-order thinking skills.  For our students to succeed in the world they will inherit, I would suggest that our collective future depends on the degree to which we develop their creative capacities to solve problems that we have not yet even seen.

Make it a good day,

Mark

Reveling in our IDEA results: A gift we gave to our students and each other

We spend a lot of time talking about the things that we would like to do better.  It’s a natural disposition for educators – continually looking for ways to perfect what is, at its core, a fundamentally imperfect enterprise.  As long as we keep in mind that our efforts to perfect are really about improvement and not about literal perfection, this mindset can cultivate a healthy environment for demonstrably increasing our educational effectiveness.

However – and I admit that I’m probably a repeat offender here – I don’t think we spend enough time reveling in our success.  Often we seem to jump from brushfire to brushfire – sometimes almost frantically so.  Though this might come from a genuinely honorable sense of urgency, I think it tends to make our work more exhausting than gratifying.  Conversely, taking the time to examine and celebrate our successes does two things.  First, it bolsters our confidence in our ability to identify a problem, analyze its cause(s), and implement a successful solution – a confidence that is vital to a culture of perpetual improvement.  Second, it helps us more naturally approach problems through a problem-solving lens.  There is a lot of evidence to show that examining the nature of a successful effort can be more beneficial than simply understanding every painful detail of how we screwed up.

So this last week before Christmas break, I want to celebrate one such success.  If I could hang mistletoe over the campus, I’d likely start doling out kisses (the chocolate kind, of course).  In the four terms since we implemented the IDEA Center course feedback process, you have significantly increased the degree to which students report learning in their courses.  Between fall of 2011 and fall of 2012, the average Progress on Relevant Objectives (PRO) score for a course increased from a 3.8 to a 4.1.  In addition, on 10 of the 12 individual IDEA learning objectives, students in Augustana courses during the fall of 2012 (last term) reported higher average learning progress scores than students from the overall IDEA database.  Moreover, the average learning gains from our own courses last term were higher than our overall Augustana average from the previous three terms on 10 of the 12 IDEA learning objectives.

Looking deeper into the data, the evidence continues to support the conclusion that our faculty have steadily improved their teaching.  Over four terms, faculty have reduced the number of objectives they select and narrowed the spread (i.e., variance – for those of you jonesing for statistical parlance) in progress across the individual objectives chosen for a given course.  This narrowing likely indicates an increasing clarity of educational intent on the part of our faculty.  Moreover, this reduction in selected learning objectives has not come at the expense of higher order thinking objectives that might be considered more difficult to teach.  On the contrary, the selection of individual learning objectives remains similarly distributed – and equally effective – across surface and deep learning objectives.  In addition, students’ responses to the questions regarding “excellent teacher” and “excellent course” went up from 4.2 to 4.3 and from 3.9 to 4.0, respectively.  Finally, when asked whether “as a result of this course, I have more positive feelings about this field of study,” students’ average responses increased from 3.9 to 4.0.

Are there some reasons to challenge my conclusions?  Maybe.  While last year’s participation in the IDEA course feedback process was mandated for all faculty in an effort to develop institutional norms, only about 75% of courses participated this fall.  So it’s possible that the courses that didn’t participate in the fall would have pulled down our overall averages.  Or maybe our faculty have just learned how to manipulate the system, and the increases in PRO scores, individual learning objectives, and teaching methods and styles are nothing more than our improved ability to game the system.

To both of these counter-arguments, in the spirit of the holiday I say (respectfully) . . . humbug.  First of all, although older faculty are traditionally least likely to employ course evaluations (as was the case this fall), I think it is highly unlikely that these faculty are also our worst instructors.  On the contrary, many of them are master teachers who found long ago that they needed to develop other methods of gathering course feedback that matched their own approach to teaching.  Moreover, even if there were some courses taught by senior faculty in which students would have reported lesser degrees of learning, there were courses with lower PRO scores taught by faculty from all classifications.  Second, while there might be some potential for gaming the IDEA system, what I have seen some people refer to as “gaming” has actually been nothing but intentionally designed teaching.  If a faculty member decides to select objective 11, “learning to analyze and critically evaluate ideas, arguments, and points of view,” and then tells the students that this is a focus of the course, asks students to develop this skill through a series of assignments, discussions, projects, or papers, and then explains to students when and how they were making progress on this objective . . . that all sounds to me like plain ol’ good teaching.  So if that is gaming the system or teaching to the test, then (in the words of every kid who has ever played football in the street), “GAME ON!”

Are there other data points in last term’s IDEA aggregate report that we ought to examine and seek to improve?  Sure.  But let’s have that conversation later – maybe in January.  Right now, let’s revel in the knowledge that we now have evidence to show the fruits of our labor to improve our teaching.  You made the commitment to adopt the IDEA course feedback system knowing that it might require us to step up our game.  It did, and you responded in kind.  Actually, you didn’t just meet the challenge – you rose up and proved yourselves to be better than advertised.  So congratulations.  You thoroughly deserve it.  Merry Christmas.

Make it a great day,

Mark

Assessing our current process of math (mis)placement

Nobody likes placement tests.  For incoming students, they revive the specter of being evaluated on material they have already forgotten.  For our Summer Connections staff, they become the perpetual reason that students don’t complete the registration process properly.  And for faculty, placement tests seem to miss a growing proportion of students who quickly appear to be in over their heads in class even though the tests “placed” those students in it.

Over the last few weeks, based on questions asked by the math faculty and some very thoughtful conversations and suggestions on their part, we have been taking a hard look at our math placement process.  We compared it with alternative methods of placement and tracked students over each of the last four years to see how they did in the math courses they took.  We’ve found all kinds of interesting tidbits that have spurred some important solutions that I think will help our students in the years to come.  But one piece of data stood out to me, and I want to share it because it concerns (a) the difference between our incoming students’ perception of college and the way that we would like them to engage it, and (b) the ramifications of that difference.

Before launching into this post, however, I have to give a massive shout out to Kimberly Dyer, the backbone of my office, for her work on this project.  She has done all of the data organizing and analysis.  If I’m being honest, this week I’m just riding the coattails of greatness.

Although our current math placement protocol is set up to place students across a range of math courses, a large proportion of students end up placing into either pre-calculus or calculus I.  Students with a math placement score of 20 or below are assigned to pre-calculus and students with a 25 or above are assigned to calculus I or higher.  But for the students who score between 21 and 24, we tell them to consult with advisers and others to determine which math course – pre-calculus or calculus I – is the best fit for them.

All else being equal, I think it’s safe to say that on average we would expect students who earn a 21 or a 22 to enroll more often in pre-calculus and students who earn a 23 or 24 to enroll more often in calculus I.  Unfortunately . . . .

| Math Placement Score | Enrolled in Pre-Calculus | Enrolled in Calculus I |
| --- | --- | --- |
| 21 | 18 | 25 |
| 22 | 18 | 34 |
| 23 | 14 | 27 |
| 24 | 12 | 40 |

As you can see in the table above, for all of the placement scores in this ‘tweener group, more students chose to enroll in calculus I than in pre-calculus.  Yet maybe that’s not a problem – perhaps all of these students are able to handle calculus I.  The table below shows the subsequent grades for students at each placement score who chose to take calculus instead of pre-calculus.

| Math Placement Score | Earned a B- or better | Earned a D, F, or withdrew |
| --- | --- | --- |
| 21 | 32% | 36% |
| 22 | 21% | 41% |
| 23 | 37% | 37% |
| 24 | 55% | 20% |

Apparently, students who earn scores that would cause most of us to think twice before registering for calculus I are more often taking calculus I anyway.  And the failure rates lay out in pretty stark terms the consequences of that decision.  Clearly, there must be other issues at play that would convince an incoming freshman to choose the more advanced math course when their placement score suggests caution.
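
For anyone who wants to poke at these numbers, here is a minimal sketch (in Python, with pandas assumed as the tool) that re-derives the pattern from the two tables above – the counts and rates are copied straight from those tables:

```python
# A minimal sketch re-deriving the pattern in the two tables above.
# Counts and D/F/W rates are copied from those tables; pandas is assumed.
import pandas as pd

tweeners = pd.DataFrame({
    "score": [21, 22, 23, 24],
    "pre_calc": [18, 18, 14, 12],          # enrolled in pre-calculus
    "calc_1": [25, 34, 27, 40],            # enrolled in calculus I
    "dfw_rate": [0.36, 0.41, 0.37, 0.20],  # D, F, or withdrew in calculus I
})

# At every score in the band, a majority chose the more advanced course.
tweeners["pct_chose_calc"] = tweeners["calc_1"] / (
    tweeners["pre_calc"] + tweeners["calc_1"]
)
print(tweeners[["score", "pct_chose_calc", "dfw_rate"]])
```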

The folks who help with registration at Summer Connections often describe the pressures that students and their parents bring to this issue.  Many students are worried about graduating in four years and therefore want to take the highest level of courses they can take.  Others think that because they took pre-calculus in high school, they should automatically take calculus I – regardless of their assessed degree of preparation as measured by the placement test.  Moreover, some may not want to face the fact that although they may have passed pre-calculus in high school, they didn’t learn as much as they would like to think.

In my mind, this disconnect exemplifies the degree to which incoming students and families don’t grasp the difference between going to college to acquire content knowledge and going to college to develop skills and dispositions.  In their mind, content acquisition is isolated to a given course.  Content learned or not learned in one course is not likely to affect the ability to learn content in another course.  However, we know that content is continually changing, and in today’s world it is practically ubiquitous.  While it is necessary, it is not sufficient, and is only a part of our ultimate educational goal.  For us, content is the mechanism by, or the context within which, we develop skills and dispositions.  Then the content helps us re-situate those skills and dispositions in settings akin to the environments in which students will be expected to excel after college.

This misunderstanding of the point of college – and more specifically the educational outcomes we intend for students who attend Augustana – has major implications for students.  Kids who perceive college to be about content acquisition see it as a sort of intellectual pie-eating contest, where it makes complete sense to bite off more than you can chew to get what you can and gobble your way to the finish line regardless of whether or not you happen to throw up along the way or stir up a nightmare of indigestion at the end.  By contrast, if students understand that college is about developing skills and dispositions, I think that they might be more likely to appreciate the chance to start at the beginning that is appropriate for them, savoring each experience like a slow-cooked, seven-course meal because they know that the culmination of college is made exponentially better by the particular ordering and integrating of the flavors that have come before.

Although we definitely need to emphasize this message from the moment of students’ first interaction with Augustana, convincing students AND their parents to understand and embrace this conceptual turn is not the sole responsibility of admissions or Summer Connections or even LSFY.  For students to grasp the implications of this shift, they need to hear it from all of us repeatedly.  Otherwise, there are too many external pressures that will influence students to engage in academic behaviors that will ultimately harm their development.  We may well need to eliminate the ‘tweener category of math placement scores, but this is not the only situation where that monster rears its ugly head.  However, if we are vigilant, I think we will help many more students deliberately and intentionally suck the marrow out of their four years at Augustana instead of treating those years like an eating contest.

Make it a good day,

Mark

Finding the ideal balance between faculty and administrators

During the term break, the Chronicle of Higher Education reviewed a research paper about the impact of the administrator-faculty ratio on institutional costs.  The researchers were seeking evidence to test the long-standing hypothesis that the rising costs in higher education can be attributed to an ever-growing administrator class.  The paper’s authors found that the ideal ratio of faculty to administrators at large research institutions was 3:1 and that institutions with a lower ratio (fewer faculty per administrator) tend to be more expensive.

Even though we are a small liberal arts college and not the type of institution on which this study focused, I wondered what our ratio might look like.  I am genuinely curious about the relationship between in-class educators (faculty) and out-of-class educators (student affairs staff) because we often emphasize our belief in the holistic educational value of a residential college experience.  In addition, since some have expressed concern about a perceived increase in administrative positions, I thought I’d run our numbers and see what turns up.

Last year, Augustana employed 184 full time, tenured or tenure-track faculty and 65 administrators.  Thus, the ratio of faculty to administrators was 2.8 to 1.  If we were to include faculty FTE and administrator FTE (which means we include all part-time folks as one-third of a full time employee and add them to the equation), the ratio becomes 3.35 to 1.  By comparison, in 2003 (the earliest year in which this data was reported to IPEDS), our full time, tenured or tenure-track faculty (145) to administrator (38) ratio was 3.82 to 1.  When using FTE numbers, that ratio rises to 4.29 to 1.

What should we make of this?  On its face, it appears that we’ve suffered from the same disease that has infected many larger institutions.  Over about ten years, the balance of faculty to administrators has shifted even though we have increased the size of the faculty considerably.  But if you consider these changes in the context of our students (something that seems to me to be a rather important consideration), the results seem to paint a different picture.  For even though our ratio of faculty to administrators might have shifted, our ratios of students to faculty and students to administrators have moved in similar directions over the same period, with the student/faculty ratio going from about 14:1 to just over 11:1 and our student/administrator ratio going from about 51:1 to close to 39:1.  Proportionally, both ratios drop by about 20%.
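
For transparency’s sake, here is a sketch of the arithmetic behind those ratios.  Note that the FTE versions weight each part-time employee at one-third of a full-time employee; the part-time counts aren’t reproduced here, so the sketch covers only the headcount figures:

```python
# The headcount arithmetic behind the ratios above.  The FTE versions
# add part-timers at one-third each; those counts are omitted here.
faculty_2012, admins_2012 = 184, 65
faculty_2003, admins_2003 = 145, 38

print(round(faculty_2012 / admins_2012, 2))  # 2.83 faculty per administrator
print(round(faculty_2003 / admins_2003, 2))  # 3.82 faculty per administrator

# Student ratios moved in parallel over the same period (rounded endpoints):
print(round((11 - 14) / 14, 2))   # student/faculty: roughly a 21% drop
print(round((39 - 51) / 51, 2))   # student/administrator: roughly a 24% drop
```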

For me, these numbers inspire two questions that I think are worth considering.  First, although the absolute number of administrators includes a wide variety of campus offices, a substantial proportion of “administrators” work in student affairs.  And there seems to be some disparity between the nature of the educational relationship that we find acceptable between students and in-class educators (faculty) and between students and out-of-class educators (those administrators who work in student affairs).  There’s a lot to sort out here (and I certainly don’t have it all pegged), but this disparity doesn’t seem to match up with the extent to which we believe that important student learning and development happens outside of the classroom.  Now I am not arguing that the student/administrator ratio should approach 11:1.  Admittedly, I have no idea what the ideal student/faculty ratio or student/administrator ratio should be (although, like a lot of things, distilling that relationship down to one ratio is probably our first big mistake). Nonetheless, I suspect we would all benefit from a deeper understanding of the way in which our student affairs professionals impact our students’ development.  As someone who spends most of my time in the world of academic affairs, I wonder whether my own efforts to support this aspect of the student learning experience have not matched the degree to which we believe it is important.  Although I talk the talk, I’m not sure I’ve fully walked the walk.

Second, examining the optimal ratio between faculty and administrators doesn’t seem to have much to do with student learning.  I fear that posing this ratio without a sense of the way in which we collaboratively contribute to student learning just breathes life into an administrator vs. faculty meme that tends to pit one against the other.  If we start with a belief that there is an “other side,” and we presume the other side to be the opposition before we even begin a conversation, we are dead in the water.

Our students need us to conceptualize their education in the same way that they experience it – as one comprehensive endeavor.  We – faculty, administrators, admissions staff, departmental secretaries, food service staff, grounds crew, Board of Trustees – are all in this together.  And from my chair, I can’t believe how lucky I am to be one of your teammates.

Make it a good day,

Mark

Who are the students who said that no one recommended the CEC to them?

Last week I wrote about the way that our seniors’ responses to the question “Who recommended the Community Engagement Center to you?” might reflect the values that we communicate through our actions even if they aren’t necessarily the values that we believe we have embraced.  At the end of my post I promised to dig deeper into our senior survey to better understand the students who said that no one recommended the Community Engagement Center to them.  During the past several days my students and I have been peeling the data back in all kinds of ways.  Based on prior findings on students’ experiences with major advising and its connection to post-graduate planning, we thought that we might be able to identify some pattern in the data that would give us some big answers.  So we laid out a couple of hypotheses to test for the students who said no one recommended the CEC to them.  We thought:

  • These students would be more likely to intend to go to graduate school
  • These students would be more likely to major in a humanities discipline
  • These students would be less involved in co-curricular activities
  • These students would be generally less engaged in their college experience

Here is what we found.

First, these students weren’t more likely to be headed to graduate school.  This hypothesis was based on an earlier finding that students who intended to go to grad school were more likely to work with a professor to guide them through the application process while students planning to get a full time job would be referred to Career Services.  But our students were distributed across the post-graduate plan options of grad school and work just like everyone else.  So this first hypothesis was a total bust.

Genius IR Shop : 0  —  Data : 1

Second, these students were not significantly more likely to major in humanities disciplines.  This hypothesis evolved from some earlier conversations with students that suggested less of a natural connection between the career center and the more “pure” liberal arts disciplines.  In the end, while some of the humanities disciplines did seem to appear slightly more often than most pre-professional degrees, there were plenty of students from the natural and physical sciences who also said no one recommended the CEC to them.  So even though there was an initial glimmer of possibility, the reality is that this second hypothesis was also a flop.

(Aspiring to be but clearly not yet) Genius IR Shop : 0  —  Data : 2

Third, we couldn’t find much in our data to support our assertion that these students were less involved in co-curricular activities.  Our originating hypothesis was based on the idea that students who are less social might not end up in situations where the CEC would be recommended as often.  Although these students found slightly fewer student groups that matched their interests, they were still involved in at least one student organization or club as often as other students.  Despite looking at this data through the most friendly lens, we just couldn’t say that this group of students’ responses was a function of their lack of co-curricular involvement.

(Nothing but a bumbling shadow of a) Genius IR Shop : 0  —  Data : 3

At this point in the story, you ought to suspect some stress on my part.  It’s not all that much fun to be wrong repeatedly.  Furthermore, our last hypothesis about a general passivity is qualitatively more difficult to test than simply looking at differences across one particular question.  Nonetheless, my minions and I soldiered on.  We looked across all of the questions on the senior survey, identifying significant differences and looking for trends.  Thankfully, we found a host of evidence to support our last hunch.

We found that the students who said no one recommended the CEC to them were less plugged in to their college experience across the board.  Their responses to every one of the advising questions were significantly lower, their responses to many of the co-curricular experiences questions were significantly lower, and their responses to a number of curricular experience questions both in the major and across the curriculum were significantly lower.

(Salvaging the crumbling remains of my) Genius IR Shop : 1  —  Data : 3

What jumps out to me as a result of this exercise is the importance of our informal educational efforts.  There will always be a subset of students who simply, magically do the things we hope they would do, take the initiative to ask the next question, and get themselves ahead of the curve simply because they are the cream of our crop.  However, there will always be a subset of students who stumble out of the gate, drift passively into the fog, and avoid choices simply because they are . . . human.  Because we have many cream of the crop types, it’s all too easy to miss those who suffer from being, well, normal.  So to me, this is why we must take the initiative to ask students if they’ve done the things that might seem completely obvious to us, like recommending to them that they should check out the services at the CEC early in their college career – and tell them exactly why this can matter in the broader scheme of their life’s journey.  If we want all of our students – no matter if they are already perfectly formed adults or if they are bumbling, stumbling, grumbling prepubescents masquerading as undergraduates on the cusp of adulthood – to wring every developmental drop out of their college learning experience, then we have to take on a proactive role to ensure that no one gets left out in the cold, especially those who are more susceptible to floating off with the current du jour.

Remember, this study isn’t about who did and did not use the CEC.  The question we examined asks “who recommended the CEC to you.”  We asked the question this way specifically to give us feedback on the nature of the experience we are delivering to our students – not just to find out what our students did.  And as it turns out, the degree to which we are proactive educators may be one of the most crucial ways in which we might purposefully guide our more passive students.  Not rocket science?  Maybe.  Worth remembering as we bustle through our own madcap world?  Absolutely.

Make it a good day,

Mark

Recommending Students to the Community Engagement Center

Sometimes I worry that I tend to look at our student data through an overly quantitative lens.  I’ll look for significant predictors of specific outcomes or statistically significant differences between two groups.  And as trained, I instinctively take the steps necessary to avoid the most common statistical stumbling blocks such as claiming significance when there is none or mistaking correlation for causation.  But there are times when this propensity to immediately dive deep into the data means that I miss a critical point that sits in plain view, screaming at the big-nosed, bearded face looming over it, “Hey, you idiot!  I’m right here!”

One such moment came recently while I was revisiting the simple distribution of seniors’ responses to the question, “Who recommended the CEC (Community Engagement Center) to you?”  Students could select as many options as might apply in their case: faculty within my major(s), faculty outside my major(s), my major adviser, my first year adviser, residential life staff, student activities staff, my parents, another student, other administrators, and finally, no one recommended the CEC to me.

As I stared at the percentages under each response option, I began to think that this question might be the type that holds within it an array of discoveries.  First, the distribution of responses appeared to reflect a set of values that we communicate to students about (1) the role of the CEC on campus and (2) the way in which we see our educational efforts as a process of preparing students for life after Augustana.  Second, since the CEC often functions as a student gateway to all sorts of other important educational experiences, I began to wonder if students who indicate that no one recommended the CEC to them might also score lower on a host of other experiences that either might follow from an initial interaction with the CEC or might suggest a broader degree of disengagement.

So here is the question followed by the distribution of students’ responses:

Who recommended the CEC (Community Engagement Center) to you? Check as many as might apply.

  • Faculty within my major(s) – 41.5%
  • My major adviser – 28.1%
  • No one recommended the CEC to me – 23.4%
  • Another student – 21.4%
  • Faculty outside my major(s) – 17%
  • Other administrator – 14%
  • My first year adviser – 11.4%
  • My parents – 9.6%
  • Student Activities staff – 5.2%
  • Residential Life staff – 1.6%

These numbers alone tell us something pretty interesting.  Clearly, recommendations to the CEC tend to come out of students’ academic experience in their major.  First, this suggests that these recommendations probably come later in one’s college career – junior or senior year (sophomore year at the earliest).  Further, these recommendations rarely come from the co-curricular side of the student experience.  Thus, it appears that in general we conceive of the role of the CEC as either (a) a means of resolving an absence of post-graduate career purpose (students in their later years who still don’t seem to know what they want to do after college or students in the midst of searching for a career plan B), or (b) a support service to help students bundle the totality of their college experience in preparation for the job or grad school search.  Either way, the role we see for the CEC seems more retroactive than proactive.  It doesn’t appear that we have generally thought of the CEC as a student’s compass with which they might plot out – from the moment they arrive on campus – their college experience in a way that allows them to move forward with intentionality.  Nor do we appear to have thought much about linking our students’ co-curricular experiences – one of Augustana’s true, albeit often under-appreciated, strengths – with the role of the CEC.  All of this doesn’t seem to comport with our belief that a liberal arts college experience is holistic, developmental, and fully integrated; one that starts, from the very beginning, with the end in mind, and one that believes the whole must be greater than the sum of the parts.

Now there may be lots of lengthy explanations for this particular distribution of responses; some of them might even be entirely legitimate.  But it doesn’t change the nature of the values that we appear to be expressing – or not expressing – as portrayed through student-reported experiences.  In addition, 23.4% of our seniors indicated that no one recommended the CEC to them.  Given the array of services that originate out of the CEC, I’d suggest that we would like that number to be much lower than effectively one-quarter of a graduating class.

Admittedly, there were some interesting anomalies in the data and caveats that we should consider.  A few students indicated that no one recommended the CEC to them AND indicated that another student recommended the CEC to them.  And it was during this cohort of students’ career at Augustana that the CVR (Center for Vocational Reflection) merged with a variety of other services including Career Services to create the CEC – making it possible that some students might not have considered their earlier recommendations to the CVR when responding to this question.  But even in the presence of these caveats, we should be willing to ask ourselves whether our students’ experience mirrors the values that we purport to hold.

The other aspect of this particular question that I find interesting is the degree to which the difference in responses to this question (no one recommended the CEC to me vs. someone recommended the CEC to me) might signal statistically significant differences on many other questions in the senior survey.  Now I’m not claiming that there is a direct relationship between this question and all of the others on which student responses also differed.  However, it seems to me highly possible that, like many other situations in life where one unique opportunity correlates with or begets a series of other opportunities that ultimately separates a person from the pack, interaction with the CEC may indeed open up pathways and ways of thinking about the college experience in the same way that color changes the fundamental nature of black and white film.

It turns out that students who said no one recommended the CEC to them differed significantly (in a statistical sense) on many items on the senior survey that involve the advising experience, the broader curricular experience, and the co-curricular experience.  Next week I’ll talk more about what we might learn from this array of differences.

Make it a good day,

Mark

Hiding under the “average” blanket

Higher ed folks often toss around numbers that supposedly describe the quality of a given college or university.  But a funny thing happens on the road to an “average” score.  Although it might approximate everyone, it rarely describes anyone in particular.  So unless a college hires an Institutional Psychic to predict the individual path of each new student (The Nostradamus Endowed Chair of Student Success?), metrics like an average retention rate or a student-faculty ratio don’t tell us as much about a place as we might like – or want – to think.

But this doesn’t mean that the data is useless.  In fact, we can learn a lot about ourselves by looking for differences between subsets of students on a variety of such metrics.  For example, an overall retention rate could – and often does – mask stark differences in persistence between higher and lower ability students, higher and lower income students, or men and women.  Identifying the nature of those differences could point us toward the solutions that would help us improve what we do.

Over the last several years we’ve increasingly employed this approach to squeeze more useful information out of our student experience data.  Many of you have already seen the way that the experiences of your majors might differ from other Augustana students in your senior survey departmental reports.  Taking the same approach that we use to better understand student retention (dividing students by gender, race/ethnicity, socioeconomic status, and academic preparation/incoming ACT score) reveals a layer of nuance that I believe deepens our understanding of the Augustana experience across diverse student types.  It also helps us use evidence to think about how we might engage specific types of students in specific moments to more carefully mitigate these differences.

As an aside, the differences that we spend most time considering are those that cross a threshold of statistical significance – meaning that there is less than a 5% chance that a difference of the observed size would occur by coincidence alone (the test we used is called a t-test).  In this post I am going to focus on differences between low income and middle/upper income students.  Future posts will consider the differences that emerge across a range of variables including gender, race/ethnicity, and academic preparation.
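
For those curious about the mechanics, here is a minimal sketch of that kind of comparison – a two-sample t-test on a single 1-to-5 survey item, split by Pell eligibility.  The data frame and column names below are hypothetical stand-ins for illustration, not our actual survey file:

```python
# A minimal sketch of the group comparison described above: a two-sample
# t-test on one 1-to-5 survey item, split by Pell eligibility.
# All data and column names are hypothetical stand-ins.
import pandas as pd
from scipy import stats

seniors = pd.DataFrame({
    "pell_eligible": [True, True, True, True, False, False, False, False],
    "item_response": [5, 4, 5, 4, 3, 4, 3, 3],  # made-up responses
})

low_income = seniors.loc[seniors["pell_eligible"], "item_response"]
others = seniors.loc[~seniors["pell_eligible"], "item_response"]

t_stat, p_value = stats.ttest_ind(low_income, others)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # flag the item if p < 0.05
```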

Comparing low income students with middle/upper income students presents a great example of the complexities this kind of analysis can provide.  We used Pell Grant eligibility as the marker of lower income – it’s an easy way to categorize financial need, even as it probably over-simplifies the impact of socioeconomic status (SES).  As you look through the items on which differences emerged, think about the possible factors that might produce a statistically significant difference between the two groups’ responses.

Lower income students scored higher than middle/upper income students on several items.

  • My co-curricular activities provided numerous opportunities to interact with students who differed from me in race/ethnicity, sexual orientation, religious beliefs, or social/political values.
  • My out-of-class experiences have helped me connect what I learned in the classroom with real life events
  • In your non-major courses, about how often were you asked to put together ideas or concepts from different courses when completing assignments and during class discussions?
  • My interactions with the librarians helped me improve my approach to researching a topic.
  • Augustana faculty and staff welcomed student input on institutional policy and handbook decisions.
  • When you had questions or concerns about financial issues, were the offices you contacted responsive and helpful?

Conversely, lower income students scored lower than middle/upper income students on one item.

  • My out-of-class experiences have helped me develop a deeper understanding of myself.

The scope of these differences is fascinating.  In some instances low income students’ responses seem comparatively more positive than those of the rest of the student body.  While this might suggest that some of our efforts are indeed providing a compensatory impact, I think these findings highlight the relative lack of pre-college opportunity that lower income students often must overcome (fewer communal resources like libraries or access to technology, less exposure to some of the ideas fundamental to the liberal arts, etc.).  In other cases, these findings might be evidence of the quality of our effort to be sensitive and inclusive to these students (e.g., the relatively more positive interactions for low income students when asking for help with financial issues).  Understanding the nature of these differences could play an important role in shaping our daily interactions with students who may, unbeknownst to us, come from a lower socioeconomic background.

At the same time, sometimes these differences in responses suggest that some of these students’ experiences are less positive.  Given the small numbers of lower income students at Augustana, it seems likely that they would recognize the extent to which they interact across socioeconomic difference more often than middle/upper income students.  In some cases this might contribute to a sense of marginalization for low income students.  Finally, the difference in responses on the question about “out-of-class experiences develop a deeper understanding of myself” is particularly intriguing.  I’d like to know more about the underlying factors that might influence that difference.

Taken together, these findings replicate the results of many recent studies regarding the impact of social class on college students – an impact that extends far beyond financial constraints.  What have you observed in your interactions with low income students?  Are there things you have done that seem to help these students succeed at Augustana?  As you interact with students this week, I hope these findings expand your sense of the ways in which our students experience Augustana differently, and how our sensitivity to these differences can improve our educational impact.

Make it a good day,

Mark

The faculty adviser as a student’s GPS

At Augustana, we have always believed in the importance of faculty advising.  And we have solid evidence to support this approach.  In addition to the many proud stories of students who have blossomed under faculty tutelage, our recent senior survey data and our prior NSSE data both suggest that overall, our students are satisfied with the quality of our advising.  In fact, other NSSE data suggests that faculty ask students important advising questions about career aspirations more often than faculty at similar institutions.

Yet many of us share a gnawing sense that we can, and need to, do better.  Because even though these average scores roughly approximate general satisfaction, the degree of variability that lurks beneath them hides an uncomfortable truth.  For each advising relationship that inspires a student to excel, there are students who gain little substantive benefit from their advising interactions.

One way to strive for improvement with some measure of confidence is to collectively apply a theoretically grounded framework of advising with a formative assessment feedback mechanism to guide our advising conversations and hone them over time.  One theory of advising, often called developmental advising, positions the adviser as a guide to help students both select and weave together a set of curricular and co-curricular experiences to attain important learning outcomes and post-graduate success.  In many ways, it harkens back to the artisan/apprentice model of learning placed in the context of the liberal arts.  In our senior survey, we included a set of questions informed by this framework to assess the degree to which students experience this kind of advising.  The table below reports the average responses to these questions among students who graduated in 2012.

| Question | Mean | St. Dev. |
| --- | --- | --- |
| My adviser genuinely seemed to care about my development as a whole person.* | 4.13 | 1.003 |
| My adviser helped me select courses that best met my educational and personal goals.* | 3.98 | 1.043 |
| How often did your adviser ask you about your career goals and aspirations?** | 3.55 | 1.153 |
| My adviser connected me with other campus resources and opportunities (Student Activities, CEC, the Counseling Center, etc.) that helped me succeed in college.* | 3.44 | 1.075 |
| How often did your adviser ask you to think about the connections between your academic plans, co-curricular activities, and your career or post-graduate plans?** | 3.31 | 1.186 |
| About how often did you talk with your primary major adviser?*** | 3.47 | 1.106 |

The response options are noted below.

*1=strongly disagree, 2=disagree, 3=neutral, 4=agree, 5=strongly agree
**1=never, 2=rarely, 3=sometimes, 4=often, 5=very often
***1=never, 2=less than once per term, 3=1-2 times per term, 4=2-3 times per term, 5=we communicated regularly throughout the term

First, I think it’s useful to consider the way that each question might enhance student learning and development.  In addition, it is important to note the relationship between questions.  It seems that it would be difficult for a student to respond positively to any specific item without responding similarly to the previous item.  Taken together, this set of questions can function as a cumulative checklist that advisers might use to help students construct an intentionally designed college experience in which the whole is more likely to become qualitatively greater than the sum of the parts.

Second, the data we gather from these questions can help us assess the nature of our efforts to cultivate our students’ comprehensive development.  Looking across the set of mean scores reported above, it appears that our students’ advising experiences address optimal course selection more often than they help students connect their own array of disparate experiences to make the most of college and prepare for the next stage of their lives.

Yet, if we were to adopt this conception of advising and utilize future senior survey data to help us assess our progress, I am not sure that continuing to convert each question’s responses to a mean score helps us move toward that goal.  The variation across students, programs, student-faculty relationships, and potential pathways to graduation doesn’t lend itself well to such a narrowly defined snapshot.  Furthermore, suggesting that we just increase an overall mean score smells a lot like simply adding more advising for all students instead of adding the right kind of advising at just the right time for those who need it the most.

A more effective approach might be to focus on reducing the percentage of students who select particular responses to a specific item.  For example, in response to the question, “How often did your adviser ask you to think about the connections between your academic plans, co-curricular activities, and your career or post-graduate plans?” 25% of the 2012 graduating students indicated “never” or “rarely.”  It is entirely possible to reduce that proportion substantially without markedly increasing an average score.  Suppose we were to find a way to ask every student to consider the questions outlined in the senior survey once per term while at the same time focusing less on whether students indicate “often” or “very often.”  We might find that the proportion of students indicating “never” or “rarely” drops considerably while the mean score remains about the same.  More importantly, I would suggest that at the end of the day we might have become more effective (and dare I say more efficient) in making the advising relationship a positive and influential piece of the educational experience without exhausting ourselves in the process.
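
A quick sketch makes the distinction concrete: the same set of responses yields both a mean score and a “never/rarely” share, and the two can move almost independently.  The response values below are invented purely for illustration:

```python
# A sketch of the two summaries discussed above: the mean response versus
# the share of students answering "never" (1) or "rarely" (2).
# The responses below are invented for illustration.
import pandas as pd

before = pd.Series([1, 1, 2, 3, 4, 4, 5, 5, 5, 5])
after = pd.Series([3, 3, 3, 3, 4, 4, 4, 4, 4, 4])  # fewer 1s and 2s

for label, responses in [("before", before), ("after", after)]:
    mean_score = responses.mean()
    never_rarely = (responses <= 2).mean()
    print(f"{label}: mean = {mean_score:.2f}, never/rarely = {never_rarely:.0%}")
```

Here the mean barely moves (3.5 to 3.6) while the “never/rarely” share falls from 30% to zero – exactly the kind of progress a mean score would hide.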

As we embark on our HLC Quality Initiative to improve our advising efforts, I hope we will think carefully about the way that we utilize our data to understand our progress.  Our goal is to improve our educational effectiveness – not just move a number.

Make it a good day,

Mark

Faculty impact on students’ preparation for life after college

In our new senior survey we included two items that serve as useful measures of an Augustana education.

  • “If you could relive your college decision, would you choose Augustana again?”  (five response options from “definitely no” to “definitely yes;” scored 1 to 5)
  • “I am certain that my post-graduate plans are a good fit for who I am right now and where I want my life to go.”  (five response options from “strongly disagree” to “strongly agree;” scored 1 to 5)

While these two items should not be misconstrued (and were not intended to function) as a comprehensive accounting of our success or failure, they do provide some sense of 1) the value our students place on their Augustana experience, and 2) the impact of that experience on their immediate future.  Here are the average scores from our May 2012 graduates.

| Question | # of Responses | Mean | Std. Deviation |
| --- | --- | --- | --- |
| Certainty of fit regarding post-graduate plans | 497 | 4.06 | 0.888 |
| Likelihood of choosing Augustana again | 491 | 4.19 | 0.895 |

Even though there might be an audience for whom these scores are particularly important (prospective students, board members, accrediting organizations, etc.), for those of us engaged in the rough-and-tumble work of educating, these numbers don’t tell us much about how we might improve what we do.  For this purpose we need a different type of analysis that tests the relationship between experiences and outcomes.  Furthermore, we must keep in mind that the implications of whatever we find are inevitably going to be nuanced, complicated, and potentially booby-trapped.  One of the critical and maddening mistakes that folks often make in translating student-derived data into actionable change is that they too easily succumb to the belief that there exists a magic wand – or worse still, that they are the magician.

It turns out that among this most recent group of graduates there were four specific student experiences that increased the likelihood of a student saying that they “definitely” would choose Augustana again and that they “strongly agree” that their post-graduate plans are a good fit.  I’ll spare you the technical details for the sake of the statistophobic (you know who you are!); let me just note that we utilized several statistical procedures to give us a legitimate degree of confidence in the validity of our findings.  Of course, if you really want to know all of the gory details, just shoot me an email or post a comment below.

One of those influential experiences involves the role of faculty in helping students achieve their post-graduate plans. Students were asked to respond to this statement:

“Faculty in this major knew how to help me prepare to achieve my post-graduate plans.”  (five response options from “strongly disagree” to “strongly agree;” scored 1 to 5)

Students’ responses to this question produced a statistically significant positive effect on both outcome questions.  In other words, as students’ belief that “faculty in their major knew how to help them prepare to achieve their post-graduate plans” increased, the likelihood of 1) definitely choosing Augustana again, and 2) being very certain that their post-graduate plans were a good fit went up.
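
For the statistically curious, here is a minimal sketch of the kind of model behind this claim.  The data and column names are hypothetical, and our actual analysis included additional predictors and checks omitted here for brevity:

```python
# A minimal sketch of the kind of regression behind the finding above:
# the "choose Augustana again" item (1-5) regressed on the faculty-
# preparation item (1-5).  All data and names here are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "faculty_prep": [2, 3, 3, 4, 4, 4, 5, 5, 5, 5],  # invented responses
    "choose_again": [3, 3, 4, 3, 4, 5, 4, 5, 5, 5],
})

X = sm.add_constant(df[["faculty_prep"]])
model = sm.OLS(df["choose_again"], X).fit()
print(model.params["faculty_prep"])   # positive slope
print(model.pvalues["faculty_prep"])  # significance of that slope
```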

So how do we translate this into plausible and sustainable improvement?  It would be easy to resort to naive platitudes (“Hey everyone – prepare your students better, ok?”).  Instead, I’d like to suggest three interconnected ideas that might help us deconstruct the nature of faculty influence on students’ post-graduate preparations and maybe identify a few simple ways to enhance your students’ preparation for life after college.

Different Strokes for Different Folks

The lines connecting a given major to a particular career vary and blur considerably across disciplines.  For example, the array of post-graduate plans among humanities majors might seem almost infinite compared to those among business or education majors.  However, we can help students think about connecting their career aspirations to their day-to-day and term-by-term actions so that they are better positioned to seek out the right experiences, ask the right questions, and make the right impression when the opportunity arrives.  Simply asking students to articulate these connections at the very least encourages them to seek their own answers – and, in the process, increases your impact on their successful preparation.

You Don’t Have to Know Everything Yourself

Just as the message can get lost in the delivery, so too can our delivery accentuate the spirit of our message.  When we take the initiative to direct students to other campus offices that can help them think about and prepare to achieve their post-graduate plans (even when students haven’t overtly asked for help), we express the degree to which we want to help our students succeed.  Not only does this effort help students find practical and individualized information, it also increases the likelihood that students will see faculty as go-to resources for other connections that will help them make the most of their college experience.

From Whence Do Your Students Come?

We all have stories of discovering the extent to which students don’t know what they don’t know.  In many cases, this knowledge gap is shaped by prior assumptions that students bring with them to college.  Maybe they’re from a small town.  Maybe they’re a veteran.  Maybe they’ve just transferred from another school.  Knowing these kinds of details about our students’ background can provide important insights into why someone might not pursue participation in an experience that would seem to be ideal for them.  This knowledge can also help us identify the combination of experiences that best fits each student – but we will never be able to help those students connect the dots if we don’t understand the context from whence they come to college.

As you work with your students this week, remember that they don’t just see you as a professor – they see you as a guide.

Make it a good day,

Mark

Does a double major learn more?

One of the arguments raised repeatedly throughout the calendar discussion was the importance we place on multiple majors.  While there were numerous rationales in support of double majors, one of them was that increasing access to a double major reflects our commitment to a fundamental principle of liberal arts education and the emphasis we place on becoming more well-rounded intellectually, culturally, and personally.

Although this argument sounds wonderful, I heard little data to support the core claim that a double major is somehow preferable to a single major or a major and a minor.  This might well be so in terms of employability and flexibility in an uncertain job market.  But do students who double major make larger gains on the educational outcomes of a liberal arts education than those who do not double major?  Does earning a double major somehow produce greater broad-based learning gains?

I examined the Wabash National Study data from the 2006 cohort, restricting my analysis to students at the eleven small liberal arts colleges in that cohort.  I didn’t investigate whether certain combinations of majors were more advantageous than others, primarily because I didn’t hear anyone seriously advocate for one combination over another.  (There does seem to be a second claim floating around – that truly interdisciplinary double majors are somehow better than intra-disciplinary double majors – an assertion we can test if this first analysis holds much water.)

The table below shows nine educational and developmental outcomes of a liberal arts education and whether being a double major correlates with a larger gain between the first year and the fourth year.

| Double Major Status Had No Impact | Double Major Status Had An Impact |
| --- | --- |
| Critical Thinking | Intellectual Curiosity |
| Moral Reasoning | Intercultural Maturity |
| Attitude toward Literacy | |
| Civic Engagement | |
| Academic Motivation | |
| Leadership | |
| Psychological Well Being | |

Based on these findings, it initially appears that double majoring provides some educational benefit, impacting two of the nine outcomes.  However, the size of the effect on intellectual curiosity and intercultural maturity is actually quite small.  Furthermore, in the two cases where an initial significant finding appears, the impact of being a double major vanishes once I introduce student experiences such as diverse interactions (in the test of intercultural maturity) and integrative learning experiences (in the test of intellectual curiosity) into the equations.
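
The nested-model logic is easy to show in miniature.  In the sketch below (all names and data are simulated, not the Wabash data), a double-major flag that merely travels with an experience measure loses its apparent effect once that experience enters the model:

```python
# A sketch of the nested-model logic described above: fit the outcome on
# double-major status alone, then add an experience measure and watch the
# double-major coefficient shrink.  All data here are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
experience = rng.normal(size=n)               # e.g., integrative learning
double_major = (experience + rng.normal(size=n) > 0).astype(int)
gain = 0.5 * experience + rng.normal(size=n)  # outcome gain driven by experience
df = pd.DataFrame({"double_major": double_major,
                   "experience": experience, "gain": gain})

m1 = sm.OLS(df["gain"], sm.add_constant(df[["double_major"]])).fit()
m2 = sm.OLS(df["gain"], sm.add_constant(df[["double_major", "experience"]])).fit()
print(m1.params["double_major"])  # looks meaningful on its own
print(m2.params["double_major"])  # shrinks toward zero with the control
```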

Based on this evidence, it’s hard to make the case that double majoring – by itself – meaningfully improves these learning outcomes.  Again, this doesn’t mean that it couldn’t be beneficial in the very important context of job acquisition.  But it appears that this cow’s sacred status may require a bit more scrutiny before we summarily celebrate our embrace of the double major.

Make it a good day!

Mark