The Educational Benefits of Student Employment

One clear trend among college students during the past several decades is the increasing proportion of students who maintain a job while attending school.  At Augustana, more than half of our students work on campus, while many more hold jobs off campus.  Typically, this phenomenon has been cast as a detriment to the college student experience since – as the argument goes – the obligations of work take away from the time that students might spend involved in co-curricular activities or studying for their courses.  I have sometimes heard folks describe the ideal student job as one where the student can do homework while sitting at a desk.  However, I’d like to suggest that work – especially if it is conceived as an educational experience – can be powerfully beneficial to our students’ development.

A few weeks ago I met with our Student Affairs senior staff to talk about ways that we can use our student data to support their work.  Soon our conversation turned to the possible educational impact of the Community Adviser (CA) position on the students who hold these jobs.  It’s time-intensive work that can sometimes be especially challenging when sorting through the whims and wiles of first year students.  And even though this position might seem to be a hybrid of co-curricular involvement and student employment, the requirements of the position obligate CAs to forgo other opportunities on and off campus.  So we thought it would be useful to test whether or not students who hold CA positions gain some unique educational benefit from the experience.

We chose to compare responses of CAs and non-CAs on one question from the senior survey that asks students to respond to the statement, “My co-curricular involvement helped me develop a better understanding of my leadership skills.”  The response options ranged from “strongly disagree” (1) to “strongly agree” (5).  The CAs’ average response was a 4.63, while the non-CAs’ average response was a 4.26.  Statistically, this difference proved to be significant despite the small number of CAs (20) out of the total number of responses (511).  This suggests that there might indeed be something about the CA position that provides a unique educational benefit for those students.
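For the statistically inclined, here is a minimal sketch of how a comparison like this might be run.  The group sizes and averages come from the survey; the simulated responses and the choice of Welch’s t-test are my own illustrative assumptions, not a record of the actual analysis we ran.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated stand-ins for the 1-5 Likert responses described above:
# 20 CAs averaging ~4.63 and 491 non-CAs averaging ~4.26.  The real
# comparison used the actual senior survey records.
ca = np.clip(np.round(rng.normal(4.63, 0.5, size=20)), 1, 5)
non_ca = np.clip(np.round(rng.normal(4.26, 0.8, size=491)), 1, 5)

# Welch's t-test (no equal-variance assumption) is one reasonable choice
# when the group sizes are as lopsided as 20 vs. 491.
t_stat, p_value = stats.ttest_ind(ca, non_ca, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```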

While this specific finding might not qualify as the most rigorous quantitative analysis, it replicates other research on the educational benefits of student employment.  After examining the impact of work across the 2006 cohort of the Wabash National Study, my colleagues and I found that students who worked made gains on several aspects of leadership skills that non-working students did not (you can read the full study here).  Furthermore, the more hours per week that students worked, the larger the educational gain.  This held true even after we accounted for students’ other co-curricular involvement.

Now I’m not suggesting that co-curricular involvement is somehow frivolous.  There are lots of powerful educational benefits that can come from involvement in a variety of activities.  But these findings suggest that maybe work shouldn’t be considered a detriment to the student experience.  In fact, I would suggest that each of us who oversees student workers has an opportunity to uniquely influence their development in important ways.  We miss that opportunity only if we don’t conceive of the job as a learning experience.  In the same way that we would like to develop our students as autonomous learners, we should hope to develop our student employees as autonomous workers.  That means giving them more than a simple checklist of things to do and instead asking them to help solve problems and contribute to the quality of the working environment.

So I hope that you will take the time to think about your student workers as students, and see your role in overseeing their work as an educational one.

Make it a good day,

Mark

How much could we realistically improve retention?

While we consider a variety of measures to assess our educational effectiveness, we focus on our retention rate (the proportion of full-time first year students who return for a second year) for some pretty crucial reasons.  First, it’s a legitimate proxy for the quality of our educational and socially-inclusive environment.  Second, because we are a tuition-dependent institution, every student we lose represents lost revenue; and there is real truth to the old adage that it costs more to recruit students than it does to retain them.  So every year we calculate our retention rate, hold it up next to the last five or ten years’ worth of numbers, and ask ourselves:

Did we do a good job of retaining students?

Most of the time, we end up telling ourselves that our retention rate falls somewhere between “decent” and “pretty good” – especially considering all of the things we can’t control.  But this conversation always leads us to the next question, one that is substantially more difficult to answer:

What should our retention rate be?

And that is where people in charge start to daydream and folks in the trenches start to cringe.  Because it’s all too common for a small group of folks – or even one folk – to arbitrarily decide on the institution’s goal for 1st-to-2nd year retention without any sense of whether or not that number is a reasonable goal.  And there’s nothing more corrosive to an educational organization’s long-term quality than assigning an unrealistic goal to the people you depend on to accomplish it.  So over the last few months, I’ve been wondering how we could get closer to figuring out what Augustana’s ideal retention rate should be.  I don’t know if I have an answer yet – or if there really is a right answer – but I’d like to share some numbers and consider their implications.

Since research on retention suggests that a primary predictor of student success is a student’s incoming academic ability or preparation, it seems reasonable to use our students’ ACT score as a starting point to test whether or not we could realistically expect to improve our retention rate.  If most of the students that we lose are also those who enter with low ACT scores, it suggests that the students we lose depart because they are academically unprepared and it’s therefore more likely that we’re already pretty close to our optimum retention rate.  However, if most of the students we lose enter with ACT scores comparable to our average freshman ACT score, then it’s likely that we still have room to improve.  And if this latter possibility proves to be so, we could consider a few additional factors and come closer to identifying a “ceiling” retention rate from which we could begin to choose a plausible goal.

To begin this process, we took the two most recent cohorts for which we can calculate retention rates (2010 and 2011) and broke down the students who departed before the beginning of their second year by incoming ACT score.  The table below shows the number of students in each of three categories – the bottom quartile (<22), the middle 50% (22-28), and the top quartile (>28) – who departed before the second year.

Cohort | <22 ACT | 22-28 ACT | >28 ACT
2010 | 28 | 54 | 13
2011 | 17 | 72 | 15

Clearly, in both of these cohorts the majority of the students who left entered with ACT scores in the middle 50% rather than the bottom quarter.  Thus, to the degree that ACT score is a proxy for pre-college academic preparation, it appears that there might be some room for us to realistically improve our 1st-to-2nd year retention rate.
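For what it’s worth, producing a breakdown like the one above takes only a few lines of code.  Below is a sketch that assumes a hypothetical students table with ACT scores and retention flags; the column names and the handful of sample rows are mine, not our actual records.

```python
import pandas as pd

# Hypothetical cohort records; the real analysis used our full 2010 and
# 2011 first-year cohorts with actual ACT scores and retention outcomes.
students = pd.DataFrame({
    "cohort":   [2010, 2010, 2010, 2011, 2011],
    "act":      [21,   25,   30,   19,   27],
    "retained": [False, False, True, False, False],
})

# Bucket ACT scores into the bottom quartile, middle 50%, and top quartile.
students["act_band"] = pd.cut(
    students["act"], bins=[0, 21, 28, 36], labels=["<22", "22-28", ">28"]
)

# Count the departed (non-retained) students in each band, per cohort.
departed = students[~students["retained"]]
print(pd.crosstab(departed["cohort"], departed["act_band"]))
```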

However, ACT score doesn’t necessarily reflect the degree to which a student has the personality traits and personal habits (persistence, time management, motivation, etc.) to succeed in college.  And there are plenty of students who enter with low ACT scores and thrive at Augustana.  So another way to explore this data is to consider the number of students who left in good academic standing.  Even though good academic standing at Augustana is a 2.0, in an effort to be conservative in this analysis, I set the bar at a GPA of 2.5.

From the 2010 cohort, 48 of the students who left departed with a GPA above a 2.5.  From the 2011 cohort, 58 students fit into this category.  Again, both of these numbers suggest some degree of opportunity for improvement.  I emphasize caution here because there are many reasons why students depart that are beyond our control (health issues, financial exigency, or family emergencies).  In addition, some students leave for non-academic reasons that aren’t accounted for in this rudimentary analysis.  So we would be wise to estimate a number substantially below the 48 or 58 students noted above.

Where does that leave us?  Well, I would suggest that a reasonable starting point would be to build out from the 2010 cohort.  As it stands, our retention rate with that group was 87.6% – the highest on record.  If we assume that, with some combination of improved programming, advising, and student support, half of those 48 students could have been retained, we could estimate an additional 24 students – or an increase of about 3 percentage points in our retention rate.  That would put us at an optimum retention rate – a best possible scenario – of between 90% and 91%.
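Sketched out, the arithmetic behind that estimate looks like this.  The 87.6% rate and the 48 students come from our data; the cohort size of roughly 800 is my assumption, chosen to be consistent with 24 students equating to about 3 percentage points.

```python
# Figures from the 2010 cohort; the cohort size is assumed, included only
# so that the percentage arithmetic is explicit.
current_rate = 0.876             # highest retention rate on record
good_standing_departures = 48    # left with a GPA above 2.5
cohort_size = 800                # assumption

recoverable = good_standing_departures / 2   # assume half could be retained
gain = recoverable / cohort_size             # 24 / 800 = 0.03
print(f"Estimated ceiling: {current_rate + gain:.1%}")   # about 90.6%
```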

How does that compare to colleges like us?  A 90% retention rate would be significantly higher than the rates at colleges that enroll a student profile similar to Augustana’s.  What kind of financial investment would this require?  Although that is an even more difficult question to answer, the comprehensive effort necessary to improve our relatively strong retention rate would not be free and would likely require some tradeoffs.

Two final thoughts stick out in my mind.  First, while we might have some room to improve, I’d suggest that we aren’t that far away from our optimum rate.  Second, since there are as many moving parts in this equation as there are students at risk of departure, effective change may result from subtle shifts in institutional culture just as much as from a new program or policy.

So can we improve our average retention rate? Probably.  Will it be easy?  Probably not.  Is it the right thing to do?  Of course.  But we had better not assume that we will see a surge in revenue even if we are successful.

Make it a good day,

Mark

 

Wrestling with Creativity as a Student Learning Outcome

Before the holiday break, I described the evidence from our overall IDEA scores that our students’ Progress on Relevant Objectives (PRO) scores had increased substantively in the past year.  It is clear from looking at our data that this didn’t happen by accident, and I hope you have taken a moment or two to take pride in your colleagues.  Admittedly, it is gratifying to see all of the effort we have put toward maximizing our use of the new IDEA course feedback forms pay off.  So in the spirit of that effort, I want to highlight one other piece of data from our most recent overall report – the low proportion of courses that selected “Developing Creative Capacities” as an essential or important learning objective – and to advocate for more emphasis on that objective.

Of the 12 different learning objectives on the IDEA faculty forms, “Developing Creative Capacities” was selected by only 16% of the courses offered during the fall term – the least common selection (by comparison, 69% of courses indicated “gaining factual knowledge” as an essential or important learning objective).  As you might expect, “developing creative capacities” was chosen almost exclusively by fine arts courses, seemingly reflecting a traditional conception of creative capacities as something reserved for artistic expression.

Yet, as a liberal arts college, it seems that “developing creative capacities” should represent a central element of our educational goals and the culmination of a liberal arts education.  The parenthetical description of “creative capacities” in that objective includes “writing,” “inventing,” and “designing.”  Of course, these skills transcend any specific discipline.  Every time a student tries to make an argument with language, portray a concept visually, solve a problem that doesn’t have a singular solution, or articulate the implications of multiple sources of information on a particular point, their ability to do so hinges on these skills.

Moreover, in the updated version of Bloom’s Taxonomy, “creating” is the highest cognitive domain.  Not unlike synthesizing, creating requires each of the skills listed in the preceding levels of the taxonomy (remembering, understanding, applying, analyzing, and evaluating).  It strikes me that this broadened definition of creating could apply to virtually all senior inquiry projects or other student work expected of a culminating experience.  For a more detailed discussion of creating as a higher-order skill, I’d suggest the IDEA paper that examines Objective #6.

So how do we infuse “developing creative capacities” more fully into our students’ educational experience?  I regularly hear faculty talk about the difficulty that many students exhibit when trying to synthesize disparate ideas and create new knowledge.  It’s complicated work, and I’ll bet that if we were to look back on even the best of our own undergraduate work, we would likely cringe in most cases at what we might have thought at the time was the cutting edge of genius.  Thankfully, this objective doesn’t say, “Mastering Creative Capacities.”  This learning outcome is developmental and will likely be something that most students miss at least as often as they hit.  But three ideas come to mind that I’d like to propose for your consideration . . .

  1. Students need practice.  This starts with simple experiences connecting ideas and deriving insights from those connections.  Students will surely be less capable of successfully wielding this key skill when it is needed if they haven’t explicitly been asked to develop it through previous courses and experiences.
  2. Students won’t take risks if they don’t trust those who ask them to do it.  Developing creative capacities requires learning from all manner of failure.  Students won’t take the kinds of risk necessary to make real progress if there isn’t space for them to fall down and get back up – and a professor who will help them to their feet.
  3. Eventually, you just have to jump.  If nothing else, we are experts at paralysis by analysis.  Although there is always a critical mass of information or content knowledge that students must know before they can begin to effectively connect ideas or form new ones, we sometimes get caught trying to cover more material at the expense of developing thinking skills in students.  Often, it is trying to integrate and connect ideas without having all of the pieces that teaches students the importance of seeking new knowledge and the awareness that there might be critical details they don’t yet know.

As you look at the role of your courses in the collective scheme of our students’ growth, I hope you’ll consider the possibility of adding this learning objective.  You may find that you are already doing many of the things in your course that make this happen.  You may find that you need to take a few risks yourself in the design of your course.  Whatever you decide, I hope you will consider the ways that you help students develop creative capacities as complex, higher-order thinking skills.  For our students to succeed in the world they will inherit, I would suggest that our collective future depends on the degree to which we develop their creative capacities to solve problems that we have not yet even seen.

Make it a good day,

Mark

 

Big Data, Intuition, and the Potential of Improvisation

Welcome back to the second half of winter term!  As nice as it is to walk across campus in the quiet calm of a fresh new year (ignoring the giant pounding on top of the library for the moment), it’s a comfort to see faculty and students bustling between buildings again and feel the energy of the college reignited by everyone’s return.

Over the last several weeks, I’ve been trying to read the various higher ed opinionators’ perspectives on MOOCs (Massive Open Online Courses) and the implications they foresee for colleges like Augustana.  Based on what I’ve read so far, we are either going to 1) thrive without having to change a thing, 2) shrivel up and die a horrible death sometime before the end of the decade, or 3) see lots of changes that will balance each other out and leave us somewhere in the middle.  In other words – no one has a clue.  But this hasn’t stopped many a self-appointed Nostradamus (Nostradami?  Nostradamuses?) from rattling off a slew of statistics to make their case: the increasing number of students taking online courses, the number of schools offering online courses, the hundreds of thousands of people who sign up for MOOCs, the shifting demographics of college students, blah blah blah.  After all, as these prognosticators imply, historical trends predict the future.

Except when they don’t.  A recent NYT article, Sure, Big Data Is Great, But So Is Intuition, highlights the fundamental weakness in thinking that a massive collection of data gathered from individual behaviors (web-browsing, GPS tracking, social network messaging, etc.) inevitably holds the key to a brighter future.  As the article puts it, “The problem is that a math model, like a metaphor, is a simplification.  This type of modeling came out of the sciences, where the behavior of particles in a fluid, for example, is predictable according to the laws of physics.”  The article goes on to point out the implications of abiding by this false presumption, such as the catastrophic failure of financial modeling to predict the worldwide economic collapse of 2008.  I particularly like the way that the article summarizes this cautionary message:  “Listening to the data is important, they [experts interviewed for the article] say, but so is experience and intuition.  After all, what is intuition at its best but large amounts of data of all kinds filtered through a human brain rather than a math model?”

This is where experience and intuition intersect with my particular interest in improvisation.  When done well, improvisation is not merely random action.  Instead, good improvisation occurs when the timely distillation of experience and observation coalesces through intuition to emerge in an action that both resolves a dilemma and introduces opportunity.  Improvisation is the way that we discover a new twist in our teaching that magically “just seemed to work.”  Those moments aren’t about luck; they materialize when experience meets intuition meets trust meets action.  Only after reflecting on what happened are we able to figure out the “why” and the “how” in order to replicate the innovation upon which we have stumbled.  Meanwhile, back in the moment, it feels like we are just “in a zone.”

Of course, improvisation is no more a guarantee of perfection than predictive modeling.  That is because the belief that one can somehow achieve perfection in educating is just as flawed as the fallacy of predictive modeling.  Statisticians are taught to precede findings with the phrase “all else remaining constant . . . ”  But in education, that has always been the supremely ironic problem.  Nothing remains constant.  So situating evidence of a statistically significant finding within the real and gnarly world of teaching and learning requires sophisticated thinking borne of extensive experience and keen intuition.

Effective improvising emerges when we are open to its possibilities – individually and collectively.  It’s just a matter of letting our experience morph into intuition in a context of trust that spurs us to act.  Just because big data isn’t the solution that some claim it to be doesn’t mean that we batten down the hatches, pretend that MOOCs and every other innovation in educational technology don’t exist, and keep doing what we’ve always done (only better, faster, smarter, more, more, more . . . ).  Effective improvising is always preceded by intuition that is informed by some sort of data analysis.  When asked why they did what they did, successful improvisers can often explain in detail the thought processes that spurred them to take a particular action or utter a particular line.  In the same way, we know a lot about how our students learn and what seems to work well in extending their learning.  Given that information, I believe that we have all of the experience and knowledge to improvise successfully.  We just need to flip the switch (“Lights, Action, Improv!”).

Early in the spring term, I’ll host a Friday Conversation where I’ll teach some ways to apply the principles of improvisation to our work.  Some of you may remember that I did a similar session last year – although you may have repressed that memory if you were asked to volunteer for one of the improv sketches.

In the meantime, I hope you’ll open yourself up to the potential of improvisation.  Enjoy your return to the daily routine.  It’s good to have you back.

Make it a good day,

Mark

 

 

Reveling in our IDEA results: A gift we gave to our students and each other

We spend a lot of time talking about the things that we would like to do better.  It’s a natural disposition for educators – continually looking for ways to perfect what is, at its core, a fundamentally imperfect enterprise.  As long as we keep in mind that our efforts to perfect are really about improvement and not about literal perfection, this mindset can cultivate a healthy environment for demonstrably increasing our educational effectiveness.

However – and I admit that I’m probably a repeat offender here – I don’t think we spend enough time reveling in our success.  Often we seem to jump from brushfire to brushfire – sometimes almost frantically so.  Though this might come from a genuinely honorable sense of urgency, I think it tends to make our work more exhausting than gratifying.  Conversely, taking the time to examine and celebrate our successes does two things.  First, it bolsters our confidence in our ability to identify a problem, analyze its cause(s), and implement a successful solution – a confidence that is vital to a culture of perpetual improvement.  Second, it helps us more naturally approach problems through a problem-solving lens.  There is a lot of evidence to show that examining the nature of a successful effort can be more beneficial than simply understanding every painful detail of how we screwed up.

So this last week before Christmas break, I want to celebrate one such success.  If I could hang mistletoe over the campus, I’d likely start doling out kisses (the chocolate kind, of course).  In the four terms since we implemented the IDEA Center course feedback process, you have significantly increased the degree to which students report learning in their courses.  Between fall of 2011 and fall of 2012, the average Progress on Relevant Objectives (PRO) score for a course increased from a 3.8 to a 4.1.  In addition, on 10 of the 12 individual IDEA learning objectives, students in Augustana courses during the fall of 2012 (last term) reported higher average learning progress scores than students from the overall IDEA database.  Likewise, the average learning gains from our own courses last term were higher than our overall Augustana average from the previous three terms on 10 out of 12 IDEA learning objectives.

Looking deeper into the data, the evidence continues to support the conclusion that our faculty have steadily improved their teaching.  Over four terms, faculty have reduced the number of objectives they select and narrowed the gap (i.e., variance – for those of you jonesing for statistical parlance) between progress scores on the individual objectives chosen for a given course.  This increasing precision likely indicates an increasing clarity of educational intent on the part of our faculty.  Moreover, this reduction in selected learning objectives has not come at the expense of higher order thinking objectives that might be considered more difficult to teach.  On the contrary, the selection of individual learning objectives remains similarly distributed – and equally effective – across surface and deep learning objectives.  In addition, students’ responses to the questions regarding “excellent teacher” and “excellent course” went up from 4.2 to 4.3 and from 3.9 to 4.0, respectively.  Finally, when asked whether “as a result of this course, I have more positive feelings about this field of study,” students’ average responses increased from 3.9 to 4.0.

Are there some reasons to challenge my conclusions?  Maybe.  While last year’s participation in the IDEA course feedback process was mandated for all faculty in an effort to develop institutional norms, only about 75% of courses participated this fall.  So it’s possible that the courses that didn’t participate in the fall would have pulled down our overall averages.  Or maybe our faculty have just learned how to manipulate the system, and the increased numbers in PRO scores, individual learning objectives, and teaching methods and styles are nothing more than our improved ability to game the system.

To both of these counter-arguments, in the spirit of the holiday I say (respectfully) . . . humbug.  First of all, although older faculty are traditionally least likely to employ course evaluations (as was the case this fall), I think it is highly unlikely that these faculty are also our worst instructors.  On the contrary, many of them are master teachers who long ago found that they needed to develop other methods of gathering course feedback that matched their own approach to teaching.  Moreover, even if there were some courses taught by senior faculty in which students would have reported lesser degrees of learning, there were courses with lower PRO scores taught by faculty from all classifications.  Second, while there might be some potential for gaming the IDEA system, what I have seen some people refer to as “gaming” has actually been nothing but intentionally designed teaching.  If a faculty member decides to select objective 11, “learning to analyze and critically evaluate ideas, arguments, and points of view,” then tells the students that this is a focus of the course, asks students to develop this skill through a series of assignments, discussions, projects, or papers, and then explains to students when and how they are making progress on this objective . . . that all sounds to me like plain ol’ good teaching.  So if that is gaming the system or teaching to the test, then (in the words of every kid who has ever played football in the street), “GAME ON!”

Are there other data points in last term’s IDEA aggregate report that we ought to examine and seek to improve?  Sure.  But let’s have that conversation later – maybe in January.  Right now, let’s revel in the knowledge that we now have evidence to show the fruits of our labor to improve our teaching.  You made the commitment to adopt the IDEA course feedback system knowing that it might require us to step up our game.  It did, and you responded in kind.  Actually, you didn’t just meet the challenge – you rose up and proved yourselves to be better than advertised.  So congratulations.  You thoroughly deserve it.  Merry Christmas.

Make it a great day,

Mark

 

 

Grades and Assessing Student Learning (can’t we all just get along?)

During a recent conversation about the value of comprehensive student learning assessment, one faculty member asked, “Why should we invest time, money, and effort to do something that we are essentially already doing every time we assign grades to student work?”  Most educational assessment zealots would respond by launching into a long explanation of the differences between tracking content acquisition and assessing skill development, the challenges of comparing general skill development across disciplines,  the importance of demonstrating gains on student learning outcomes across an entire institution, blah blah blah (since these are my peeps, I can call it that).  But from the perspective of an exhausted professor who has been furiously slogging through a pile of underwhelming final papers, I think the concern over a substantial increase in faculty workload is more than reasonable.  Why would an institution or anyone within it choose to be redundant?

If a college wants to know whether its students are learning a particular set of knowledge, skills, and dispositions, it makes good sense to track the degree to which that is happening.  But we make a grave mistake when we require additional processes and responsibilities from those “in the trenches” without thinking carefully about the potential for diminishing returns in the face of added workload (especially if that work appears to be frivolous or redundant).  So it would seem to me that any conversation about assessing student learning should emphasize the importance of efficiency so that faculty and staff can continue to fulfill all the other roles expected of them.

This brings me back to what I perceive to be an odd disconnect between grading and outcomes assessment on most campuses.  It seems to me that if grading and assessment are both intent on measuring learning, then there ought to be a way to bring them closer together.  Moreover, if we want assessment to be truly sustainable (i.e. not kill our faculty), then we need to find ways to link, if not unify, these two practices.

What might this look like?  For starters, it would require conceptualizing content learned in a course as the delivery mechanism for skill and disposition development.  Traditionally, I think we’ve envisioned this relationship in reverse order – that skills and dispositions are merely the means for demonstrating content acquisition – with content acquisition becoming the primary focus of grading.  In this context, skills and dispositions become a sort of vaguely mysterious red-headed stepchild (with apologies to step-children, red heads, and the vaguely mysterious).  More importantly, if we are now focusing on skills and dispositions, this traditional context necessitates an additional process of assessing student learning.

However, if we reconceptualize our approach so that content becomes the raw material with which we develop skills and dispositions, we could directly apply our grading practices in the same way.  One would assign a proportion of the overall grade to the necessary content acquisition, and the rest of the overall grade (apportioned as the course might require) to the development of the various skills and dispositions intended for that course.  In addition to articulating which skills and dispositions each course would develop and the progress thresholds expected of students in each course, this means that we would have to be much more explicit about the degree to which a given course is intended to foster improvement in students (such as a freshman level writing course) as opposed to a course designed for students to demonstrate competence (such as a senior level capstone in accounting procedures).  At an even more granular level, instructors might define individual assignments within a given course to be graded for improvement earlier in the term with other assignments graded for competence later in the term.
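To make this concrete, here is a purely illustrative sketch of how a course grade might be apportioned under this approach.  The weights, category names, and scores are all hypothetical; each course would set its own.

```python
# Hypothetical apportionment: part of the grade tracks content
# acquisition, and the rest tracks the skills and dispositions this
# course is designated to develop.
weights = {
    "content_acquisition":   0.40,
    "written_communication": 0.35,  # graded for improvement early in the term
    "critical_analysis":     0.25,  # graded for competence late in the term
}

scores = {  # a student's 0-100 marks in each category
    "content_acquisition":   90,
    "written_communication": 80,
    "critical_analysis":     80,
}

course_grade = sum(weights[c] * scores[c] for c in weights)
print(f"Course grade: {course_grade:.1f}")  # Course grade: 84.0
```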

I recognize that this proposal flies in the face of some deeply rooted beliefs about academic freedom that faculty, as experts in their field, should be allowed to teach and grade as they see fit. When courses were about attaining a specific slice of content, every course was an island.  17th century British literature?  Check.  The sociology of crime?  Check.  Cell biology?  Check.  In this environment, it’s entirely plausible that faculty grading practices would be as different as the topography of each island.  But if courses are expected to function collectively to develop a set of skills and/or dispositions (e.g., complex reasoning, oral and written communication, intercultural competence), then what happens in each course is irrevocably tied to what happens in previous and subsequent courses.  And it follows that the “what” and “how” of grading would be a critical element in creating a smooth transition for students between courses.

In the end it seems to me that we already have all of the mechanisms in place to embed robust learning outcomes assessment into our work without adding any new processes or responsibilities to our workload.  However, to make this happen we need to 1) embrace all of the implications of focusing on the development of skills and dispositions while shifting content acquisition from an end to a means to a greater end, and 2) accept that the educational endeavor in which we are all engaged is a fundamentally collaborative one and that our chances of success are best when we focus our individual expertise toward our collective mission of learning.

Make it a good day,

Mark

 

The post-Thanksgiving haze

Believe it or not, I try to have a life outside of educational assessment and improvement of student learning.  That means – for example – participating in all of the normal stuff that people do over the Thanksgiving holiday.  So over the past five days I’ve packed suitcases, adapted to changes in travel plans, made conversation with all manner of family, and wished I hadn’t eaten __________.  I find it a bit troubling that although I spend an inordinate amount of time thinking about learning from past behaviors to improve future behaviors I can’t seem to learn from my previous mistakes regarding serving size, mashed potatoes, and gravy.

All this is simply to say that I didn’t write a thing last weekend.  Sorry.  And my fingers might actually now be too fat to fit onto a normal keyboard.  So you’ll have to wait til next week for another post.

Usually, I write about data findings that are ambiguous in some way.  This week, I can only write about something that was delicious.  Literally.  And most of me now regrets that second helping.  Actually, maybe the regret is really about the third helping . . .

Make it a good day,

Mark

Assessing our current process of math (mis)placement

Nobody likes placement tests.  For incoming students, they revive the specter of being evaluated on material they have already forgotten.  For our Summer Connections staff, they become the perpetual reason that students don’t complete the registration process properly.  And for faculty, placement tests seem to miss a growing proportion of students who quickly appear to be in over their heads in the very classes the tests “placed” them in.

Over the last few weeks, based on questions asked by the math faculty and some very thoughtful conversations and suggestions on their part, we have been taking a hard look at our math placement process.  We compared it with alternative methods of placement and tracked students over each of the last four years to see how they did in the math courses they took.  We’ve found all kinds of interesting tidbits that have spurred some important solutions that I think will help our students in the years to come.  But one piece of data stood out to me that I wanted to share, because it concerns (a) the difference between our incoming students’ perception of college and the way that we would like them to engage it, and (b) the ramifications of that difference.

Before launching into this post, however, I have to give a massive shout out to Kimberly Dyer, the backbone of my office, for her work on this project.  She has done all of the data organizing and analysis.  If I’m being honest, this week I’m just riding the coat tails of greatness.

Although our current math placement protocol is set up to place students across a range of math courses, a large proportion of students end up placing into either pre-calculus or calculus I.  Students with a math placement score of 20 or below are assigned to pre-calculus, and students with a 25 or above are assigned to calculus I or higher.  But for the students who score between 21 and 24, we tell them to consult with advisers and others to determine which math course – pre-calculus or calculus I – is the best fit for them.
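Rendered as code, the protocol boils down to something like the sketch below (the function name and the return labels are mine, not part of any actual system):

```python
def place_student(score: int) -> str:
    """A hypothetical rendering of our current math placement rule."""
    if score <= 20:
        return "pre-calculus"
    elif score >= 25:
        return "calculus I or higher"
    else:
        # The 21-24 'tweener band: no automatic placement; the student
        # consults with advisers to choose pre-calculus or calculus I.
        return "student's choice, with advising"

print(place_student(22))  # student's choice, with advising
```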

All else being equal, I think it’s safe to say that on average we would expect students who earn a 21 or a 22 to enroll more often in pre-calculus and students who earn a 23 or a 24 to enroll more often in calculus I.  Unfortunately . . . .

Math Placement Score | Enrolled in Pre-Calculus | Enrolled in Calculus I
21 | 18 | 25
22 | 18 | 34
23 | 14 | 27
24 | 12 | 40

As you can see in the table above, for all of the placement scores in this ‘tweener group, more students chose to enroll in calculus I than in pre-calculus.  Yet maybe that’s not a problem, as long as all of these students are able to handle calculus I.  The table below shows the subsequent grades for students at each placement score who chose to take calculus instead of pre-calculus.

Math Placement Score | Earned a B- or better | Earned a D, F, or withdrew
21 | 32% | 36%
22 | 21% | 41%
23 | 37% | 37%
24 | 55% | 20%

Apparently, students who earn scores that would cause most of us to think twice before registering for calculus I are more often taking calculus I anyway.  And the failure rates lay out in pretty stark terms the consequences of that decision.  Clearly, there must be other issues at play that would convince an incoming freshman to choose the more advanced math course when their placement score suggests some caution.
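For the curious, rates like those in the table above can be computed in a few lines.  The sketch below uses a handful of made-up course records; the real analysis drew on four years of actual registrations and grades.

```python
import pandas as pd

# Made-up records for 'tweener students who chose calculus I.
records = pd.DataFrame({
    "placement_score": [21, 21, 22, 23, 24, 24],
    "grade":           ["B", "F", "C+", "W", "A", "B-"],
})

good = {"A", "A-", "B+", "B", "B-"}   # B- or better
poor = {"D+", "D", "D-", "F", "W"}    # D, F, or withdrew

# Share of each outcome at each placement score.
summary = pd.DataFrame({
    "B- or better": records["grade"].isin(good),
    "D/F/W":        records["grade"].isin(poor),
}).groupby(records["placement_score"]).mean()
print(summary)
```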

The folks who help with registration at Summer Connections often describe the pressures that students and their parents bring to this issue.  Many students are worried about graduating in four years and therefore want to take the highest level of courses they can take.  Others think that because they took pre-calculus in high school, they should automatically take calculus I – regardless of their assessed degree of preparation as measured by the placement test.  Moreover, some may not want to face the fact that although they may have passed pre-calculus in high school, they didn’t learn as much as they would like to think.

In my mind, this disconnect exemplifies the degree to which incoming students and families don’t grasp the difference between going to college to acquire content knowledge and going to college to develop skills and dispositions.  In their mind, content acquisition is isolated to a given course.  Content learned or not learned in one course is not likely to affect the ability to learn content in another course.  However, we know that content is continually changing, and in today’s world it is practically ubiquitous.  While it is necessary, it is not sufficient, and is only a part of our ultimate educational goal.  For us, content is the mechanism by, or the context within which, we develop skills and dispositions.  Then the content helps us re-situate those skills and dispositions in settings akin to the environments in which students will be expected to excel after college.

This misunderstanding of the point of college – and more specifically the educational outcomes we intend for students who attend Augustana – has major implications for students.  Kids who perceive college to be about content acquisition see it as a sort of intellectual pie-eating contest, where it makes complete sense to bite off more than you can chew, get what you can, and gobble your way to the finish line regardless of whether or not you happen to throw up along the way or stir up an indigestional nightmare at the end.  On the contrary, if students understand that college is about developing skills and dispositions, I think that they might be more likely to appreciate the chance to start at the beginning that is appropriate for them, savoring each experience like a slow-cooked, seven-course meal because they know that the culmination of college is made exponentially better by the particular ordering and integrating of the flavors that came before.

Although we definitely need to emphasize this message from the moment of students’ first interaction with Augustana, convincing students AND their parents to understand and embrace this conceptual turn is not the sole responsibility of admissions or Summer Connections or even LSFY.  For students to grasp the implications of this shift, they need to hear it from all of us repeatedly.  Otherwise, there are too many external pressures that will influence students to engage in academic behaviors that will ultimately harm their development.  We may well need to eliminate the ‘tweener category of math placement scores, but this is not the only situation where that monster rears its ugly head.  However, if we are vigilant, I think we will help many more students deliberately and intentionally suck the marrow out of their four years at Augustana instead of treating them like an eating contest.

Make it a good day,

Mark

 

 

Finding the ideal balance between faculty and administrators

During the term break, the Chronicle of Higher Education reviewed a research paper about the impact of the administrator-faculty ratio on institutional costs.  The researchers were seeking evidence to test the long-standing hypothesis that the rising costs in higher education can be attributed to an ever-growing administrator class.  The paper’s authors found that the ideal ratio of faculty to administrators at large research institutions was 3:1 and that institutions with a lower ratio (fewer faculty per administrator) tend to be more expensive.

Even though we are a small liberal arts college and not the type of institution on which this study focused, I wondered what our ratio might look like.  I am genuinely curious about the relationship between in-class educators (faculty) and out-of-class educators (student affairs staff) because we often emphasize our belief in the holistic educational value of a residential college experience.  In addition, since some have expressed concern about a perceived increase in administrative positions, I thought I’d run our numbers and see what turns up.

Last year, Augustana employed 184 full-time, tenured or tenure-track faculty and 65 administrators.  Thus, the ratio of faculty to administrators was 2.8 to 1.  If we were to include faculty FTE and administrator FTE (which means we include all part-time folks as one-third of a full-time employee and add them to the equation), the ratio becomes 3.35 to 1.  By comparison, in 2003 (the earliest year in which this data was reported to IPEDS), our full-time, tenured or tenure-track faculty (145) to administrator (38) ratio was 3.82 to 1.  When using FTE numbers, that ratio slips to 4.29 to 1.

What should we make of this?  On its face, it appears that we’ve suffered from the same disease that has infected many larger institutions.  Over about ten years, the balance of faculty to administrators has shifted even though we have increased the size of the faculty considerably.  But if you consider these changes in the context of our students (something that seems to me to be a rather important consideration), the results paint a different picture.  For even though our ratio of faculty to administrators might have shifted, our ratios of students to faculty and students to administrators have moved in similar directions over the same period, with the student/faculty ratio going from about 14:1 to just over 11:1 and our student/administrator ratio going from about 51:1 to close to 39:1.  Proportionally, both ratios drop by about 20%.
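For transparency, here is the headcount arithmetic behind those ratios.  The counts come from our IPEDS reporting as described above; treating “last year” as 2012 is my own shorthand, and the FTE-adjusted figures are omitted.

```python
# Full-time tenured/tenure-track faculty and administrator headcounts
# from our IPEDS reporting (FTE-adjusted figures omitted here).
years = {
    2003: {"faculty": 145, "administrators": 38},
    2012: {"faculty": 184, "administrators": 65},  # "last year" (assumed)
}

for year, counts in years.items():
    ratio = counts["faculty"] / counts["administrators"]
    print(f"{year}: {ratio:.2f} faculty per administrator")
# 2003: 3.82 faculty per administrator
# 2012: 2.83 faculty per administrator
```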

For me, these numbers inspire two questions that I think are worth considering.  First, although the absolute number of administrators includes a wide variety of campus offices, a substantial proportion of “administrators” exist in student affairs.  And there seems to be some disparity between the nature of the educational relationship that we find acceptable between students and in-class educators (faculty) and between students and out-of-class educators (those administrators who work in student affairs).  There’s a lot to sort out here (and I certainly don’t have it all pegged), but this disparity doesn’t seem to match up with the extent to which we believe that important student learning and development happens outside of the classroom.  Now I am not arguing that the student/administrator ratio should approach 11:1.  Admittedly, I have no idea what the ideal student/faculty ratio or student/administrator ratio should be (although, like a lot of things, distilling that relationship down to one ratio is probably our first big mistake). Nonetheless, I suspect we would all benefit from a deeper understanding of the way in which our student affairs professionals impact our students’ development.  As someone who spends most of my time in the world of academic affairs, I wonder whether my own efforts to support this aspect of the student learning experience have not matched the degree to which we believe it is important.  Although I talk the talk, I’m not sure I’ve fully walked the walk.

Second, examining the optimal ratio between faculty and administrators doesn’t seem to have much to do with student learning.  I fear that posing this ratio without a sense of the way in which we collaboratively contribute to student learning just breathes life into an administrator vs. faculty meme that tends to pit one against the other.  If we start with a belief that there is an “other side,” and we presume the other side to be the opposition before we even begin a conversation, we are dead in the water.

Our students need us to conceptualize their education in the same way that they experience it – as one comprehensive endeavor.  We – faculty, administrators, admissions staff, departmental secretaries, food service staff, grounds crew, Board of Trustees – are all in this together.  And from my chair, I can’t believe how lucky I am to be one of your teammates.

Make it a good day,

Mark

 

 

Busy is as busy does . . .

Hey Folks,

This is the time of the term when everyone conjures up whatever remaining powers they have left to slog through finals, grade furiously, and put the term out of its misery.  Or, if you have a slightly more optimistic view of life (and I hope you do), you are overcome with a surge of pride in your students for all they have learned, all they have endured, and all they have become over ten short weeks.  See, that wasn’t so hard now, was it?

To be honest, I’m not inclined to say much this week only because I don’t think many of you have the time to read my blathering about some little data point that has me all atwitter.  And aside from that somewhat uncomfortable image, the last thing I want this blog to become is long, myopic, and just too much.

So I’ll throw this out into the cybertron and let you do what you want with it.  I’ve been privileged to be involved with a number of senior inquiry and service-learning projects this term.  I’ve been very impressed with and even proud of the work that I’ve seen these students produce.  They’ve thought carefully about their research, wrestled with tough problems, dealt with mishaps and unpredictability, and throughout have remained honest, genuine, and intent on doing their best work.  Was it all perfect?  Of course not.  Was it supposed to be?  No.  But did I see growth that should make a college proud?  Damn straight.

Even though I am constantly talking about ways that we might improve, it is important to remind ourselves that we often do very good work.  And we deserve the chance to step back from time to time and soak it all in.  You put your heart into the work of making young people better.  And in many cases you help students realize a little bit more of who they aspire to become – even when they don’t fully know who that is or why it might be important.

So – grade like a banshee.  Then relax like a champion.  You deserve it.

Make it a good day,

Mark