The gnarly problem of effective feedback

All too often we talk about feedback as if it's something that either happens or doesn't. Students get feedback or they don't. Faculty give feedback or they don't. Moreover, it's easy for people like me to unintentionally imply that if students would just get the right feedback at the right time, they would respond to it like budding valedictorians.

However, the concept we are really talking about is much more complicated than simple information given in response to student work. At its fullest, effective feedback encompasses a recursive sequence of precisely timed instructor actions intertwined with constructive student responses that produces a change in both the quality of the student effort AND the quality of the student work. Yet despite our best efforts, we know that we have only partial control over this process (since the student controls how he or she responds to feedback) even as we agonize over our contribution to it. So it doesn't help when it feels like what we hope for and what we get are two very different things.

In this context it's no wonder that raising the issue of effective feedback can cut to the quick. All of us do the work we do because we care about our students. To those who burn the midnight oil to come up with just the right comments for students, suggesting that we could improve the quality of our feedback can easily come off as unfair criticism. To those who believe there isn't much point in extended feedback because students today rarely care, raising the issue of faculty feedback seems like preaching to the (wrong) choir.

I, for one, have not always been precise enough in my own language about the issue of effective feedback. So I ought to start by offering my own sincere mea culpa. The conversations we’ve had on campus over the last month about gathering more comprehensive data on our students’ progress early in the term have helped me think much more carefully about the concept of feedback and the ways that we might approach our exploration of it if we are to get better at what we do. With that in mind, I’d like to share some recent data from our freshmen regarding feedback and suggest that we explore more deeply what it might mean.

For the last several years we've asked freshmen to respond to the statement, "I had access to my grades or other feedback early enough in the term to adjust my study habits or seek additional academic help." The five response options ranged from "strongly disagree" to "strongly agree."

Two events combined to start our consideration of a question like this. First, changes in federal financial aid law raised the stakes for dropping classes, making it critical that students know their status in a course prior to the drop date. In addition, we had been hearing from a number of people who work with struggling students that many of those students hadn't realized they were struggling until very late in the term. Because willful blindness is pervasive among those same struggling students, it took us a while to phrase the question in a way that at least allowed for the difference between students who simply never looked at their grades or other relevant feedback and students who never received a graded assignment until the second half of the term.

Here is the distribution of responses from last year’s mid-year freshman survey.

I had access to my grades or other feedback early enough in the term to adjust my study habits or seek additional academic help.

Response             Count   Percent
Strongly disagree       62     16%
Disagree               111     30%
Neutral                 75     20%
Agree                  104     28%
Strongly agree          24      6%
Total                  376    100%

What should we take from this? Clearly, this isn't the distribution we'd all like to see: 46% of respondents disagreed or strongly disagreed, while only 34% agreed or strongly agreed. At the same time, this set of responses isn't so easily interpreted. So here are some suppositions that I think are worth exploring further.

Maybe students are, in fact, regularly ignoring specific improvement-focused feedback that they get from their instructors. Maybe they assume that since the assignment is already graded, any comments from the instructor are not applicable to improving future work. Given the "No Child Left Behind" world in which our students grew up, it seems likely that they would need substantial re-education in how to use feedback that is specifically designed to guide and improve future work.

On the other hand, maybe students are getting lots of feedback, but it isn’t the kind of feedback that would spur them to recalibrate their level of effort or apply the instructor’s guidance to improve future work. Maybe the feedback they get is largely summative (i.e., little more than a grade with basic descriptive words like “good” and “unclear”) and they aren’t able (for whatever reason) to convert that information into concrete actions that they can take to improve their work.

Maybe students really aren’t getting much feedback at all until the second half of the term. If they are taking courses that are organized around a midterm exam, a final paper, and a final exam, then there would be no substantive feedback to provide early in the term. Given the inclination of some (i.e., many) students to rationalize their behaviors in the absence of hard evidence, this combination of factors could spell disaster.

Finally, maybe students are getting feedback that is so thoroughly developmental in nature that it is difficult for them to benchmark their effort along a predictive trajectory. In other words, maybe a student knows exactly what he or she needs to do in order to improve a particular paper, but doesn't understand that partial improvement won't necessarily translate into the grade that he or she wanted or believed was within reach based on the kindness and empowering nature of the instructor's comments.

The truth is that all of these scenarios are plausible and in no way suggest abject failure on the part of the instructor. And it is highly likely that students experience some combination of these four scenarios throughout the academic year.

Whatever the reason, our own data suggest that something is gumming up the works when it comes to creating a fluid and effective feedback loop, one in which students' effort and performance are demonstrably influenced by the feedback provided to them.

What should we do next? I'd humbly suggest that we dig deeper. To do that, we need to know more about the kind of feedback students receive, the ways that they use or don't use it, the ways that students learn to use feedback more effectively, and the ways that instructors can provide feedback more efficiently. In other words, we need the big picture. Maybe the new mid-term reporting system will help us with that. But even if it doesn't, it would still do us some good to look more closely at 1) the result we intend from the feedback we give, and 2) the degree to which the feedback we give aligns with that intent.

If history is any predictor of our potential, I think we are more than capable of tackling the gnarly problem of effective feedback.

Make it a good day,

Mark

3 thoughts on "The gnarly problem of effective feedback"

  1. Lendol Calder says:

    Excellent post, Mark. This is a very clear explanation of some of the difficulties of providing and receiving feedback.

    I hope we can work toward a future where grades based on a midterm, final, and maybe a one-off paper are recognized as inadequate for the job (though good for overworked professors and therefore, in some cases, the best a teacher can be expected to do). Moving past this design, the problem then becomes how to give meaningful feedback to students early on in a course when the early assignments are heavily stepped and scaffolded, building toward a big final project that counts for a large portion of the final grade. The relatively easy early assignments can lull students into thinking they know more than they really do.

    To solve this, I’ve gone to a design that gives students a “diagnostic” assignment early in the course, in week one or two, that approximates the difficulty of the final project. The diagnostic assignment is evaluated, marked, and graded in the same way as the final assignment, but the grade is not recorded in the grade book. Hence, it’s “diagnostic.” The assignment gets people’s attention very early on, alerting them to the fact that if they don’t work hard at learning in the term, they can expect to receive this grade on their final assignment. The rubric/evaluation outlines for students what they are doing well already and what they don’t know yet how to do. The downside (there’s always a downside)? It’s a lot of work for teachers and students at a moment in the term when, traditionally, not a lot is expected.

    • Great thoughts, you two. I have very little to add except to echo Lendol's point about the diagnostic writing assignment. My LSFY 101 classes lend themselves to this particularly well in that students are already assigned a paper on the Augie Reads selection, due the first day of class. I treat these much like Lendol does ("grading" them but not deducting any points), with the added step of allowing students to revise them at term's end in order to demonstrate to me (and to themselves) the knowledge and skill they've (I hope) gained throughout the class.
