Because of the breakneck pace of our work lives, we tend to reach for pre-determined processes to address problems instead of considering whether another approach might increase the chances of a successful long-term solution. This makes sense, since pre-determined processes often feel like they help solve complicated problems by giving us a vetted action plan. But if we default to this option too easily, we can create more work for ourselves simply because we absentmindedly opted for “doing it the way we’re supposed to do it.” So I thought it might be worthwhile to share an observation about our efforts to improve our educational effectiveness that could help us be more efficient in the process.
We have found tremendous value in gathering evidence to inform our decisions instead of relying on anecdotes, intuition, or speculation. Moreover, the success of our own experiences seems to have fostered a truly positive sea change, both in the frequency of requests for data that might inform an upcoming discussion or decision and in the desire to ask new questions that might help us understand more deeply the nature of our educational endeavors. So why would I suggest that sometimes “assessing might be the wrong thing to do”?
First, let’s revisit two different conceptions of “assessment.” One perceives “assessment” as primarily about measuring. It’s an act that happens over a finite period of time and produces a finding that essentially becomes the end of the act of measuring. Another conception considers assessment as a process composed of various stages: asking a question, gathering data, designing an intervention, and evaluating the effectiveness of that intervention. Imagine the difference between the two as the difference between a dot (a point in time) and a single loop within a coil (a perpetually evolving process). So in my mind, “measurement” is a singular act that might involve numbers or theoretical frameworks. “Assessment” is the miniature process that includes asking a question, engaging in measurement of some kind, and evaluating the effectiveness of a given intervention. “Continuous improvement” is an organizational value that results in the perpetual application of assessment. The focus of this post is to suggest that we might help ourselves by expanding the potential points at which we could apply a process of assessment.
Too often, after discovering the possibility that the student learning resulting from a given experience might not be what we had hoped, we decide that we should measure the student learning in question. I think we expect to generate a more robust set of data that confirms, or at least complicates, the information we think we already know. Usually, after several months of gathering data (and if all goes well with that process), our hunch turns out to be correct.
I’d like to suggest a step prior to measuring student learning that might get us on track to improvement more quickly. Instead of applying another means of measurement to evaluate the resultant learning, we should start by applying what we know about effective educational design to assess whether the experience in question is actually designed to produce the intended learning. Because if the experience is not designed and delivered effectively, then the likelihood of it falling short of expectations is pretty high. And if there is one truth about educating that we already know, it’s that if we don’t teach our students something, they won’t learn it.
Assessing the design of a program or experience takes a lot less time than gathering learning outcome data. And it will get you to the fun part, redesigning the program or experience in question, much sooner.
So if you are examining a learning experience because you don’t think it’s working as it should, start by tearing apart its design. If the design is problematic, then skip the measuring part . . . fix it, implement the changes, and then test the outcomes.
Make it a good day,
Mark
“if we don’t teach our students something, they won’t learn it.” Roger that, Kemo Sabe. I wonder, though: does valuable learning happen sometimes in spite of us, against us, in the absence of design? We’ve all had the experience of students returning after graduation to tell us lessons they learned from us that we were unaware of teaching.
It’s always important to remind ourselves that learning can’t be confined to a linear framework. That’s an easy mistake to make, especially for people like me who are constantly thinking about creating a more effective learning experience and being able to demonstrate that improvement. However, I’m not sure I’d pose the potential value of accidental learning in contrast to intentionally designed educating. By definition, accidental learning ought to be just as likely to occur in a well-designed learning environment as in a completely improvised or random one. A well-designed course is flexible enough to allow for improvised exploration – we should be careful not to equate improved design with tighter constraints on the time allotted to each activity or exercise of a given class session.