Triangulating our assessment of quantitative literacy

Whether we like it or not, the ability to convey, interpret, and evaluate data affects every part of our personal and professional lives. So it’s not a surprise to find quantitative literacy among Augustana’s nine student learning outcomes. Yet, of all those outcomes, quantitative literacy may be the most difficult to pin down. First, the concept is relatively new compared to other learning outcomes like intercultural competence or critical thinking. Second, there isn’t nearly the range of measurement mechanisms – surveys or otherwise – that capture this concept effectively. And third, quantitative literacy is the kind of skill that is particularly susceptible to social desirability bias (i.e., the tendency to believe that you are better at a desirable intellectual skill than you actually are).

Despite the obstacles I noted above, the Assessment for Improvement Committee (AIC) felt like this was an outcome ripe for the assessing. First, we’ve never really measured quantitative literacy among Augustana students before (it wasn’t addressed in the Wabash National Study when we participated between 2008 and 2012). Second, it isn’t clear that we know how each student develops this skill, as we have defined it in our own college documents, beyond what a student might learn in a “Q” course required by the core curriculum. As a result, it’s entirely possible that we have established a learning outcome for all students that our required curriculum isn’t designed to achieve. Uh oh.

In all fairness, we do have one bit of data – imperfect as it is. A few years ago, we borrowed an idea from the National Survey of Student Engagement (NSSE) and inserted a question into our senior survey that asked students to respond to the statement, “I am confident in my ability to interpret numerical and statistical quantities,” giving them five response options that ranged from “strongly disagree” to “strongly agree.”

Since we began asking this question, about 75% of seniors have indicated that they “agree” or “strongly agree” with that statement. Unfortunately, our confidence in that number began to wane as we looked more closely at those responses. For that number to be credible, we would expect students from majors with no quantitative focus to be less confident in their quantitative abilities than students from majors that employ extensive quantitative methods. However, we often found the opposite to be the case. It turned out that students who had learned something about how complicated quantitative methods can be were less confident in their quantitative literacy skills than students who had no exposure to such complexities, almost as if knowing more about the nuances and trade-offs that make statistics such a maddeningly imperfect exercise had a humbling effect. In the end it appeared that, in the case of quantitative literacy, ignorance might indeed be bliss (a pattern reminiscent of another well-known bias, the Dunning-Kruger effect).

So last year the AIC decided to conduct a more rigorous study of our students’ quantitative literacy skills. To make this happen, we first had to build an assessment instrument that matched our definition of quantitative literacy. Kimberly Dyer, our measurement ninja, spent weeks poring over the research on quantitative literacy and the survey instruments that had already been created to find something that fit our definition of this learning outcome. Ultimately, she combined the best of several surveys to build something that matched our conception of quantitative literacy, with questions that addressed interpreting data, understanding visual presentations of data, calculating simple equations (remember story problems from grade school?), applying findings from data, and evaluating the assumptions underlying a quantitative claim. We then solicited faculty volunteers who were willing to take time out of their upper-level classes to give their students this survey. In the end, we were able to get surveys from about 100 students.

As you might suspect, the results of this assessment project provided a somewhat more sobering picture of our students’ quantitative literacy skills. Below are the proportions of questions within each of the aforementioned quantitative literacy categories that students who had completed at least one Q course answered correctly.

  • Interpreting data  –  41%
  • Understanding visual presentations of data  –  41%
  • Calculating simple equations  –  45%
  • Applying findings from data  –  52%
  • Evaluating assumptions underlying a quantitative claim  –  51%

Interestingly, students who had completed two Q courses didn’t fare any better. It wasn’t until students had taken three or more Q courses that the proportion of correct answers improved significantly.

  • Interpreting data  –  58%
  • Understanding visual presentations of data  –  59%
  • Calculating simple equations  –  57%
  • Applying findings from data  –  65%
  • Evaluating assumptions underlying a quantitative claim  –  59%

There are all kinds of reasons that we should interpret these results with some caution – a relatively small sample of student participants, the difficulty of the questions in the survey, and the uneven distribution of participants across majors (the proportion of STEM and social science majors who took this survey was higher than the proportion of STEM and social science majors overall). But interpreting with caution doesn’t mean that we discount these results entirely. In fact, since prior research on students’ self-reported attainment of learning outcomes indicates that students often overestimate their abilities on complex skills and dispositions, the 75% of seniors who agree or strongly agree is probably substantially higher than the proportion of graduates who are actually quantitatively literate. Furthermore, since the sample of students who took this survey skewed toward majors in which quantitative literacy plays a more prominent role, these findings are more likely to overestimate the average student’s quantitative literacy than to underestimate it. Triangulating these data with prior research suggests that our second set of findings – the direct assessment results – might paint a more accurate picture of our graduates.

So how should we respond to these findings? To start, we probably ought to address the fact that there isn’t a clear pathway between what students are generally expected to learn in a “Q” course and what the college outcome spells out as our definition of quantitative literacy. That gap alone creates the conditions in which students’ likelihood of meeting our definition of quantitative literacy is left up to chance. So our first step might be to explore how we can ensure that all students get the chance to achieve this outcome, especially those who major in disciplines that don’t normally emphasize quantitative literacy skills.

The range of quantitative literacy, or illiteracy as the case may be, among our students is a gnarly problem. It’s not something that we can dump onto a single experience and expect that box to be checked. It’s hard work, but if we are serious about the learning outcomes that we’ve set for our students and ourselves, then we can’t be satisfied with leaving this outcome to chance.

Make it a good day,

Mark