The dynamics of tracking diversity

Over the past few weeks I’ve been digging into an interesting conundrum regarding the gathering and reporting of “diversity” data – the percentage of Augustana students who do not identify as white or Caucasian.  What emerges is a great example of the frustratingly tangled web we weave when older paradigms of race/ethnicity classification get tied up in the limitations of survey measurement and then run headlong into the world in which we actually live and work.  To fully play out the metaphor (Sir Walter Scott’s famous line, “Oh what a tangled web we weave, when first we practice to deceive”), if we don’t understand the complexities of this issue, I would suggest that in the end we might well be the ones who get duped.

For decades, questions about race or ethnicity on college applications reflected an “all or nothing” conception of race/ethnic identification.  The response options included the usual suspects – Black/African-American, White/Caucasian, Hispanic, Asian/Pacific-Islander, and Native American, and sometimes a final category of “other” – with respondents only allowed to select one category.  More recently, an option simply called “two or more races” was added to account for individuals who might identify with multiple race/ethnic categories, suggesting something about our level of (dis)interest in the complexities of multi-race/ethnic heritage.

In 2007, the Department of Education required colleges to adopt a two-part question in gathering race/ethnicity data.  The DOE gave colleges several years to adopt this new system, which we implemented for the incoming class of 2010.  The first question asks whether the respondent identifies as Hispanic/Latino.  The second question asks respondents to indicate all of the race categories that apply.  The response choices are American Indian, Asian, Black/African-American, Native Hawaiian/Pacific-Islander, and White, with parenthetical expansions of each category to more clearly define its intended scope.

While this change added some nuance to reporting race/ethnicity, it perpetuated some of the old problems while introducing some new ones as well.  First, the new DOE regulations addressed only incoming student data; they didn’t obligate institutions to convert previous student data to the new configuration – creating a 3-4 year period in which there was no clear way to determine a “diversity” profile.  Second, the terminology used in the new questions actually invited the possibility that individuals would classify themselves differently than they would have previously.  Third, since Augustana (like virtually every other college) receives prospective student data from many different sources that do not necessarily comport with the new two-part question, the change increased the possibility of conflicting self-reported race/ethnicity data.  Similarly, the added complexity of the two-part question increased the likelihood that even the slightest variation in internal data gathering could produce inconsistent responses.  Finally, over the past decade students have increasingly skipped race/ethnicity questions, as older paradigms of racial/ethnic identification have seemed less and less relevant to them.  This means that the effort to acquire more nuanced data could actually accelerate the growing percentage of students who skip these questions altogether.

As a result of the new federal rules, we currently have race/ethnicity data for two groups of students (freshmen/sophomores who entered after the new rules were implemented and juniors/seniors who entered under the old rules) that reflect two different conceptions of race/ethnicity.  Although we developed a crosswalk in an attempt to create uniformity in the data, for each additional wrinkle that we resolve another one appears.  Thus, we admittedly have more confidence in the “diversity” numbers that we reported this year (2011) than those we reported last year (2010).  Moreover, the change in questions has set up a domino effect across many colleges where, depending upon how each institution tried to deal with these changes, it could come up with vastly different “diversity” numbers, each supported by a reasonable analytic argument (see this recent article in Inside Higher Ed).
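
Under the hood, that crosswalk is essentially a lookup table that translates each old single-choice category into the new two-part scheme.  The sketch below (in Python) shows the general idea; the labels and the handling of the ambiguous cases are illustrative guesses on my part, not our actual coding rules.

```python
# A rough sketch of a crosswalk from the old single-choice categories to the
# new two-part scheme.  Labels are illustrative, not official IPEDS codes.
OLD_TO_NEW = {
    "Black/African-American": {"hispanic": False, "races": ["Black or African American"]},
    "White/Caucasian":        {"hispanic": False, "races": ["White"]},
    "Hispanic":               {"hispanic": True,  "races": []},  # race half left unknown
    "Asian/Pacific-Islander": {"hispanic": False, "races": ["Asian"]},  # ambiguous: could also map to Native Hawaiian/Pacific Islander
    "Native American":        {"hispanic": False, "races": ["American Indian or Alaska Native"]},
    "Other":                  {"hispanic": False, "races": []},  # effectively unknown
}
```

Even in this toy version the wrinkles are visible: the old “Asian/Pacific-Islander” category splits into two new ones, and an old “Hispanic” response tells us nothing about the race half of the new question.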

Recognizing the magnitude of these problems, IPEDS only requires that the percentage of students we report as “race unknown” be less than 100% during the transition years (in effect allowing institutions to convert all prior student race/ethnicity data to the unknown category).  And let’s not even get into the issues of actual counting.  For example, the new rule says that someone who indicates “yes” to the Hispanic/Latino question and selects “Asian” on the race question must be counted as Hispanic, but someone who indicates “no” to the Hispanic/Latino question and selects both “Asian” and “African-American” on the race question must be counted as multi-racial.  Anyone need an aspirin yet?
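
For anyone who wants to see that counting rule in one place, here is a minimal sketch of the logic in Python.  The function name and category labels are my own shorthand for illustration, not the official IPEDS submission format.

```python
def ipeds_category(hispanic, races):
    """Collapse a two-part response into a single reporting category.

    A sketch of the federal counting rule described above; labels are
    shorthand, not official IPEDS value codes.
    """
    if hispanic:
        # A "yes" on the Hispanic/Latino question trumps everything else,
        # regardless of how many races were selected.
        return "Hispanic or Latino"
    if len(races) == 0:
        return "Race and ethnicity unknown"
    if len(races) >= 2:
        # Not Hispanic, and two or more races selected.
        return "Two or more races"
    return races[0]

# The two examples from the paragraph above:
print(ipeds_category(True, ["Asian"]))                              # Hispanic or Latino
print(ipeds_category(False, ["Asian", "Black/African-American"]))   # Two or more races
```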

But we do ourselves substantial harm if we get hung up on a quest for precision.  In reality, the problem originates not in the numbers themselves but in the relative value we place on those numbers and the decisions we make or the money we spend as a result.  Interestingly, if you ask our current students, they will tell you that they conceive of diversity in very different ways than those of us who came of age several decades ago (or more).  Increasingly, for example, socio-economic class is becoming a powerful marker of difference, and a growing body of research has made it even more apparent that the intersection of socio-economic class and race/ethnicity produces vastly different effects across diverse student types.

I am in no way suggesting that we should no longer care about race or ethnicity.  On the contrary, I am suggesting that if our conception of “diversity” is static and naively simplistic, we are less likely to recognize the emergence of powerfully influential dimensions on which difference also exists and opportunities are also shaped.  Thus, we put ourselves at substantial risk of treating our students not as they are, but as we have categorized them.  Worse still, we risk spending precious time and energy arguing over what we perceive to be the “right” number under the assumption that those numbers were objectively derived, when it is painfully clear that they are not.

Thanks for indulging me this week.  Next week will be short and sweet – I promise.

Make it a good day,

Mark