Constraining Exploration: The Downside of Evaluation

A new post on Retraction Watch, “Peer review isn’t good at ‘dealing with exceptional or unconventional submissions,’ says study,” quotes the authors of the study named in its title:

Because most new ideas tend to be bad ideas, resisting unconventional contributions may be a reasonable and efficient default instinct for evaluators. However, this is potentially problematic because unconventional work is often the source of major scientific breakthroughs.

This should be embroidered into samplers, etched onto the marble bases of statues, written by that old biplane in the sky above Coney Island, embossed on the covers of textbooks, sprayed as graffiti on the side of the George Washington Bridge and encoded as a Google banner ad.

We no longer know how to appropriately evaluate either learning or scholarship–or even art. If we ever did. We reduce it all to “outcomes” or “numerical assessment” or pass off responsibility to “peer review.” We only trust what we can define or count, or what comes from those we deem (often for rather arcane reasons) “experts.”

Evaluation, as it has evolved to the present day, is inherently regressive and constraining. It has no place for the unconventional or anything that challenges received knowledge.

In its post, Retraction Watch discusses an article by Kyle Siler, Kirby Lee, and Lisa Bero in Proceedings of the National Academy of Sciences, “Measuring the effectiveness of scientific gatekeeping.” The authors write:

Our research suggests that evaluative strategies that increase the mean quality of published science may also increase the risk of rejecting unconventional or outstanding work.

The same could be said of the standardized tests now so prevalent in American public schools and of the assessment paradigms being foisted on academic departments in colleges and universities. In all three cases, we are sacrificing the chance of the breakthrough, the surprising, and the unconventional for incremental improvement–if that. The improvement lessens each cycle, the gatekeeping eventually resulting in stultification, “new” items becoming nothing more than the old presented slightly differently. A defined universe, after all, limits exploration within it simply through that definition–just as a Scantron test limits possible answers through a paucity of choice. We are creating a paint-by-numbers conception of the universe that leaves little room for the truly artistic or for the groundbreaking erasure, movement, or addition of its lines and numbers.

What has happened to us?

Quite simply, we’ve grown fearful, especially fearful of failure. Worse, we’ve forgotten the value of failure: it is not something to fear but something that makes real success possible. So, we’d rather be satisfied with the most constipated version of success possible than risk even the most minimal failure on the chance of creating something indubitably new. So, we want to create high-school graduates who are, for all intents and purposes, identical. So, we yearn for college classes stamped from molds. So, we cherish scholarship that challenges as little as possible.

We want to make sure, it seems, that our world changes as little as possible.

Not only is that a bore but, in current intellectual and physical climates, it is an invitation to disaster.

2 thoughts on “Constraining Exploration: The Downside of Evaluation”

  1. And just how is the “unconventional” but “breakthrough” report in field X supposed to make its presence known among a million other reports, if it isn’t recommended by someone of some current standing in X? The entire point of peer review is to serve as a filter to screen out the garbage, so that anything that makes it through the system has at least some chance of being found of interest by someone else working in X.

    What is there that could possibly work in place of peer review? No suggestion in the study reviewed by Retraction Watch. No suggestion by Retraction Watch (though one of the comments suggests better-educated reviewers). No suggestion in this response to Retraction Watch. Just publish everything online? We already *do* that–it’s called the Web. Funny thing, though: It’s utterly useless for providing actual information unless there’s some mechanism that advances the helpful above the chaff.

    Well, what about search-engine rankings? Shall we rank our science articles by secretive algorithms based on how many people look at them or something? Or shall we, for each field X, hire experts in X–called editors–whose job it is, each to gather many unpaid volunteers, also expert in X, so that anyone who wants to can submit their report to one of the editors in hopes that that editor’s stable of volunteers will vet the report as passable? And then the community of X can decide, collectively or individually, the worthiness of each editor’s enterprise, so that anyone wanting to know something about X can go to the fruits of well-regarded X-editors and see what has passed those editors’ hurdles.

    That’s peer review.

    • The problem with that, Steve, is that it is inherently backward-looking. The “experts” established their reputations in the past and are (at least emotionally) somewhat beholden to the golden ideas of their own youths. They are rarely willing to step outside of what has become “received wisdom.” Also, it’s not an either/or here. There are plenty of means of evaluation that do not rely on either algorithms or peer review alone.

      My concern is that when we make standardized testing or assessment or peer review the “gold standard,” we limit ourselves unnecessarily–and, ultimately, to our detriment, creating cultures of mediocrity certified by processes with no room for the extraordinary or the innovative.

      The fears that academics are feeling today reflect those of journalists a decade ago. In a book chapter I contributed some years back, I argued that journalism would soon be unable to provide its own gatekeeping and would have to rely on a new structure from without, from “citizen journalists” who could pick up the function without the obligations to the profession that professional journalists carry. We’ll see. The situation in academia is certainly different, but it does no more good for academics to circle the wagons than it has for journalists.

      If traditional peer review (or standardized testing and assessment) is to have a useful place, it has to become something other than the single metric of success. Instead of demanding a certain number of peer-reviewed articles for reappointment, for example, departments could start expanding their conceptions of what counts as legitimate scholarly work (as many already are)–just as colleges move beyond SAT scores for admissions and as course assessment should look beyond rubrics and “outcomes.”

      Expertise has value, but not alone. Physicians, too, are beginning to learn this. Why not academics?
