One of the ways faculty are intimidated and coerced into accepting codified curricula is through the specter of not living up to assessable “outcomes.” (I use the scare quotes because the word has become one of the cant terms of educational “reform”—another word in that category—so popular in some quarters, especially administrative ones.) “Assessment” has become a major part of faculty activity—one that is required by the accreditors, or so we’re told, and one we can’t do anything about.
The question is: Why?
Another is: Why can’t we do anything about it?
In a piece posted on Chronicle.com last Friday, Erik Gilbert, associate dean of the Graduate School at Arkansas State University and a professor of history, asks a couple of other questions: “Does Assessment Make Colleges Better? Who Knows?”
And, actually, I think we do know. Gilbert’s questions are merely rhetorical. He knows, as we all do, that quantification and rankings are not necessarily the best tools for establishing what someone knows. Tests are useful, of course they are. So are quantifiable goals. But they make a poor basis for determining whether something as ethereal as a college education is adding to students’ lives. At best, they move education into the mechanical. At worst, they make gaming the system—the assessments, the outcomes—necessary, and learning no more than a side benefit.
There might be an argument for assessment if it had any kind of provable track record—if the assessment had been assessed and found valuable. Gilbert looked into this:
Has anyone looked into whether assessing student-learning outcomes over many years has made American colleges, or students, better in some way? Has anyone tried to compare institutions with different approaches to assessment? I am a historian so I am not familiar with the education research, but as best I can tell from a literature search and from asking people in the field the answer is “no.”
Apparently, our evaluators from the accrediting agencies just feel that assessment improves education.
Faculty members have been grumbling about assessment for years, but rarely have we stood up and challenged it. We accept grading rubrics, among other things, that we know are flawed or that don’t encompass the fullness of what we are trying to do. We know, in our hearts, that this is close to meaningless:
People who work in assessment complain that faculty treat it as merely a compliance issue; that we just tick the boxes and don’t use the data to improve student learning. No doubt this is true. Advocates may be able to point to modest improvements in student learning in specific programs or courses with evidence generated by assessment instruments, but this is worryingly similar to surgeons patting themselves on the back for taking out tumors without checking to see if their interventions are affecting mortality rates.
We’ve known, intuitively, that assessment does little for anyone except those who like to show spreadsheets in executive meeting rooms. We know what it takes to create an effective learning environment—and that’s enthusiasm. Examination of student progress in light of learning outcomes can certainly help a little bit, as Gilbert points out, but it’s certainly not enough to change the real outcome, the “mortality rate” (its opposite, actually) of education.
Gilbert frames his argument in terms of the real assessment of colleges, one that starts when high-school students and their parents begin examining campuses. Nobody, he points out, looks at institutional assessment programs. I don’t think even graduate programs do, when evaluating candidates for admission. One of the things in my favor when I applied to the University of Iowa was (I was later told) that no one from Beloit (my undergraduate school) had, in anyone’s memory, failed to complete their graduate studies in English at Iowa. Experience counted; no one, I am sure, looked into the details of Beloit’s classes. And I suspect that’s still the case. A school’s reputation is based on what its students do once they leave, on what its faculty do on campus, and on the facilities and programs available to augment the learning process. Not on assessment.
Gilbert ends his essay with this: “It’s time for us to demand that the accreditors who are driving assessment provide evidence that it offers benefits commensurate with the expense that goes into it. We should no longer accept on faith or intuition that learning-outcomes assessment has positive and consequential effects on our institutions — or students.” I absolutely agree.