The Assessment Myth

One of the ways faculty are intimidated and coerced into accepting codified curricula is through the specter of not living up to assessable “outcomes” (I use the scare quotes because the word has become one of the cant words of educational “reform,” another word in the category, that are so popular in some quarters, especially administrative ones). “Assessment” has become a major part of faculty activity, one that is required by the accreditors, or so we’re told, and we can’t do anything about it.

The question is: Why?

Another is: Why can’t we do anything about it?

In a piece posted on Chronicle.com last Friday, Erik Gilbert, associate dean of the Graduate School at Arkansas State University and a professor of history, asks a couple of others: “Does Assessment Make Colleges Better? Who Knows?”

And, actually, I think we do know. Gilbert’s questions are merely rhetorical. He knows, as we all do, that quantification and rankings are not necessarily the best tools for establishing what someone knows. Tests are useful, of course they are. So are quantifiable goals. But they make a poor basis for determining whether something as ethereal as a college education is adding to students’ lives. At best, they move education into the mechanical. At worst, they make gaming the system (the assessments, the outcomes) necessary and learning no more than a side benefit.

There might be an argument for assessment if it had any kind of provable track record, if assessment itself had been assessed and found valuable. Gilbert looked into this:

Has anyone looked into whether assessing student-learning outcomes over many years has made American colleges, or students, better in some way? Has anyone tried to compare institutions with different approaches to assessment? I am a historian so I am not familiar with the education research, but as best I can tell from a literature search and from asking people in the field the answer is “no.”

Apparently, our evaluators from the accrediting agencies just feel that assessment improves education.

Faculty members have been grumbling about assessment for years, but rarely have we stood up and challenged it. We accept grading rubrics, among other things, that we know are flawed or that don’t encompass the fullness of what we are trying to do. We know, in our hearts, that this is close to meaningless:

People who work in assessment complain that faculty treat it as merely a compliance issue; that we just tick the boxes and don’t use the data to improve student learning. No doubt this is true. Advocates may be able to point to modest improvements in student learning in specific programs or courses with evidence generated by assessment instruments, but this is worryingly similar to surgeons patting themselves on the back for taking out tumors without checking to see if their interventions are affecting mortality rates.

We’ve known, intuitively, that assessment does little for anyone except those who like to show spreadsheets in executive meeting rooms. We know what it takes to create an effective learning environment: enthusiasm. Examination of student progress in light of learning outcomes can help a little, as Gilbert points out, but it’s certainly not enough to change the real outcome, the “mortality rate” (its opposite, actually) of education.

Gilbert frames his argument in terms of the real assessment of colleges, one that starts when high-school students and their parents begin examining campuses. Nobody, he points out, looks at institutional assessment programs. I don’t think even graduate programs do, when evaluating candidates for admission. One of the things in my favor when I applied to the University of Iowa was (I was later told) that no one from Beloit (my undergraduate school) had, in anyone’s memory, failed to complete their graduate studies in English at Iowa. Experience counted; no one, I am sure, looked into the details of Beloit’s classes. And I suspect that’s still the case. A school’s reputation is based on what its students do once they leave, on what its faculty do on campus, and on the facilities and programs available to augment the learning process. Not on assessment.

Gilbert ends his essay with this: “It’s time for us to demand that the accreditors who are driving assessment provide evidence that it offers benefits commensurate with the expense that goes into it. We should no longer accept on faith or intuition that learning-outcomes assessment has positive and consequential effects on our institutions — or students.” I absolutely agree.

11 thoughts on “The Assessment Myth”

  1. It seems to me that institutional assessment at the course level is primarily used to evaluate faculty, not to actually improve courses or outcomes. And most colleges and universities are not really structured in a way to effectively assess programs or the overall educational experience; even if they could do effective assessment, they are not structured in a way to systematically implement changes to address whatever was discovered. In my own situation, my department has decided to look at three years of program assessments that we have been required to submit to our administration (which was required to collect them by the accrediting agency). But nobody asked us to actually look at the data (as a department) or do something with the assessments. And I don’t think actually using the data from the assessments will make it any easier to get resources if we feel we need to make changes.

    • This is one of the myths of assessment–and not just at the college level–that it is used to improve the “product.” Actually, it is a means (poorly utilized) of culling the faculty.

  2. I think there’s another casualty that needs to be considered: our syllabi. In my job, I used to articulate course transfer requests. What might course X at Y institution be worth here? And to do that, I read syllabi–hundreds–from across the country and indeed world. And what I can say, without a moment’s hesitation, is that the incorporation of SLO’s into the syllabi made them much harder to read, much more bureaucratic, much less about the excitement of learning and much more about box checking and busy-work making. This may seem trivial, but I don’t think it is. The syllabus is a crucial document. It’s a contract between instructors and students. And if the first thing we say to the students is, “read and understand this document,” and they then see a lifeless, Orwellian instrument that goes on for 30 pages while really saying nothing of college-level substance, we have a problem. We encourage them to pretend to read the syllabus — which of course many of them do anyway, but this gives the snowball a shove — and the class gets off on an intellectually dishonest foot.

    Now I imagine you might say, “but didn’t you as a course articulator welcome the listing of specific skills? didn’t it at least make your job easier?” No, it didn’t. Because SLO’s are almost universally airy, pretentious, and obscure. I’ve seen classes described in terms that make their students sound like Roman Jakobson: “Student will master the knowledge of change, will acquire ability to understand major cultural systems, will discern truth from falsehood.” To actually figure out what students likely really learned and did, you have to pay attention to two things: the readings and the homework. And then the SLO’s just get in the way.

    • Thanks, and right on the money, as far as I am concerned.

      In response to the growth of syllabi, I keep mine to two pages, the front and back of one sheet. That way, I resist the pressure to add more and more of what is, essentially, what singer/songwriter R. P. St. John once called “simply idle chatter from beyond the door.”

      I would add one more thing to “the readings and the homework,” the discussion.

    • That’s a very different kind of assessment. In fact, one has nothing to do with the other. What you are talking about is assessment of individual accomplishment. What the other is talking about is the aggregate and generalized (cross class) standard. The former is based as much on judgment as on quantification; the latter tries to strip judgment completely from the equation.

      You are making a common mistake, one that many make in K-12 discussions when they conflate high-stakes standardized testing with the testing that’s a tool in the individual classroom, created and monitored by the individual teacher. They just aren’t the same thing–as any teacher will tell you.

    • At the school where I taught we were specifically told that grades and test scores COULD NOT be used as assessment. At least in one iteration. The rules for doing assessment were completely overhauled approximately every two years, so what we were told we must do one time we were told we must not do the next time. As a result, we had to spend as much time revising assessment procedures as we did implementing them. And they never became useful.

  3. Excellent post! Gilbert’s article is important and persuasive. I addressed some of the same issues in a post last month about Arne Duncan’s push for accreditation “reform” here: https://academeblog.org/2015/07/28/arne-duncan-accreditation-and-barking-watchdogs/. Another recent critique of assessment came in George Siemens’ account of a Dept. of Education meeting on “quality and innovation,” which I discussed earlier this month in this post: https://academeblog.org/2015/08/06/education-innovation-quality-and-disruption/. Wrote Siemens: “In spite of the data available, decision making is still happening on rhetoric. We don’t understand the higher education market analytically – i.e. scope, fund flows, student flows, policy directives, long term impact, – well nationally and internationally. I want to hold both universities and corporate sectors to accountability in their claims of impact. We can’t do that without a far better data infrastructure and greater analytics focus.”
