Artificial Intelligence and the Problem of Fakery

BY DAVID PICKUS AND ROBERT NIEBUHR
The words "fake or real" appear in typescript on a white page in a manual typewriter.
Artificial Intelligence (AI) is just a tool, they say. For universities, the idea is that this tool enhances learning. Furthermore, the promise continues that AI is a great equalizer that will increase access to knowledge.

Our objection to AI in universities does not center on the march of technology, nor does it fear an expansion of learners. Every momentous innovation in information dissemination has evoked some conservative opposition. However, our opposition is to the reality behind AI’s promises, not to change per se.

The problem stems from a stubborn fact about human nature, namely that the incentives we say we respond to are different from the ones that really move us. Universities, whatever their talk of being on the cutting edge, stake their legitimacy and economic survival on a system of incentives centering on the evaluation of classroom assignments. Following this system, students earn degrees and make their way in the world.

Yet, such a system never counted on AI, whose real attraction (for most) is providing a simulacrum of honest effort. Saying that one is teaching responsible use and one does not intend to abet cheating misses the point. The urge to avoid exertion and get something for nothing is deeply ingrained. Foreboding about AI on campus is about fakery, plain and simple. Other aspects of AI beyond the academic context may be unsettling or exhilarating, depending on one’s perspective. Yet, in the classroom, authentic effort versus fakery is the central, unavoidable issue.

Naturally, it is well and good that AI promises to enhance learning outcomes. But the foundation of the university, at least as we once knew it, is that students do their own work and that they be assessed under the same presumption. To be sure, this system never functioned as well as intended, despite safeguards and checks. Meanwhile, technological developments have long fostered a series of interesting gray areas that skirt the line between assisting and cheating. But too much dishonesty undermines the contest. As AI would doubtlessly tell us, “It is well known that the weight of even a single straw suffices to collapse the camel’s back.”

Thus, the university’s long-standing model has never faced a situation where some people go about doing assignments in a roughly traditional way, while others both subvert the (now) old-fashioned sense of working toward a degree and also render themselves increasingly incapable of doing anything other than AI playacting.

The latter group not only would resent any effort to enforce responsible-use rules for AI but would also experience deep fear, even panic, if thrown back on their own resources. Most, or all, instructors can guess the consequences if these students were suddenly expected to actually perform at their pretended level of rigor. However, confronting students in the name of rigorous standards is hard. It is also hard to imagine that universities can carry a large body of academically idle students indefinitely. Even before AI, severe and prescient warnings were being sounded about students graduating unready and “adrift.” But now we face the prospect of a growing number of college graduates whose capacities extend no further than what AI “assists” them in doing. What will become of this cohort makes for interesting, but also sad, speculation.

Simultaneously, there is another group that also requires urgent attention. This is the not inconsiderable number of students who dislike the ongoing dishonesty but, precisely because of it, are falling into a crisis of their own. The notion that cheating “only hurts oneself” is entirely wrong. If enough people are doing it, it is only a matter of time before more people feel like suckers for making additional effort and taking more intellectual risks. Worse, once they start to suspect that the instructor’s feedback is also AI-generated, the feeling of demoralization grows apace.

Such deflated students may not drag down institutions in an obvious, or even intended, fashion, but their quiet withdrawal delegitimizes institutions even more thoroughly. Coursework, and everything that goes with it, is justified by student engagement and effort. If those primarily carrying the burden of effort stop seeing the point of their effort, then university classrooms will become unsustainable—no matter how innovative the methods and cutting-edge the subject may be. What will become of these students as they grow disaffected should be a matter of urgent concern.

Once classroom instruction becomes untenable, the basic model of a university must be replaced. The old model presumed that dishonesty, no matter how common, was an outlier and that students and faculty members basically did their own work. Even if we say AI is in the process of altering what it means to be honest, as far as we can tell, every human system falls apart when faking effort becomes indistinguishable from the real thing. While our definitions of effort will likely change as technology changes, the core requirement of individual labor remains. It may come to pass that AI spots ways to detect itself, and the present iteration of the problem will be (for the moment) contained. But it is not difficult to imagine a future where AI’s appeal consists of providing better ways to outfox the latest efforts to interdict AI cheating. What’s coming is an arms race.

In facing this dilemma, we must not forget that technology continues to be a boon. Moreover, attempts to contain or censor people’s desire for knowledge backfire eventually. There’s a tipping point of fakery beyond which university classrooms, as we know them, cease to be viable, precisely because institutions of higher learning are fragile. The long-standing presumption of classroom integrity is something like the trust we place in currency. We believe collectively that the system is sound, has legitimacy, and matters for the future. Such collective belief can make desired goals of augmenting our knowledge come true. But the opposite holds as well. Once we stop believing that instruction really works and grades are truly earned, all bets are off.

David Pickus is associate professor of history at the American University in Vietnam.

Robert Niebuhr is teaching professor and honors faculty fellow at Barrett Honors College, Arizona State University.

