College Professors Aren’t Going to Be Replaced by AI

BY JONATHAN REES

I swear the first time I heard about AI, in the way that term is being used lately, was when John Oliver married a cabbage. Then, because this was before the great Twitter exodus, just about everyone in my feed started posting the absolutely ridiculous essays they got when they fed ChatGPT very normal-sounding prompts. None of this predisposed me to respect the ability of AI to somehow affect my life. The tendency of people to argue (a) that AI is going to save the world by curing every known disease or (b) that AI is going to destroy the world for reasons that I think involve some kind of robot takeover didn’t make me any more interested in the subject.

It was the classroom implications of AI that finally got me to pay attention. Faculty I know and respect started arguing that, thanks to AI, we are either (a) all going to lose our jobs because students won’t have to write anymore or (b) never going to be able to trust our students again because it’s going to be impossible to tell a student paper from a paper generated by a computer. But I lived through the Massive Open Online Course “revolution,” so I’ve got to admit that I was skeptical. I still am.

Photo of Jonathan Poritz by Jonathan Rees

Luckily for me, my friend, coauthor, and former colleague Jonathan Poritz visited my campus on Tuesday to talk about AI, so I have a far better understanding now of why my hunch was right than I did at the beginning of the week. What I’m going to do here is pass on some of what he said mixed with what I had already learned. (Any mistakes are undoubtedly my fault.) While J. P. did talk more than a bit about the save the world/destroy the world dichotomy, I’m going to focus on the classroom stuff here because this is a blog about academic labor, after all.

According to J. P., the great computer advance that gave rise to the notion that AI can write is a fairly simple change that lets the program, by the end of a sentence, keep track of what it wrote at the beginning of that same sentence. I think he said it’s called an “attention mechanism.” The results are very pleasant looking. However, J. P. reminded everyone at his talk that “it doesn’t understand anything.” In other words, AI can generate words, but it can never really handle ideas.
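
(A quick aside of my own, not part of J. P.’s talk: here is a toy Python sketch of what that “attention” arithmetic roughly amounts to. The names and numbers are invented for illustration; the point is that it is all weighted averages of word vectors, not comprehension.)

    import numpy as np

    # Toy sketch of one attention step (illustrative only). Every word is just a
    # list of numbers; "attention" scores how much each earlier word should
    # influence the word currently being produced.
    def attention(queries, keys, values):
        scores = queries @ keys.T / np.sqrt(queries.shape[-1])    # how relevant is each word to each other word?
        weights = np.exp(scores)
        weights = weights / weights.sum(axis=-1, keepdims=True)   # turn scores into proportions (softmax)
        return weights @ values                                   # a weighted blend of the word vectors

    sentence = np.random.rand(3, 4)  # three "words," each a vector of four made-up numbers
    print(attention(sentence, sentence, sentence))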

J. P. also stressed that AI has no agency. It can’t do anything by itself. Nevertheless, people construct lots of metaphors about AI doing this awful thing or doing that wonderful thing. In reality, all that’s happening is that a bot has crawled the Internet and, based on those results, is applying a statistical model to suggest what the next word is going to be under the parameters that you set for it. This is what leads to sometimes absurd results. J. P. told a story from Italy, where he lives now, about somebody using AI to find a recipe that would stop the mozzarella cheese from slipping off their pizza. The AI recommended using glue.
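
(Another aside of my own, not from the talk: the “statistical model” part can be made painfully concrete. The toy Python below just counts which word most often follows which in a made-up snippet of text and then parrots back the most common pairing, which is roughly how you end up with glue on pizza if the glue joke shows up often enough online.)

    from collections import Counter, defaultdict

    # A toy next-word "predictor" (illustrative only): count what follows each
    # word in some text, then always suggest the most frequent follower.
    text = "the cheese slides off the pizza so use the cheese that sticks"
    words = text.split()

    followers = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1

    def suggest_next(word):
        # No understanding here, just the most common thing seen after `word`.
        return followers[word].most_common(1)[0][0]

    print(suggest_next("the"))  # prints "cheese," because that pairing appears most often above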

By the way, did I mention that generating pictures of John Oliver marrying a cabbage and writing bad student essays take enormous amounts of electricity? Or that there are obvious copyright issues that could easily ground this technology before it ever gets good at what it’s supposed to do (assuming that’s even possible)?

I think the part of J. P.’s talk that will stick with me the longest is his notion that AI, and all the hype surrounding it, is actually a sneak attack on the very idea of expertise. What the AI superfans are really telling us is that our skills are so simplistic that they can train a computer to write better than we can teach people to write. But the writing that ChatGPT produces doesn’t really look like student writing. From what I’ve seen, it reads like the milquetoast median of the entire Internet, and it never really even answers the question that I asked.

So don’t believe the hype. Students will stop submitting bad AI papers because those papers keep failing on the merits, long before anybody accepts laughably bad writing as the new normal.

Contributing editor Jonathan Rees is professor of history at Colorado State University Pueblo.
