BY JONATHAN REES

I have seen exactly one use of AI of which I actually approve. My friend Alegria fed our faculty handbook into NotebookLM, a Google product that lets you set bounds on what the artificial intelligence digests. Our handbook is an absolute monstrosity. It gives me a headache just to think about it because it is impossible to find any of the information in it that I want. But now Alegria can ask it “How many semesters do you have to teach before you get your next sabbatical?”—and it will actually tell her the answer with no fuss.
On the other hand, this attempt to “rewrite history” by feeding AI primary sources and prompts in order to build a new narrative about the California gold rush is not a healthy way to use NotebookLM:
Right away, [Steven] Johnson recognized that she would make a great character. He took note in particular of the fact that Lebrado returned to the valley near the end of her life. “I’m like, What’s the [expletive] structure of ‘Titanic’?” he joked. The book could open with what Johnson imagined was Lebrado’s emotional return to the valley at nearly 90 years old, before zooming back in time—to her childhood, to a broader cast of characters and the violent drama of the 1850s.
Johnson wasn’t sold on this idea yet. But he marveled at how A.I., operating with mostly open-source texts and a tiny amount of human labor, had delivered him a concept that he absolutely could use. “Everything I’ve just showed you is, like, 30 minutes of work,” he said.
Audrey Watters explains the exact problem with this approach here:
I’d argue the interest in using “AI” for brainstorming is surely connected to the decline in reading—reading long-form materials, that is, not text messages and status updates and 200 word blog posts, which, yes, people do a lot of but, no, is not the same as reading a book or even a magazine/journal article. As we spend less time undertaking the challenging cognitive labor of reading, we become less adept at both deciphering complex language and thought and constructing complex language and thought in turn. We have nothing that interesting to say (to write) because we have nothing interesting to think about because we have read nothing substantive.
The difference between our faculty handbook and Steven Johnson’s research is that reading is supposed to be part of my job as a historian. So is understanding my rights under the faculty handbook, but ours is deliberately opaque so that nobody will ever read it.
The use of AI by instructors for tasks like grading, on the other hand, is even more pernicious than what Johnson is doing:
Rob Anthony, part of the global faculty at Hult International Business School, told Fortune that automating grading was becoming “more and more pervasive” among professors.
“Nobody really likes to grade. There’s a lot of it. It takes a long time. You’re not rewarded for it,” he said. “Students really care a lot about grades. Faculty don’t care very much.”
That disconnect, combined with relatively loose institutional oversight of grading, has led faculty members to seek out faster ways to process student assessments.
There is no better way to make your job obsolete than to let computers do your work for you and then hope you won’t be caught.
Set a good example. Organize your academic workplace so that you can raise standards all around. Don’t contribute to letting them slip in every corner of academia.
Contributing editor Jonathan Rees is professor of history at Colorado State University–Pueblo.