ChatGPT and Academic Labor

BY JILL R. EHNENN AND CAROLYN BETENSKY

AI-generated image of a computer creating a professor, made with pixray/replicate.

Over the past few weeks, three scholars from political science and English departments—Corey Robin (political science, Brooklyn College and CUNY), Ted Underwood (English, University of Illinois Urbana-Champaign), and Eleanor Courtemanche (English, University of Illinois Urbana-Champaign)—have offered incisive and poignant reflections on what ChatGPT means to them, and us, as educators. In “How ChatGPT Changed My Plans for the Fall,” Robin revises his earlier assessment of its capabilities. Previously he had determined that AI was incapable of writing what he would call a good essay in response to one of his prompts. Now, his fourteen-year-old daughter has managed to use ChatGPT to produce a paper even Robin considers decent. The advanced capacities of ChatGPT have led him to decide that he will be grading only in-class assignments in the future. Setting aside the usual questions regarding the hows and whys of policing one’s students’ work, Robin mourns the looming loss to students of something more than the mere ability to write papers:

Good work was never about writing good papers. It was about being able to order your world, to take the confusion that one is confronted with, and turn it into something meaningful and coherent. And to know that that doesn’t just happen spontaneously or instinctively; it’s a practice, requiring, well, work.

He adds that for him, writing creates the conditions for thinking to happen at a remove from the thinker:

The only thing, in my life, that has even come close to what writing forces me to do is psychoanalysis, not therapy, but five days on the couch, with your analyst behind you saying almost nothing. Only on the couch have I been led to externalize myself, to throw my thoughts and feelings onto a screen and to look at them, to see them as something other, coldly and from a distance, the way I do when I write.

In “AI and Academic Integrity: What Kind of Crisis?” Eleanor Courtemanche focuses on what academic integrity even means in the age of ChatGPT. She suggests that AI-generated writing points to a crisis of accountability in our culture even larger than the internet free-for-all that preceded it:

The words produced by AI by contrast are not really responsible for themselves, but can be harvested for free, like anything else on the internet. The internet age has been a great era of cut-and-paste, and of garbage journalism that rips off the work of responsible reporters, but it’s different when you’re not cutting and pasting words that were at one point linked to someone else’s name. The AI has no name, cannot be held accountable, and so far is not reliably linked to knowledge that is true.

Courtemanche concludes:

Maybe AI is best understood as a free experimental lab, like an irresponsible Wikipedia without facts, equally suited to produce verbal angels or monsters. But when you put your name on a piece of writing—if you’re a human—you should be prepared to be held accountable.

Finally, in his reflection (“We Can Save What Matters about Writing—at a Price”) on Robin’s essay and on a working paper coauthored by the Modern Language Association and the Conference on College Composition and Communication, Underwood appreciates Robin’s concerns for the future of students’ ability to engage in self-critique and the working paper’s suggestion that instructors foreground the process of writing. He points out, however, that ChatGPT could conceivably develop meta-cognitive capabilities itself. And then where would we be? Ultimately, he argues,

We can only save what matters about writing if we’re willing to learn something ourselves. It isn’t a good long-term strategy for us to approach these questions with the attitude that we (professors) have a fixed repository of wisdom—and the only thing AI should ever force us to discuss is, how to convey that wisdom effectively to students. If we take that approach, then yes, the game is over as soon as a model learns what we know. It will become possible to “cheat” by simulating learning.

It’s the “learning something ourselves” part of Underwood’s prescription that we want to dwell on here. Underwood contends that we will “need to devise new kinds of questions for advanced students—questions where the answering process can’t be simulated because no one knows what the answer is yet.” But this learning we’re going to have to embark on and these new ways to teach and assess we’ll need to come up with are no minor matters. At what price, indeed? Who is going to train us, and who will be paying for it? And even more importantly: is this new frontier in pedagogical orientation what we really need—let alone want—to be spending our time exploring right now?

In addition to everything else that’s been said about it, ChatGPT represents a serious labor issue for faculty. Most professors—especially those in the arts and humanities—are already overworked and underpaid, even those of us with tenure, and especially those of us not at R1 institutions. What would it mean to set healthy boundaries in this instance instead of putting in more labor?

Underwood’s essay is deeply thoughtful and well worth reading. But many university administrators are calling on faculty, in distinctly unnuanced ways, to figure out on our own how to grapple with the ethical and practical implications of these new challenges to academic integrity. While institutions may provide workshops or guidelines like the MLA’s (which still leave the headache of the labor to us), colleges and universities seem unwilling to take a stand on the ethics of AI. Meanwhile, the internet is filled with articles about how to use AI without getting caught, and The Washington Post counsels “What to do when you’re accused of AI cheating.” What’s going to happen when a teacher gets into a tussle with a student over AI-related issues? Will the chair stand behind the faculty member? The dean? The student conduct board? What will constitute “proof” of academic dishonesty? In a customer-service-oriented environment, where student evaluations of teaching can—in spite of their demonstrated unreliability and biased nature—do serious damage to a career, let us not discount the emotional labor of worrying about these ethical conundrums.

To be sure, we do not want to suggest that faculty should never reflect upon or modify their teaching. Far from it! But it would be nice if we could change our pedagogical approaches on our own terms, and not merely in reaction to forces we are expected to adapt to. Having seen during the pandemic what demands for instantaneous change can do to our labor conditions, we fear the injunction to change our instruction in response to ChatGPT is just a new recipe for (more) exploitation and burnout.

In our view, if we are to learn new things in order to teach our students effectively, faculty need to be supported so that their pedagogical priority (when they want it to be) can be the labor of developing new content. Perhaps some faculty had planned to refresh their courses so that they reflect vibrant conversations about the latest content in the field. Perhaps some will want to create new syllabi that dovetail with their research interests, bringing enthusiasm and cutting-edge knowledge into the classroom, and creating opportunities for students to get involved with research. Perhaps some of us had planned to revamp our courses to include content from underrepresented voices and traditions or to learn more about content that will better reach students who have experienced hardship, marginalization, or disability. These latter goals are especially urgent in the context of recent political pressures undermining DEI efforts, especially legislation that polices faculty efforts to deliver inclusive, multicultural curricula. All of these worthy, content-oriented goals involve considerable labor—labor that can’t get done (at least not without burning the candle at both ends) if faculty are saddled, with little real guidance or support, with the work of figuring out what to do about ChatGPT.

On the issue of generative AI, as with any other academic matter, our administrators, faculty senates, and unions need to work with us to come up with ethically grounded policies that protect and respect our labor, our expertise, and our academic freedom. Without the support and collaboration of administrators, faculty will be forced to develop piecemeal approaches to the new technology. We all must work together to meet this formidable challenge.

Jill Ehnenn is professor of English at Appalachian State University, where she is also a former Faculty Senate Chair.

Contributing editor Carolyn Betensky is professor of English at the University of Rhode Island, a former AAUP Council member, and a cofounder and executive committee member of Tenure for the Common Good.

2 thoughts on “ChatGPT and Academic Labor”

  1. I would like to point out that concern about Generative AI’s disruption in educational settings isn’t a unique phenomenon but rather part of a longstanding pattern of concern whenever new technological tools emerge. For example, consider mathematical calculators. When they were introduced into classrooms, many educators and parents worried that students would lose their grasp of fundamental arithmetic, becoming overly reliant on these electronic devices. Similarly, the rise of spell check in word processors sparked concerns about potential declines in literacy levels and the erosion of meticulous proofreading. Both, however, provided opportunities for significant productivity increases, along with a commensurate change in delivery of instruction.

    Delving further back, the essence of a liberal education in earlier centuries was grounded in rigorous memorization. Students were often required to commit lengthy passages of poetry, plays, and philosophical treatises to memory. This was seen as a hallmark of a well-educated mind, emphasizing both the value of retention and the depth of understanding that came from such exercises. We no longer seem to consider this a fundamental requirement of a well-rounded education.

    Over time, our benchmarks for what constitutes valuable learning have evolved. With each technological innovation, education faces a conundrum: how to balance the new with the old. We must strike a balance between preserving foundational educational tenets and embracing the potential enhancements brought about by innovation.

    As with calculators and spell check, our current challenge with Generative AI tools is not about outright resistance or unquestioning adoption. Instead, it’s about discerning integration, understanding how these tools can complement rather than supplant essential learning processes. The heart of education remains the cultivation of critical thinking, creativity, and a deep understanding of content. If we approach AI with this perspective, we can navigate its inclusion just as we have with past technological advances, ensuring that the core values of education remain intact. One thing is sure: Generative AI is here to stay, and our students are better served by learning how to use it than by eliminating it from the academic classroom.

    • David Smith raises good and appropriate points about AI, but I wonder how exact his analogy is to the various tools, such as calculators and spell check, that have sparked concern in the past. Typewriters (and now “word processing”) may have led to a decline in handwriting, but a typewriter never authored an essay on “Hamlet.” A calculator certainly allows me to perform various daily arithmetic tasks without remembering how to do long division, but that calculator cannot prioritize my spending choices, whether in buying food or in deciding whether a more expensive resort for 4 days would be better than a cheaper hotel for 7 days. I think the problem most of us have with AI is that it threatens to replace understanding, creativity, insight, problem-solving, etc., not merely the means to type these up with no spelling errors.

      I’ll also confess that, having recently retired from teaching, I am just as glad not to have to deal with these issues in courses and classrooms!
