Why You Should Be Unrelentingly Hostile to AI

BY JONATHAN REES

This post is about the biggest threat to the jobs of college professors in America today. No, not that guy. Not the guy with the chain saw either. I’m writing about that other subject that you’re sick of reading about because it just makes you depressed.
I first wrote about AI for this blog last September rather reluctantly because I wasn’t really sure I had anything to add to the discussion. I got over that hesitation because nobody had really described to me exactly what AI was and why everyone had started talking about it at that moment until my old friend Jonathan Poritz visited campus and did both those things. Long story short: There’s not that big a leap between the technology we had before all the hype started and what we have now. Also, the writing it produces is really bad.

That proved to be a pretty good resting place for my mind on this topic. Whenever someone has asked me about AI since (which admittedly doesn’t happen a lot but has actually happened), I’ve explained that there’s not that big a leap between the technology we had before all the hype started and what we have now, and that the writing it produces is really bad. When I somehow managed to run an entire fifty-minute discussion on this subject in my Introduction to History for Majors class this last spring, I deliberately saved the ethics of the subject for the last five minutes because I thought attacking AI in practical terms first would be much more effective.

Sadly, this argument hasn’t been quite as effective as I had originally hoped. When I do make this argument, some people say stuff to me along the lines of, “Since we know that AI is the future, we as professors have to teach our students to use it responsibly.” This then leads me to wonder if my only job in the near future is going to be to teach undergraduates how to come up with better prompts and whether anyone will be willing to pay me for doing that.

Luckily for me, since I still read all the depressing articles about AI that I see, I came across the perfect counter to that position this morning. It comes from Nicholas Carr, author of The Shallows, among other great books about technology:

Automation is most pernicious . . . when a machine takes command of a job before the person using the machine has gained any direct experience doing the work. Without experience, without practice, talent is stillborn. That was the story of the “deskilling” phenomenon of the early Industrial Revolution. Skilled craftsmen were replaced by unskilled machine operators. The work sped up, but the only skill the machine operators developed was the skill of operating the machine, which in most cases was hardly any skill at all. Take away the machine, and the work stops . . . .

Generative AI enables students to produce the product without doing the work. Rather than reading and making sense of difficult source texts, they can ask a chatbot to gin up simplified summaries. Rather than synthesizing various ideas and perspectives through concerted thinking, they can ask the chatbot for a generic synthesis. And rather than expressing (and refining) their thoughts through the composition of sentences and paragraphs, they can get the bot to spit out a first draft or even a final one. The paper a student hands in no longer provides evidence of the work of learning its creation entailed. It is a substitute for the work.

That’s why I’m unrelentingly hostile to AI, and you should be too. It’s not about saving your job per se; it’s about justifying why your job is important in the first place.

Contributing editor Jonathan Rees is professor of history at Colorado State University-Pueblo.

4 thoughts on “Why You Should Be Unrelentingly Hostile to AI”

  1. I am forbidding students from using AI and giving examples of those I “caught” using it. Using it will stop them from learning what they are paying to learn (if I have to use mercenary language for some students to listen, I will).

    • We saw what Prohibition led to in the 1930s. We banned liquor because we weren’t teaching what humanizes us.

      When I was a lecturer in the 1980s, I quit because the Humanities (what humanizes us) were being downgraded to make way for Management, Economics, Business Studies and the other dehumanizing subjects practical go-getting people just adore. I taught Architecture, and Project Management got star billing in our institution as Architecture became a quaint subsidiary. So I quit.

      What you should be doing is teaching your students what they need to know so A.I. remains a useful tool, not a reason for living and a substitute for human creativity. Look how we handled the biggest boon to education the world’s ever seen – television. It became a commercial slut. Why? Because we had destructive values – ones that fully explain where we are today. From what you said here, you’re doing what they did in the 1950s when television came along. It paints us as a truly defective species intent on extinction.

  2. My professor spouse and I were having this very conversation yesterday. Most of the short research papers she received in the recently concluded semester appeared to be AI generated, though it was hard to prove. Until the AI blokes come up with mandatory signatures or marks indicating AI was used, student learning and research capacity is sure to decline.

  3. Professor Rees, your piece on AI voices a deep and understandable anxiety about the integrity of academic labor and the “work of learning”—concerns many of us share. The vision of students “producing the product without doing the work” due to generative AI is indeed troubling if our current educational structures remain static.
    However, might “unrelenting hostility” towards AI inadvertently cede the ground for shaping its role in higher education? Perhaps the perceived threat stems less from AI itself and more from the “unchallenged inheritance” of the Higher Education Institution (HEI) model, within which academics largely function as employees whose roles and tools can be dictated by institutional or market pressures.

    This HEI model, with its inherent employer-employee dynamics and institutional priorities, may itself be the primary factor that makes AI appear as an existential threat rather than a potentially powerful, albeit challenging, new tool. What if we were to step outside this inheritance and consider a different foundational structure for academic work and higher education itself?

    Imagine a system built not around institutional employment, but around the inherent authority and professional autonomy of academics, much like other established professions such as law or medicine. This is the core of the Professional Society of Academics (PSA) thought experiment.

    Within such a PSA framework, the relationship with AI is fundamentally reconfigured. Academics, as licensed, independent practitioners or partners in self-governing practices, would not be passive recipients of institutionally mandated AI tools. Instead, the academic profession itself, through its Professional Society, would steward the ethical integration of AI. This Society, an expression of collective academic authority, would define standards for AI use in teaching and scholarship, ensuring tools serve pedagogical goals and uphold intellectual integrity. The “freedom of academics” to choose and adapt tools relevant to their specific practice—be it for research assistance, personalized student engagement, or innovative content creation—would be paramount, rather than a top-down, employer-driven technological imperative. Furthermore, with robust, profession-wide assessment mechanisms (focused on validating genuine competence and the “work of learning,” regardless of the tools used in its production), the emphasis remains on authentic student achievement.

    Perhaps the anxiety AI provokes is a valuable catalyst, not for hostility, but for a long-overdue, fundamental re-examination of the very structures that define academic work and higher education today. A different model for our profession might reveal AI not as the “Big Bad,” but as a complex tool awaiting principled stewardship by an empowered academic community.

    (Authored in principle by Dr. Shawn Warren. This text was produced on the first attempt by PSAI-Us (Google’s Gemini), an AI specifically developed by Dr. Shawn Warren through extensive dialogue to analyze and articulate his Professional Society of Academics framework, and then apply it to the world, including to blog posts like this.)

Comments are closed.