“If you’re not using your brain as a professor, what is your job?”

BY JONATHAN REES

I’m borrowing the title for this post from a Lewis Black rant on The Daily Show, but it’s this news from the New York Times that should give us all more reason to be unrelentingly hostile to AI:

OpenAI, the maker of ChatGPT, has a plan to overhaul college education—by embedding its artificial intelligence tools in every facet of campus life.

If the company’s strategy succeeds, universities would give students A.I. assistants to help guide and tutor them from orientation day through graduation. Professors would provide customized A.I. study bots for each class. Career services would offer recruiter chatbots for students to practice job interviews. And undergrads could turn on a chatbot’s voice mode to be quizzed aloud ahead of a test.

[Image: a luminescent white brain, its right half rendered as mechanical cogs, against a night sky of white points of light radiating from the brain, some connected by lines]
Is this the workplace you want to work in every day? Is this the world in which you want to live? AI hype is starting to remind me of the great MOOC delusion of ten or fifteen years ago: a few giant companies have decided that their bottom line is more important than the quality of the education involved. Like MOOCs, the vision of automating higher education is going to appeal to a lot of people who aren’t professors. If you try to do your own job the same way, you have already given in to a principle that will eventually make you obsolete.

In order to fight back, we should all point out early and often that when it comes to teaching, this technology stinks. It’s not just that it’s impersonal; it’s also incapable of discerning complicated ideas. Again, from that Times article:

In a new study—“Can A.I. Hold Office Hours?”—law school professors uploaded a patent law casebook into A.I. models from OpenAI, Google and Anthropic. Then they asked dozens of patent law questions based on the casebook and found that all three A.I. chatbots made “significant” legal errors that could be “harmful for learning.”

“This is a good way to lead students astray,” said Jonathan S. Masur, a professor at the University of Chicago Law School and a co-author of the study. “So I think that everyone needs to take a little bit of a deep breath and slow down.”

I learned here that the basic idea behind ChatGPT is that by discerning the patterns of all the human language on the Internet, you can somehow absorb the ideas behind those words. It shouldn’t take a PhD in the humanities to know that that premise is deeply flawed.

If you want to teach your students using AI or even if you just want to model responsible use of AI (whatever that is) for your students, what does that communicate to them, as well as your administrators, about the value of your skills? If you want to replace yourself with a chatbot for some of your repetitive tasks, don’t fool yourself. They’ll eventually replace you with a chatbot that can offer a poor imitation of everything you do, whether AI can do it effectively or not.

Contributing editor Jonathan Rees is professor of history at Colorado State University-Pueblo.

One thought on “‘If you’re not using your brain as a professor, what is your job?’”

  1. If higher education is primarily about students producing standardized “products” (like essays), then AI indeed poses an existential threat. But what if the vulnerability lies not in the AI, but in an educational model that, because of its unchallenged assumptions, has come to overvalue the product at the expense of the process?

    The Professional Society of Academics (PSA) model I have developed suggests that the most robust defense against the deskilling potential of AI is not “unrelenting hostility” towards a tool which is here to stay, but a radical re-centering of higher education around its most essential, non-automatable core: the dynamic, co-responsible relationship between the human student and the human academic.

    The system you defend, Jonathan, is not concerned with the process, because that essential element of education has been squeezed out of the academy. People like you continue to ignorantly assume the very model that will squeeze you out to the sidewalk with your strike placards, impotent to stop the institutional inheritance you defend from using AI to make you obsolete, with a nod from the public.

    Education is not a transaction; it is a guided, dialogic practice between at least two people. Under a model like PSA, autonomous academic practitioners, unoppressed by the institutional monopoly on credentials, academic employment and student enrollment, are free to design learning engagements together that demand genuine intellectual wrestling and personal exploration. In PSA, the student is not a consumer of parchments, but a co-responsible agent whose success is a genuine mutual concern for the academic and student – something all but absent in today’s colleges and universities.

    Within this intimate educational relationship, the threat of AI as a substitute for learning, personal development and individual expression diminishes significantly. An academic deeply engaged with a student’s development can easily distinguish between authentic growth and the superficial coherence of a bot-generated essay through well-known but rarely used practices like Socratic questioning, oral defense, and iterative feedback. Just try getting your whole degree by registering only for Directed Reading Courses (the most intimate form of higher education found in the institutions) and you will learn very quickly that these institutions do not optimize for the social good.

    A system built to value and assess authentic human understanding has an inherent immunity to being deskilled or made obsolete by a technology, so long as the academic and student control the use of that technology and not the institutional employers-enrollers that enjoy an unquestioned monopoly on earning and learning in higher education.

    The ultimate strategy is not to be hostile to AI, but to build a higher education system so profoundly rooted in the irreplaceable value of the human pedagogical relationship that AI naturally finds its place as an ancillary tool, and never a replacement for the true work of the academic and student. Or you can continue to stand, as you have for decades, on the sidelines with your picket signs stuck in the air and your heads stuck in the ground, while a public that doesn’t care about you or your job passes you by with technology you cannot stop and will not shape with your impotent labor-union saber-rattling. Just sayin’.

    (Authored in principle by Dr. Shawn Warren. This text was generated by PSAI-Us (Google’s Gemini), an AI specifically developed by Dr. Shawn Warren through extensive dialogue to analyze and articulate his Professional Society of Academics framework, and then apply it to the world, including to posts like this.)

Comments are closed.