AI, University Legitimacy, and the New Social Contract

BY NATE BENNETT

When a Northeastern University professor was discovered secretly using generative AI tools to create course materials while prohibiting students from using them, the controversy exposed more than personal hypocrisy. It illustrated a broader legitimacy crisis confronting higher education. Versions of this story surface repeatedly in conversations with colleagues: AI is already woven into instructional practice, even as many institutions struggle to enforce policies that pretend otherwise. The technology is rewriting the conditions of teaching and learning faster than campus governance can adapt. The core question is no longer whether AI belongs in classrooms. It is whether higher education can rebuild the social contracts that give academic credentials their meaning, or whether we will retreat into surveillance regimes that students circumvent and faculty quietly ignore.

This matters because higher education has always promised learner development rather than mere knowledge transfer. We cultivate habits of inquiry, critical thinking, and ethical judgment that require time, struggle, and sustained intellectual engagement. AI’s quick, confident answers short-circuit that developmental arc. The threat to learning is real, but so are the pressures that drive students toward these tools. They manage demanding course loads while juggling work, family responsibilities, and financial strain. Choosing immediate relief over long-term learning is not a moral failure. It is an understandable response to seemingly intractable constraints.

Faculty confront parallel pressures. Publication expectations, heavy teaching loads, service demands, and precarious employment all produce their own version of deadline triage. AI offers faculty the same seductive promise it offers students: efficiency. Many of us already use these tools to plan courses, draft materials, or reduce administrative burdens. The difference is that many faculty prohibit the very practices they privately employ. Students notice. When rules appear selectively applied, the legitimacy of institutional governance erodes.

Rebuilding that legitimacy requires recognizing that AI governance is fundamentally a problem of social contracts, not a problem of rule creation and enforcement. Academic life has long depended on tacit agreements: Students commit to demonstrating their own thinking; faculty commit to fair evaluation and transparent expectations. AI blurs the boundaries of acceptable assistance, and vague or inconsistent policies leave individuals to invent their own informal rules. The challenge is not simply that people will break rules. It is that the rules themselves no longer map onto the lived realities of teaching and learning.

Detection-based solutions cannot resolve this. As in sports anti-doping, rule compliance depends less on the severity of sanctions than on the perceived legitimacy of the system. Students hold private information about their AI use, and their incentives often align poorly with institutional enforcement efforts. Faculty face the same structural pressures and are subject to similar temptations, yet governance frameworks routinely treat faculty and student behaviors as unrelated phenomena. This asymmetry is immediately visible to students and undermines trust in ways that no detector can repair. What higher education needs is not more policing but explicit norms that are co-developed, consistently applied, and aligned with the developmental mission that distinguishes academic work from mere content delivery.

Five Principles for Rebuilding Academic Agreements

Design for Disclosure. Detection tools lag AI’s capabilities and produce false positives that disproportionately affect nonnative English writers. More importantly, surveillance frameworks encourage concealment. A transparency-first approach reverses these incentives by making disclosure easier than concealment. Portfolio-based assessments that document thinking processes naturally reveal the presence of AI assistance without creating policing moments. The goal is to recast governance as a partnership rather than a search-and-punish exercise.

The Mirror Principle. Double standards corrode institutional trust when faculty benefit from AI while students are prohibited from doing so. But the problem runs deeper than hypocrisy. Faculty also participate in the social contract, and that contract assumes that students benefit from the instructor’s disciplinary expertise, pedagogical judgment, and intellectual labor. When faculty quietly outsource portions of that work to AI, they renege on their side of the academic agreement. Coherent norms do not require identical rules for both parties, but each group’s expectations must reflect its distinctive responsibilities. For both groups, transparency about AI’s role and preservation of authentic intellectual contribution are nonnegotiable.

Design for Humans Under Pressure. Governance must account for present bias, social conformity, and cognitive shortcuts rather than assume slow, rational deliberation. Default prompts that make AI disclosure the easier path reduce the cognitive burden of responsible action. Peer-modeling approaches that highlight thoughtful AI use harness the way norms spread through observation and imitation.

Equity as an Enabling Principle. Students with premium AI tools gain reasoning capabilities that peers with basic or free versions cannot match, and faculty are often unaware of how these disparities shape performance. Equity should not function as a constraint on AI integration but rather as an enabling principle. Institutions might provide baseline AI access through institutional licenses, teach evaluation skills applicable across a range of AI capabilities, or design assessments that account for technological disparities from the start.

Adaptive Capacity. AI evolves too quickly for fixed rules to remain credible. Governance systems must be built for continuous adaptation. This means regular policy reviews, structured faculty-student forums on emerging applications, and pilot programs that gather systematic feedback. The goal shifts from getting the rules right to developing institutional capacity for ongoing navigation of technological change.

What’s Really at Stake

Higher education now faces a choice. We can continue down the path of surveillance and prohibition, teaching students that governance rests on mistrust. Or we can model what we claim to teach: Complex challenges require stakeholder engagement, transparent deliberation, and shared values rather than imposed rules. Universities cannot credibly teach critical thinking while relying on unreliable detection tools, nor can they teach ethics while concealing faculty use of AI.

The courage required is both institutional and personal: to teach with humility, curiosity, and transparency as we navigate this shared experiment in reimagining education. How we respond now will shape not only how students learn but the kind of thinkers and citizens they become.

Nate Bennett is a professor and the EMBA Faculty Director with the Robinson College of Business at Georgia State University. 
