BY BRITT S. PARIS, LINDSAY WEINBERG, AND EMMA MAY
Tomorrow, July 23, the Trump administration plans to release an “AI action plan” that reflects the White House’s priorities for expanding the artificial intelligence industry. It builds on one of the first directives from the second Trump administration, Executive Order (EO) 14179, “Removing Barriers to American Leadership in Artificial Intelligence,” which “revokes certain existing AI policies and directives that act as barriers to American AI innovation.” The administration has promised hundreds of billions of taxpayer dollars to the industry while removing what little tech regulation exists. Yet used as tech oligarchs intend, AI grinds public-benefit systems to a halt, disenfranchises workers, and expands the surveillance state in service of authoritarian crackdowns on freedom of speech and association. Like all technologies, AI is not neutral, nor is it created in a vacuum. The way AI is owned, guided, and used by oligarchs and other powerful entities has made it a weapon against the public good.
The Trump administration’s direct attacks on people of color, trans and disabled people, immigrants, science, democratic institutions, academic freedom, and higher education highlight the interconnectedness of struggles on multiple fronts. As the tech industry has facilitated and performed these attacks, analyses of the current moment must also critically consider the crisis of corporate technology and its unchecked power.

Britt Paris discusses the work of the AAUP’s ad hoc committee on AI at a 2025 Summer Institute session. Photo by Michael Ferguson.
Instead of panicking, or accepting corporate AI as inevitable in higher education, we need to build solidaristic strategies across education and other sectors, as well as across civil society and grassroots organizations fighting on many fronts, to establish bottom-up power over technology.
To move us forward in this fight, the AAUP’s ad hoc Committee on Artificial Intelligence and Academic Professions has published a new report based on a survey of AAUP members across faculty ranks, job categories, and institution types. Our committee recognizes that what’s at stake with how AI is deployed in higher education is the possibility of informed participation in democracy, as well as labor and education justice in a sector where “faculty working conditions are student learning conditions.”
As corporate AI partnerships like the one announced for the California State University system earlier this year have rippled across higher education, members have indicated a desire for independent oversight of technology procurement and deployment. They also want meaningful ways of opting out of technology use, rejecting managerial surveillance, and facilitating worker- and student-centered trainings that neither undermine solidarity between instructors and students nor rest on corporate technology hype.
How Are AI and Educational Technology Similar and Different?
When we speak of educational technology over the last fifteen years, we often mean software such as course management systems, which increasingly use large language models to guide their data-intensive features. AI is a marketing term used to sell data-intensive models that analyze information or make recommendations based on data collected across these educational technology platforms and even from educational institutions themselves. AI features are often incorporated into legacy educational technology without users’ knowledge. Generative AI, exemplified by ChatGPT, uses these same data-intensive infrastructures to combine data streams and produce seemingly new text, video, and images from data collected without people’s knowledge or consent and at significant social and environmental cost.
Eighty-one percent of respondents to our ad hoc committee’s survey indicated that they lacked control over educational technology, even before the introduction of AI. Members report that on their campuses, technology contracts and decision-making involve little or no input from anyone who has set foot in a classroom or conducts research. The majority of educational technologies are unproven and rarely advance learning outcomes. Predictive analytics have been used to make discriminatory recommendations, such as pushing minoritized students onto “easier” academic tracks. Respondents noted that AI imposed by the university creates more work for faculty and opens the door to greater surveillance of students and faculty.
Institutions pay enormous sums of money to tech companies for unproven and extractive technology, a trend that has accelerated through AI partnerships in the last seven months. That money could be better spent on facilities, job security, pay equity, and much more.
What Can We Do?
Based on what we found from engaging with members around technology and AI, we suggest building out robust worker and student education about the impact of technology on working and learning conditions. Each AAUP chapter should establish worker and student committees or governing bodies that can review procurement decisions, hold administrators accountable for their decision-making, and correct technology policy failures so that technology serves the educational mission of the institution. These bodies should be composed of students, faculty of all ranks, and staff and have the power to oversee, negotiate, and even refuse tech procurement and deployment decisions at their institutions.
Taking members’ concerns into account, we have developed a wish list to be adapted by academic senates for nonbargaining units and by bargaining unit legal professionals for each bargaining chapter’s institutional context. We also suggest building out tech advocacy units to confront harmful legislation and, more importantly, to combat uncritical and exploitative uses of AI and technology in education.
AI in higher education is more than an issue of tech deployment—it highlights the need to foster solidarity across sectors, job categories, and institutions to fight the devaluation of human work and lives.
Join the Fight
The issue is not whether you personally use Microsoft Copilot to help with slogging through emails, and it’s not about punishing students. Rather, it is about the value of your work and being paid appropriately for it, the importance of learning and intellectual curiosity, control over your working conditions, and the future of participation in a democratic society.
We have been blown away by AAUP members’ analyses of power around technology. What we have learned from academic professionals underscores that they are intimately familiar with these technologies’ benefits, shortcomings, and harms. We are organizing to engage members in deciding whether and which technologies are implemented in their institutions and how they are used in their research, teaching, and service. Together, we can establish student- and instructor-centered policy and claim power over technology.
In response to the Trump administration’s AI action plan, the AAUP has signed on in support of the People’s AI Action Plan, which emphasizes public oversight of technology. Read the full AAUP report Artificial Intelligence and Academic Professions here.
Britt S. Paris is associate professor of library and information science at Rutgers University–New Brunswick, a member of the Rutgers AAUP-AFT Executive Council, and chair of the AAUP’s ad hoc Committee on Artificial Intelligence and Academic Professions.
Lindsay Weinberg is clinical associate professor at Purdue University’s John Martinson Honors College, where she is director of the Tech Justice Lab. She is the vice president of the AAUP chapter at Purdue and a member of the ad hoc committee on AI.
Emma May is a doctoral candidate in library and information science at Rutgers University, a Rutgers AAUP-AFT member, and a member of the ad hoc committee on AI.


