The Impact of Artificial Intelligence on Higher Education

BY JOSHUA KNICKERBOCKER

In early 2023, artificial intelligence (AI) made headlines when one of the most popular AI-powered tools, ChatGPT, performed at or near the passing threshold on the medical licensing boards, as reported in an Insider story. The ability of AI to handle difficult examinations easily and reliably has prompted many scholars to question the implications for academia. ChatGPT is a language model designed to generate human-like responses to natural language inputs; it is trained on text data that include books, articles, and websites from across the internet. ChatGPT draws on this vast body of material to compose a unique response within seconds, an approach quite different from students’ (and faculty members’) approaches to research. Individual intelligence is no match for AI-powered tools, whose responses are generated from decades of compiled information. This breakthrough raises the question, What are the implications of ChatGPT for higher education?

Reflecting on academic pedagogy, faculty members should be prepared for the transition to an AI-empowered student body. Specifically, how will AI affect the delivery and reliability of assignments as methods of evaluation? Multiple-choice examinations are widely used in academia because they can easily mimic the standardized examinations required for licensure or graduation. ChatGPT can answer virtually any exam or faculty-generated question within seconds and produce highly accurate results. Institutions of learning must restructure their pedagogical approaches to favor reasoning over standard multiple-choice examinations. As part of the transition to post-AI academia, faculty should, at minimum, require that all multiple-choice examinations be taken within a protected, locked-down platform such as the Respondus LockDown Browser. Although these tools have inherent limitations, outlined in a recently published article, remote proctoring and lockdown browsers have proven efficacy against academic dishonesty. Ironically, AI tools (AI remote proctoring) may be the best defense against AI tools (including ChatGPT). Fortunately, many colleges already practice these methods, but they should now be mandatory: otherwise a student could simply copy an exam question and paste it into ChatGPT in a second browser.

Voicebot.ai has begun, and will continue to update, a list of institutions that have outright banned ChatGPT, along with others that are using the AI tool as a model for discussing academic dishonesty. Banning specific technologies, such as ChatGPT, is unrealistic, because educators cannot reliably recognize when a student uses an AI tool to write academic papers. As a recently published paper notes, ChatGPT “used existing publications to generate 50 research abstracts that were able to pass the plagiarism check performed by a plagiarism checker, an AI-output detector, and human reviewers.” Thwarting ChatGPT therefore becomes nearly impossible with written assignments, since a student can simply input a writing prompt: for example, “Write me a 200-word essay on plagiarism utilizing evidence-based journals that were published within the last five years.” ChatGPT responds, in twenty-eight seconds, with a unique, well-written, two-hundred-word essay on the many facets of plagiarism, complete with up-to-date and relevant citations. Furthermore, this paper would pass plagiarism checks with over 80 percent original language, and the student need only lightly edit the text AI generates. A simple syllabus statement forbidding students to use ChatGPT or other AI tools will not suffice. In response to academic dishonesty in the past, faculty have relied on plagiarism tools such as SafeAssign. Unfortunately, ChatGPT’s output is unique and cited, making plagiarism difficult, or even impossible, to prove.

To find adequate solutions, one must understand ChatGPT’s weakness. These AI-powered models are excellent at utilizing data to draw conclusions, but they are less effective at applying reasoning. For now, the tool lacks the distinctly human ability to interpret ambiguous findings. Recommendations for writing courses are therefore two-pronged. The first, less effective, method would require students to use a lockdown browser with remote proctoring while writing their papers; adjustments within the Respondus LockDown Browser would still allow students access to research databases such as EBSCOhost. However, ChatGPT and its successors are here to stay. More realistically, faculty members should consider routinely pairing oral presentations with written assignment submissions. A major goal of academia is empowering students to properly utilize all available research tools, including ChatGPT. Faculty members could adopt a variation on the flipped-classroom approach, having students compose papers at home and using class time for question-and-answer sessions or panel discussions about the papers. This way, even students who produce papers with the assistance of ChatGPT would reap the benefits of understanding the papers’ rationale by presenting the argument of the written assignment publicly and discussing the “why” behind it. An approach that forces constructive engagement will allow students to utilize ChatGPT as another means of research.

Faculty have limited ability to enforce a prohibition of AI-powered tools such as ChatGPT. Understanding their inevitability is paramount to encouraging early adoption within curricula and the delivery of instruction. Faculty must be prepared to think beyond the routine. Ironically, utilizing ChatGPT may allow both students and faculty members to focus on critical-thinking and clinical-judgment skills, affording students a more robust educational experience. The major call, in the face of these new developments, is for higher education to be ready and prepared for AI.

Joshua Knickerbocker is assistant professor in the School of Nursing at Southern Connecticut State University.