Peer Review, Academic Integrity, and AI

BY GARY TOTTEN

A highlight of my work as a journal editor is the pleasure of helping authors, especially early career researchers and graduate students, interpret conflicting peer review reports and improve their arguments and writing. The important work of academic journals to disseminate cutting-edge research in a timely manner and thus advance their fields is impossible without peer reviewers. A recent experience heightened my appreciation for peer reviewers’ central role in addressing threats to honesty in academic writing posed by AI and its adjacent technologies.

A peer reviewer noticed that a submission was unusually close in content to an article the reviewer had published earlier in the year. The similarities were striking: sentences similar in construction, with the same examples and quoted material, though slightly, and oddly, skewed from the original or accompanied by different (and incorrect) page number citations. The reviewer wondered if the submission was a version of the original published article that had been run through an AI program. In the editorial office, we briefly considered whether the submission represented a version of the article in a language other than English that had been processed through a translation program, although this seemed less likely than the use of AI. We probably would not have discovered the possible use of AI without the good fortune (due to my managing editor’s careful selection of peer reviewers) of having invited the very reviewer whose article was manipulated in this way.

This discovery was far from guaranteed: when the submission was run through an antiplagiarism program, the software report raised no red flags. We know that antiplagiarism programs in general do not offer an effective bulwark against AI-generated writing, and some worry that software purporting to detect AI-generated prose could be used to falsely accuse students of academic dishonesty in a higher education setting. Even without this peer reviewer’s unique perspective on the submission, I am certain that the input of other peer reviewers, and my own evaluation of the submission as editor, would have resulted in a decision to reject the manuscript. Yet with the insights of the reviewer, the important contribution of peer reviewers to academic integrity became strikingly clear. Even so, it is regrettable that the incident, understandably, eroded the reviewing author’s trust in academic peers and in the publication process.

As we considered how we might have avoided a similar situation, or could avoid one in the future, a number of questions arose. Is it possible to detect these kinds of threats to academic integrity through the peer review process? Do technologies exist beyond antiplagiarism software to help identify such breaches? What are appropriate consequences for those who resort to such dishonest practices? On this last question, my short-term solution was to inform the submitter of the manuscript about the decision and the reasons for it in a direct and firm manner, emphasizing the journal’s, and profession’s, valuing of honesty and professionalism in publishing. The first two questions are more complicated. We cannot count on a submission’s assigned reviewer turning out to be the very person whose work has been plagiarized, and I am not aware of current software that would allow us to identify AI-generated prose without fear of drawing incorrect conclusions. We know that AI prose is often stilted and unnatural, repetitive, or incoherent, so these are additional features for reviewers and editors to track.

This experience enhanced my appreciation for the relationship between peer review and academic integrity. The necessarily and deeply human activity of academic peer review is our best defense against academic dishonesty and a compelling argument for participating in peer review when asked. I recognize that peer review is unpaid and often unrecognized work that is unlikely to contribute significantly to promotions, salary increases, or enhanced reputations—all outcomes, each with its own problems, that we are socialized to seek in the neoliberal university. Understandably, overworked faculty are sometimes reluctant to take on peer review, but it is work well worth undertaking to ensure the competent and honest review of scholarship. My recent experience emphasizes the potential for AI to jeopardize informed discourse in the public sphere and the central role that peer reviewers can play in fortifying such discourse.

This experience also causes me to think more carefully about the ways that we might train future generations of peer reviewers—including graduate students—to distinguish original and legitimate research and writing from AI-generated or otherwise plagiarized work. Some of this training could be accomplished through sessions at academic conferences. I can imagine a panel on how to approach the work of peer reviewing in general but also on how to spot the potential use of AI in a manuscript through some of the features described above, such as grammatical issues and incoherence. In undertaking this training, we will need to be careful not to replicate the problems of AI-detecting software—that is, not to falsely accuse authors of using AI. One way to address this complication would be for journal editors to include, in instructions to peer reviewers, information about the features of a manuscript that might indicate AI has been used, and to ask reviewers to note if any of these features are present. The editor should then follow up with their own thorough review of the manuscript, using online tools or other technologies as appropriate to investigate potential problems. My approach would be to err on the side of caution when deciding whether an author had used AI, for to reach such a conclusion, one would need a preponderance of clear evidence.

Academic journals are a key piece of the work of scholarly discovery and innovative research, and attending to the threat of AI to this scholarly enterprise should be a priority for us all, as it will fortify our freedom to continue to create, review, and publish knowledge in our fields.

Gary Totten is professor in the Department of English at the University of Nevada, Las Vegas, and editor-in-chief of the journal MELUS: Multi-Ethnic Literature of the United States.

One thought on “Peer Review, Academic Integrity, and AI”

  1. Incorrect references are an important clue to an AI-generated article or paper. Generative AI programs have been found to make up references using legitimate journals or books but citing incorrect volumes or pages. Unfortunately, AI detection programs are not very helpful here. Turnitin.com has an AI checking feature as part of its suite, but it is still in development. I have found from working with students that using a grammar checker can trigger AI indications in Turnitin.

    I ran one student paper that triggered a 100% AI indication in Turnitin through three online AI detectors that were tested and deemed most accurate by a business publication. Indications ranged from zero to 50%. It looks like we humans will need to be the AI judges for at least the near future. We will, as you noted, need to be careful and gather evidence to avoid false accusations. Otherwise we risk damage to ourselves, others, or the process of scholarly work.
