ChatGPT, an AI-powered chatbot, has transformed how we work, learn, and write. Many people use the tool because it is so easily accessible. However, its rise has made it difficult to distinguish AI-generated text from human-written text, especially in academic and professional writing settings. In a recent article in Cell Reports Physical Science, Desaire et al. built a machine-learning tool that distinguishes human academic scientific writing from ChatGPT writing with over 99% accuracy.
To do this, they trained a model on 64 articles published in the journal Science and 128 ChatGPT-generated essays. After building the model, they evaluated it on two test sets containing 30 human-written and 60 ChatGPT-written passages. The model distinguished AI-generated writing from human writing by examining features such as paragraph complexity and sentence variety, and it classified writing samples more accurately than an already publicly available AI detector, the GPT-2 Output Detector. The model exploited characteristic writing patterns: ChatGPT gave more generic information and used vague referents such as "others" and "researchers," whereas human authors named the scientists they were discussing. Human authors also used words such as "however," "but," "although," "this," and "because" more often.
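To give a flavor of how such stylistic signals can be turned into features, here is a minimal sketch in Python. The feature names, word lists, and the toy scoring rule below are illustrative assumptions for this summary, not the authors' actual model, which fed a larger feature set into an off-the-shelf machine-learning classifier.

```python
import re
import statistics

# Words the study found more common in human writing (discourse markers)
# and in ChatGPT writing (vague referents). Treating them as fixed sets
# here is a simplification for illustration.
DISCOURSE_MARKERS = {"however", "but", "although", "this", "because"}
VAGUE_REFERENTS = {"others", "researchers"}

def extract_features(paragraph: str) -> dict:
    """Compute simple stylistic features from one paragraph of text."""
    sentences = [s for s in re.split(r"[.!?]+\s*", paragraph) if s]
    words = re.findall(r"[A-Za-z']+", paragraph.lower())
    lengths = [len(re.findall(r"[A-Za-z']+", s)) for s in sentences]
    return {
        "n_sentences": len(sentences),
        "mean_sentence_len": statistics.mean(lengths) if lengths else 0.0,
        # Sentence-length variation: humans tend to mix short and long sentences.
        "len_stdev": statistics.stdev(lengths) if len(lengths) > 1 else 0.0,
        # Rate of discourse markers, reported as more frequent in human text.
        "marker_rate": sum(w in DISCOURSE_MARKERS for w in words) / max(len(words), 1),
        # Rate of vague referents, reported as more frequent in ChatGPT text.
        "vague_rate": sum(w in VAGUE_REFERENTS for w in words) / max(len(words), 1),
    }

def looks_human(paragraph: str) -> bool:
    """Toy rule: human-like if markers and sentence variety outweigh vagueness.

    The weights and threshold are arbitrary; a real detector would learn
    them from labeled training data, as Desaire et al. did.
    """
    f = extract_features(paragraph)
    score = f["marker_rate"] * 10 + f["len_stdev"] / 10 - f["vague_rate"] * 10
    return score > 0.1
```

In the actual study, features like these were computed per paragraph and passed to a standard classifier trained on the labeled Science and ChatGPT corpora; the sketch above only shows the feature-extraction idea.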
This new tool may help prevent the use of AI text generators in settings where they are prohibited. Although using AI-based tools for academic writing has not been deemed unethical in the way plagiarism has, the prospect of broadly adopting tools like ChatGPT for scientific writing is met with both enthusiasm and fear. On the one hand, AI might simplify the writing of research papers and inspire fresh ideas. On the other hand, some argue that such technology may hinder independent thinking and creativity in writing. Furthermore, given the dataset's small sample size, the authors' tool is best viewed as a promising proof of concept for an AI-writing detector in academic writing. Only time will tell what position academia takes, and much will depend on whether there is demand for tools like the authors'.
The original study was led by Heather Desaire, a professor in the Department of Chemistry at the University of Kansas in Lawrence, KS.
Managing Correspondent: Marwa Osman
Original Journal Article: Distinguishing academic science writing from humans or ChatGPT with over 99% accuracy using off-the-shelf machine learning tools
Image credit: Mohamed Hassan from Pixabay