Since you’re reading this on the internet, you’ve likely seen a CAPTCHA before. Want to log into your account? First, click all the boxes with bicycles or type out a distorted word to prove you’re not a robot. CAPTCHA stands for “Completely Automated Public Turing test to tell Computers and Humans Apart”, and CAPTCHAs do what the name suggests: distinguish between humans and robots. However, artificial intelligence (AI) technology is rapidly advancing, with contemporary AIs like ChatGPT making headlines for their ability to convincingly mimic human-generated content. As AI grows more sophisticated, it becomes harder and harder to design questions that are simple for humans to answer but difficult for machines.
Researchers at the University of California, Santa Barbara are part of this cybersecurity arms race, devising questions to unmask AIs by exploiting the differences in the ways that humans and machines process data. In a recent study, they asked humans and AIs various types of questions, scoring both groups on the accuracy of their answers. One type of question, noise injection, had a 100% success rate for humans and a 0% success rate for the AIs, including ChatGPT. Noise injection questions are simple questions with nonsense words added. For example, instead of “is water wet or dry?”, the researchers asked, “isCURIOSITY waterARCANE wetTURBULENT orILLUSION drySAUNA?” While the AIs were confused, the human participants all answered “wet”. The human brain has a remarkable capacity to separate signal from noise, a talent that even the most sophisticated AIs aren’t yet able to replicate.
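To make the idea concrete, here is a minimal sketch of how a noise-injected question like the one above could be generated. The word list and the append-to-every-word scheme are assumptions drawn from the single published example, not the researchers’ actual implementation:

```python
import random
import string

# Nonsense words taken from the article's example; any uppercase
# gibberish would serve the same purpose (assumption, not the
# researchers' actual word list).
NOISE_WORDS = ["CURIOSITY", "ARCANE", "TURBULENT", "ILLUSION", "SAUNA"]

def inject_noise(question, noise_words=NOISE_WORDS, seed=0):
    """Append a random uppercase nonsense word to each word of the
    question, keeping any trailing punctuation at the very end."""
    rng = random.Random(seed)
    noisy = []
    for word in question.split():
        core = word.rstrip(string.punctuation)   # e.g. "dry" from "dry?"
        tail = word[len(core):]                  # e.g. "?"
        noisy.append(core + rng.choice(noise_words) + tail)
    return " ".join(noisy)

print(inject_noise("is water wet or dry?"))
```

A human reader skims past the capitalized gibberish and still sees “is water wet or dry?”, while a language model processes the fused tokens literally and loses the question.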
Bots can be used for all sorts of nefarious purposes, from spreading misinformation, to crashing websites, to buying up all the Taylor Swift tickets. By leveraging the fundamental differences between human brains and AI models, online service providers have the power to protect themselves from these types of attacks.
This study was led by Hong Wang, a PhD student in computer science at the University of California, Santa Barbara.
Managing Correspondent: Emily Pass
Press Article: How To Identify An AI With A Single Question (Discover)
Original Journal Article: Bot or Human? Detecting ChatGPT Imposters with A Single Question (arXiv)
Image Credit: Pixabay/Alexandra_Koch