In the ever-evolving world of artificial intelligence (AI), researchers are trying to determine whether AI can truly mimic human intelligence. One of the most famous benchmarks in this field is the Turing test, named after the mathematician and computer scientist Alan Turing. In the test, a person converses with an unseen interlocutor and must decide whether they are communicating with a human or a machine. Although AI can already defeat online CAPTCHA tests, the question remains: can it pass the Turing test?
Recent experiments have explored this question by pitting chatbots such as ChatGPT and Google Bard directly against human judges. In one notable study, more than a million people took part in a game called “Human or Not Human,” in which each participant was randomly paired with either another human player or an AI and had to decide which one they were talking to. The results were intriguing. The study is published on the preprint server arXiv.
To make the conversations more interesting and challenging for users, the researchers created a variety of chatbots with unique backstories. For example, one chatbot was tasked with convincing others that it had come from the future. This gave the game an extra layer of complexity and fun.
Over the course of a month, the researchers collected more than 10 million guessed answers from 1.5 million unique users, providing a significant amount of data for analysis. From this data, they identified different types of players who excelled in different aspects of the game. Some were adept at recognizing their fellow players, while others convincingly signaled their humanity or masterfully impersonated bots.
Interestingly, humans paid close attention to typos and slang, believing that a machine would be less likely to produce these linguistic quirks. However, the chatbots were programmed to mimic exactly these traits, blurring the line between human and machine. Humans trying to prove their own humanity often leaned on slang, misspellings, and personal, emotional responses.
A hallmark of this approach was the use of profanity, the expression of controversial opinions, and questions that AI bots typically refuse to answer. Strikingly, crude language revealed a player’s humanity 86.7% of the time.
Overall, humans correctly guessed the identity of their interlocutors in only 68% of games. When confronted with a chatbot opponent, they were right 60% of the time, while they correctly identified the human interlocutor 73% of the time.
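These three figures fit together only for a particular mix of bot and human opponents. As a back-of-the-envelope check (an illustration, not a calculation from the study itself), the overall accuracy can be treated as a weighted average of the two conditional accuracies and solved for the implied share of bot-facing games:

```python
# Figures reported in the article: 68% overall accuracy,
# 60% when facing a bot, 73% when facing a human.
overall, vs_bot, vs_human = 0.68, 0.60, 0.73

# If a fraction p of games were against bots, then:
#   overall = p * vs_bot + (1 - p) * vs_human
# Solving for p gives the implied share of bot-facing games.
p_bot = (vs_human - overall) / (vs_human - vs_bot)

print(f"Implied share of games against bots: {p_bot:.1%}")  # ~38.5%
```

Under this simple model, roughly two in five games would have paired the player with a bot, which is why the overall figure sits between the two conditional accuracies, closer to the human-facing one.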
This study sheds light on the challenges of distinguishing between AI and humans in conversational interaction. It highlights the complex capabilities of chatbots that mimic human-like behavior and the difficulties people face in accurately identifying their interlocutors.
While AI has made significant strides in passing various tests, including the CAPTCHA, the Turing test remains more challenging. As AI advances, researchers and developers will undoubtedly strive to bridge the gap between human and artificial intelligence.