Another week, another AI chatbot.
This week Snapchat launched My AI, a customised version of OpenAI’s ChatGPT, and Elon Musk signalled his intention to build one.
Artificial intelligence (AI) writing technologies underpinned by large language models are certainly impressive. They are also creating a great deal of anxiety among writers, academics and people concerned about intellectual property rights.
So detecting AI text is important, and apparently it’s not that hard.
In a peer-reviewed paper presented at the February 2023 meeting of the Association for the Advancement of Artificial Intelligence, a team of researchers at the University of Pennsylvania demonstrated that humans can learn to detect AI-generated text.
“AI today is surprisingly good at producing very fluent, very grammatical text,” says study co-author Liam Dugan. “But it does make mistakes. We prove that machines make distinctive types of errors — common-sense errors, relevance errors, reasoning errors and logical errors, for example — that we can learn how to spot.”
The study uses data collected through Real or Fake Text?, an original web-based training game.
The game begins with a sample of text written by a human. It then progressively adds text, one sentence or paragraph at a time, asking users to identify the point at which the machine takes over and to give reasons for their choice.
If the player selects “machine-generated,” the game round ends and the true author – machine or human – is revealed.

The reasons people gave for guessing the author was a machine differed with the writing genre: common-sense errors were more often cited in recipes than in news articles, and irrelevant material was more often flagged in short stories than in speeches.
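For the technically curious, the round mechanic is simple enough to sketch in code. Below is a minimal, hypothetical Python version of the reveal-one-segment-at-a-time loop; the example passage, boundary position and feedback messages are invented for illustration and are not taken from the study’s actual game.

```python
# Minimal sketch of a "Real or Fake Text?"-style round, assuming a passage
# pre-split into segments with a known index where machine text begins.
# The passage, boundary and messages below are invented for illustration;
# the real game's data, scoring and interface differ.

SEGMENTS = [
    "Preheat the oven to 180C and line a baking tray.",    # human-written
    "Cream the butter and sugar until pale and fluffy.",   # human-written
    "Bake the flour for ten minutes before adding eggs.",  # machine takes over
    "Serve the dough chilled, garnished with raw eggs.",
]
BOUNDARY = 2  # index of the first machine-generated segment

def play_round(segments, boundary):
    """Reveal one segment at a time; the player guesses when the machine takes over."""
    for i, segment in enumerate(segments):
        print(f"\nSegment {i + 1}: {segment}")
        answer = input("Machine-generated? (y/n): ").strip().lower()
        if answer == "y":
            # The round ends on the first "machine" guess; reveal the true author.
            if i == boundary:
                print("Correct! This is where the machine took over.")
            elif i < boundary:
                print(f"Too early: a human wrote this. The machine started at segment {boundary + 1}.")
            else:
                print(f"Too late: the machine took over at segment {boundary + 1}.")
            return
        print("You said human; revealing the next segment...")
    print(f"Out of segments. The machine took over at segment {boundary + 1}.")

if __name__ == "__main__":
    play_round(SEGMENTS, BOUNDARY)
```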
The results show that participants scored significantly better than random chance, providing evidence that AI-generated text is, to some extent, detectable. They also show high variability in the skill of individual players.
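As a toy illustration of what “significantly better than random chance” means, suppose each round were a single correct-or-incorrect guess with a 50% chance baseline (a simplification; the paper’s actual scoring rewards how close a guess lands to the true boundary). A one-sided binomial test then checks whether observed accuracy beats chance. The counts below are invented for illustration.

```python
# Toy illustration of testing "better than random chance" with a one-sided
# binomial test. Assumes each round is a single correct/incorrect guess
# against a 50% baseline; this simplifies the paper's actual scoring, and
# the counts below are invented, not the study's data.
from scipy.stats import binomtest

correct = 640   # hypothetical number of correct guesses
total = 1000    # hypothetical number of rounds played

result = binomtest(correct, total, p=0.5, alternative="greater")
print(f"Observed accuracy: {correct / total:.1%}")
print(f"One-sided p-value vs. 50% chance: {result.pvalue:.2e}")
# A p-value far below 0.05 indicates performance above chance.
```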
Certain genres of writing were easier to detect than others. For example, players spotted AI-generated recipes more readily than stories or news articles. The study says that’s because contradictions were easier to spot, and because recipes often assume implied knowledge, something language models struggle to get right.
“Our method not only gamifies the task, making it more engaging, it also provides a more realistic context for training,” says Dugan. “Generated texts, like those produced by ChatGPT, begin with human-provided prompts.”
The game teaches players the kinds of errors which characterise AI chatbots. You can try it yourself here.