Why Humans Might Be Mimicking AI, Too

There’s a strange new study from the University of Florida that has linguists and grammarians talking – about words. For a while now, we’ve been so laser-focused on the Turing test, the idea that AI models are imitating human speech and activity, that we may not have noticed how much we have begun to imitate them.
Specifically, in a paper titled Model Misalignment and Language Change: Traces of AI-Associated Language in Unscripted Spoken English, researchers found that human speakers are actually making more use of certain words that sit at the top of the AI lexicon.
One of those words, for example, is the word “delve,” which is a more action-oriented synonym for “explore” that for many of us used to suggest a miner digging in a mountain for gold. It turns out that multiple LLMs like to use the word to talk about intangible explorations and discoveries – and humans are taking notice.
“Selected rapid shifts in word usage do occur and are typically traceable to real-world events,” write the authors of the paper. “This contrasts with the recent, large-scale shifts observed in certain domains, particularly in education and science, which appear to not be triggered by external world events. Words such as ‘delve,’ ‘intricate,’ and ‘underscore’ have seen sudden spikes in usage, especially within academic writing.”
The Classroom and the Street
That idea, that the changes are most prominent in academic writing, comes up a lot, but the shift has been observed beyond academia, too.
“The study showed that AI is shaping how we talk, if not the topics themselves,” wrote Eric Hal Schwartz at Techradar, showing how even as words like ‘delve’ are getting more use, synonyms to them are getting less. “How we form sentences, choose words, and try to be formal are all affected since ChatGPT’s release in 2022. Words AI chatbots might overuse, their signature style, are becoming a staple of human conversation, at least when talking about science and technology … Notably, simpler synonyms aren’t coming up as much. The researchers found it much more likely that someone would say underscore, not accentuate, or delve, but not explore. People are absorbing specific stylistic tics from AI-generated text, it seems. It’s a bit like when a kid starts using a term they hear someone they admire use a lot, only in this case, it’s adults imitating a language model trained on billions of words.”
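The kind of shift the researchers describe – one word spiking while its synonyms fade – comes down to tracking relative word frequencies over time. A minimal sketch of that idea in Python, using invented toy "transcripts" rather than the study's actual data:

```python
from collections import Counter

# Toy yearly transcripts -- invented data, purely illustrative.
corpus = {
    2021: "we explore the data and explore the results in depth",
    2024: "we delve into the data and delve into the intricate results",
}

# Words the study flagged as spiking in post-ChatGPT usage.
targets = ["delve", "explore", "intricate", "underscore"]

for year, text in corpus.items():
    words = text.lower().split()
    counts = Counter(words)
    total = len(words)
    # Relative frequency of each target term (occurrences per word).
    freqs = {t: round(counts[t] / total, 3) for t in targets}
    print(year, freqs)
```

On real data, the corpus would be millions of words of transcribed speech per year, but the comparison is the same: "delve" rising as "explore" declines.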
What are some more of those words?
The AI Glossary
I was looking over this longer list of AI-speak terms at HumanizeAI. Some of them, like “game-changer,” are equal-opportunity annoying. Others, like pain points and thought leaders, seem like things that AI got from business buzzword dictionaries. So is it art imitating life, or life imitating art? Then there are longer phrases, like “treasure trove,” or “root cause analysis,” or “enhanced user experience,” that seem either stilted or inauthentic in human writing.
It’s strange that we have to use this kind of gatekeeping to distinguish ourselves from the bots, but that seems to be where we’re at right now.
“When you are selling a product, presenting research, or just writing an academic essay, you have to sound original and yet assure your readers you know what you are talking about,” writes Anup Chaudhari, in chronicling what this kind of bloodless writing can do to an audience. “With AI-generated content, people are quick to identify the gap that makes them question your credibility. Even though you have done your research and are sincere about your product placement, these phrases and words can be an overkill. It might make your users feel vulnerable in trusting you.”
Humans in the Digital World
That speaks to what might be a broader problem than who’s imitating whom: if we fail to distinguish between human and AI conversants, it could open a flood of problems.
Part of the issue is that, in the same years where AI has been evolving to pass Turing tests in the digital written world, that is overwhelmingly where humans have gone to communicate. It’s not at all uncommon to find that you text dozens of people throughout a given day and hardly ever talk on the phone. But that means there’s even more opportunity for confusion.
“Very recently (in terms of human history) we have taken to engaging in written conversations with each other via text messages, chat boxes, emails, and social media messages,” wrote Rick Claypool at Public Citizen back in 2023. “Technological advances mean we can instantly converse in real time with others who are located anywhere else in the world. Of course, we can’t always trust that the people we are conversing with through texts are who they say they are, as safety warnings for young internet users routinely note. But prior to the creation of text-generating machines, you could at least assume that if you’re having a conversation with someone on the internet, that someone is a person. The power of conversational A.I. systems to imitate human language threatens to eviscerate that basic social understanding.”
Our parroting of LLM diction is just one facet of this conundrum. Over at The Conversation, a group of authors covers other aspects of human-like models, with the tag line: “on the Internet, nobody knows you’re not AI.” LLMs, they say, are “masters at role-play,” and often more persuasive to humans than, well, other humans. The writers ask: could this be a problem?
The intuitive answer would be a resounding yes.
It might seem like an armchair issue to quibble over the words people now use in speech, or to debate whether AI is putting words in our mouths. But the deepening coexistence of these smart systems with our own lives is raising more pressing concerns. Let’s keep an eye out as 2025 concludes.