
The Delve Problem

Meru Gokhale
13th Jan 2026

Let’s start with a word: delve. Before 2023, it was a perfectly acceptable word. Barely used, and when it was, it was usually with intention. Now, suddenly, it’s everywhere. Why?

Large language models favour ‘delve’ when they need a verb meaning ‘to examine.’

But the rise of ‘delve’, and the broader adoption of what we might call ‘AI vocabulary’, isn’t just a quirk of statistical probability. I think it also points to a more insidious consequence: the algorithmic homogenisation of language.

What does that mean? 

The answer, I’d say, lies not in meaning alone but in how language evolves. In magazine publishing, there were always ‘trending’ or favoured words that readers expected to see. This lent an air of professionalism and also helped file articles into genres so regular readers could easily find what they were looking for.

Fast-forward a little. Think about how writers producing online content were, and still are, encouraged to optimise for search engines: to use specific keywords, to structure sentences in ways algorithms will pick up and promote, to mimic the style of content that already ranks highly.

Clearly, this is not a recent phenomenon. So why does it matter now? The internet had already begun to foster an environment that favours similar, trending language: a more obvious form of bias consolidation than what came before. Now, combined with AI vocabulary, the effects are becoming more visible, and they point towards the worst possible outcome of algorithmic preference: everything starts to sound the same.

Think of a mixture being refined. You distil it, then filter it, repeating the process through successive stages. If everyone starts with the same mixture and applies exactly the same process, won’t you all be left with the same substance?

Words like ‘foster,’ ‘garner,’ and ‘showcase’ are, in effect, perfectly optimised keywords, defined by their utility in large datasets. They’re the kind of words algorithms seem to favour because they appeared frequently in training data and were later reinforced by human reviewers.

Writers may not realise that they need to be conscious, and cautious, of this: the vocabulary a model favours shows up again and again in the text it outputs.

More examples include:

Verbs: embark, elevate, foster, unleash. 

Adjectives: robust, seamless, cutting-edge, pivotal, meticulous. 

Nouns: tapestry, landscape, realm, synergy, paradigm.

The unique voice and perspective of individual writers are inevitably lost in this kind of rewriting. It’s easy to justify what AI produces by saying it ‘could be a writing style.’ The question that should follow is whether it’s your writing style.

Readers are beginning to encounter a bland, uninspired monoculture of content. That means they aren’t reading everything you have to say. They’re deciding, within seconds, whether a piece is worth their time, and reserving the right to scroll away the moment a sentence convinces them to move on.

Your voice is what connects readers to you. Which is why it’s essential you not only cultivate it, but also preserve it.