Categories: Software, Work, Writing

Reblog: good takes on writing with LLMs

I read these two pieces a few weeks ago and they were still kicking around in my head so I re-found them to share. They are nice complements to my 2023 post about LLMs being good coders and useless writers. They argue that, in fact, LLM writing is often worse than useless.

Link 1: Using LLMs at Oxide. This is the best guide I’ve seen for setting expectations around LLM usage at a particular workplace. It acknowledges LLMs as valuable tools while staying focused on their ultimate purpose: serving humans. It’s good throughout, but the can’t-miss section is 2.4, LLMs as Writers. Here’s an excerpt:

To those who can recognize an LLM’s reveals (an expanding demographic!), it’s just embarrassing — it’s as if the writer is walking around with their intellectual fly open. But there are deeper problems: LLM-generated writing undermines the authenticity of not just one’s writing but of the thinking behind it as well. If the prose is automatically generated, might the ideas be too? The reader can’t be sure — and increasingly, the hallmarks of LLM generation cause readers to turn off (or worse).

Finally, LLM-generated prose undermines a social contract of sorts: absent LLMs, it is presumed that of the reader and the writer, it is the writer that has undertaken the greater intellectual exertion. (That is, it is more work to write than to read!) For the reader, this is important: should they struggle with an idea, they can reasonably assume that the writer themselves understands it — and it is the least a reader can do to labor to make sense of it.

If, however, prose is LLM-generated, this social contract becomes ripped up: a reader cannot assume that the writer understands their ideas because they might not so much have read the product of the LLM that they tasked to write it. If one is lucky, these are LLM hallucinations: obviously wrong and quickly discarded. If one is unlucky, however, it will be a kind of LLM-induced cognitive dissonance: a puzzle in which pieces don’t fit because there is in fact no puzzle at all. This can leave a reader frustrated: why should they spend more time reading prose than the writer spent writing it?

Link 2: Your Intellectual Fly Is Open, linked in the above quote. It’s a short post. My favorite chunk:

When you use an LLM to author a [LinkedIn] post, you may think you are generating plausible writing, but you aren’t: to anyone who has seen even a modicum of LLM-generated content (a rapidly expanding demographic!), the LLM tells are impossible to ignore. Bluntly, your intellectual fly is open: lots of people notice — but no one is pointing it out. And the problem isn’t merely embarrassment: when you — person whose perspective I want to hear! — are obviously using an LLM to write posts for you, I don’t know what’s real and what is in fact generated fanfic. You definitely don’t sound like you, so…​ is the actual content real? I mean, maybe? But also maybe not. Regardless, I stop reading — and so do lots of others.

I see this from a few people in my professional network. It’s brutal.

“Your intellectual fly is open” is a good way to convey “we see something embarrassing, we’re just not saying it,” but it understates the impact. Once I see someone I know writing through AI without disclosing it, I permanently distrust what they say from then on.

I was prompted to write this post when, at a friend’s recommendation, I listened to a podcast episode: AI and I: Why Opus 4.5 Just Became the Most Influential AI Model. The episode was okay, though I didn’t like the second episode of that show that I tried. But I was struck by something the guest, Paul Ford, said. He spends much of the show discussing how he uses LLMs all day for coding and research, and he’s building an AI-based product. But when it comes to writing, he said the bottom-line limitation of using AI is simple: “it’s not me*.”

It’s 2026 and I stand by my 2023 take. I double down on it, in fact: current LLM coding tools are leaps and bounds better than they were in 2023. When I wrote that post, Claude 3 had not yet been released, to say nothing of Claude Code, GitHub Copilot, Agent mode, etc.

But generating code is writing for machines. And LLMs still aren’t useful for writing to humans.

*I’m quoting that line from memory. I’m not going to re-listen to fact-check myself but please correct me if I got it wrong.
