AI - The Last Temptation Of The Endless Drafts Folder
Signal Or Slop? I make a startling confession
I learned to speak Spanish partially by using Google Translate, and to find my way around Barcelona using CityMapper, both tools which will, if you let them, entirely hijack your brain to the point where you have outsourced everything to the machines.
Yet if you don't let them, and you use them like obedient assistants or training wheels on a bike, maintaining the intention to learn yourself, you will do. Now I no longer need Google Translate for that language (apart maybe from some regional slang). It was my brain doing the work, but I was using a powerful tool to go further, faster. The tool accelerates you; it doesn't replace you. But the temptation to outsource everything is real, and growing.
What does it mean to create something, now that software can do so much of the actual work for us? When you use a drum machine, for example, you can just press a button and coherent music comes out. You pick the pattern, maybe tweak a few settings, and the thing plays itself. I’d guess a lot of early adopters felt two things at once: awe at what had just become possible, and a sort of sheepish guilt. Am I still making music, or just switching things on?
So now it's time to come clean. I co-wrote this article with a Large Language Model, and I feel uneasy about it. What I actually did was this: I went for a run, had various ideas while I was out, wrote all the ideas contained here as bullet points using my tool of choice, Obsidian, and asked an AI plugin with a custom system prompt to turn them into an essay. Then I went over all of it several times, adding, deleting and editing as I went. For example, this paragraph was entirely written by me. The next two were too. But not the one after that. No paragraph is untouched by me, but some of them are mostly AI.
I've been inspired by the ideas contained in 'The AI Middle Way', and by other thinkers here on Substack that I respect, who are using AI in responsible ways to help them with their thinking, or writing, or both. I also respect the views of people who say that AI is bad, it's copyright infringement, it's cheating, it's just noise, we shouldn't use it, and so on. But I know that many similar arguments were made when different technologies came out, and the Luddites always ended up losing. So I am guessing that the future will be one with AI in it, rather than without. The question is how do we use it, rather than will we use it? We will.
Something like this issue has been with us for a long time, in fact. Most of the 'old master' painters we revere had assistants to fill in the sky or the background for them. They took the credit, because it was their composition, their ideas, and they did the main (and most tricky) parts.
It's even more pronounced, of course, with contemporary artists. Some of them, like Damien Hirst, might never actually touch the art object itself. Hirst comes up with an idea for a row of coloured dots and someone else produces it. When it sells for a fortune at auction, his name gets most of the credit, and he gets most of the money. If the idea is what counts, the details of fabrication can get fairly fuzzy. The most extreme version - and I think we can agree this is taking things too far - is Salvador Dalí, at the end of his life, signing blank canvases which other people went on to fill in (so that there is now a proliferation of fake Dalís on which the signature is genuine).
Something similar is happening with writing now because of LLMs. I write a lot, and most of it is bad, or at least only intended for my personal use. You should see my Obsidian vault. I do not have trouble having ideas, coming up with stuff. What I do struggle with is finishing articles. There is a sort of anti-gravity field which kicks in the nearer I get to considering something 'finished'. Every alteration I make to the text after that point gets harder and harder, every decision becomes exponentially more consequential, every flaw I find whispers seductively, 'just leave it in drafts and come back later, eh?'. And sometimes, often in fact, I never do come back later. I'm sure other writers or artists can relate.
So it’s extremely tempting to use AI to help me finish those drafts. Or clean things up, or even supply the bulk of the words outright, if not the ideas. In the old process, I’d sit down, read up on the topic, maybe taking notes from the internet and various books, then write my own draft. And it would probably, if we're being honest, remain a draft.
I am currently working on a series of essays about harvesting knowledge and how poets and philosophers are, perhaps unexpectedly, probably going to be essential in the future. I started with a few ideas, fed them to the model (I usually use Claude Sonnet, but also Deepseek and Gemini). I let it fill things out, sometimes quoting sources I’d probably never have found myself. This gave me more ideas of my own, and I incorporated them.
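For anyone curious about the mechanics, that first pass is less mysterious than it sounds: notes go in, a draft comes out, usually via a single API call. Here is a minimal sketch in Python of what such a step might look like, assuming the Anthropic SDK; the system prompt, function name and model ID are placeholders of my own, not the actual internals of any plugin I use.

```python
# A minimal sketch of the notes-to-draft step, assuming the Anthropic
# Python SDK (pip install anthropic). The system prompt and model ID
# below are illustrative placeholders, not any plugin's real internals.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a drafting assistant. Expand the user's bullet-point notes "
    "into a flowing first-person essay draft, keeping their ideas and "
    "their order intact. Do not invent facts or sources."
)

def notes_to_draft(notes: list[str]) -> str:
    """Send bullet-point notes to the model and return an essay draft."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder; use whichever Sonnet is current
        max_tokens=2000,
        system=SYSTEM_PROMPT,
        messages=[{"role": "user", "content": "\n".join(f"- {n}" for n in notes)}],
    )
    return response.content[0].text

draft = notes_to_draft([
    "harvesting knowledge as a practice",
    "poets and philosophers will, perhaps unexpectedly, be essential",
])
print(draft)  # and then the human editing begins
```

The sketch is the whole trick, which is rather the point: there is very little magic in the pipeline itself, so whatever value comes out has to come from the notes going in and the editing that follows.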
It's a back and forth process with a non-human intelligence... which still seems pretty mind-blowing, although for kids growing up now it is going to be as normal as watching each year be hotter than the previous one (sadly). The ideas are overwhelmingly mine, and I still have to edit for voice and check facts, but in the end, maybe 70% of the actual prose ends up being machine-generated. In fact, in this particular article it's more like 30%, but I reserve the right for it to be more and still call it 'mine'.
The weird part is that the main ideas, as I say, the skeleton of the argument, are coming from me. I’m the one deciding what the article is for, what it should include, what point it should make. The AI is sort of an eager research assistant crossed with a ghostwriter, something which, again, I never imagined having. I am signing off on its output, as being fit to represent 'me'.
Often it writes something that sparks a further idea in me - so who is really writing at that point? Is this even different from what happens if you build an article out of quotations from the internet and then comment on them? Or from a really good editor telling you that you need to flesh out this or that part? Artists and authors have always had helpers; it's just that they were embodied humans, for the most part.
I find myself feeling an uneasiness in my body when I think about this, if I'm honest, and that's why I'm (co-)writing this article. I think we need to think it through together, and I mean us humans. Should I feel like a fraud, since a percentage of the words aren't literally mine? On the other hand, I never invented the words I am using; they were just given to me. And where does the "I" actually end and the software begin? This is just the next level of 'extended cognition'.
If I dream a story one night and write it down verbatim in the morning, did I “create” it - or was I just the vessel? Self-doubt comes easily, especially now when the boundary feels new and indeterminate. Probably after another five or ten years, no one will even care. Everyone will be doing it this way, and not even bothering themselves with these questions. But I wanted to write down what it feels like at the start, for posterity.
I also worry I’m “flooding the zone with shit,” as Steve Bannon's delightful phrase goes, or producing “AI slop.” I’d like to tell myself I'm increasing the average quality of what’s out there, not just the quantity. But am I sure?
Maybe it’s a question of where you place the value: in the idea, or the execution, or the curation. Élodi Vedha said it’s mostly about learning to edit well, so that the end result matches your own intent and feeling, which makes sense to me. Editors have always been under-appreciated (speaking as someone who has edited a lot); now everyone has the chance to be their own editor, and it matters more.
I suspect we're still at the awkward early-adopter phase - like the first days of photography, when people argued about whether it could ever be "real art," or when people first sent letters dictated to a secretary. That phase is always full of identity crises, but eventually people stop worrying and just get on with it.
Social rules and etiquette around new technology get constructed over time. It's now mostly 'bad form' to call someone's phone without warning them first, unless you have a very close relationship or need to be in constant contact. That was never the case before, because there was no option: you had no idea if they might be driving, having sex, or about to throw themselves out of a plane. You just called them. But new social norms were gradually created.
I think the real danger isn’t that we’ll let machines do too much of our thinking. The real danger is that we’ll stop caring which part is us and which part is the tool, and also stop caring whether what we’re making is worth making. There will always be a glut of mediocre things made by lazy grifters who just want to churn out content. But that was already true - most things made by humans were noise, not signal. Most of the things random people actually say to you are boring and predictable.
If anything, the existence of abundant, fast, machine-generated work makes it even more important to have taste, to know what you intended to make, and to shape the raw material so that it truly means something. That's where the signal comes from. If you use Claude, or any other LLM, and you just press "go" and accept the first draft, it's probably not going to be much good. It will be technically fine, and better than a lot of human writers, but that has never been enough, and now that mere competence is the lowest bar there is, it certainly will not be enough in the future.
But if you actually have something to say and are willing to revise and criticise the output, the tool can help you say it more clearly and reach a little further. So in that sense, nothing has changed. Tools keep getting better, but making something worth reading - or worth looking at, or worth listening to - still depends on the taste and ambition of the person who wields the tool.
That's why I think the answer to "am I making more noise or more signal?" depends on how much you care whether what you make has any meaning. If you really do care, you're probably raising the average. And if you don't, there's never been an easier way to generate a lot of noise. I am obviously firmly in the camp of rejecting 'AI slop', yet the 'slop' comes not from the mere fact of using LLMs but from the lack of care about the outcome.
Editors and curators are going to be the creative heroes of the future, along with people who can use their embodied intelligence, integrity and wisdom to generate ideas, and to discern the good ideas from the ones which ought to stay in 'drafts'.
If I don't get cancelled for admitting I use AI to help with getting things out of my drafts and onto the public internet, there will probably be a part two of this article. Please leave any comments and I will put them in the old stew pot for part deux.
---

Props for diving into this question, and revealing your process around it. I think it's important. I'm grappling with it too.
Hi guy,
I'll start with my own, umm, startling confession: I've yet to read much beyond the photo of "I Really Like Slop," yet I've got the nerve to write a comment!
(I do intend to finish the article, which I was really enjoying, partly because the writing is very good but also because this topic has been on my mind to write about, as well.)
Anyway, I just wanted to ask you: Who wrote the first two paragraphs?
I ask because that "you will do" ending of the second sentence really broke my brain. ("Yet if you don’t let them, and you use them like obedient assistants or training wheels on a bike, maintaining the intention to learn yourself, you will do.")
I ran both paragraphs through Grammarly and it didn't flag that, so then I asked ChatGPT if the grammar was okay on these two opening sentences. I won't repeat what it wrote about sentence #1 (unless you want me to), but it also flagged that "you will do" ending.
Here, I'll post what it said, and like me, it's curious what your response will be.
Over the past few weeks, I've been publishing the journal I wrote in jail and using Grammarly and sometimes ChatGPT to edit, but the former still misses some big mistakes. Anyway, very thought-provoking article---what I've read so far has been, at least---I'll finish it later! Here was ChatGPT's response---and again, I'm mostly curious who wrote that, or if there's some way you can explain how that ending makes sense!
I learned to speak Spanish partially by using Google Translate, and to find my way around Barcelona using CityMapper, both tools which will, if you let them, entirely hijack your brain to the point where you have outsourced everything to the machines.
Yet if you don't let them, and you use them like obedient assistants or training wheels on a bike, maintaining the intention to learn yourself, you will do.
---
ChatGPT response:
Sentence 2:
Original:
Yet if you don’t let them, and you use them like obedient assistants or training wheels on a bike, maintaining the intention to learn yourself, you will do.
Issues:
1. The phrase “you will do” is grammatically off. It’s likely a truncation of “you will do fine” or “you will learn,” but as it stands, it’s unidiomatic and confusing.
2. Minor style: “maintaining the intention to learn yourself” could be clearer as “maintaining your own intention to learn” or “continuing to learn yourself.”
Suggestion:
Yet if you don’t let them, and instead use them like obedient assistants or training wheels on a bike—while maintaining your own intention to learn—you’ll do just fine.