It is (past) time to have a productive conversation about AI and writing
CW: speculative fiction inside baseball
We write for many reasons and in many different contexts. I had to write a “statement of facts” for the DMV recently on a long and complex issue. I’ve lost count of how many people have complained to me recently about having to write a report for work that they know no one will read. Electronic Arts famously joked (not joked) about using a scale to determine whether a game design document was done. The point is that writing serves many different purposes, and not all of them are capturing the inner recesses of the human soul.
I’ve been using AI for a while now and it’s fairly integrated into several of my life processes. What I want to do right now is ask Claude to give me a summary of recent events that prompted me to write this blog post. For principle’s sake, I’m not going to do that, and you know what? The result is going to be that the opening to this is more tedious than it needs to be.
Mia Ballard is currently, according to some, having her burgeoning career ruined because her publisher, Hachette, pulled her book from publication after an investigation found that a great deal of its text was AI-generated. The science fiction and fantasy community has had a series of kerfuffles over AI and writing, going back to Clarkesworld closing submissions because it was inundated with AI, and more recently (I pause here to Google; I told you I wanted Claude to do this… in the end, Google failed me, but I remembered which Facebook friend shared it and found it that way – remember I said this process was going to be tediously human?) Erin Underwood came under a bunch of fire for trying to have a conversation about AI and writing.
The Ballard story is still frothing, so we’re in that low-quality-information period that the Daily Stoic recently described as watching the baby being born – it’s messy and not meant for so many people to be involved in. But it’s the middle information age and here we are.
The key is in something Erin Underwood said in the introduction to her File770 letter:
Am I crazy? Will this blow me up? I am so tired of being afraid of our community… I really want to have an open conversation about this, but the vehemence that some people bring to the conversation about AI is scary.
Friends, the bills are coming due.
The inability of the science fiction community to talk about AI in a grown-up way has irritated me for years now, and what’s happening with Mia Ballard is why. I’ve come to accept that I just have to wait for things like this to ferment and find their moment, but what’s so frustrating is that people suffer in the interim. Ballard is suffering now. Writers who don’t understand how these technologies work – the AI itself or the AI detectors – are terrified that they’re going to be falsely accused of using AI.
It’s going to take a while to sort these things out. I think the community might finally be ready to start the process (as they clearly weren’t when Underwood’s post went up; some fine individual on a major author’s blog commented "I would call Erin Underwood a shill for AI, but I don't think she's canny enough to get paid for it." Real nice, guys.). Because maybe the costs are now becoming clear.
One of the things that has bothered me most about The Discourse is the unaddressed issue of class disparity in AI use and impact. The media is getting to this – it’s why (good lord, was it only about a month and a half ago? Life comes at me quick these days) in February the NYT wrote about a romance writer who has published dozens of AI-written books using 21 pseudonyms.
Of course AI was going to come for commercial fiction first. Do you think Philip K. Dick would have turned up his nose at Claude when he was trying to write fast enough to get out of couch-surfing and taking out loans from friends? The Amazon self-pubbers using AI do not give one fuck what the SF literati think about AI (and let’s be real, folks: if you read Locus, you’re part of the establishment); they are trying to eat. And it strikes me as deeply hypocritical that the traditional publication world largely looks down on these commercial fiction writers and also wants to dictate which tools they can use.
Look. I think the issue of consent in the use of creative works as training data is a complicated one. I have three books in the Anthropic settlement and I filled out my forms. I think there is a need for local action and careful regulation around the construction of hyperscale datacenters – the same way there is a need to ensure the environment is protected in any expansion of human infrastructure. All of this is true. But if you’re not convinced by now that people are going to use AI and are getting benefit from it, I don’t know what to tell you. The shouting is not achieving the outcome you want.
We are in the turbulent phase of the rollout of a powerful new technology. It’s going to suck to a nontrivial extent. But the way to regulate the use of AI is to achieve fair and representative consensus in the writing community about how it should be used. That is what is prevented by shouting down any mention of AI use. The phrase “We will not consider any submissions written, developed, or assisted by these tools” is massively problematic. What constitutes “developing” a work with AI? What constitutes “assistance”? How will this be detected and enforced? Upon whom?
The answers to those last two questions will be “not well” and “unfairly” if there aren’t clear norms established over what the definitions are. That is what we are seeing now.
“AI detectors” will not prevent the use of AI. (Probably nothing will. Are we finally ready for that sentence?) But consensus-driven community norms might prevent it from being used in ways that we can agree are unfair and harmful.
I have some ideas about this. I have a half-written blog post proposing a community survey about specific use cases for AI in creative writing. (I started it after the Underwood thing.) But I have not wanted to deal with the drama. I would love to start that conversation. But only if it can be a conversation.
In this way as well, the self-publishing community is ahead of traditional publishing. BookBub surveyed over 1,200 authors in 2025 and found the community split almost exactly in half, with 45% using AI and 48% refusing to. Taking this temperature didn't cause the community to implode. Meanwhile, SFWA couldn't keep a Nebula rule change up for a single day – largely because they enacted policy before having these conversations.
Taking the temperature is the starting point because it illustrates the disagreement. That BookBub poll also showed that, of those who don't use generative AI, 84% said they don't use it because they think it's unethical. How can this disagreement resolve? Stonewalling only prolongs suffering.
By the way, I used Claude to think through this before I started writing it. Here’s the conversation. And here’s something it said that I thought was well-put:
The self-publishing world has been living with AI for years. It's messy, it's contentious, but it's also where writers with the least resources — no agents, no advances, no institutional support — have been figuring out what AI use actually looks like in practice. Some of it is cynical content farming. Some of it is people trying to make a living. The traditional literary world has largely ignored this, or looked down on it, while maintaining a blanket "no AI" stance that maps neatly onto existing class divisions in publishing.
Mia Ballard crossed that line. She came from the self-publishing world where AI use is an open secret, got picked up by a major publisher operating under very different norms, and became the first person to publicly bear the consequences. Meanwhile, the Atlantic just revealed that AI has quietly infiltrated opinion pages at the most prestigious papers in the country — and nobody's getting dropped.
The point isn't that Ballard is innocent or guilty. The point is that the writing community's refusal to have an honest, granular conversation about what AI use is and isn't acceptable left everyone — Ballard, Hachette, readers — without a framework for navigating this. The "no AI ever" position didn't prevent AI use. It just ensured that when the collision happened, it would be maximally destructive and fall hardest on whoever had the least protection.
That conversation is one picture of what it looks like to be “assisted” by AI. Another way would be for me to feed this post back to Claude and get suggestions on how to improve it. I’m not going to do that. I’m going to ask what it thinks, because I find its ideas interesting. You’re welcome to make of that what you will.
If you haven’t read this 2021 article by Vauhini Vara about using AI to assist with writing, you should. I was working at Google at the time and I sent it to everyone there I knew who was interested in AI and creativity. It’s a beautiful, complicated contemplation of what it looks like when AI helps you find and name feelings you didn’t know you had. Vauhini Vara also wrote the recent Atlantic piece about fingerprints of AI being found all over the NYT, WSJ, and WaPo. Technology will not provide the answer. Community consensus and sensible policies might.
Edited 3/27: A friend pointed out that the first version of this post gave the impression that the self publishing community was a monolith and pro-AI, which was not my intention. I've clarified and added a bit about AI conversations happening in self-publishing communities.