tl;dr
Anything you read on this website is written by me, a human person. I don’t use so-called AI tools. No generative AI, no LLMs, no ChatGPT, no Midjourney, no Grammarly, none of it. (I’m using these terms somewhat interchangeably, to indicate a certain type of product deceptively marketed as “AI” since around 2022. I’m not touching on the much more complicated, interesting, and appropriately named field of AI.)
This stuff is really, really bad
Data centers are ecologically harmful. They are (surprise!) often most directly harmful to marginalized communities, so it’s a matter of environmental justice as well as general stewardship of the planet (which is…not currently a strong suit of H. s. sapiens).

There’s also a lot of human labor behind “automated” processes. Many of these workers are exploited, and there’s an echo of old colonial patterns. (And that’s even before you get into the moral and labor issues of the provenance of data sets.)

Finally, the AI fad, following so closely upon the face-plant of NFTs, is simply another tech bubble. A few rich people will cash out and leave everyone else to deal with the economic fallout.
All too often, environmental, ethical, economic, and legal concerns are waved away. My new personal favorite argument is that such objections are evidence that opposition to so-called AI has become a “purity test.” This is a pretty toxic rhetorical trick. (I am old enough to have watched multiple rounds of toxic rhetorical tricks cycle into popularity.) I feel comfortable saying yes, it’s fair to judge people for the tools they choose to use. So the rest of this post is really superfluous, but I’ve included it anyway because I feel it’s important to note that the tools folks embrace and defend aren’t just bad on the back end; they’re bad at what they’re supposed to do.
It’s personal
I’m a writer. My work has been pirated (old novels are sitting in Anna’s Archive and presumably other shadow libraries) and scraped (everything published online, I assume, regardless of robots.txt settings). If the Anthropic settlement hadn’t defined the class so narrowly, I would be part of it. (The tl;dr there is that individual short stories don’t typically have ISBNs assigned, whether they’re in a periodical or anthology, and not every ISBN-bearing novel has its copyright registered in the US.) So I have personal beef with LLMs that incorporate my work without permission, attribution, or compensation.
Garbage in, garbage out
Frankly, LLM users should look askance at models that incorporate decades-old small press books and AITA posts and instructions to bonsai your kitten and Omegaverse fan fiction. I’ve worked in tech and archives. Metadata and provenance matter. GIGO.
Generative AI isn’t fit for purpose. The output is crap. It’s likely to include factual errors, and the style is tedious.
If nobody bothered to write it, why should I bother to read it?
Aside from objective-ish quality issues, LLM writing is an uninteresting waste of time. Nothing makes me click away faster than learning a writer isn’t actually writing what they publish. Novel or newsletter, there’s a social contract implied with a byline. It says a person (or persons) wrote a thing for the benefit of other people.
(Side note: I really appreciate it when authors disclose the use of so-called AI tools. I still avoid their slop, but I respect the self-awareness and honesty.)
I know a lot of folks feel insecure about their writing: they don’t have a formal education in composition, they’re ESL writers, they’re not confident about their grammar, they’re told a disability means their voice is unworthy, etc. But let me assure you that I would much rather read ragged prose—your voice—than generic slop. Also remember: this is English. The language is a vital, living mess. Unless you’re writing in highly specific, professional contexts—in which case you should have at least one person, ideally paid by your publisher or boss, editing your work for house style and grammar—bring on the weird constructions, the sentence fragments, the localized slang, and even the near-criminal refusal to use Oxford commas.
Consciousness arguments
The most disturbing pro-AI arguments are those that propose some degree of consciousness. It starts with references to “hallucinations,” “mistakes,” or other attributions of intent, but the end point is interpreting programmed output as a personality.
This, to be clear, is science fiction (at best) or delusion (at worst). But. If your position is that an AI tool is a conscious being, then your ethical next step is to argue that the conscious being should be granted rights. Anybody who thinks their so-called AI is a silicon person, yet is content to keep that person enslaved, is a trash human being, and their opinions on basically any subject should be discounted.
A lot of weird, cultish behavior comes out of the Silicon Valley mindset. It’s wrapped up with eugenics and dark libertarianism and end-times fantasies and fascist impulses. Other, smarter, better-informed people make this point much better than I do. (I recommend checking out the DAIR Institute folks.) It may not be obvious in the way so-called AI or other tools/grifts are marketed, but if you scratch the surface, you’ll find it.
There are some creepy ideologies out there, with their basis in scientific racism and other crimes against humanity. Don’t be creepy.
In conclusion
Stop using the planet-destroying plagiarism machines. They suck and the things they make suck and they’re making life worse for everybody.