A few weeks ago, I started adding this badge to the bottom of my articles:
Given the dialogue that the badge generated as just a footnote on a longer article, I began to brainstorm what a longer piece on AI might look like (and how it might fit in with the work I'm doing with Good Queer News).
Then, a Substack I follow suggested creating and publishing a formal "AI policy": a transparent statement telling your readers when, how, and why (if ever) you choose to use AI in your writing. I absolutely loved this idea, and decided to put something together.
First I'll share the specifics of my AI-use policy. Then, for those who are interested or curious, I'll go a bit deeper into the larger "why" behind my strong stance (though as a preview, know that one of my favorite movies in childhood was WALL-E).
AI Use Policy
Overview: To the best of my knowledge and ability, I do not use AI to ideate, create, or refine any piece of this project.
Writing: I do not use AI to write, ever. I do not use it to generate ideas, headlines, subtitles, article text, notes, or image captions. Every single word you read here is either written by my own hand or dictated into my notes app when I have a good idea on a walk.
Research: I conduct all my research manually, by following a variety of trusted sources. I've even set up a workaround in my browser to eliminate the Google "AI Overview," which I often find unnecessary at best and wrong at worst. While I could outsource research to AI, it's great for my mental health to spend a few hours a week reading good news stories. I also feel much more confident in the accuracy of the stories I report, and am happy to avoid "AI hallucinations" like the recent AI-generated "Summer Reading List" in the Chicago Sun-Times (A PRINT NEWSPAPER), which was mostly made-up books and quotes from people who don't exist.
Images and Art: To the best of my knowledge, I do not use AI-generated imagery. As often as possible I use pictures I've taken myself, or images from an open license photo library. If I ever need specific art or graphics, I either design it myself or pay a real human artist!
Editing: Once folks like Grammarly started heavily integrating AI and encouraging their users to rely on it for writing, in addition to updating their TOS regarding how and when they use your writing (recap: using your writing to train their AI models), I uninstalled it completely. I edit by reading my pieces out loud, asking my wife to read them for me, and texting friends "is this joke funny?". Sometimes I miss something and have a typo. Let this be a little ray of my human-ness shining through.
Em-Dashes: I love them. I promise that they are not AI-generated—they're just an elegant way to expand a thought!
In Sum: I write each piece here hoping you will respect and enjoy it enough to read it, engage with it, share it with others, and find some genuine meaning in it. Your trust, your attention, and your time are not faceless metrics I'm hoping to consume with whatever "content" I can create each week. It is a privilege to work hard to earn your trust, and to work even harder to maintain it. In short: I do not let AI touch my work because I respect my readers, myself as a writer, and my planet.
Deeper Dive: The Why
I don't think there's going to be one right answer on when and how to use AI, and for some people, whether because of bandwidth, disability, or something else, AI significantly expands what's accessible and possible. I also think there's a very important distinction between AI in general (detect cancer in my genome, help decrease train accidents by recognizing items on the rails, etc.) and generative AI (write my email, summarize Google results, make me a Studio Ghibli portrait, etc.).
AI in general has some amazing uses for research, medicine, transportation, and more. But I think it is a tool that should be applied thoughtfully. Now as it pertains to generative AI in particular:
I think most folks may not be aware of the environmental impact of AI, largely thanks to aggressive efforts to hide or dismiss this data to avoid scaring off investors and users. "The cloud" is a helpful abstraction, but all the math that makes AI work has to happen in actual physical data centers, which often ecologically decimate the communities they are built in. Somewhere between 5 and 50 ChatGPT prompts consume the equivalent of one 16 oz bottle of water, and that adds up quickly. That doesn't necessarily mean folks can't use it, but we should do so only when it's worth the water it will cost. (Source: HBR)
I do not use AI for skills I want to have. I am increasingly hearing stories from teacher friends whose students are completely reliant on ChatGPT to summarize texts, brainstorm assignments, and complete homework. Many of my friends can't write an email, aren't sure how to research deeply, and have trouble coming up with new ideas or thinking deeply about strategy. If we outsource these skills to AI, we will lose them. I'm trying to get better at writing, not to churn out as much content as possible.
As a writer, I don't input any of my work into AI when I can avoid it, as most of the major models have faced massive scandals over the unauthorized use of art, music, books, and more to recreate works in someone's style.
This is probably the take that could get me in the most trouble, but when it comes to writing articles or emails for me, I respect the people I am engaging with too much to outsource my communication with them to a computer. I keep coming back to the phrase "why would I be bothered to read something you couldn't be bothered to write?". If I am writing an article, or a LinkedIn post, or an email, with the express purpose of getting someone's attention, moving them to take an action, or connecting with them, the least I can do is write it myself. I dread a version of the internet where all our individual AIs read and respond to each other, pretending to care and connect so we can be a little more productive.
Maybe this all just comes from me watching WALL-E too many times, or reading too much Ann Leckie, who writes dystopian sci-fi about AI ethics. Maybe I'm being stubborn and need to accept the ways of the future. I don't doubt I could potentially “accomplish” more if I chose to integrate AI into my work. But at what cost? Am I replacing myself? How do I decide when humanity is not required to complete a task?
For me, it's easiest just to say no to generative AI across the board (honestly for the climate reasons more than anything), but I don't have ill will towards others who use it. I have an Amazon Prime account and sometimes I forget to recycle. I write on Substack, which is owned by a shady billionaire oligarch. There's no such thing as a perfect advocate making all perfect choices; we're all just deciding which lines we'd like to draw in the sand.
For what it's worth, I'm not here saying that if you use AI you're a bad person. I'm not saying you're a worse writer than me, or any of the other harsh extrapolations you might make from my words. I'm just sharing where I am for the time being and how I got here. If we see rigorous data protection reforms and massive growth in sustainable computing, I might be in a different place. But I'm happy where I am right now.
For those thinking "isn't the cat out of the bag on AI? Don't be stuck in the past, Ben": I hear you. I do. But I gently remind you how hard tech bros tried to bring NFTs into the mainstream, and how completely socially unacceptable it became to be an NFT bro. Within about a year, the truly awful "bored ape" profile pictures were few and far between, and that trend was dead. It's only normal if you help normalize it.
It is with great pride, tremendous respect, and complete humanity that I sign off today.
With love,
Ben
Thanks for the thoughtfulness here. I want to share a couple of heads-ups (for you and any writer) about hidden ways your work may be consumed. Substack has an "opt out" button in the settings, so if you haven't, I would check settings and opt out of it using your work for AI. It's so annoying because that should obviously be opt-in 😤 The other big one is that Google Docs scrapes documents for generative AI, at least last I saw. If anyone needs a free word processor that isn't Google Docs, LibreOffice is a free office suite that includes a Word-like program.
I really appreciate the nuance and awareness in your policy. As a furloughed federal worker, I've been struggling with the constant advice to use generative AI for job applications. When corporations are using AI to gatekeep applicants, does using AI perpetuate that cycle? Or is it using a tool to remove a barrier? Your point about imperfect systems reminded me that there is no easy answer. Thank you ❤️