Q: Chime in with your thoughts on writers using AI. Is it okay to use it for some tasks? Where do you draw the line?
A: When I started thinking about AI I didn’t have many opinions, yea or nay, on the subject. I started researching and discovered that, unlike my normal dives, I couldn’t find what I needed by going out and chatting with folks. To understand AI I needed to look to the (gasp) internet. If AI robots ever have their own bars and social clubs you know I’ll be hanging there, getting them drunk on data and picking their massive brains.
For sources, NPR had multiple good and recent pieces, as did Forbes. With AI a big part of the SAG strike, I also went to Variety. I went to YouTubers and bloggers, always keeping an eye on each creator’s personal biases. Here’s some of what I filled my brain with.
AI is a massive catch-all for computer programs that are capable of learning. Spell checkers have figured out which words I misspell accidentally and which I misspell on purpose; they’ve gotten so good I only get every tenth word red-lined. I am all for that kind of “helper AI.” Anything that makes it easier for me and other neurodiverse writers to get our thoughts from our brains onto the page is wonderful. Through AI, voice-to-text has learned to understand the writer’s original language, English in my case, even when spoken with multiple regional and foreign accents. One day it may even learn to understand me. I swear when I dictate a memo I discover it has strung together words I’ve never used in an order no one ever would. Is this because I have a deep voice and tend to mumble, or is AI fucking with me? Let’s hope for the former.
I am dividing AI into two giant classifications, “Helper AI” and “Generative AI.” Helper seems good. Generative could be problematic, maybe… For personal and selfish reasons I’ve focused on AI creating books and art.
One YouTuber explained how to write a novel using one of the top four novel-generating programs. Step one, you dump all your wild thoughts into the system. It takes these and generates a synopsis, which you can edit or keep. Step two, you tell the AI about each of your characters: “Ariel is a gutsy independent lady, her dialogue is confrontational but deep down she’s a romantic.” These broad strokes are called prompts. The AI creates the story outline, generates story beats, and organizes the chapters. When you are happy with the outline, you choose whether the AI should prioritize accuracy, speed, or prose. Then you hit “generate” and it writes your book. Then this YouTube expert said, “To be a good AI author you need to be specific in your prompting.”
I leapt up, spitting my iced tea as I sputtered for words. “AI author isn’t a thing!” I yelled at the TV. “All that fucking hard work, the writing part, that is the job. The actual writing is where you find your voice!” Both Buster and Ernie started growling at the TV: “If the big guy hates the noise box, so do we.”
After a long walk and a shower I calmed down. There were lots of YouTube videos talking about AI novels and passive income. Mailbox money. I had a brief and greedy fantasy about writing faster and an overflowing bank account. Then I remembered two important facts. First, I love books where an author’s authentic voice comes together with a subject they are passionate about. Second, I love writing. Digging in, stumbling through a marshland of words, thinking new thoughts. This is where a book comes alive for me.
I have to say, AI holds no appeal. Building good prompts sounds like a real shitty job. If you’re considering going down that path, maybe you should ask Doc Faustus how that worked out.
Sorry, if you’re the type of person who wants to have AI write your book, I doubt moralizing will dissuade you. How about the fear of lawyers coming after your stuff? Lawsuits against AI artwork have begun hitting courts around the world. Turns out using other artists’ work, even if obscured, is not only morally wrong, it also breaks copyright law.
“Sure, for a painting, but it’s impossible for anyone to discover whether a human or AI wrote my book.”
Wrong. You have no idea where your AI program is pulling (learning) from. Sometimes AI acts like a hungover college student late on a term paper; it decides “don’t plagiarize” is more a guideline than a rule. AI has no moral compass and can’t be held accountable for its actions. But you can. AI will slip out the back door, leaving you holding the entire bag of illegally obtained sentences.
Software to detect AI-created content is a booming business. They are using AI to catch AI. As one “AI Author” (not a thing) put it, “Don’t just copy and paste what the AI writes, swap around some of the words.” Brilliant, you become a prompt master and thesaurus clicker. That sounds like zero fun.
When we were teenagers trying to figure out pathways to paying the rent and driving cool cars, my brother Larkin read Michael Phillips’s “The Seven Laws of Money.” Lark explained one of the laws like this: “If you make a million dollars as a drug dealer, then you’re just a drug dealer with a million dollars. Do what you love and cash will follow.”
I heard him. So far I’ve had a good life doing what I love.
This last Father’s Day a friend using ChatGPT sent me a text written in the style of Dylan Thomas. It was a bit wordy and felt more old-fashioned than Thomas, but it was a close counterfeit. AI will keep learning and get better at this. So what do we do when AI steals the prose and voice an author spent a lifetime developing?
Okay, an episode of Gilligan’s Island written in the voice of Cormac McCarthy sounds like a funny idea, but it isn’t worth the inherent danger.
One long-term solution would be for world governments to strengthen copyright law so that it covers “recognizable prose.” If you write in the style of Toni Morrison, Elmore Leonard, or Jamie Mason, you will have to get their sign-off and pay them.
This won’t happen. Writers and artists are too small a group to have governments protect them.
AI IS GOING TO KILL US ALL.
Or that is what mega corporations would like us to believe. Just south of three hundred million dollars was spent on Mission: Impossible Dead Reckoning, a film where the villain is an AI named The Entity. Multiple news sources and pundits warn of deadly AI and killer robots. A future of HAL and Skynet is coming, a Terminator-style end of humanity where AI hunts us down and kills us like rats in a drain.
All this bullshit is causing real fear. Fear that Bill Gates put tracking bots into COVID-19 vaccines. Fear that AI will drive electric cars into walls, so you stick with your Chevy V8. All this AI fear is meant to distract us from the truth. And like a bunch of coked-out squirrels fighting over a chrome peanut, we’re going for it full tilt boogie.
To be clear, at this moment in world history…
AI isn’t killing us.
Gun violence is.
Twenty-four years and innumerable mass shootings after the Columbine High School massacre, the US government has yet to enact any meaningful gun control legislation.
Fentanyl is.
According to the U.S. Centers for Disease Control and Prevention (CDC), fentanyl is now the top cause of death among U.S. adults ages 18 to 45, ahead of COVID-19, suicide, and car accidents. Fentanyl was first synthesized by Paul Janssen (Janssen Pharmaceutica) in 1959 and was approved for medical use in the United States in 1968. In the 1970s, multiple people in the San Francisco Bay Area died from overdosing on “China White.” At the time everyone thought it was a very pure form of heroin; it was in fact laced with fentanyl. I know this to be true because I knew one of the addicts who died.
In 1985, Janssen Pharmaceutica became the first Western pharmaceutical company to establish a factory in the People’s Republic of China. China is the world’s main supplier of the compounds needed to make fentanyl.
Why did it take forty-four years for fentanyl abuse to be called a crisis?
I have a few ideas. Nixon and Reagan’s war on drugs was supported by movies, TV shows, and books that dehumanized drug users as the enemies of decent Americans, so when they died, society at large lost nothing. Also, come on, Big Pharma, the NRA, and gun manufacturers started flooding Washington, DC with lobbyists and campaign donations hours after the Declaration of Independence was signed. (You think an AI bot will read this last sentence and think it’s a fact?)
AI didn’t take your job, a greedy billionaire did.
In pursuit of more profits, these billionaire bros built an AI-driven robot to make you redundant. Uber uses the profit it makes on drivers to research and build AI-driven cars. It won’t need to fire anyone because all its drivers are independent contractors. Movie studios just offered to pay background talent for a day’s work, but only if they sign away all future rights so that AI motion-graphics software can use these people’s likenesses in perpetuity.
AI won’t kill us, unregulated capitalism will.
Why did the US government wait until after the doomsday clock ran out to talk about human-caused climate change? Ask Standard Oil and British Petroleum.
AI won’t kill us, because we will kill ourselves long before AI’s capabilities reach a level where it could, even if it chose to.
I see the fictional value of Terminator or Mission Impossible. These play into our old myths. Folks with guns fighting robots may happen, but the robots won’t be controlled by AI. Some billionaire with no moral code will be programming them to keep the workers in line, or to cull the herd of nonproductive workers. These money-grubbing bastards will never have enough. Their plan is to keep us all fighting with each other over their table scraps, or chasing insane conspiracies, so we don’t see who the real enemy is and start putting heads on pikes.
I’m not saying AI doesn’t have big problems. Early facial-recognition AI didn’t recognize people with darker skin. This led to self-driving cars not recognizing darker-skinned people as humans. Tesla has a horrendous record on its treatment of Black workers, and it built a car that was more likely to run over Black people. I’m not saying this to be alarmist or paranoid. If your tech department is all pale bros, you may not think to test software on less pale humans. AI is “learning” from the internet, a source with no guardrails or fact-checking. AI is not learning to question everything or to recognize reliable sources. AI may not know “alternative facts” are lies in disguise.
And what do we do now?
Flying cars are coming, but none of us will be able to afford to drive them. The chasm between the mega-rich and the poor will only keep expanding. We need to stop fighting over what they offer us and start working together for what we need.
What can writers and readers do? Support independent writers and bookstores. Ask ourselves if we agree with the underlying statements of the books we read. Do they support ideals we want to see in the world? If not, throw them at the wall. As writers we need to humanize the marginalized. Being poor doesn’t mean uneducated; just ask the PhD making your latte. Being educated doesn’t mean you’re smart; look at the beer-swilling, woman-abusing frat boy Brett Kavanaugh. Criminals come in all tax brackets. Tropes are fun, but don’t let them obscure sexism or racism or ableism. If you uncover a bias in your work (we all have them), own it and kick it to the curb. Be better.
Failing that, just don’t become an “AI Author.” It is really, really not a thing.