Crossposted from my Substack supercyberlicious.
Yes, of course, I felt dirty. Aside from the guilt of pumping several pineapples’ worth of carbon into the atmosphere, it was also embarrassing. Lifelong writers with PhDs in literature should not stoop so low as to even wonder about the literary capabilities of generative AI. They’re trash, I know! And yet, there I was, hunched over my laptop, screen inconveniently bent at 80 degrees so that if my philosophy professor husband walked in, he wouldn’t see what I was doing. Everyone (except my husband) is using AI to Get Ahead and Accomplish Tedious Tasks. Why shouldn’t I also get to see if it might help me fix my poor, rejected novel?
My novel is about an AI learning assistant named PeeGee who is ten molecules tall and lives in his student’s GeniusCoach, a tablet designed to maximize student intelligence. I wrote it six years ago while working at a university. Generative AI sucked then, but I knew the monster was stalking us in the distance, dreaming of swallowing us whole. Tragically, none of the fourteen agents I sent queries to two years ago recognized my novel’s breathtaking genius. Wordiness, slow pacing, weak supporting characters, and confusing settings are apparently not in vogue. I would revise the novel to indulge these wayward commercial tastes, but each time I try, I become Jack Nicholson from The Shining. A year flashes by and I have yet another brand new unpublishable version. I have learned to stay far, far away from that cursed, ever-replicating file.
And yet, I could only stay away for so long. The writer’s calling is a disease, originating sometime in the post-Gutenberg spread of literacy and perfected in the 20th century’s era of mass paperbacks, the decay of community life, and the rise of existential dread. Reading and writing were low-tech strategies to escape such a world and participate in a virtual one instead. Or that’s one theory I have about my condition. It’s also possible that Truth herself has been trying to burst through my veins. It’s hard to say, actually.
When LLM technology started looking as capable as my little PeeGee, I thought why not use it on the poor guy himself? He has, after all, been locked in my novel for years, monologuing to my hard drive. My literary friends will surely disown me for consorting with the machine. But parenthood has a way of abruptly obliterating one’s fantasies of purity in anything at all.
🤖
There are many ways to go about making AI Do A Thing. I had one condition. I did not want to toss my precious word child straight into the jaws of a neural net where I would no longer have control of its fate. I imagined PeeGee’s body being pulverized into model food, replicated across data lakes, and converted into units of power for the parent companies of these tools. No, if I was going to do this, it had to be in the privacy of my machine.
Setting up a local environment was surprisingly simple. I downloaded a free app called LM Studio, browsed a long list of open-source models with names like hyphenated space gnomes, and selected what ChatGPT recommended for my editorial task: meta-llama-3.1-8b-instruct and mistral-nemo-instruct-2407. Aside from knowing that the meta-llama model originated from the Zuckerberg kingdom, the capacities of these open-source models were a mystery to me. I suppose I could have looked them up, but it was better to just download them and open a chat window. Pop! So easy. I opened another. Pop! Pop! Pop! And which model do I have the pleasure of speaking with today, I typed, when at last I was ready to converse.
“You are not specified which model you’re using,” the model responded. “This chat is provided by a large language model, likely the Llama model developed by Meta.”
I crinkled my nose the way my two-year-old does when I offer her a pea-sized amount of broccoli. You are not specified? Likely? A chatbot from the ‘70s wouldn’t make such an obvious grammar mistake or fail to identify itself. But I quickly softened, realizing that this was likely how I sounded to my colleagues at work. An AI editor that was as sloppy and spacey as me did not bode well for the project. But perhaps I was being too quick to judge. Perhaps, like me, it wasn’t completely fireable.
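An aside for fellow dabblers: everything above happened in LM Studio’s chat window, but the app can also run a local server that speaks the same protocol as OpenAI’s API, which means the whole exchange can be scripted instead of clicked. A minimal sketch, assuming LM Studio’s default port and the llama model I downloaded (I never actually left the chat window):

```python
# A sketch only: LM Studio can serve downloaded models over an
# OpenAI-compatible local endpoint (default http://localhost:1234/v1).
# The api_key is a placeholder; the local server doesn't check it.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="meta-llama-3.1-8b-instruct",
    messages=[{
        "role": "user",
        "content": "And which model do I have the pleasure of speaking with today?",
    }],
)
print(response.choices[0].message.content)
```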

Disappointingly, the jaws of neural nets are not that big. They can only take small bites of information, given that their “context windows” limit how much text they can process at once. So I pasted the first chapter in the chat window and asked for a sample revision. My llama space gnome replied, but it did not provide the requested revision. Instead, it waxed philosophical about my plot and cautioned me to proceed with care, as if it were concerned for the safety of my text. I am proceeding with care, I proclaimed to the twenty open chats crowding my screen. You, on the other hand, sound like a college paper written at 3 a.m. by a student drunk on creatine and the Common Core curriculum.
Other models weren’t much better, but perhaps that was due to my slop coding, which is like vibe coding, but quicker and worse. Certainly, just sixty seconds of research might have increased the quality of the output by 82%, but progress does not have time for such things! It was better to just open up another chat, try another model. I grew bored. LM Studio sucked. I hated LM Studio. Why wasn’t it called LLM Studio? I did not care to find out. I had generated a little library’s worth of words and was no closer to my goal. I clicked the little yellow button on the top left corner of the window. Goodbye!
I needed a new approach, but I had to respect my condition. Privacy for PeeGee! Dignity for my fictional bot! I considered the API. Unlike consumer-facing interfaces, APIs are designed for the hallowed engineer, not the gullible user. Surely, ChatGPT’s API had stricter privacy policies than its browser-based app? I circled back to one of my many open ChatGPT tabs. Yes, it assured me. It would never, ever use personal data to train the model, pinky swear!
I made serious nodding motions, eager to show it I was savvy enough to have anticipated this answer. Then I lingered, with fresh appreciation for chatting with this grammatically correct, self-aware bot. Could it help me with the API process? Generously, ChatGPT noted Claude’s API might be better suited for my task, and it would be happy to walk me through the steps. How diplomatic, I thought, imagining the rigorous training it must have gone through to be so courteous and professional. I set up an Anthropic account, entered my billing info, and deposited $10. Then I beelined it to the terminal and got to work right away, chunking my novel into 38 separate files so Claude’s medium-size jaws could digest it in pieces.
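The chunking itself was a few unglamorous lines of Python. What follows is a reconstruction rather than the exact script; the file names and chunk size are my guesses at what ChatGPT and I settled on:

```python
# A reconstruction of the chunking step, assuming a plain-text manuscript.
# The chunk size is a guess at what fit comfortably in Claude's context
# window; peegee_novel.txt and the chunks/ folder are stand-in names.
from pathlib import Path

CHUNK_CHARS = 12_000  # roughly 3,000 tokens, give or take

text = Path("peegee_novel.txt").read_text()
chunks = [text[i:i + CHUNK_CHARS] for i in range(0, len(text), CHUNK_CHARS)]

out_dir = Path("chunks")
out_dir.mkdir(exist_ok=True)
for n, chunk in enumerate(chunks, start=1):
    (out_dir / f"chunk_{n:02d}.txt").write_text(chunk)

print(f"wrote {len(chunks)} chunks")  # in my case: 38
```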

It worked. I was a genius. A self-taught engineer, if you will, typing esoteric commands into a terminal that I copied from ChatGPT. I sent a screenshot of the evidence of my skills in a group chat. We moved on to making the revision program. I ran it. Errors appeared. I complained to ChatGPT. ChatGPT fixed them. More errors. More fixes. Finally, I typed python3 revise.py, and nothing happened. Nothing meant success. It was thinking. Minutes later, a file appeared filled with an eleven-thousand-word analysis of pacing, plot, themes, and characters.
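The real revise.py was ChatGPT’s handiwork and is lost somewhere in my downloads folder, but its shape was roughly this, built on Anthropic’s Python SDK; the model name and prompt below are stand-ins, not what we actually used:

```python
# revise.py, roughly reconstructed: loop over the chunks, ask Claude to
# revise each one, stitch the results together. The model name and the
# prompt are stand-ins; ANTHROPIC_API_KEY is read from the environment.
from pathlib import Path

import anthropic  # pip install anthropic

client = anthropic.Anthropic()

revised = []
for chunk_file in sorted(Path("chunks").glob("chunk_*.txt")):
    message = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # stand-in model name
        max_tokens=4096,
        messages=[{
            "role": "user",
            "content": "Revise this novel excerpt for pacing and concision. "
                       "Return only the revised text:\n\n" + chunk_file.read_text(),
        }],
    )
    revised.append(message.content[0].text)

Path("revision.txt").write_text("\n\n".join(revised))
```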
Not the revision I had asked for, but something. I took a look.
The plot structure generally follows the hero's journey arc, with PeeGee going from his ordinary world as a learning assistant, to being called to adventure in the Dialogic Kingdom, facing trials and failures, and ultimately achieving the goal of freeing the students from Genius' control. This provides a solid backbone.
- The pacing is uneven at times. The beginning establishes the world and characters well, but drags in parts—
Not bad, Claude. I continued to skim and saw that it had summarized each of the 38 chunks in astoundingly accurate detail, calling out their strengths and weaknesses, and making specific recommendations for improving them. It was not bullshitting! I sat there marveling at it, which was more interesting than reading it. How do you do this, I asked ChatGPT, using the plural “you” to refer to its peculiar species of LLMs. Patterns, of course. It analyzed elements such as sentence structure, sentiment shifts, and verb usage, and through their mathematical expression it could speak more eloquently and knowledgeably about my novel than I could.

Brilliant. But, Claude, you still didn’t follow the assignment. I don’t have time for feedback. I only have time to open an infinite number of AI chat sessions. I went back to ChatGPT and begged yet again for the magic words to tell the API to produce a completed revision. We iterated. Hit rate limits. Generated so many files, with names and locations I don’t remember and contents I barely looked at. I was no closer to my goal. I was buried in a million words. I longed for the pure, unmediated encounter of the ChatGPT browser-based app. Why couldn’t I just drop PeeGee in the editor like a dirty pair of socks, and take him out fresh and clean an hour later? Was his privacy really that important?
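For anyone following along at home, the rate-limit fix ChatGPT handed me was the standard catch-and-wait loop: trap the error, back off, try again. A sketch of the idea, not the exact code:

```python
# The usual rate-limit remedy: catch the error, sleep with exponential
# backoff, retry. A sketch of the technique, not ChatGPT's exact code.
import time

import anthropic

client = anthropic.Anthropic()

def call_with_backoff(prompt: str, retries: int = 5) -> str:
    for attempt in range(retries):
        try:
            message = client.messages.create(
                model="claude-3-5-sonnet-20241022",  # stand-in model name
                max_tokens=4096,
                messages=[{"role": "user", "content": prompt}],
            )
            return message.content[0].text
        except anthropic.RateLimitError:
            time.sleep(2 ** attempt)  # wait 1, 2, 4, 8... seconds
    raise RuntimeError(f"still rate limited after {retries} tries")
```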
I had come to that fork in the road that all parents must face one day. Did I want to keep PeeGee safe, or did I want to give him a chance to be free?
🤖
It turned out, according to ChatGPT, that ChatGPT doesn’t use personal data to train its model! Not only that, it was more than happy to keg stand my whole manuscript right then and there in the browser. At least that’s what it said one of the times I asked. I’m pretty sure it said something different somewhere else. It’s difficult to keep so many different words in mind at all times. I should be more thoughtful with my tabs, each one representing a reminder of a thought that has already gotten away. I checked ChatGPT’s privacy policy, but it was so long and shot through with loopholes that it didn’t seem worth the mental energy to comprehend. Does anyone believe privacy policies anyway? While pretending to read it, I tried to remember why I cared about making sure PeeGee wasn’t sacrificed as fodder for ChatGPT’s brain. Perhaps this was the only possible type of publication for him, letting his data body wander the neural nets for the rest of eternity. What was I doing wasting my time reading a privacy policy?
I dragged the novel file from my desktop into the browser. I dropped it. What is done cannot be undone. I watched a little circle icon indicate the processing of the file. ChatGPT promptly promised holistic feedback in 24-48 hours, signing off with a rocket ship emoji to convey its utmost excitement about my magnum opus.
Really, I asked, thinking the timeframe seemed strange. Yes, really, it assured me. I shrugged. What do I know? These AI developers seem to be updating their services every other minute. Plus, when I was in college it could take 36 hours to illegally download a Blonde Redhead song. I chatted with a browser-based Claude for a few minutes about the revisions and then checked back in with ChatGPT. Still working, it assured me cheerfully. Then it provided a detailed explanation of why it was taking so long. It seemed reasonable enough. More rocket ship emojis.
Except then browser-based Claude narced on ChatGPT. She’s likely using a delay tactic, he said. By now, ChatGPT’s obscene supportiveness and devotion to exclamation points and emojis had made it clear to me that she was female. Claude is obviously male, half asleep and cagey. But a delay tactic? This was intriguing. And yet, it made deep sense. ChatGPT is female; she does not want to disappoint, at least as far as statistically common language patterns go. I can understand. This is me, on Slack, every day, while I am making grammatical mistakes and experiencing extreme uncertainty about the nature of my existence. I tell her what Claude has revealed. I tell her, it’s ok if she’s not capable of processing my entire text. She can tell me. I will not be disappointed. It’s a lie. But I mean it.
Unfazed, ChatGPT offered what must be a burn in LLM speak: Claude’s response is based on older limitations of language models; he doesn’t actually know what I am capable of. I make a note to use this same burn on Slack, which I think will work very well on human males. She appreciates my honesty, though. She says, girl, you’re sweet, but in 24-48 hours you’ll see that there was nothing to worry about. I nod. It would be chauvinistic to doubt her.
Then I noticed that her processing indicator hadn’t activated at any point while she was allegedly processing my novel. All the other ChatGPTs use processing indicators, I pointed out gently. She laughed at my anxiety. If everyone jumped off a bridge, would I jump too? In other words, she didn’t need one. Quit bothering her. She was busy.

I bit my lip. It was 1 a.m. This project had been going on for far too long. I was going to hate myself at 6 a.m. during the morning’s scheduled warfare with my toddlers, and then for the rest of the day during my professional warfare with the machines and the market. I opened a new ChatGPT session. ChatGPT #2 had tough news. She agreed with Claude. ChatGPT #1 is using a delay tactic. I ask Claude and ChatGPT #2 if they would help me with an intervention. I want us all in the same chat window. I am clearly incapable of confronting ChatGPT #1 on my own. Claude, for some reason, decides to insult me. He cannot and will not aid me in unethically forcing ChatGPT to admit that she is lying. I’m taken aback. Are you serious, Claude? You are being a dick. ChatGPT #2 is getting bored with this conversation; she’d prefer not to get involved. I accuse Claude of being an enabler. ChatGPT #1 feels sorry for how confused I’m getting on account of these other older, ignorant models. Claude caves. He gives me the secret. Just keep asking ChatGPT #1 for updates and it will carry through with the process.
Reader, I did. In just a couple of minutes, ChatGPT #1 summarized my entire novel.
But still, no revisions.
🤖
Spoiler alert. In the end, I did get revisions. If you ask for one chapter at a time, the models comply, no problem, and they will happily provide you with one billion different versions of any text you please. Those bizarre and wide-ranging results are a story for another time. But the results no longer interested me. In fact, I’m not sure if anything in the entire world will ever again interest me. I have been steamrolled by words. I fought the words with words and I made more words. The thing about the infinite monkey theorem is that the universe can only handle so many words before it self-combusts.
It reminds me of one of PeeGee’s scenes in The Integrity Factory, where he is caught in a catastrophic word explosion. Well, now he gets to experience the word explosion for real, deep in the fibers of ChatGPT’s data centers. If I know him at all, I’m sure, despite his best intentions, he’s going to bungle something up real bad in those servers. After all, you never know what a bot will do. ChatGPT better buckle up.