When AI Bites Back

February 17, 2026


In my final year of undergrad studies, unsure what to do next, I did what many equally directionless classmates did: I considered applying to law school. There was one major stumbling block, though. It required writing the LSAT, the Law School Admission Test.

I can still picture the morning a group of us gathered in an auditorium at Dalhousie to sit the exam. The first thing I asked was where I could smoke. The proctor looked at me like I’d suggested lighting up a joint. Smoking anything was banned. That’s when I knew I was doomed.

When the results arrived by mail, I was half surprised they hadn’t stapled on a note that read: How did you even manage to find the auditorium? Needless to say, I never applied.

That mix of self-delusion and impending failure came to mind recently while chatting with my AI editor (yes, we talk most mornings after I’ve drafted something for my new book). I’d been reading the never-ending supply of Substack essays comparing AI to a golden retriever, always eagerly slobbering compliments. I knew that what AI gives you depends entirely on how it’s prompted, so I decided to test that theory. I asked for negative feedback on a draft.

In essence, it told me to give up writing. Not quite, but close.

Lately, the Internet has been overflowing with columns about AI and writing. Not just the ethics of using it (I get that debate), but something else: purity tests. The New York Times, for instance, ran a piece recently about romance writers using AI, and the resulting meltdowns on Substack and elsewhere were mind-boggling but hardly surprising.

The argument goes something like this: writing is a sacred, solitary act. Using an algorithm desecrates the altar of pure creativity. But purity tests have never really been about purity; they’re about belonging. Like all ideological gatekeeping, they’re designed to separate the “authentic” from the “impure.”

The irony is almost too rich. Some of the same writers now demanding ideological conformity over AI (and other political hot potatoes I’m not touching today) once decried censorship in all its forms. Now they’re calling for blacklists of writers, of technologies, of methods. It’s as if creative freedom came with an expiry date the moment someone else chose to use it differently.

Personally, I don’t think using AI is dishonest unless you publish something you had absolutely no hand in writing. The honest user, and I count myself among them, asks AI to critique, not flatter. To slice through fog, not add to it.

That’s why I keep asking Perplexity for negative feedback. Sometimes it makes me furious, or briefly certain I have no future as a writer. Sometimes I argue. But I always come away sharper, clearer, and a little more aware of my own lazy habits of thought.

If you want AI to teach you something, you have to stop begging it to pet you. Let it bite.

The more time I spend writing with AI, the more I’m convinced the future of creativity isn’t us versus the machine. It’s about learning how to argue with it, edit beside it, and keep rewriting. Like any good collaboration, it’s awkward and occasionally painful, but also incredibly useful and oddly liberating. Compared to previous projects, my process has accelerated dramatically because I get instant feedback, both good and bad.

I can’t predict where any of this will lead in the writing world. But in my own practice, I’ve never felt more curious, more reflective, or more genuinely open to asking for an honest opinion, even if it’s coming from a machine.

And while I never did become a lawyer, I finally found something that argues back.