About a year and a half ago, I wrote about my kid’s experience with an AI checker tool that was pre-installed on a school-issued Chromebook. The assignment had been to write an essay about Kurt Vonnegut’s Harrison Bergeron—a story about a dystopian society that enforces “equality” by handicapping anyone who excels—and the AI detection tool flagged the essay as “18% AI written.” The culprit? Using the word “devoid.” When the word was swapped out for “without,” the score magically dropped to 0%.
The irony of being forced to dumb down an essay about a story warning against the forced suppression of excellence was not lost on me. Or on my kid, who spent a frustrating afternoon removing words and testing sentences one at a time, trying to figure out what invisible tripwire the algorithm had set. The lesson the kid absorbed was clear: write less creatively, use simpler vocabulary, and don’t sound too good, because sounding good is now suspicious.
At the time, I worried this was going to become a much bigger problem. That the fear of AI “cheating” would create a culture that actively punished good writing and pushed students toward mediocrity. I was hoping I’d be wrong about that.
Turns out … I was not wrong.
I’m accused of being AI on other sites simply because I regularly construct complex sentences – and use em dashes.
This predates the AI bubble. There used to be a really common “plagiarism detector” (something like CheckMeIn?) that would generate a “similarity score” against a database of literature. Institutions were welcome to set their own thresholds for what they considered too similar. I hit the threshold multiple times with completely original work, simply by using language that was too literary or formal.
Mind you, I had been accused of plagiarism by teachers before those tools existed, for much the same reason, based only on vibes. So maybe the tools were a step up, since students could at least run their work through them ahead of time.
There was a news story around that time about someone taken through disciplinary proceedings after scoring close to 100% similarity on the tool, only to eventually discover that their own earlier essays had been included in the database.