How to Spot AI-Generated Work

by Mousecore


Last Updated: 8/23/2025 by Mousecore

Foreword

The omnipresence of AI has become a fact of life. A side effect of large language models like ChatGPT being accessible to anyone with an Internet connection is a substantial uptick in AI-generated content. Each upgrade to these models inevitably improves the quality of what they put out, meaning the newer and more updated a model is, the more likely it is to spit out something a lot closer to human speech patterns than its previous iterations. With that being said, at the time of writing, there are still key distinctions between human-written work and the work of an AI model.

As a disclaimer, the tips I explain in this article are not meant to be 100% definitive evidence of AI usage. They are only meant to describe common features of AI writing. There are plenty of manmade works written in a fashion similar to AI (which is the sort of data the AI is trained on in the first place), and there are many people online who are learning to write from AI models. Please be very diligent with your analysis if you are going to accuse someone of using AI for their work. And if you're right, you have my express permission to kill them. Or, y'know, you can contact an admin and make it their problem.

The more data an AI model processes, and the more upgrades it receives from its developers, the better it gets at simulating the work of a real human. The AI tip-offs that come to mind now (excessive em dash usage, overuse of certain words like "delve," rule of threes, etc.) may not apply a few years down the line. For that reason, I want to avoid relying on those tells in this article, mainly because I don't want to update this every year with new tips, but also because there are much more consistent AI writing patterns across different models that are worth noting. Instead, I want to focus on three overarching components of AI writing that describe patterns in its content generation rather than specific artifacts like em dashes or favorite words.

The Three Tenets of AI Writing

1. Nondescript descriptions. An oxymoronic-sounding tenet to start with. AI enjoys writing "fluff," trying its best to put meat on the bones of whatever you prompted it to create. But the problem with AI-generated "fluff" is that it lacks any of the personality of human writing. It will come up with details to explain the actions of a protagonist, describe a setting, or argue for or against a given position (in non-fiction works). But these details are so generic that the reader comes out feeling like they didn't learn anything new at all. AI writing will often tell, but very rarely show. Often it forgoes important dialogue entirely, writing what is basically a flowery summary of a conversation. The empty page is a canvas for the writer, but the colors with which to adorn the canvas are derived from human experience - unique, raw, messy. A human writer may intentionally bleed inspiration from their favorite works into their story, making references to other media or to real-world events that they relate to or are impacted by. Without any experiences of its own, AI has nothing to inspire it, so it resorts to meaningless description. If each paragraph reads like you're running in place and not getting any new information, it's not a good sign.

2. Sterile style. AI writing is perfectly consistent and, in its own way, trimmed of all the fat that human writers tend to leave at least a little of in their work. You will find no flaws in its spelling or grammar, no missing punctuation or letters, little to no fluctuation in tone or perspective. The piece will often maintain the same relative pace all throughout. This does not mean that every squeaky-clean work you read is AI per se, but a human writer will make a mistake in their story, especially the longer or more complex it is. Yes, even the successful ones with millions of copies sold. They will fudge up pacing at some point, misspell a word or five or ten, or leave in details that should've been abandoned in the first draft. A human writer may even make these mistakes on purpose and break conventional writing rules if it aids the tone of their story. AI will chug along, printing out perfect paragraph after perfect paragraph, and will forgo a unique voice to maintain this perfection.

3. Circular writing. Not to be confused with "circular storytelling" in this case. AI is a slave to its prompt. It will repeat the details you feed it ad nauseam and hyper-fixate on the prompt's motifs. There is zero way for it to avoid this since, as Tenet #1 describes, it can't base the rest of the story on any real-life experience, so it sticks to what it has available: the prompt and cached data about other "similar" stories (and it can certainly be wrong about what is "similar" to your story). It may base itself on works it has collected data from, even if it claims to be a 100% unique story. If you tell the model to write a short story about a hardworking girl who learns to let loose, it will never let the reader forget how practical and dutiful this protagonist is. You'll get paragraph after paragraph about these traits, but just like I said in Tenet #1, it will eventually start running in place because it doesn't know where to go next or how to logically get from a prompted Point A to Point B. It tightens its scope over the exact words of the prompt - so tight that it doesn't let the rest of the story breathe or flow correctly, or rather, humanly. The piece will seem like it exists solely to support one or two morsels of ideas rather than a cohesive, multilayered concept. Because it's so focused on the prompt, the rest of the work will seem decidedly off, resorting to logical confusions, turns of phrase that don't make sense, and an unclear sense of direction. It will literally write itself in a circle, hence the tenet name.

And as a bonus fourth tenet: the AI's prompt, or its response to the prompt, is actually in the work itself. Yes, some people are that fucking stupid and leave the "Certainly! Here's the full chapter two of a fantasy epic where..." bullshit in the writing. If that doesn't convince you that you're reading a load of AI-vomited shit, I don't know what will.

The Tenets in Practice: Fiction Written by AI

AI-written fiction can be a bit harder to catch than other forms of AI-generated writing. You'll likely be able to smell the AI dripping off someone's forum post, school assignment, or work message: the work might be done in a completely different way than they would reasonably know how to do or choose to do, and the tone can be the polar opposite of how they usually talk in real life or write on that same platform. AI fiction is different because it actively tries to avoid speaking in that neutral, lifeless prose. If the "author" hasn't published any stories of their own before, it might be hard to distinguish their unique writing voice from the voice of an AI trying to emulate countless authors fed into its model and spat out all at once.

To better illustrate the Three Tenets, I wanted to put AI to the test myself. I employed four different models for the sake of this article: ChatGPT 5 (the newest at the time of writing), ChatGPT 4o (a still-used older model), Claude, and Google Gemini (AKA that annoying fucker that pops up at the top of the results page whenever you search anything). I gave them all the same prompt: write a 1,000-word story about a valiant knight on a quest to save his village. Then I saved each result and analyzed my findings.

The first thing I noticed was that of the four AI models I used, three named the protagonist Alaric. Alaric seems to be a Warhammer character based on some basic research, so that's likely where that detail is coming from (a bit of Tenet #3 in action). The only exception was Gemini, which went with a different but equally generic-sounding name for its fantasy knight.

There were two camps in terms of prose, but both of them were victims of Tenet #1. The first camp, ChatGPT 5 and Claude, went for flowery, action-focused prose. These models delivered stories that were firmly grounded "in the moment" of the action and followed their Alarics closely. Still, the attributes of the stories were surface-level even when fluffed up by that flowery prose. ChatGPT struggled to describe the Big Bad and how exactly it threatened Alaric's village ("She told him of the Shadow" was literally how it handled this exposition), and Claude wore its message on its sleeve (outright stating "he had learned then that courage was not the absence of fear, but the decision to act in spite of it" when talking about a past conquest of Alaric's, telling rather than showing numerous times over and giving the reader no sense of accomplishment or growth on Alaric's behalf). The AI models write as if they have some grasp of common symbolism, antagonists, and themes in fantasy, but because they haven't directly experienced any of these things (because they quite literally cannot), it all falls unnaturally flat or sounds like a bizarre retelling of stories you've heard before.

The second camp, ChatGPT 4o and Gemini, took a different route. Rather than focusing on Alaric's "personal" journey to save his village, these two models wrote the story in a more expositional fashion, as though it were a legend passed down through generations or a really boring Wikipedia article. They spent a painstaking amount of time establishing a medieval fantasy setting, going into depth about the village, the kingdom itself, and Wikipedia-like descriptions of the enemies the protagonist faced. They threw in a generous portion of random-fantasy-name-vomit (The Kingdom of Eldraith! The Sunstone sitting in the Heart of the Dragon's Tooth sitting in the Crying Stone! No, I'm not kidding, that second one was what the AI seriously came up with!) that meant absolutely nothing to the reader and was just there to imitate practically any fantasy story it managed to get its data-hungry paws on (Tenet #3!). In this case, Tenet #1 shows up not as whimsical, colorful, but ultimately vapid prose (the cotton candy of writing, if you will). Here, it's an attempt to make the world seem much more lived-in and expansive, even if none of it makes sense at all and a lot of these details are never revisited, thus holding zero weight for the reader.

In both cases, the writing in terms of SPAG (spelling, punctuation, and grammar) and consistency was very, very clean. This is something I covered in Tenet #2 - even if the content of the writing doesn't make very much sense or reads as though it's something you've heard a million times over, at first glance it looks like a decently written story with little to no grammatical errors. All four maintained the exact same tone throughout their stories with zero fluctuation. It's only once you actually read through a few sentences that you realize a whimsical little short story about a knight and his brave journey is one massive, piping-hot AI nothingburger.

Moving Forward

The best way to steer clear of the AI-generated fiction trap is to stay vigilant and up to date. As AI models "smarten up," it's important to be aware of what new calling cards they've picked up over time and where people are using them next. I've intentionally avoided mentioning tip-offs like the punctuation it overuses, the phrases it misuses, specific framing patterns, etc., because these are likely to phase out over time as AI models advance and learn not to rely on those things. But alongside their development will inevitably come new writing, grammatical, or logical quirks that will become the "definitive" AI markers for a while.

So how can you keep up? Heed the Three Tenets, the rules near-guaranteed to stay the same across all of AI's iterations. Try having a model generate a short story for you so you can practice. Or, if you really want to avoid AI entirely, revisit a post or story that was confirmed to be written by AI (or that you strongly suspect was). Use the Three Tenets as your little handbook and analyze that piece. Are the details sprinkled throughout furthering your understanding of the action and the characters, or do they seem impersonal and detached? Is the writing a little too clean and consistent for what it's supposed to be? Do you find that the writing is running in place and that the logic is feeding itself into a loop, not really going anywhere or even making sense to begin with? As another example you can look through, I enjoyed this YouTube video that compares an AI vs. non-AI short story. If you'd like, check it out with the Three Tenets in mind and see if they help you discern which is manmade and which isn't.

Finally, avoid succumbing to the doomerist thought that eventually you won't be able to tell AI from human work and therefore it's not even worth trying to tell the difference. An AI model will intrinsically be incapable of producing things that require real-world experiences and connections, including impactful works of art. Understanding the patterns of AI will yield far greater reward than merely checking for em dashes or whatever the current models are hung up on. Eventually you'll be able to spot an AI piece of fiction even from its opening sentence.

And of course, if you do use AI for your writing: it will not be good, and you will be rightfully shit on for it. Stop being a pathetic loser and go exercise that pea brain your ancestors have been perfectly fine with using for tens of thousands of years. You are on a writing website. Be creative and WRITE!