# The AI-x Scale for Written Content
This is a draft.
This is a proposed system for ranking the use of AI in articles and documentation. Writers can apply it to their own work, and Internet commenters can use it as shorthand to debate the level of Slop an article contains.
There is a place for AI in building good documentation, and some people use it to polish their own writing. But many readers are put off when they realize they're reading AI-generated content. It's too easy to AI-generate coherent content that looks useful but takes zero effort and adds nothing new to the conversation.
That's not helpful — it's genuinely frustrating. (Joking)
So, for the sake of online bitching and moaning about AI-generated content, I propose the following system to quickly label the level of perceived AI use in blogs and docs. Writers could even use it when publishing, to disclose their use of AI in good faith.
## The Scale
| Label | Description | Notes |
|---|---|---|
| AI-0 | No AI use whatsoever | Human-written from scratch. Not expected for tech docs; usually reserved for prose. |
| AI-1 | AI used for translation | A direct translation of AI-0 content. |
| AI-2H | AI used to (H)elp research, but not to write | You did some digging through AI, then synthesized useful content yourself. |
| AI-2R | AI used to (R)efine author-written text of roughly the same length | The author wrote the content but passed it through AI because they dislike their own writing. |
| AI-5 | AI used to generate most or all content from prompts | Pure slop: articles crafted almost entirely from a basic prompt. |
AI-3 and AI-4 are reserved.
The ordering roughly tracks readers' tolerance for AI use. AI-2 is completely reasonable for modern tech writing, in my opinion. People who struggle with prose may step it up to AI-3. Beyond that, wider audiences may resent wasting their time on the kind of thing they could have easily asked AI themselves.
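As a sketch of what good-faith disclosure might look like in practice, here is one hypothetical footer an author could append to a post. The wording and placement are my own invention, not part of the proposal itself:

```markdown
---

*AI disclosure: AI-2R. I drafted this article myself, then passed it
through an AI assistant to smooth the prose.*
```

A single line like this costs the author almost nothing, and it gives commenters the label up front instead of making them guess.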
