Don't Write Like AI: 10 Takeaways from Wikipedia's Signs of AI Writing

@blake.stockton

Which AI writing sign are you surprised to see? 😬 #wikipedia #ai #artificialintelligence #chatgpt #writer #writers #writertok #writersoftiktok

♬ original sound - Blake

*There are three more TikTok vids at the end of this post

Wikipedia saw so much AI-generated writing in its articles that editors launched an “AI cleanup” project.

What’s interesting is that the goal of the project is not to ban AI-generated content, but to make sure it meets quality standards. Editors are helping authors revise AI-written text so it fits Wikipedia’s tone and guidelines.

Before we get into the signs of AI writing (what I call “tells”), here are a few things to keep in mind.

A few tips on spotting AI

  1. Just because a phrase sounds like AI doesn’t mean AI wrote it. LLMs are trained on human writing, so sometimes people naturally write in ways that resemble AI output.
  2. AI detection software is also unreliable. It often produces false positives, which is why human judgment matters more than a score.

Here’s the biggest takeaway from Wikipedia’s new page:

"AI content can have a promotional tone, reading like a tourism website."

I’ve found this to be just as true in B2B writing. AI tends to manufacture significance, then promote it with intensity. Wikipedia editors note that this promotional tone often slips through, even with careful prompting.

💡
Tip: Don't try to prompt away every possible tell. It works better to prompt out only a few things, such as em dashes or parallel negation structures. Too many restrictions make the result stiff and generic, which feels even more artificial.

Let's get into the signs.

1. Inflated symbolism and meaning

AI often exaggerates the importance of a topic by connecting it to broader themes. 

Wikipedia notes that these phrases show up in predictable ways, so editors should consider rewording or removing them.

Phrases to revise or cut:

  • is / stands as / serves as a testament
  • plays a vital / significant role
  • underscores its importance
  • continue to captivate
  • leaves a lasting impact
  • watershed moment
  • key turning point
  • deeply rooted
  • profound heritage
  • steadfast dedication
  • stands as a
  • solidifies

2. Promotional language (around cultures)

AI often writes about cultural heritage with an admiring tone that reads more like travel copy than encyclopedia writing.

Phrases to watch for:

  • rich cultural heritage
  • rich history
  • breathtaking
  • must-visit
  • must-see
  • stunning natural beauty
  • enduring / lasting legacy
  • rich cultural tapestry

3. Editorializing

AI often adds interpretation or opinion, even when a neutral tone is requested.

Common phrases:

  • it's important to note / remember / consider
  • it is worth
  • no discussion would be complete without
  • in this article

4. Overuse of conjunctive phrases (connecting phrases)

Transitions help writing flow, but AI relies too heavily on a small set. Some of these also suggest synthesis or analysis, which does not fit Wikipedia’s neutral tone.

Phrases to watch:

  • on the other hand
  • moreover
  • in addition
  • furthermore
  • however
  • in contrast

5. Negative parallelisms (negation)

Wikipedia highlights a structure AI loves: “It’s not X, it’s Y.” This phrasing creates a contrast that can feel dramatic or persuasive.

Negation is the most recognizable AI writing tell. Just a few days ago, I read a prominent newsletter that used it five times. Yikes.

A great example from Wikipedia is:

"It's not just about the beat riding under the vocals; it's part of the aggression and atmosphere."

Most people can spot negation within a single sentence, but the pattern can also stretch across two:

"He hailed from the esteemed Duse family, renowned for their theatrical legacy. Eugenio's life, however, took a path that intertwined both personal ambition and familial complexities."

This pattern shows up so often that it’s one of the few worth prompting out. Gen AI will sometimes even use it in back-to-back sentences.

6. Superficial analyses

AI writing tends to tack on commentary or analysis, often about a subject's significance, recognition, or impact.

This usually shows up at the end of a sentence with an -ing word, which can feel like empty analysis.

These -ing words often introduce unnecessary opinions. For example:

"Consumers benefit from the flexibility to use their preferred mobile wallet at participating merchants, improving convenience."

Words to watch:

  • ensuring...
  • highlighting...
  • emphasizing...
  • reflecting...

7. Vague attribution of opinion

I've personally noticed this issue a lot, especially when you attach deep-research documents to your prompt.

Attributing opinions or claims to a vague authority is a practice called weasel wording. Yes, that's the real term lol.

Words to watch:

  • Industry reports
  • Observers have cited
  • Some critics argue

These phrases suggest authority without clear sources. The writing feels informed, but the support often comes from a single source, or none at all.

8. Excessive use of boldface

AI sometimes uses bold formatting in predictable ways, often to highlight product names or sections. Wikipedia discourages this formatting outside of specific use cases.

In general writing, bold text should feel intentional. Too much bolding creates a pattern that readers associate with AI.

9. Overuse of em dashes

The em dash is a hot topic among writers. Most are annoyed that ChatGPT has made every em dash look like a sign of AI writing.

Wikipedia takes a more measured view: AI chatbots simply use the em dash more frequently than most editors do.

Here is the key phrase, and what you should look out for in your writing:

"AI especially uses em dashes in places where human writers are much more likely to use parentheses or commas."

So, if you're using AI to write for you, swap em dashes for parentheses or commas where they fit more naturally.

10. Bullet points with bold titles

Bullet points with bolded titles are a common ChatGPT structure, but virtually nonexistent on Wikipedia. The bolded phrase is often just reworded in the sentence that follows, which adds little value.

For example:

  • Scalability: The system is designed to scale easily across different use cases.

When this type of bulleting is done consistently, it's an AI tell.

Final thoughts

Wikipedia’s “AI Cleanup” project helps editors recognize and fix AI-generated writing.

It also highlights something important: AI writing tools can help, but the output still needs a human touch.

By learning to spot and revise these tells, we can improve our writing drastically.
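If you want an automated first pass over a draft, the tells above can be turned into a simple checker. Here's a minimal sketch in Python (the phrase list is abbreviated, and the negation regex is my own rough heuristic, not anything from Wikipedia's page):

```python
import re

# A few of the tell phrases from the sections above (abbreviated).
TELL_PHRASES = [
    "stands as a testament",
    "plays a vital role",
    "rich cultural heritage",
    "it's important to note",
    "enduring legacy",
]

# Rough heuristic for "It's not X, it's Y" negative parallelism.
NEGATION_PATTERN = re.compile(
    r"\b(?:it'?s|is) not (?:just |only |merely )?.*?[,;] (?:it'?s|but)\b",
    re.IGNORECASE,
)

def find_tells(text: str) -> list[str]:
    """Return a list of AI-writing tells found in the text."""
    hits = []
    lowered = text.lower()
    for phrase in TELL_PHRASES:
        if phrase in lowered:
            hits.append(f"phrase: {phrase}")
    if NEGATION_PATTERN.search(text):
        hits.append("structure: negative parallelism")
    if text.count("—") > 2:  # several em dashes in one short passage
        hits.append("punctuation: frequent em dashes")
    return hits

draft = ("The museum stands as a testament to the city's rich cultural "
         "heritage. It's not just a building, it's a symbol.")
print(find_tells(draft))
```

A checker like this is only a nudge, of course: as noted at the top of the post, humans write these phrases too, so every hit still needs your judgment.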

More TikTok vids:

@blake.stockton

Wikipedia ain’t happy about AI writing signs in their articles #ai #wikipedia #chatgpt #artificialintelligence

♬ original sound - Blake
@blake.stockton

Part 3 | Wikipedia’s signs of AI writing: The main takeaway from the examples is the use of superficial phrases with little info depth. Because ai is an okay writer, humans will need to step in to make the writing good, possibly even great. We must include our depth of knowledge and experience. #ai #artificialintelligence #wikipedia #chatgpt #writing #writer #writertok #writersoftiktok #socialmediamarketing #copywriting

♬ original sound - Blake
@blake.stockton

Part 4 | Wikipedia’s editors have flagged common AI phrases. The phrases themselves are fine, but overusing them can make writing feel like AI. And remember, Wikipedia aims for a neutral tone. AI often is promotional in tone or tries to add meaning too often or where it’s not needed. #ai #artificialintelligence #wikipedia #writertok #writersontiktok #socialmediamarketing #author #authorsoftiktok #college #collegestudent #studentlife #students #collegetips

♬ original sound - Blake