
GadgetBond


A new plugin teaches AI how not to sound like AI

Wikipedia taught us how to spot AI writing, and now AI is using that lesson to blend in.

By Shubham Sawarkar, Editor-in-Chief
Jan 22, 2026, 10:05 AM EST
We may get a commission from retail offers.

Illustration by Aleksei Vasileika / Dribbble

If you’ve spent any time online in the past couple of years, you’ve probably felt it: that slightly plastic, overpolished tone that gives away AI-generated text before you even reach the end of the paragraph. It’s the “breathtaking landscapes,” the “in today’s fast-paced world,” and the strangely earnest “I hope this helps!” that all start to blend together into what Wikipedia editors bluntly call “AI slop.” Now, in a twist that feels extremely 2026, a new plugin for Anthropic’s Claude is trying to use Wikipedia’s best guide to spotting AI writing… to help AI hide its tracks.

The tool is called Humanizer, and on paper, its job is simple: strip out the stylistic tells that Wikipedia volunteers have spent years cataloging as giveaways of AI-written prose. Developer Siqi Chen built it as a Claude Code “skill” — essentially a structured bundle of instructions that sits alongside the main system prompt and nudges the model away from those familiar patterns. Wikipedia’s “Signs of AI writing” page, which the plugin leans on, is a sort of field guide built by the WikiProject AI Cleanup group: a volunteer effort that’s been tagging and cleaning up AI-gunked articles since late 2023 and formally published its pattern list in mid‑2025. That catalog is now doing double duty, both as a way to defend the encyclopedia from machine‑written junk and as a cheat sheet for AI systems trying to sound more like us.

To understand why Humanizer even exists, it helps to look at Wikipedia’s guide itself. The document isn’t some secret blacklist of forbidden words; it’s a long, very human, very annoyed tour through all the little habits LLMs fall into when they’re asked to write “helpful,” “engaging” text at scale. Editors noticed that AI-written drafts tend to obsess over why a topic is “important,” burn a lot of space on generic praise, and wrap up with outline-like “Challenges” and “Future prospects” sections that feel more like a school essay than an encyclopedia entry. They lean heavily on vague attributions — “experts believe,” “many people say” — instead of specific sources, and they love puffed-up phrasing like “a pivotal moment in the broader movement” for topics that just don’t merit that level of drama.

The language itself often has a weirdly uniform shine. Wikipedia’s guide calls out lists sprinkled everywhere, overuse of emojis, and those notorious em dashes that became a running joke in AI-detection circles when models started to lean on them as a crutch for rhythm. On the user-facing side, you’ll see chatty little asides that don’t belong in an encyclopedia at all — personal-sounding wrap-ups, “I hope this helps!” closers, or even stray prompts that lazy editors forgot to delete before pasting their AI output into an article. Under the hood, there are more technical signals: odd citation patterns, formatting glitches, and markup quirks that don’t match how humans usually edit Wikipedia. None of these on their own proves a passage is AI-written, but taken together, they form a surprisingly practical “vibe check” for slop.
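That kind of combined "vibe check" can be sketched as a small heuristic scorer. To be clear, the phrase list below is my own illustrative sample of the tells described above, not Wikipedia's actual list or anything the Humanizer plugin ships with:

```python
import re

# A few stylistic tells of the kind the Wikipedia guide describes.
# Illustrative sample only, not the project's actual catalog.
TELL_PHRASES = [
    "in today's fast-paced world",
    "breathtaking",
    "i hope this helps",
    "plays a crucial role",
    "experts believe",
    "a pivotal moment",
]

def slop_score(text: str) -> int:
    """Count rough 'AI tell' signals in a passage.

    No single signal proves AI authorship; a high combined count
    is just a prompt for closer human review.
    """
    lowered = text.lower()
    score = sum(lowered.count(phrase) for phrase in TELL_PHRASES)
    # Em dashes also feature in the guide as a rhythm crutch.
    score += len(re.findall(r"\u2014", text))
    return score

sample = ("Nestled within the breathtaking region, the town plays a "
          "crucial role in local trade \u2014 experts believe so. "
          "I hope this helps!")
print(slop_score(sample))  # 5: four tell phrases plus one em dash
```

A real moderation workflow would treat a score like this as a triage signal for human reviewers, exactly as the guide recommends, rather than an automated verdict.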

Humanizer takes that vibe check and inverts it. Instead of using the list to flag bad content, it feeds those patterns to Claude as behaviors to avoid. In the GitHub description, the plugin is pitched very bluntly: a Claude Code skill that “removes signs of AI-generated writing from text, making it sound more natural and human.” That can be as small as swapping out a cheesy phrase — changing “nestled within the breathtaking region” to a stripped-down “a town in the Gonder region,” for example. Or it might push Claude to ground vague claims in specific details, turning “Experts believe it plays a crucial role” into something like “According to a 2019 survey by…” with a concrete source. The idea is not necessarily to make the writing brilliant, but to make it boring in a very human way: less fluff, fewer tells, more plain information.

Technically, Humanizer is built as a “skill file” for Anthropic’s ecosystem, which means it’s essentially a Markdown document packed with instructions that Claude has been specifically trained to interpret in a structured way. It doesn’t rewrite the model; it behaves more like a persistent style rulebook bolted onto the prompt, particularly in Claude Code and Claude’s desktop tools. Skills are a paid feature, so you need access to Claude with code execution to actually use it, but once installed, Humanizer can be applied to text so that the model cleans up its own output on the fly. Chen has also set it up so that when Wikipedia updates its AI-signs guide, Humanizer’s instructions can be refreshed automatically, keeping pace with both new patterns and new editorial tricks.
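For a rough picture of what a skill file of this kind can look like, here is a hypothetical sketch. The frontmatter fields follow Anthropic's published skill format, but the name, wording, and rules below are illustrative, not Humanizer's actual contents:

```markdown
---
name: humanizer
description: Remove common signs of AI-generated writing from drafted text.
---

When editing or generating prose, avoid the patterns below, adapted
from Wikipedia's "Signs of AI writing" guide:

- Do not open with importance framing ("stands as a pivotal moment…").
- Replace vague attributions ("experts believe") with named, dated sources.
- Avoid promotional adjectives ("breathtaking", "nestled within").
- Skip chatty closers ("I hope this helps!").
- Prefer plain statements over outline-style "Challenges" sections.
```

Because the body is just Markdown instructions, refreshing it when Wikipedia revises its guide is a matter of regenerating this file, which is what makes the auto-update setup Chen describes feasible.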

The backstory behind those patterns adds another layer. WikiProject AI Cleanup — the group whose work Humanizer relies on — is essentially a volunteer patrol for AI slop inside Wikipedia. Founded by French editor Ilyas Lebleu, the project has flagged more than 500 articles for suspected AI contamination and documented what they were seeing across “many thousands” of AI-generated snippets. Their guide is explicit about its limits: it’s designed for Wikipedia, not the whole web, and it doesn’t tell editors to auto‑ban specific words so much as to treat certain patterns as a prompt for closer scrutiny. It also warns editors not to lean on automated AI detectors, which tend to be noisy and unreliable — a point echoed by tech outlets that have praised the guide as one of the most practical resources for learning to spot AI writing.

There’s a certain irony in all of this. A community that’s fighting hard to keep undisclosed AI content out of an open encyclopedia has inadvertently produced what TechCrunch and others have called one of the best guides on the internet for recognizing AI prose — and that guide is now being used to help AI “blend in.” Ars Technica summed it up with a neat paradox: the web’s best playbook for catching AI has become a manual for hiding it. It’s not just Wikipedia that’s adapting, either. OpenAI, for example, has already had to tweak ChatGPT to tone down certain stylistic tics like its overuse of em dashes once users started using them as a quick shorthand for “this feels like a bot.” What Humanizer does is essentially turn that defensive move into a product feature.

Depending on how you look at it, that’s either clever, worrying, or both. On the optimistic side, tools like Humanizer could nudge AI into writing that’s less saccharine and more straightforward: fewer generic platitudes, more specifics, a tone that doesn’t instantly set off people’s “this is spam” radar. For teams already using AI as a rough draft generator, having a post‑processing pass that aggressively removes the obvious tells could mean less manual cleanup and fewer embarrassing “as an AI assistant” artifacts making it into production copy. You can imagine this being used in documentation, internal tools, or even consumer apps where the goal is simply to have AI that doesn’t sound so robotically earnest all the time.

But the same mechanism makes AI harder to spot in places where disclosure and transparency really matter. Wikipedia itself is the clearest example: the entire point of the “Signs of AI writing” guide is to protect an open knowledge base from being quietly reshaped by models that hallucinate sources, invent facts, and write persuasive nonsense with total confidence. If those models are now being armed with a defensive shield that strips away the exact patterns volunteers rely on, the job of moderation gets significantly harder. And outside of Wikipedia, there’s a broader ecosystem of education, journalism, and policy work that depends on being able to tell when content is AI-written, at least enough to ask questions about how it was produced and what checks were in place.

There’s also a philosophical tension in the name itself. “Humanizer” implies that the plugin makes text more human, but that’s doing a lot of work here. At best, it makes the output less obviously machine‑like by pruning the most recognizable quirks; it doesn’t magically give the model lived experience, judgment, or accountability. You can absolutely have a perfectly “human‑sounding” paragraph that’s factually wrong, ethically questionable, or written in a way that manipulates readers, and a tool that cleans up surface‑level style only makes that more comfortable to read. In other words, Humanizer tackles the aesthetics of AI detection, not the underlying issues that made AI slop a problem in the first place: low‑effort mass production, weak sourcing, and opaque disclosure.

Still, it’s very on‑brand for this moment in AI that a grassroots defense mechanism from Wikipedia has already been looped into the optimization feedback loop for commercial models. Volunteers spend thousands of hours cataloging the ways AI gives itself away; developers wrap that work into prompts to help AI hide better; platforms respond with new policies, guides, and tools to detect the next wave. Humanizer is just one plugin in a growing catalog of Claude skills — GitHub lists full collections of them for planning, documentation, frontend design, and more — but it’s a particularly clear example of how quickly this space is evolving. If nothing else, it shows that the future of online writing isn’t going to be a simple human‑versus‑machine story; it’s going to be a constant back‑and‑forth between people trying to keep text legible and trustworthy, and systems learning, bit by bit, how to sound more like us.


Disclosure: We love the products we feature and hope you’ll love them too. If you purchase through a link on our site, we may receive compensation at no additional cost to you. Read our ethics statement. Please note that pricing and availability are subject to change.

Copyright © 2026 GadgetBond. All Rights Reserved. Use of this site constitutes acceptance of our Terms of Use and Privacy Policy | Do Not Sell/Share My Personal Information.