
GadgetBond


A new plugin teaches AI how not to sound like AI

Wikipedia taught us how to spot AI writing, and now AI is using that lesson to blend in.

By Shubham Sawarkar, Editor-in-Chief
Jan 22, 2026, 10:05 AM EST
Illustration by Aleksei Vasileika / Dribbble

If you’ve spent any time online in the past couple of years, you’ve probably felt it: that slightly plastic, overpolished tone that gives away AI-generated text before you even reach the end of the paragraph. It’s the “breathtaking landscapes,” the “in today’s fast-paced world,” and the strangely earnest “I hope this helps!” that all start to blend together into what Wikipedia editors bluntly call “AI slop.” Now, in a twist that feels extremely 2026, a new plugin for Anthropic’s Claude is trying to use Wikipedia’s best guide to spotting AI writing… to help AI hide its tracks.

The tool is called Humanizer, and on paper, its job is simple: strip out the stylistic tells that Wikipedia volunteers have spent years cataloging as giveaways of AI-written prose. Developer Siqi Chen built it as a Claude Code “skill” — essentially a structured bundle of instructions that sits alongside the main system prompt and nudges the model away from those familiar patterns. Wikipedia’s “Signs of AI writing” page, which the plugin leans on, is a sort of field guide built by the WikiProject AI Cleanup group: a volunteer effort that’s been tagging and cleaning up AI-gunked articles since late 2023 and formally published its pattern list in mid‑2025. That catalog is now doing double duty, both as a way to defend the encyclopedia from machine‑written junk and as a cheat sheet for AI systems trying to sound more like us.

To understand why Humanizer even exists, it helps to look at Wikipedia’s guide itself. The document isn’t some secret blacklist of forbidden words; it’s a long, very human, very annoyed tour through all the little habits LLMs fall into when they’re asked to write “helpful,” “engaging” text at scale. Editors noticed that AI-written drafts tend to obsess over why a topic is “important,” burn a lot of space on generic praise, and wrap up with outline-like “Challenges” and “Future prospects” sections that feel more like a school essay than an encyclopedia entry. They lean heavily on vague attributions — “experts believe,” “many people say” — instead of specific sources, and they love puffed-up phrasing like “a pivotal moment in the broader movement” for topics that just don’t merit that level of drama.

The language itself often has a weirdly uniform shine. Wikipedia’s guide calls out lists sprinkled everywhere, overuse of emojis, and those notorious em dashes that became a running joke in AI-detection circles when models started to lean on them as a crutch for rhythm. On the user-facing side, you’ll see chatty little asides that don’t belong in an encyclopedia at all — personal-sounding wrap-ups, “I hope this helps!” closers, or even stray prompts that lazy editors forgot to delete before pasting their AI output into an article. Under the hood, there are more technical signals: odd citation patterns, formatting glitches, and markup quirks that don’t match how humans usually edit Wikipedia. None of these on its own proves a passage is AI-written, but taken together, these signals form a surprisingly practical “vibe check” for slop.
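Taken together, those tells read like a checklist, and the checklist nature is easy to see in code. The toy scorer below is not from Wikipedia's guide or Humanizer; the pattern list and the idea of summing hits are illustrative assumptions, and the guide itself stresses that no single signal proves anything.

```python
import re

# Toy heuristic in the spirit of Wikipedia's "Signs of AI writing" guide:
# count stylistic tells and report rough per-category hit counts.
# These patterns are illustrative examples, not the guide's actual rules.
TELLS = {
    "vague attribution": r"\b(experts believe|many people say|some argue)\b",
    "puffery": r"\b(breathtaking|pivotal moment|rich cultural heritage)\b",
    "essay scaffolding": r"\b(in today's fast-paced world|in conclusion)\b",
    "chatty closer": r"\bI hope this helps\b",
    "em dash": r"—",
}

def slop_score(text: str) -> dict:
    """Return per-tell hit counts; higher totals warrant closer scrutiny."""
    hits = {name: len(re.findall(pat, text, flags=re.IGNORECASE))
            for name, pat in TELLS.items()}
    hits["total"] = sum(hits.values())
    return hits

sample = ("In today's fast-paced world, experts believe this breathtaking "
          "region is a pivotal moment in history — I hope this helps!")
print(slop_score(sample))
```

In practice the Wikipedia editors apply judgment rather than a numeric threshold; the point is only that the guide's catalog lends itself to exactly this kind of mechanical matching.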

Humanizer takes that vibe check and inverts it. Instead of using the list to flag bad content, it feeds those patterns to Claude as behaviors to avoid. In the GitHub description, the plugin is pitched very bluntly: a Claude Code skill that “removes signs of AI-generated writing from text, making it sound more natural and human.” That can be as small as swapping out a cheesy phrase — changing “nestled within the breathtaking region” to a stripped-down “a town in the Gonder region,” for example. Or it might push Claude to ground vague claims in specific details, turning “Experts believe it plays a crucial role” into something like “According to a 2019 survey by…” with a concrete source. The idea is not necessarily to make the writing brilliant, but to make it boring in a very human way: less fluff, fewer tells, more plain information.
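The inversion can be pictured as a before/after mapping. Humanizer works through prompt instructions to Claude rather than literal string substitution, so the regex rewrites below are purely a hypothetical illustration of the kinds of swaps described above:

```python
import re

# Hypothetical before/after pairs mirroring the examples in the text.
# Humanizer itself instructs the model; it does not run find-and-replace.
REWRITES = [
    # Strip puffery down to plain description, keeping the place name.
    (r"nestled within the breathtaking (\w+) region",
     r"a town in the \1 region"),
    # Drop the chatty closer entirely (the real fix for vague attribution
    # would be citing a concrete source, which no regex can supply).
    (r"\s*I hope this helps!?", ""),
]

def humanize(text: str) -> str:
    for pattern, repl in REWRITES:
        text = re.sub(pattern, repl, text)
    return text

print(humanize("Lalibela is nestled within the breathtaking Gonder region."))
```

Note the asymmetry the comment points at: deleting fluff is mechanical, but grounding a vague claim in a real source requires actual research, which is why Humanizer's output is better described as less conspicuous rather than more trustworthy.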

Technically, Humanizer is built as a “skill file” for Anthropic’s ecosystem, which means it’s essentially a Markdown document packed with instructions that Claude has been specifically trained to interpret in a structured way. It doesn’t rewrite the model; it behaves more like a persistent style rulebook bolted onto the prompt, particularly in Claude Code and Claude’s desktop tools. Skills are a paid feature, so you need access to Claude with code execution to actually use it, but once installed, Humanizer can be applied to text so that the model cleans up its own output on the fly. Chen has also set it up so that when Wikipedia updates its AI-signs guide, Humanizer’s instructions can be refreshed automatically, keeping pace with both new patterns and new editorial tricks.
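As a mental model, a skill file is just structured Markdown that a script could assemble, and regenerate whenever its source list changes. The frontmatter fields and wording below are guesses at the general shape, not Humanizer's actual contents:

```python
# Sketch of a Claude "skill": a Markdown file with frontmatter and
# instructions that sits alongside the model's prompt. Field names and
# the pattern list are illustrative assumptions, not Humanizer's real file.
patterns_to_avoid = [
    "puffery such as 'breathtaking' or 'a pivotal moment'",
    "vague attributions like 'experts believe'",
    "chatty closers such as 'I hope this helps!'",
]

skill_md = "\n".join([
    "---",
    "name: humanizer-sketch",
    "description: Remove common signs of AI-generated writing from text.",
    "---",
    "",
    "When rewriting text, avoid:",
    *[f"- {p}" for p in patterns_to_avoid],
])

print(skill_md)
```

Because the file is plain text, the auto-refresh Chen describes is straightforward in principle: re-read Wikipedia's updated guide, regenerate a list like `patterns_to_avoid`, and rebuild the document.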

The backstory behind those patterns adds another layer of irony. WikiProject AI Cleanup — the group whose work Humanizer relies on — is essentially a volunteer patrol for AI slop inside Wikipedia. Founded by French editor Ilyas Lebleu, the project has flagged more than 500 articles for suspected AI contamination and documented what its members were seeing across “many thousands” of AI-generated snippets. Their guide is explicit about its limits: it’s designed for Wikipedia, not the whole web, and it doesn’t tell editors to auto‑ban specific words so much as to treat certain patterns as a prompt for closer scrutiny. It also warns editors not to lean on automated AI detectors, which tend to be noisy and unreliable — a point echoed by tech outlets that have praised the guide as one of the most practical resources for learning to spot AI writing.

There’s a certain irony in all of this. A community that’s fighting hard to keep undisclosed AI content out of an open encyclopedia has inadvertently produced what TechCrunch and others have called one of the best guides on the internet for recognizing AI prose — and that guide is now being used to help AI “blend in.” Ars Technica summed it up with a neat paradox: the web’s best playbook for catching AI has become a manual for hiding it. It’s not just Wikipedia that’s adapting, either. OpenAI, for example, has already had to tweak ChatGPT to tone down certain stylistic tics like its overuse of em dashes once users started using them as a quick shorthand for “this feels like a bot.” What Humanizer does is essentially turn that defensive move into a product feature.

Depending on how you look at it, that’s either clever, worrying, or both. On the optimistic side, tools like Humanizer could nudge AI into writing that’s less saccharine and more straightforward: fewer generic platitudes, more specifics, a tone that doesn’t instantly set off people’s “this is spam” radar. For teams already using AI as a rough draft generator, having a post‑processing pass that aggressively removes the obvious tells could mean less manual cleanup and fewer embarrassing “as an AI assistant” artifacts making it into production copy. You can imagine this being used in documentation, internal tools, or even consumer apps where the goal is simply to have AI that doesn’t sound so robotically earnest all the time.

But the same mechanism makes AI harder to spot in places where disclosure and transparency really matter. Wikipedia itself is the clearest example: the entire point of the “Signs of AI writing” guide is to protect an open knowledge base from being quietly reshaped by models that hallucinate sources, invent facts, and write persuasive nonsense with total confidence. If those models are now being armed with a defensive shield that strips away the exact patterns volunteers rely on, the job of moderation gets significantly harder. And outside of Wikipedia, there’s a broader ecosystem of education, journalism, and policy work that depends on being able to tell when content is AI-written, at least enough to ask questions about how it was produced and what checks were in place.

There’s also a philosophical tension in the name itself. “Humanizer” implies that the plugin makes text more human, but that’s doing a lot of work here. At best, it makes the output less obviously machine‑like by pruning the most recognizable quirks; it doesn’t magically give the model lived experience, judgment, or accountability. You can absolutely have a perfectly “human‑sounding” paragraph that’s factually wrong, ethically questionable, or written in a way that manipulates readers, and a tool that cleans up surface‑level style only makes that more comfortable to read. In other words, Humanizer tackles the aesthetics of AI detection, not the underlying issues that made AI slop a problem in the first place: low‑effort mass production, weak sourcing, and opaque disclosure.

Still, it’s very on‑brand for this moment in AI that a grassroots defense mechanism from Wikipedia has already been looped into the optimization feedback loop for commercial models. Volunteers spend thousands of hours cataloging the ways AI gives itself away; developers wrap that work into prompts to help AI hide better; platforms respond with new policies, guides, and tools to detect the next wave. Humanizer is just one plugin in a growing catalog of Claude skills — GitHub lists full collections of them for planning, documentation, frontend design, and more — but it’s a particularly clear example of how quickly this space is evolving. If nothing else, it shows that the future of online writing isn’t going to be a simple human‑versus‑machine story; it’s going to be a constant back‑and‑forth between people trying to keep text legible and trustworthy, and systems learning, bit by bit, how to sound more like us.

