GadgetBond


The creative industry’s biggest anti-AI push is officially here

Hollywood is done negotiating quietly and is taking its AI fight straight to the public.

By Shubham Sawarkar, Editor-in-Chief
Jan 23, 2026, 12:39 PM EST
Illustration by Kasia Bojanowska / Dribbble

Hollywood is kicking off one of its first large-scale, coordinated fights against generative AI, and it’s doing it with a blunt, meme-ready slogan: “Stealing Isn’t Innovation.” At its core, this is less a niche industry squabble and more a high-stakes argument about who gets to profit from human creativity in the age of AI — Big Tech or the people who actually make the stuff everyone loves.

The campaign is being driven by the Human Artistry Campaign, a coalition that’s quietly grown into a serious power bloc made up of unions, rights groups and trade bodies spanning film, TV, music, news, sports and publishing. Its new push, branded “Stealing Isn’t Innovation,” officially launches with a broadside against tech companies that have trained generative AI models on copyrighted scripts, songs, books, images and performances without asking permission or paying for any of it. The group’s message is deliberately simple: if your AI business model depends on vacuuming up copyrighted work without consent, that’s not disruption, it’s theft.

To make that point land beyond policy nerds and copyright lawyers, the campaign has recruited what amounts to an A-list cast for an industry-wide protest. Scarlett Johansson, Cate Blanchett, Joseph Gordon-Levitt, Jennifer Hudson, Kristen Bell, Olivia Munn, Sean Astin and “Breaking Bad” creator Vince Gilligan are among the actors and filmmakers putting their names on the effort. Musicians range from Cyndi Lauper, LeAnn Rimes and Questlove to bands like R.E.M., MGMT, OK Go and OneRepublic, while authors including George Saunders, Jodi Picoult, Roxane Gay and Jonathan Franzen have also signed on. In total, more than 700 creators are backing the campaign at launch, with the organizers saying that number is already climbing.

This isn’t just a Change.org petition with some famous signatures, either. The New York Times — which is itself suing OpenAI and Microsoft over alleged misuse of its journalism to train AI models — has thrown its weight behind “Stealing Isn’t Innovation” with a coordinated ad campaign across print, digital, and social. The Times’ publisher, A.G. Sulzberger, framed it as a fight against “systematic theft” by AI firms that have scraped news, books, music and more to build commercial products without consent. The Human Artistry Campaign’s own messaging leans into the same idea, warning that unlicensed data-mining is not just a legal gray area but “massive and unprecedented theft” that could gut the creative middle class.

If that language sounds familiar, it’s because Hollywood has been building to this moment for a while. During the 2023 writers’ and actors’ strikes, generative AI went from a futuristic talking point to a red-line bargaining issue, with the Writers Guild of America ultimately securing contract language that bars studios from using AI to write or rewrite scripts and prevents AI-generated text from being treated as “source material” that could undercut a writer’s credit and pay. SAG-AFTRA followed with hard-fought protections for performers’ likenesses and voices, after early deals revealed how easily a “background” scan could turn into a studio owning a digital double for life. Those negotiations framed AI as a labor issue; “Stealing Isn’t Innovation” is the next evolution, targeting the upstream pipeline: the training data itself.

What the campaign wants, put simply, is to flip the current default. Today, most big AI companies operate on an opt‑out model: they assume they can train on your work unless you jump through hoops to say no, often using fragmented or barely public processes. The Human Artistry Campaign is pushing for the opposite — an opt‑in, licensing-first system where companies must explicitly secure rights before ingesting creative works into training data. That includes concrete asks: legally enforceable licensing frameworks, the right for artists and rights holders to refuse being used as training material, and stronger enforcement against deepfakes and AI impersonations that muddy the waters between real and synthetic content.

Underneath the rhetoric, there’s a cold economic argument. The campaign frames unlicensed AI training as a direct attack on one of the U.S.’s most successful export industries: entertainment and media. The creative economy supports millions of jobs — not just household-name stars, but staff writers, session musicians, makeup artists, animators, journalists and countless below‑the‑line workers whose livelihoods depend on continuing demand for new, original work. The fear is that if AI companies can saturate the market with AI-generated “content” trained on existing material — and do it at scale and near-zero marginal cost — it strips away both the financial incentive and the negotiating power for human creators.

There’s also a culture war angle baked into the messaging. The Human Artistry Campaign repeatedly calls out “AI slop” — a term that’s become shorthand for low-effort, algorithmically generated junk flooding feeds and recommendation systems. Creators worry that the same platforms that already amplify engagement bait will happily serve up slightly‑remixed, barely‑original AI music, video and writing trained on their work, pushing their actual output further down the algorithmic stack. When they talk about defending “original thought and expression,” it’s partly about money, but it’s also about not letting the culture be defined by statistically plausible mashups of what already exists.

Of course, Hollywood’s relationship with AI is not purely adversarial. The industry is simultaneously experimenting with what sanctioned, paid‑for AI partnerships look like. Disney is the most visible example: in December, the company signed a three-year deal reportedly worth around $1 billion with OpenAI, aimed at bringing some of its iconic characters into the video-generation platform Sora. That deal came after a wave of anger when Sora 2.0 was shown generating video featuring recognizable characters from franchises like “Bob’s Burgers,” “Grand Theft Auto” and “SpongeBob SquarePants,” even though rights holders had not signed off. By inking a licensing agreement, Disney arguably legitimized OpenAI’s tech — while also signaling that if you want to play with its IP going forward, you’re going to pay.

That “two tracks at once” dynamic is important: on one side, artists and unions are trying to put hard legal and ethical rails around AI’s use of existing works; on the other, studios and conglomerates are cutting deals that could normalize AI use as long as the check clears. The Human Artistry Campaign leans into this tension but doesn’t reject AI outright. Its line is that there is a “better path” where AI can develop rapidly, but only when companies license content, share revenue, and collaborate with the people whose work powers their models. That pitch is designed to appeal to lawmakers and regulators as much as the public: this isn’t about banning AI, it’s about insisting that the tech sector follow the same copyright rules everyone else already lives under.

What makes “Stealing Isn’t Innovation” feel like a turning point is the way it connects dots across industries that don’t always move in sync. In the past two years, authors have sued over AI-generated “shadow libraries” trained on pirated books; visual artists have taken on image generators; news organizations from the Times to local publishers have started pushing back on AI firms lifting their archives. Now, those fights are being bundled into a single narrative that casts generative AI as a kind of industrial-scale copy machine, and creators as the ones being copied without being asked. The campaign’s website and social content are clearly built to be shareable, with posters, slogans and calls to action that can be screenshotted and circulated beyond trade press and policy circles.

For everyday viewers and listeners, the stakes might not be obvious yet; your favorite show still drops on time, your playlists still update, your feeds still scroll. But the people behind those stories are sending a pretty unambiguous warning: if AI development keeps leaning on “ask forgiveness, not permission” when it comes to training data, the pipeline of new, weird, risky human-made art gets thinner. One of the campaign’s bluntest lines sums it up: taking creators’ work without consent “is not innovation. It is not progress. It is theft — plain and simple.”

In other words, Hollywood’s AI fight is shifting from the picket line to the narrative battlefield. Legislatures, regulators and courts will ultimately decide how far AI firms can go in scraping the world’s culture to feed their models, and whether artists have any practical way to opt out. “Stealing Isn’t Innovation” is an attempt to set the terms of that debate early — not just in dense policy filings, but in the kind of clear, emotional language that tends to win when history looks back.

