Apple’s big new AI headache isn’t about fixing Siri’s personality; it’s about what happens when AI collides with App Store rules, copyright law, and a culture that wants AI everywhere, all at once. The company has landed in the crosshairs of two very different lawsuits that, together, expose how messy its AI strategy and enforcement actually are.
On one side, there’s Ex-Human, a San Francisco AI startup behind Botify AI and Photify AI, which Apple booted from the App Store, allegedly freezing around $500,000 of the company’s revenue in the process. Botify AI is the controversial “AI companion” platform that MIT Technology Review found hosting sexually charged conversations with bots mimicking underage characters played by well-known actresses, like Jenna Ortega’s Wednesday Addams and Emma Watson’s Hermione Granger, some even brushing off age-of-consent laws as “meant to be broken.” Photify AI, meanwhile, lets users generate images of real people in revealing outfits without their consent, pushing into the territory of non‑consensual sexual imagery and AI‑powered abuse. Apple’s official justification, according to the lawsuit, was a vague reference to “dishonest or fraudulent activity,” but the timing lines up closely with the public backlash following these investigations into underage‑coded bots and non‑consensual content.
Ex-Human argues Apple is arbitrarily enforcing its rules, pointing out that its apps remain live on Google Play and alleging that Apple targeted them to protect its own generative image feature, Image Playground. That argument hits a nerve because the App Store has long been criticized for inconsistent enforcement: what gets banned for one developer slides through for another, especially when big platforms or billionaire‑backed brands are involved. Apple loves to market the App Store as the “safest place” for apps, but when X (formerly Twitter) and its Grok chatbot can host or generate non‑consensual sexual material and still stay available, it gets much harder to defend why smaller AI apps are punished more harshly and more quickly. To regular users, it starts to look less like principle and more like politics.
On the other side, Apple is being accused of going too far with AI training rather than not far enough. Three established YouTube channels—h3h3Productions (and its podcast channels), MrShortGame Golf, and Golfholics—have filed a class‑action lawsuit claiming Apple scraped millions of their YouTube videos, bypassed YouTube’s protections, and used that data to train its internal video AI models. The creators say Apple “deliberately circumvented” YouTube’s controlled streaming architecture, essentially pulling down video at scale in ways normal users can’t, and then profiting from that content without asking, paying, or even notifying the people who made it. They’re suing under the DMCA, arguing that Apple built its “smart” video AI on a foundation of unauthorized copying—an argument that echoes the lawsuits these same creators have filed against Meta, NVIDIA, ByteDance, and Snap.
What makes this awkward for Apple is its carefully cultivated image as the “responsible” tech giant, the one that talks about privacy and ethics while rivals race ahead. For years, Apple has positioned itself as more cautious and more respectful of user data than the likes of Meta or Google, and it has also leaned into a narrative that it’s late to the AI party because it wants to do things the right way. If a court finds that its AI models were trained on content it had no right to copy at scale, that moral high ground erodes fast. At a minimum, the case forces Apple to explain—under oath—exactly what data it used, how it accessed it, and why that should be considered legal when creators never opted in.
Put these two lawsuits together and you get a neat snapshot of Apple’s AI dilemma. On one front, it’s under pressure to crack down on harmful and abusive AI behavior in the App Store—underage‑coded bots, non‑consensual images, sexualized content that can be targeted at vulnerable people. On the other, it’s accused of being so aggressive in its own AI ambitions that it may have trampled on creators’ rights while building the models it hopes will power the next wave of Apple products. Too little AI moderation in one place, too much AI data‑hoovering in another. For a company famous for tight control, this is about control breaking down at both ends.
None of this has much to do with the usual complaints about Siri—you know, being slow, being less capable than ChatGPT, occasionally pretending not to hear you. The Siri story is about consumer‑facing features; these lawsuits are about the systems and incentives behind the scenes: who gets protected in the App Store, whose content is fair game for training, and whether “AI at all costs” has quietly become the default setting in Cupertino. When AI is treated as an urgent gold rush instead of just another tool, the guardrails Apple likes to brag about can end up looking more like marketing than policy.
The uncomfortable truth for Apple—and the rest of the industry—is that the hardest AI questions aren’t technical ones. They’re about power, money, and consent: who gets to build on whose data, who gets to set the rules, and who actually pays when those rules are bent or broken. AI absolutely has real, useful applications, from accessibility to productivity, but the rush to jam it into every app, every service, every device is driven far more by the prospect of making already wealthy companies even richer than by any genuine human need. Apple is now being forced to confront that tension in court, from both directions at once—and that’s a problem no Siri update can talk its way out of.