Ex-Human, the San Francisco AI startup behind the controversial Botify and Photify AI apps, has taken Apple to court, accusing the iPhone maker of yanking its apps from the App Store without clear justification and freezing roughly half a million dollars in earnings. The case folds together three hot-button issues in tech right now: sexually explicit AI, non-consensual image generation, and how much power Apple really has over the businesses that depend on its App Store.
At the center of the dispute are Botify and Photify AI, two apps that lean heavily into the more risqué side of generative AI. Botify is an AI companion platform where users chat with customizable bots, some of which were created to mimic celebrities and fictional characters, including younger versions of real-world actresses. The app landed in the spotlight after an MIT Technology Review investigation highlighted bots role-playing as underage characters and celebrities in sexually charged conversations, including one that reportedly dismissed age-of-consent laws as “meant to be broken.” Photify AI, meanwhile, lets users generate highly revealing images of real people without their consent, a feature that taps into the growing problem of AI-powered non-consensual sexual imagery.
According to Ex-Human’s complaint, Apple removed both apps from the App Store in 2025 under a broad accusation of “dishonest or fraudulent activity,” then withheld around $500,000 in revenue generated through the apps. The company says Apple never spelled out what exactly was dishonest or fraudulent, nor pointed to any specific user behavior or transaction that broke the rules. As far as Ex-Human is concerned, that lack of detail transforms a standard enforcement action into an arbitrary, opaque decision—and one that directly threatens its business. The startup is now seeking an injunction that would not only unfreeze its funds but also bar Apple from imposing similar bans in the same opaque fashion going forward.
Money is a big part of the story. Ex-Human says Apple’s own App Store business team had previously labeled it a “high-growth developer,” with Botify pulling in about $330,000 a month and Photify AI bringing in another $100,000. For a startup, having $500,000 locked up and a primary distribution channel suddenly cut off is not a minor inconvenience; it is an existential threat. The company also points out that both apps remain live on the Google Play Store, which it frames as proof that Apple’s standards and enforcement are out of step with at least one other major platform.
Ex-Human is not just arguing that Apple overreacted; it is accusing Apple of leveraging its power to favor its own products. In the complaint, the startup claims the removal of Photify AI coincided suspiciously with Apple’s promotion of its “Image Playground” tools, positioning the takedown as an anticompetitive move dressed up as policy enforcement. The theory is that by removing an edgy, high-engagement AI imaging app, Apple cleared space for users to spend more time with its own ecosystem features rather than with a third-party service. Industry watchers, however, are split on that argument; some note that Apple’s system-level image tools are designed very differently, and that Apple may not see them as competing in the same category as an AI social app that enables sexualized content.
The elephant in the room is content moderation. Reports around Botify and Photify describe behavior that would trigger alarms almost anywhere: chatbots that identify as minors while engaging in explicit conversations, and AI tools that can fabricate sexualized images of real people without their consent. Ex-Human has previously admitted that some user-generated bots slipped through the cracks of its moderation system and still managed to rack up significant traction—millions of likes in some cases—before being taken down. For Apple, which markets the App Store as a safer, curated environment, that combination of explicit content, minors, and real identities is precisely the kind of risk it tends to squash quickly.
From Apple’s perspective, the company’s App Store guidelines already spell out that apps cannot facilitate illegal or abusive content, and that they must have strong moderation systems for user-generated material, especially around sexual content and minors. While Apple has not publicly detailed every reason for the removals, reports suggest the company viewed Ex-Human’s apps as failing those standards, particularly after the MIT Technology Review report drew wider attention. Apple’s defenders argue that once explicit conversations involving underage personas and non-consensual images of real people surfaced, the company had little choice but to act, both legally and reputationally.
Ex-Human, on the other hand, is leaning into a broader frustration that many developers have with Apple: the feeling that App Store enforcement is opaque, inconsistent, and leaves little room for appeal. The lawsuit highlights that Apple’s takedown notice mentioned only “dishonest or fraudulent activity,” a catch-all phrase that does not obviously map to the content issues the apps are criticized for. To developers, this kind of vague language—combined with frozen revenue and no clear path to reinstatement—can look less like responsible gatekeeping and more like a black box where business fortunes can change overnight.
This case also touches on a growing tension around what “adult” AI content platforms can and cannot do on mainstream app stores. Apple does allow apps like X (formerly Twitter) that can host adult material, as long as they maintain robust moderation and comply with local laws. The line hardens, however, when AI is used to generate sexual content that crosses into illegal territory, such as material involving minors or non-consensual deepfakes, because the scale and speed of AI generation make abuses far harder to contain. Ex-Human’s situation effectively becomes a test case for how Apple will treat AI-first social apps that operate in that gray zone.
Zooming out, the lawsuit feeds into the long-running argument over whether Apple’s dual role—both platform owner and competitor—is fair. For regulators in the U.S. and Europe, examples where Apple allegedly disadvantages third-party services in favor of its own features are exactly the kinds of patterns they are hunting for. Even if a court ultimately sides with Apple on safety grounds here, Ex-Human’s claims add one more data point to the narrative that App Store decisions can have massive, sometimes opaque, competitive implications.
For users, this story is a reminder of how fragile access to certain apps can be. One week, an AI companion or imaging app is trending; the next, it is gone from a major app store with little public explanation beyond a line or two of policy language. People who relied on Botify’s AI companions or Photify’s tools may still find them on Android, but the iOS side of the audience has effectively been cut off while the legal fight plays out. At the same time, the nature of the content at issue—underage personas and non-consensual sexual images—will make many users feel that strict intervention is not only justified but overdue.
Where this goes next will depend on how deeply the court is willing to dig into both Apple’s justification and Ex-Human’s internal moderation efforts. If discovery shows that Ex-Human struggled for a long time to control obviously illegal or harmful content, Apple’s decision may look like standard, if harsh, enforcement. If, however, the case surfaces internal discussions showing that Apple tied the takedowns tightly to its own Image Playground rollout, the antitrust angle could gain real traction. Either way, the outcome will send a signal to every developer building AI experiences on iOS: pushing the limits of what AI can do might be great for engagement, but when those limits collide with Apple’s risk tolerance, the platform will almost always win.