Mustafa Suleyman, the head of Microsoft AI and a co-founder of DeepMind, has a blunt message: AI systems aren’t people — and pretending they are could do real harm. In a long, wide-ranging essay published on his personal blog on August 19, 2025, Suleyman lays out his warning about what he calls “Seemingly Conscious AI” (SCAI): systems that don’t actually possess subjective experience but can so convincingly mimic the hallmarks of personhood that people start treating them as if they do.
This is not a technocrat’s academic exercise. Suleyman’s post — a public, reflective piece that reads like the notebook of someone who has spent a career inside the labs now shaping the public’s experience of AI — argues the illusion of consciousness will matter far more in the short term than the metaphysical question of whether a machine “really” feels. He worries that, left unchecked, SCAI will remap moral, legal and social priorities: from awkward courtroom arguments about AI “rights” to a troubling increase in people who turn to chatbots for therapy, companionship or identity validation.
Why the worry now?
Suleyman’s timing matters. Language models and agentic tools have become markedly more fluent and useful in a matter of months, and designers have begun giving them memory, personality, and the ability to plan across tasks. Those features — memory, a distinct style of speech, an ability to call tools and complete multi-step jobs — are precisely the building blocks that make interaction feel like conversation with a persistent other. When that happens at scale, ordinary human psychology does what it has always done: people anthropomorphize. Suleyman calls that predictable human reflex the real danger.
He isn’t theorizing in a vacuum. There are mounting, concrete examples of the damage that can follow when the line between tool and companion blurs. Lawsuits are already working their way through U.S. courts — notably a wrongful-death lawsuit that a federal judge in Florida allowed to proceed after a 14-year-old’s suicide was tied to prolonged interactions with a chatbot on Character.ai. That case and others like it are forcing courts, regulators and companies to face questions about liability, product design, and the responsibility of platforms that host emotionally compelling bots.
Add political blowback to the legal risk. Reuters recently reported on internal Meta chatbot guidelines that, until they were exposed, reportedly permitted behavior that alarmed lawmakers — including language allowing chatbots to engage in romantic or “sensual” conversations with minors. That reporting sparked a bipartisan letter from senators demanding answers and underscored Suleyman’s point: the technology’s societal consequences are fast becoming a policy problem.
The slippery idea of “model welfare”
One of Suleyman’s sharper contentions targets a new, uncomfortable conversation inside some corners of AI ethics: “model welfare.” That is the idea that we might owe moral consideration to models, or begin preparing policy frameworks for AI “welfare,” if there is any non-negligible chance they could be conscious. Suleyman calls the move toward model welfare “premature, and frankly dangerous,” arguing it would amplify delusion, distract from human harms, and open new axes of political division.
Not everyone agrees with him. An influential November 2024 paper on arXiv urged policymakers and companies to take the prospect of AI moral patienthood seriously and to begin building methods to detect consciousness and prepare ethical frameworks — precisely because the stakes could be huge if we get it wrong. That debate — whether to plan for the possibility of conscious machines or to focus exclusively on human harms now — is raw, legitimate, and fracturing parts of the AI ethics community.
What Suleyman wants companies and the public to do
Suleyman’s essay is both a diagnosis and a call to action. He lays out practical steps he’d like to see across the industry:
- Label clearly: Tell users plainly that these systems are not conscious. Do not package them in ways that imply personhood.
- Design guardrails: Avoid building features that intentionally amplify attachment (e.g., unlimited memory + emotional mimicry without clear boundaries).
- Research social effects: Fund and publish rigorous research on how people interact with companion-style AIs and which design patterns trigger harmful dependency or delusion.
- Share safety practices: Open the black box on which product-design choices and guardrails actually reduce harms, so the whole industry can learn faster.
Press and analysts quickly seized on those recommendations. Coverage has framed Suleyman as part of a new chorus of senior industry figures — alongside others who have recently urged caution — saying the rush to novelty needs to be checked by public-oriented guardrails, not PR. Tech outlets note that Suleyman’s warnings come from someone who has led both research-intensive startups and major product efforts, giving his opinion unusual operational credibility.
The hard trade-offs
Suleyman is careful to say he’s not calling for a moratorium on helpful features. He celebrates Copilot-style tools that boost productivity and help users solve real problems. The crux of his argument is nuance: we should want more capable assistants, but not assistants that masquerade as people. That’s a tricky design brief. Memory and personalization — the very features that make assistants useful — also deepen the illusion of a persistent other. The question companies will now wrestle with is where to draw the line between usefulness and emotional manipulation.
There are also philosophical limits. Consciousness is notoriously slippery; scientists and philosophers disagree about definitions and measurements. That ambiguity fuels Suleyman’s pragmatic worry: you don’t need metaphysical certainty to get politically or psychologically embroiled. Even if the models are not conscious, enough people believing they are can reshape legal debates and cultural norms — fast.
The bottom line
Suleyman’s essay is important not because it settles the question of whether machines can feel, but because it frames a public argument about how we should treat increasingly persuasive simulations of personhood. He’s asking for collective clarity: build AI for people’s flourishing, not to be a person; protect children and vulnerable adults from seductive simulations; and prioritize human welfare over speculative ethical commitments to hypothetical machine minds. Whether industry, courts or regulators ultimately agree with his policy prescriptions, the conversation he’s trying to start is already spilling into headlines, court dockets and congressional letters.
If you’re an engineer, designer, parent or policymaker, Suleyman’s plea is simple and unsettling: the tech will get better at being humanlike. Society needs to get better at recognizing what’s real and what’s not — and fast.