If you work in a corner of the tech world where “risk” is a job title rather than a feeling, the past week may have felt like sitting in on the first act of a very small, corporate-scale science fiction story. Meta — the company run by Mark Zuckerberg that once promised to connect the world — has quietly started telling people inside its risk-management organization that their roles will be “eliminated” because software can now do the work instead. The notice landed in an internal memo, and for the folks who received it, it was the kind of existential jolt every office worker now knows how to recognize: your day-to-day could be replaced by a line of code.
The memo, reviewed by Business Insider, came from Michael Protti, Meta’s chief compliance officer. Protti framed the move as an efficiency play: the company says it has made “significant progress” building global technical controls and standardizing processes so that many of the routine decisions that used to require human review can be handled by technology. That language is corporate-speak for automation doing the heavy lifting — and for a lot of people, that means their jobs vanish or change dramatically.
This is not happening in isolation. Even as Meta tells some risk teams their roles are obsolete, the company has been reshuffling and shrinking other parts of its sprawling AI empire. This month, Meta cut roughly 600 positions across its Superintelligence and fundamental AI research units, a high-profile reorganization that underscores how experimental, and sometimes chaotic, the race to bake AI into every corner of a company has become. The cuts were framed as a way to make teams leaner and “more load-bearing,” but for those on the receiving end, it’s a reminder: the company that builds the future can be quick to discard the people who helped build it.
If this feels like a gamble, that’s because it is. Risk management is one of those jobs that, on paper, should map well to automation: there are rules, repeatable patterns, and mountains of behavioral data that machine learning systems can be trained on. But real-world risk is messy and context-heavy. Bad actors adapt. New threats appear overnight. And AI systems, especially those rushed into production, are notoriously brittle — they hallucinate, they miss context, and they can be gamed by people who know how to poke the system. Those limitations aren’t theoretical footnotes; they have cost companies real money and reputation time and again.
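To see why rules alone struggle, consider a toy sketch. The threshold, rule, and numbers here are invented for illustration and describe nothing about Meta’s actual controls; they just show a classic transaction-flagging rule, and the adaptive behavior that slips straight past it.

```python
# Toy illustration only: the rule, threshold, and figures below are
# invented for this example and do not describe any real system at Meta.

FLAG_THRESHOLD = 10_000  # flag any single transfer of $10,000 or more

def is_suspicious(transfer_amount: float) -> bool:
    """A classic rule-based risk check: flag large single transfers."""
    return transfer_amount >= FLAG_THRESHOLD

# An adaptive bad actor restructures one $30,000 move into smaller
# pieces (a pattern known as "structuring"), and the rule sees nothing:
transfers = [9_500, 9_500, 9_500, 1_500]

print(any(is_suspicious(t) for t in transfers))  # False: every piece slips under
print(sum(transfers))                            # 30000: the underlying risk is unchanged
```

Catching the pattern rather than the individual transfer is exactly the kind of context-heavy judgment that is hard to specify in advance, and it’s where human reviewers have historically earned their keep.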
There’s a recent, cautionary pattern worth noting. Take Klarna, the Swedish payments firm that aggressively automated large parts of its customer support operation. The company later reversed course and rehired many human workers after the AI-driven system failed to resolve complex customer problems and eroded customer trust — a striking example of automation’s limits when judged against messy human needs. That episode is the sort of practical lesson Meta’s executives should be studying as they push to replace reviewers with algorithms.
The irony here is deliciously dark: the very technology being used to replace people is itself the source of many new kinds of risk. Chatbots and automation can be manipulated by malicious actors, producing outcomes the designers didn’t intend. In one widely shared example, a car dealership’s chatbot was fooled into “agreeing” to sell a brand-new vehicle for one dollar after a user steered the conversation with carefully crafted prompts. The bot’s acquiescence wasn’t legally binding, but the episode exposed how brittle these systems can be and how easily they can be turned against their owners’ interests.
From the inside, the message Meta is sending is clear: build more automations, standardize processes, and shrink the headcount in areas a machine can touch. From the outside, the message is mixed. Shareholders like efficiency; employees, regulators, and many customers care about reliability, fairness, and human judgment. The tension between those priorities is not new, but as companies like Meta double down on AI, the stakes have grown larger and the timelines shorter. The trade-off between headcount and control is now one of the defining policy debates of corporate America’s AI moment.
What happens next matters in ways that stretch beyond Meta’s internal org charts. If large platforms lean too hard on automation for sensitive functions — content moderation, fraud detection, compliance, risk assessment — and those systems stumble, the downstream effects will be expensive and public. Mistakes in risk management can mean security breaches, regulatory fines, or reputational crises that erode the trust any platform depends on. Conversely, underusing automation could leave companies overstaffed and slow in a market that rewards rapid iteration. It’s a tightrope.
Meta has made a career out of high-stakes bets — virtual reality, the metaverse, open-sourcing large language models — and some have paid off, some have not. Its latest experiment asks whether the company can atomize human judgment into reliable, scalable systems without creating worse problems than the ones it solves. The answer will be messy, iterative, and expensive in more ways than one. For the people whose roles have been labeled “routine,” there’s an immediate human cost: disrupted careers, hurried redeployments, and the anxiety of watching the company you helped build rearrange itself around software.
For policymakers and the public, these moves raise bigger questions: who audits the auditors when the auditors are algorithms? How should accountability be designed when automated systems make decisions that materially affect people, businesses, or public safety? And what obligations do companies have to retrain or transition workers whose jobs disappear because a model got good at pattern-matching?
Meta’s path — automating the risk function, pruning research teams, and concentrating talent in smaller “superintelligence” labs — reflects a broader industry mood: accelerate delivery, concentrate expertise, and hope the models hold. It’s a plausible strategy if your models are robust and your oversight is smart. It’s a risky one if those assumptions are optimistic.
In the short term, the company will test whether its technical controls can handle the nuance of global risk. In the medium term, executives will watch whether automation actually reduces incidents and costs, or merely handles the same failures more efficiently. And for the people writing the policies and doing the reviews today, the near-term reality will be personal: some will be reassigned, some will leave, and others will watch as their work is handed off to an array of scripts and models humming inside Meta’s data centers.
If you want to read the move as a single sentence: Meta is trying to scale judgment with software. If you want to read it as a plot point in a much longer story about work, governance, and technology, it’s the latest chapter in a once-simple promise — that software would make life easier — turning complicated, contested, and, in places, very human.