OpenAI just took a significant step toward protecting its growing user base from one of the most persistent threats in digital security – account takeover. On April 30, 2026, the company officially launched Advanced Account Security (AAS), a new opt-in feature for ChatGPT accounts that essentially throws passwords in the trash and replaces them with something far harder to beat.
The timing makes sense. ChatGPT has evolved from a curious experiment into a tool millions of people rely on for sensitive work – think legal research, medical queries, confidential business strategy, and personal matters that you probably wouldn’t want a stranger reading through. An account can now hold months or years of deeply personal conversations and connect to third-party workflows, making it a juicy target for attackers. OpenAI knows this, and AAS is its answer.
At its core, Advanced Account Security swaps out traditional password-based login for passkeys or physical security keys, making the entire sign-in process phishing-resistant by design. Phishing – where an attacker tricks you into entering your credentials on a fake login page – is one of the oldest and most effective attacks in the book, and it works specifically because passwords can be stolen and replayed. With passkeys and hardware security keys, there’s no password to steal. The cryptographic handshake happens between your device and the server, and there’s nothing for a fake website to capture.
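To see why origin binding defeats phishing, here is a deliberately simplified sketch of the WebAuthn-style flow. It is a toy model, not OpenAI's implementation: an HMAC stands in for the authenticator's real public-key signature, and all names are illustrative. The key idea survives the simplification – the browser embeds the actual site origin in the signed payload, so a lookalike page can only ever produce an assertion bound to its own domain.

```python
# Toy model of WebAuthn-style origin binding. The HMAC "signature" is a
# stand-in for real asymmetric crypto; names are illustrative only.
import hmac, hashlib, json, secrets

def sign(key: bytes, data: bytes) -> bytes:
    # Stand-in for the authenticator signing with its private key.
    return hmac.new(key, data, hashlib.sha256).digest()

def client_assertion(key: bytes, challenge: bytes, origin: str) -> dict:
    # The browser, not the page, fills in the origin -- a phishing site
    # cannot claim to be chatgpt.com.
    client_data = json.dumps(
        {"challenge": challenge.hex(), "origin": origin}
    ).encode()
    return {"client_data": client_data, "signature": sign(key, client_data)}

def server_verify(key: bytes, challenge: bytes, assertion: dict) -> bool:
    data = json.loads(assertion["client_data"])
    return (
        data["origin"] == "https://chatgpt.com"      # origin must match
        and data["challenge"] == challenge.hex()     # challenge must match
        and hmac.compare_digest(
            sign(key, assertion["client_data"]), assertion["signature"]
        )
    )

key = secrets.token_bytes(32)
challenge = secrets.token_bytes(16)  # fresh random challenge per login
assert server_verify(key, challenge,
                     client_assertion(key, challenge, "https://chatgpt.com"))
# A lookalike page gets an assertion bound to its own origin, so it fails:
assert not server_verify(key, challenge,
                         client_assertion(key, challenge, "https://chatgpt-login.example"))
```

There is no reusable secret in transit: the challenge is random per login, and the signed payload names the origin, which is what makes a replayed or phished credential worthless.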
One of the more notable changes is what happens to account recovery. Most people don’t think about recovery options until they’re locked out, but those same recovery paths – email and SMS – are also the ones attackers love to exploit. A compromised email account or a SIM swap attack can let someone bypass everything else and reset their way into your ChatGPT account. AAS closes that door entirely: email and SMS recovery are disabled, and only backup passkeys, physical security keys, or recovery keys can be used to get back in. The trade-off is real – OpenAI’s own support team won’t be able to help you recover your account if you lose access. If you lose both your security key and your recovery key, your account and its conversation history are gone. That’s a meaningful responsibility shift, and OpenAI is upfront about it.
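A recovery key in a scheme like this is typically just a high-entropy secret shown to the user once, with only a hash kept server-side. The sketch below illustrates that general pattern; the format, entropy size, and hashing choice are assumptions for illustration, not details of OpenAI's actual scheme.

```python
# Illustrative recovery-key scheme: the server stores only a hash, so a
# database leak doesn't yield usable keys. Parameters are assumptions.
import secrets, hashlib, hmac

def generate_recovery_key() -> tuple[str, bytes]:
    raw = secrets.token_hex(16)  # 128 bits of entropy
    # Group into chunks so the key is practical to write down offline.
    key = "-".join(raw[i:i + 4] for i in range(0, 32, 4))
    stored_hash = hashlib.sha256(key.encode()).digest()
    return key, stored_hash      # show `key` to the user once, store only the hash

def redeem(presented: str, stored_hash: bytes) -> bool:
    candidate = hashlib.sha256(presented.encode()).digest()
    return hmac.compare_digest(candidate, stored_hash)  # constant-time check

key, stored = generate_recovery_key()
assert redeem(key, stored)
assert not redeem("zzzz-" + key[5:], stored)  # 'z' never appears in hex
```

Because the server holds only the hash, nobody – including support staff – can read the key back out later, which is exactly why losing it is unrecoverable.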
Sessions also get tighter. Under AAS, sign-in sessions are intentionally shortened so that even if your device is compromised or a session token is stolen, the window of exposure is much smaller. Users also get instant alerts when a new login happens, and they can review and manage all active sessions across every device they’re signed into. It’s the kind of transparency that’s standard in banking apps but has been slow to arrive in AI platforms.
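The mechanics behind "shortened sessions" and per-device session management usually come down to short-lived signed tokens plus a server-side revocation list. Here is a minimal sketch of that pattern; the fifteen-minute lifetime, field names, and token format are assumptions, not OpenAI's actual design.

```python
# Minimal sketch: short-lived, HMAC-signed session tokens with server-side
# revocation. Lifetime and field names are illustrative assumptions.
import hmac, hashlib, json, time, base64, secrets

SECRET = secrets.token_bytes(32)
SESSION_TTL = 15 * 60          # a short lifetime shrinks the stolen-token window
revoked: set[str] = set()      # filled in when a user signs out a device

def issue(user: str) -> str:
    payload = json.dumps({
        "user": user,
        "sid": secrets.token_hex(8),            # session id, revocable per device
        "exp": time.time() + SESSION_TTL,       # hard expiry
    }).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def validate(token: str) -> bool:
    body, _, sig = token.rpartition(".")
    payload = base64.urlsafe_b64decode(body)
    good_sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(good_sig, sig):
        return False                             # tampered or forged token
    claims = json.loads(payload)
    return claims["exp"] > time.time() and claims["sid"] not in revoked

tok = issue("alice")
assert validate(tok)
# "Sign out this device" maps to revoking the session id:
sid = json.loads(base64.urlsafe_b64decode(tok.rpartition(".")[0]))["sid"]
revoked.add(sid)
assert not validate(tok)
```

The revocation set is what makes a "manage active sessions" screen possible: each row on that screen corresponds to a session id the server can kill immediately, without waiting for expiry.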
There’s also a privacy angle that many users will appreciate. AAS automatically excludes your conversations from being used to train OpenAI’s models – previously, users had to find and disable that setting themselves. For anyone doing sensitive professional work inside ChatGPT – lawyers, doctors, security researchers, journalists – having that locked in automatically is a meaningful reassurance.
The feature also extends protection to Codex, OpenAI’s AI-powered coding tool. That matters because Codex users often work with proprietary code, unreleased projects, and sensitive software infrastructure. Protecting those accounts to the same standard as consumer ChatGPT accounts is a straightforward call, and it shows that OpenAI is treating its developer ecosystem as part of this security push, not just casual users.
To make the shift to hardware-based authentication less of a barrier, OpenAI has partnered with Yubico – the company behind YubiKeys – to offer users a co-branded bundle of two security keys. The bundle includes the YubiKey C Nano, which is designed to sit inside your laptop’s USB-C port and stay there for everyday authentication, and the YubiKey C NFC, which works for backup use across laptops and mobile devices. The two-key bundle is priced at $68, and it’s available to all eligible users through the security settings on the web – not just AAS enrollees. Yubico CEO Jerrod Chong put it directly: “Ultimately, our intent is to drastically reduce the threat of unauthorized access to sensitive data in OpenAI accounts worldwide.” Users can also bring their own FIDO-compliant security key from any other vendor, or stick with software-based passkeys if they prefer a hardware-free setup.
The feature is particularly aimed at what OpenAI describes as “high-risk” users – journalists, elected officials, political dissidents, and researchers who are more likely to be targeted by nation-state actors or sophisticated attackers. But “high-risk” is a relative term. In 2025 and 2026, a growing number of professionals fall into this category simply because of the work they do. The feature being available to everyone, including free-tier accounts, is a smart move – it normalizes stronger authentication rather than treating it as a premium perk.
AAS is also getting a mandatory rollout within OpenAI’s Trusted Access for Cyber program. Starting June 1, 2026, individual members of this program who access OpenAI’s most advanced and permissive cyber-capable models will be required to have AAS enabled. Organizations can alternatively attest that their single sign-on workflows already include phishing-resistant authentication. This signals OpenAI’s recognition that stronger security isn’t optional when the models in question are powerful enough to be used in national security contexts.
It’s worth noting that this isn’t entirely new territory in the tech industry. Google has offered a similar Advanced Protection Program for Gmail and Google accounts for nearly a decade, and it uses essentially the same playbook – physical security keys, restricted account recovery, and tighter session management. That OpenAI is now building something comparable reflects how seriously it’s taking its role as what the company itself described as core AI infrastructure. When you’re the platform that businesses, governments, and individuals build critical workflows on top of, the accountability for security goes up significantly.
OpenAI has made clear that this is just the beginning. The company says it plans to extend Advanced Account Security to enterprise environments as well – where the stakes for a single compromised account can cascade across an entire organization. For now, anyone who wants to enroll can do so through the Security section of their ChatGPT account on the web, starting today.