GadgetBond

Anthropic adds auto mode to cut Claude Code approval fatigue

Claude Code’s auto mode is built for longer, more autonomous runs, letting the agent carry out refactors and scripts without pinging you at every tiny step.

By Shubham Sawarkar, Editor-in-Chief
Mar 25, 2026, 6:06 AM EDT

[Image: the Claude wordmark logo. Credit: Anthropic]

Anthropic is giving Claude Code a new gear, and it’s aimed squarely at developers who are tired of clicking “yes” on endless permission prompts but don’t quite have the nerve to hand the keys over completely. Auto mode, launched as a research preview for Claude Team users, is essentially a smarter middle path between Claude’s ultra-cautious default behavior and the anything-goes --dangerously-skip-permissions flag that only the bravest (or most reckless) folks touch in production-like environments.

If you’ve used Claude Code for real work, you already know the trade-off. By default, every file write and bash command requires an explicit green light from you. That’s great for safety, but it means you can’t just kick off a big refactor, a multi-step migration script, or a long-running debugging session and walk away. You end up babysitting the model, approving small, incremental steps one by one. Some developers have been sidestepping that friction by running with permission checks disabled via --dangerously-skip-permissions — which Anthropic openly calls out as risky, suitable only for tightly isolated environments where a destructive command can’t hurt anything important.

Auto mode is Anthropic’s attempt to solve that tension without pretending the risk goes away. Instead of prompting you for every action, Claude decides on permissions for you, but under the watchful eye of a classifier that screens each tool call before it runs. That classifier is trained to flag obviously dangerous patterns: mass file deletion, suspicious data exfiltration, or commands that look like they’re trying to execute malicious code. If the action looks safe, it just goes through automatically; if it trips the risk detector, it gets blocked and Claude is nudged to find a different route to the goal.

In practice, this means you can finally run longer tasks with far fewer interruptions. You can imagine starting a coding session where Claude needs to create a bunch of helper files, reorganize a directory, and run a series of tests: under default mode, that’s a flurry of “approve/deny” boxes; under auto mode, the mundane steps flow while the system stands guard against the truly dangerous ones. The model doesn’t just slam into a wall when something is blocked, either. If a risky action gets denied, Claude is redirected to try another approach, and only if it repeatedly insists on the blocked pattern will you eventually see a permission prompt asking you to step in.
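To make the allow/block/escalate loop above concrete, here is a toy sketch in Python. The pattern list, function names, and return strings are illustrative assumptions for this sketch only — Anthropic’s actual classifier is a trained model screening tool calls, not a regex list:

```python
import re

# Illustrative risky patterns (assumptions for this sketch, not
# Anthropic's real classifier, which is a trained model).
RISKY_PATTERNS = [
    r"rm\s+-rf\s+/",                 # mass file deletion
    r"curl\s+\S+\s*\|\s*(sh|bash)",  # piping remote code into a shell
]

def looks_safe(command: str) -> bool:
    """Screen a tool call before it runs, as auto mode's classifier does."""
    return not any(re.search(p, command) for p in RISKY_PATTERNS)

def auto_mode_step(command: str, alternatives: list[str]) -> str:
    """Run safe calls automatically; if a call is blocked, try another
    route; escalate to the user only after every attempt is blocked."""
    for cand in [command] + alternatives:
        if looks_safe(cand):
            return f"ran: {cand}"
    return "escalated: user approval required"
```

In this toy version, `auto_mode_step("pytest tests/", [])` runs immediately, while `auto_mode_step("rm -rf /", [])` is blocked on every attempt and falls through to a user prompt — mirroring the behavior described above, where only repeated insistence on a blocked pattern surfaces a permission dialog.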

Anthropic is careful not to oversell how safe this is. Auto mode reduces risk compared with skipping permissions entirely, but it doesn’t make your environment bulletproof. The classifier can still get calls wrong in both directions: it might allow something that turns out to be risky in your specific context, especially if your setup is idiosyncratic or the intent is ambiguous, and it might occasionally block harmless operations because they resemble something sketchy at a glance. That’s why Anthropic still recommends using auto mode in isolated environments for now and labels it clearly as a research preview. It’s a tool to reduce friction, not a blanket guarantee.

There’s also a small practical cost: every tool call now goes through that extra review step, which Anthropic says may slightly increase token consumption, cost, and latency. For most teams the trade-off will likely be worth it, since eliminating constant human approvals and context switching easily outweighs a bit of added model overhead. But if you’re running very tight, high-volume automation pipelines, that overhead is something you’ll want to measure in your own environment.

On the rollout side, Anthropic is starting with Claude Team users and then expanding to Enterprise and API customers in the coming days, covering both the Sonnet 4.6 and Opus 4.6 models under Claude Code. Admins get centralized control: on Enterprise, Team, and API plans, they’ll be able to disable auto mode for the CLI and VS Code extension via managed settings by setting "disableAutoMode": "disable". On the Claude desktop app, auto mode is actually off by default and can be toggled on under Organization Settings → Claude Code. That gives security-conscious organizations a clear way to stage and test the feature before letting everyone loose with it.
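Based on the key and value quoted above, a managed-settings entry might look like the following. The surrounding file structure is an assumption for illustration; only the "disableAutoMode": "disable" pair comes from the announcement as reported here:

```json
{
  "disableAutoMode": "disable"
}
```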

For individual developers, enabling auto mode is straightforward. On the command line, you can run claude --enable-auto-mode and then cycle to that permission mode with Shift+Tab during a session. In the desktop app or the VS Code extension, you turn it on in Settings → Claude Code and then pick it from the permissions drop-down once it’s available. From there, the day-to-day experience should feel pretty similar — you still ask Claude to perform coding tasks and use tools — but you’ll see far fewer confirmation pop-ups mid-flow, especially for routine edits and commands.

Zooming out, auto mode fits into a broader push from Anthropic to make Claude feel more like a reliable teammate embedded in your local workflow, not a remote assistant you have to micromanage. In recent weeks, Anthropic has been rolling out features that connect Claude more deeply to your computer and development tools, positioning Claude Code as something that can actually drive day-to-day engineering work rather than just generate code snippets in a chat window. Giving the model a limited autonomy layer — bounded by a safety classifier — is a natural next step on that path.

For teams evaluating whether to flip the switch, the big questions will be cultural as much as technical. If your org has already normalized --dangerously-skip-permissions in safe sandboxes, auto mode will probably look like a welcome upgrade: similar convenience, better guardrails. If your security posture is stricter, you might treat auto mode like any new privilege escalation mechanism: roll it out to a subset of users, monitor what it does in real projects, and tune your policies before wider adoption. In both cases, developers who have been wrestling with approval fatigue now have a more nuanced option that doesn’t require choosing between safety and sanity every time they open a terminal.


Topics: Claude AI, Claude Code