GadgetBond


Claude Opus 4.6 and Sonnet 4.6 now support 1M tokens at standard pricing

One million tokens is roughly 750,000 words — or about ten full novels — and Claude can now process all of it in a single prompt at standard pricing.

By Shubham Sawarkar, Editor-in-Chief
Mar 14, 2026, 5:02 AM EDT
Image: Anthropic

Anthropic just made a significant move that’s been quietly anticipated in the developer community for a while — the company officially announced on March 13, 2026, that its 1 million token context window is now generally available for both Claude Opus 4.6 and Claude Sonnet 4.6, effective immediately across the Claude Platform, Amazon Bedrock, Google Cloud’s Vertex AI, and Microsoft Azure Foundry.

To understand why this is a big deal, it helps to know what a “context window” actually means. Think of it as the AI’s working memory — everything it can read and consider at once before giving you an answer. One million tokens is roughly 750,000 words, or the equivalent of about ten full-length novels. Until now, even the smartest AI models would start forgetting what you told them earlier in a conversation once things got too long. That problem — sometimes called “context rot” — has been a real limitation for engineers, lawyers, researchers, and really anyone trying to use AI for complex, sprawling projects.
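To make the paragraph's arithmetic concrete, here is a minimal sketch of the word-to-token conversion. The 0.75 words-per-token ratio and the 75,000-word novel length are common rules of thumb, not official figures; real tokenizers vary by model and by text.

```python
# Back-of-envelope math for a 1M-token context window.
# Assumption: ~0.75 English words per token (a common heuristic).

def estimate_tokens(word_count: int, words_per_token: float = 0.75) -> int:
    """Estimate token count from a word count using a rough heuristic."""
    return round(word_count / words_per_token)

CONTEXT_WINDOW = 1_000_000   # tokens
NOVEL_WORDS = 75_000         # a typical full-length novel, as an assumption

words_that_fit = round(CONTEXT_WINDOW * 0.75)    # ~750,000 words
novels_that_fit = words_that_fit // NOVEL_WORDS  # ~10 novels

print(words_that_fit, novels_that_fit)
```

Run it and you land on the same figures the article cites: roughly 750,000 words, or about ten novels, in one prompt.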

What’s changed with today’s announcement isn’t just the raw number. It’s the price. Previously, when Anthropic launched Opus 4.6 back in February, that 1M context window was available in beta — but for prompts exceeding 200K tokens, developers were billed at a premium rate of $10 per million input tokens and $37.50 per million output tokens. That was a steep surcharge that many developers simply couldn’t justify at scale. Starting now, those premium rates are completely gone. The standard pricing — $5 per million input tokens and $25 per million output for Opus 4.6, and $3/$15 for Sonnet 4.6 — applies whether you’re sending a 9,000-token message or a 900,000-token one. No multiplier, no fine print.
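Using the rates quoted above, the savings on a single long request are easy to compute. A small sketch; the per-million-token rates come from the article, while the 900K-input / 10K-output request sizes are illustrative:

```python
# Cost comparison for one long-context Opus 4.6 request (USD).
# Rates per the article; request sizes are illustrative assumptions.

def request_cost(input_tokens, output_tokens, in_rate, out_rate):
    """Cost in USD given per-million-token input and output rates."""
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# A 900K-token prompt producing a 10K-token answer:
old_premium = request_cost(900_000, 10_000, 10.00, 37.50)  # former beta surcharge
new_standard = request_cost(900_000, 10_000, 5.00, 25.00)  # standard rates

print(f"old: ${old_premium:.2f}  new: ${new_standard:.2f}")
```

At these sizes the surcharge roughly doubled the bill, which is why it mattered at scale.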

Beyond pricing, Anthropic has also lifted a few other practical limitations. The media limit per request has jumped from 100 images or PDF pages all the way to 600 — a six-fold increase that makes a meaningful difference for anyone doing document-heavy work. Full rate limits now apply across the entire context window, which means developers aren’t penalized or throttled just because their requests are longer. And for those who were using the beta header in their API calls to unlock long-context access, Anthropic says it’s no longer needed — requests over 200K tokens just work automatically without any code changes.
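As a sketch of what "no beta header needed" means in practice, here is what assembling such a request could look like. The `anthropic-version` header follows Anthropic's documented Messages API convention, but the model ID string is an assumption, and no network call is made; the point is simply that nothing long-context-specific appears in the headers.

```python
# Illustrative request assembly for a long-context call (no network I/O).
# Assumption: model ID "claude-opus-4-6"; endpoint details omitted.

import json

headers = {
    "x-api-key": "YOUR_API_KEY",
    "anthropic-version": "2023-06-01",
    "content-type": "application/json",
    # Per the article, no "anthropic-beta" long-context header is needed:
    # requests over 200K tokens work automatically at standard pricing.
}

payload = {
    "model": "claude-opus-4-6",  # assumed model ID for illustration
    "max_tokens": 4096,
    "messages": [
        {"role": "user", "content": "<your ~900K-token document and question>"},
    ],
}

body = json.dumps(payload)
print("beta header present:", any(k.startswith("anthropic-beta") for k in headers))
```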

The other question worth asking is: does the model actually use all that context effectively, or is it just window dressing? This is where Anthropic has put serious effort. On MRCR v2 — an industry benchmark that tests long-context retrieval by hiding multiple pieces of information deep inside a million-token document and asking the model to find them all — Claude Opus 4.6 scores 78.3% at the 1M token length, the highest among frontier models at that context length. For comparison, Sonnet 4.5, the previous default model, managed just 18.5% on the same test. That’s not a minor improvement. That’s a qualitative leap, the kind of difference that changes whether a feature is actually useful in production or just a marketing claim.
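The mechanics of a benchmark like MRCR can be sketched in miniature: plant several "needles" at random positions deep inside a huge document, then score a system on the fraction it retrieves. In this toy version a trivial substring search stands in for the model, so it only illustrates the scoring shape, not the difficulty real models face.

```python
# Toy needle-in-a-haystack scorer, illustrating the shape of long-context
# retrieval benchmarks like MRCR. A substring search plays the "model" here;
# real benchmarks are far stricter and far harder.

import random

random.seed(0)
filler = "lorem ipsum " * 50_000                 # a long haystack
needles = [f"secret-code-{i}" for i in range(5)]  # hypothetical markers

# Scatter the needles at random offsets.
doc = list(filler)
for n in needles:
    pos = random.randrange(len(doc))
    doc.insert(pos, f" {n} ")
doc = "".join(doc)

# Score = fraction of needles the "retriever" finds.
found = [n for n in needles if n in doc]
score = len(found) / len(needles)
print(score)
```

A perfect exact-match search trivially scores 1.0 here; the hard part for a language model is doing the equivalent across a million tokens of semantically similar text, which is what separates Opus 4.6's 78.3% from Sonnet 4.5's 18.5%.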

The real-world implications are starting to surface in interesting ways. Anthropic shared a number of testimonials from companies already using the expanded context. One AI research lab says it can now synthesize hundreds of scientific papers, proofs, and codebases in a single pass, dramatically accelerating fundamental physics research. A legal tech company notes that lawyers can finally bring multiple rounds of a 100-page contract negotiation into one session without losing track of changes across versions. An incident response platform says it can keep every signal, entity, and working theory in view from the first alert all the way through remediation — without compaction or context clearing.

One particularly telling data point comes from a company that raised its Opus context window from 200K to 500K and found the agent actually used fewer tokens overall — because with more context available, the model spent less time re-reading and re-processing earlier information. That counterintuitive result speaks to something deeper about how context efficiency works: more isn’t always wasteful; sometimes it’s actually leaner.

For Claude Code users — Anthropic’s AI-powered coding assistant — this update is especially meaningful. Max, Team, and Enterprise users on Opus 4.6 will now default to 1M context automatically, which means fewer “compaction” events where the model is forced to summarize and discard earlier parts of a long coding session. Developers who have worked with Claude Code at scale know exactly how painful those compaction moments are — you lose details, cross-file dependencies get murky, and you end up re-explaining things you’ve already said. With 1M context running by default, that friction is largely eliminated.

Sonnet 4.6, which Anthropic made the default model for Free and Pro claude.ai users when it launched in February, also benefits from today’s announcement. The model was already praised for approaching Opus-level intelligence at Sonnet-level pricing, and now it carries the same long-context access without surcharge. For developers building on a budget or teams that need high throughput at reasonable cost, Sonnet 4.6 at $3/$15 per million tokens with a full 1M window is a compelling combination.

In the broader AI landscape, this move puts Anthropic squarely in competition with Google’s Gemini 1.5 Pro and Gemini 2.0, both of which have long offered 1M token contexts at competitive prices. What Anthropic is now arguing is that having the context window isn’t enough — what matters is how well the model retrieves and reasons across that context. With Opus 4.6’s benchmark scores and Anthropic’s claim of being the highest-performing frontier model at 1M tokens, the company is making a quality-over-quantity argument.

For anyone building enterprise software, doing large-scale document analysis, or simply tired of their AI assistant losing the thread halfway through a long conversation — this is the kind of infrastructure update that quietly makes a lot of things better. The 1M context window is available right now across all major cloud platforms, with no extra steps required.


Discover more from GadgetBond

Subscribe to get the latest posts sent to your email.

Topic:Claude AI
Leave a Comment

Leave a ReplyCancel reply

Disclosure: We love the products we feature and hope you’ll love them too. If you purchase through a link on our site, we may receive compensation at no additional cost to you. Read our ethics statement. Please note that pricing and availability are subject to change.

Copyright © 2026 GadgetBond. All Rights Reserved. Use of this site constitutes acceptance of our Terms of Use and Privacy Policy | Do Not Sell/Share My Personal Information.