GadgetBond


Anthropic’s Claude Code Review is coming for your bug backlog

Anthropic is wiring Claude Code directly into your pull requests, turning its own internal review workflow into an AI teammate that never gets tired of reading diffs.

By Shubham Sawarkar, Editor-in-Chief
Mar 11, 2026, 4:36 AM EDT
Image: Anthropic (illustration of code review: a magnifying glass examining a code icon framed by two curly braces)

Anthropic is turning its own internal code-review playbook into a product — and it’s aimed squarely at the messy reality of AI-era software development, where engineers ship more code than ever but have less and less time to read it carefully.

At the heart of the announcement is Code Review, a new feature inside Claude Code that throws a team of AI agents at every pull request instead of a single summarizer bot. The system is modeled on the process Anthropic uses internally and is now rolling out in research preview for Claude Team and Enterprise customers, with reviews triggered automatically on GitHub PRs once an admin flips the switch. Anthropic is explicit about what this is and isn’t: it’s designed for depth, not speed, and it will not auto-approve your PRs — humans still own the green button.

The pitch starts from a problem that a lot of engineering orgs will recognize immediately. Inside Anthropic, code output per engineer has reportedly jumped 200 percent in the last year, thanks to coding assistants and agents. That productivity boost didn’t magically create more hours in the day for senior engineers to comb through diffs, so reviews became a bottleneck and many PRs got what the company bluntly calls “skims rather than deep reads.” Anthropic says that before Code Review, only 16 percent of PRs received substantive comments from human reviewers; after rolling the system out internally, that figure jumped to 54 percent. The company is betting that a lot of teams living with the same tension — more AI-generated code, thinner human attention — will be willing to pay for help.

Under the hood, Code Review behaves less like a single omniscient assistant and more like a panel of specialised reviewers. When a pull request opens, Claude Code dispatches multiple agents in parallel, each reading the diff and relevant context from a different angle, hunting for logic errors, edge cases, and fragile patterns that could ship subtle bugs. Those agents then cross‑check each other’s findings to filter out obvious false positives, and a final aggregator agent merges the results, deduplicates overlapping issues, and ranks them by severity before posting back to GitHub as one high‑signal summary comment plus a set of inline notes pinned to specific lines. Reviews scale with the change: large or complex PRs get more agents and a deeper pass, while tiny tweaks get a lighter touch, with the average review taking around 20 minutes in Anthropic’s testing.
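The flow described above (fan out specialized reviewers in parallel, cross-check, then merge, dedupe, and rank by severity) can be sketched roughly as follows. This is an illustrative toy, not Anthropic's actual implementation or API: the reviewer functions, the `(severity, line, message)` finding format, and the severity scale (1 = most critical) are all invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical specialized reviewers; each reads the same diff from a
# different angle and returns (severity, line, message) findings.
def logic_reviewer(diff):
    return [(1, 42, "possible auth bypass")] if "auth" in diff else []

def edge_case_reviewer(diff):
    return [(2, 42, "possible auth bypass"), (3, 7, "unchecked None")]

REVIEWERS = [logic_reviewer, edge_case_reviewer]

def review(diff):
    # Fan out: run every reviewer on the diff in parallel.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(r, diff) for r in REVIEWERS]
        findings = [f for fut in futures for f in fut.result()]
    # Aggregate: dedupe overlapping findings on the same line, keeping
    # the most severe report, then rank the result most-severe first.
    merged = {}
    for sev, line, msg in findings:
        if line not in merged or sev < merged[line][0]:
            merged[line] = (sev, line, msg)
    return sorted(merged.values())
```

A diff touching authentication would surface the critical line-42 finding once (both agents flagged it, the aggregator keeps the higher severity) ahead of the minor line-7 note; the real system adds a cross-checking pass to drop false positives before aggregation.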

Anthropic is already sharing a couple of “we would have shipped this bug” stories from its own usage and early customers. In one internal case, a seemingly routine one‑line change to a production service — the sort of diff that often gets rubber‑stamped — would have broken authentication entirely, but Code Review flagged it as critical before the PR was merged. In another, during a ZFS encryption refactor in TrueNAS’s open‑source middleware, the system surfaced a pre‑existing bug in adjacent code: a type mismatch that was silently wiping the encryption key cache on every sync, a problem that wasn’t actually introduced by the PR itself. Anthropic says that on big PRs changing more than 1,000 lines, 84 percent of reviews produce findings, with an average of 7.5 issues, whereas on small PRs under 50 lines, that drops to 31 percent and roughly half an issue on average — and less than 1 percent of those findings are marked as incorrect by engineers.

The company is also being upfront that this level of depth isn’t cheap. Code Review is billed on token usage and, in practice, translates into something like $15–25 per review on typical PRs, with costs scaling up alongside the size and complexity of the diff. That effectively makes it an opt‑in premium layer on top of the lighter, open‑source Claude Code GitHub Action, which Anthropic continues to offer for quick summaries and suggestions. To avoid surprise bills, admins get a few levers: organisation‑wide monthly spend caps, the ability to enable Code Review only on selected repos, and an analytics dashboard that tracks how many PRs were reviewed, what percentage of findings teams accepted, and the total cost.
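Since billing is per token, sanity-checking a spend cap against the cited $15–25 range is simple arithmetic. The review volume and cap below are made-up example numbers, not anything Anthropic publishes:

```python
# Back-of-envelope budget check against the reported $15-25 cost per
# review on typical PRs. Volume and cap are illustrative only.
def monthly_estimate(reviews_per_month, cost_per_review=20.0):
    return reviews_per_month * cost_per_review

def within_cap(reviews_per_month, monthly_cap, cost_per_review=25.0):
    # Plan against the top of the range to avoid surprise bills.
    return monthly_estimate(reviews_per_month, cost_per_review) <= monthly_cap
```

For example, `within_cap(120, 2500.0)` returns `False`: 120 reviews a month could cost $3,000 at the top of the range, which is exactly the scenario the org-wide caps and per-repo toggles are meant to catch.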

From a developer’s point of view, the integration is deliberately boring — in a good way. Once a Team or Enterprise admin enables the feature in Claude Code settings, installs the GitHub app, and chooses the repositories to cover, reviews simply appear on new PRs with no extra configuration. The promise is that engineers keep using their normal GitHub workflow while Claude sits in the background as a very picky, very patient reviewer that never gets tired of re‑reading diff hunks. Crucially, Anthropic stresses that humans are still expected to make the final call on merges, and Code Review is not marketed as a replacement for human judgment but as a way to widen coverage when senior reviewers are stretched.

Zoomed out, Code Review is the latest in a string of Anthropic moves to position Claude Code as more than just a coding assistant that spits out snippets. The company has been talking up Claude as a reasoning‑first agent that can help with security analysis, legacy code modernization, and long‑context refactors, and this feature extends that story into the governance layer of software development. It also lands in the middle of an industry‑wide shift where “vibe coding” with AI tools is common, but formal review processes haven’t fully caught up with the volume of machine‑generated changes hitting production. For teams staring at ever‑growing PR queues and nervous about subtle regressions slipping through, Anthropic is essentially arguing that the only way to keep up with AI‑accelerated coding is to bring equally capable AI into the review room.


Topics: Claude AI, Claude Code
