Anthropic and OpenAI just exposed SAST's structural blind spot with free tools
OpenAI launched Codex Security on March 6, entering the application security market that Anthropic had disrupted 14 days earlier with Claude Code Security. Both scanners use LLM reasoning instead of pattern matching. Both proved that traditional static application security testing (SAST) tools are structurally blind to entire vulnerability classes. The enterprise security stack is caught in the middle.
Anthropic and OpenAI independently launched reasoning-based vulnerability scanners, and both found bug classes that pattern-matching SAST was never designed to detect. The competitive pressure between two labs with a combined private-market valuation exceeding $1.1 trillion means detection quality will improve faster than any single vendor could ship alone.
Neither Claude Code Security nor Codex Security replaces your existing stack. Both tools permanently change the procurement math. Right now, both are free to enterprise customers. The head-to-head comparison and seven actions below are what you need before the board of directors asks which scanner you're piloting and why.
How Anthropic and OpenAI reached the same conclusion from different architectures
Anthropic published its zero-day research on February 5 alongside the release of Claude Opus 4.6. Anthropic said Claude Opus 4.6 found more than 500 previously unknown high-severity vulnerabilities in production open-source codebases that had survived decades of expert review and millions of hours of fuzzing.
In the CGIF library, Claude discovered a heap buffer overflow by reasoning about the LZW compression algorithm, a flaw that coverage-guided fuzzing couldn't catch even with 100% code coverage. Anthropic shipped Claude Code Security as a limited research preview on February 20, available to Enterprise and Team customers, with free expedited access for open-source maintainers. Gabby Curtis, Anthropic's communications lead, told VentureBeat in an exclusive interview that Anthropic built Claude Code Security to make defensive capabilities more broadly available.
OpenAI's numbers come from a different architecture and a wider scanning surface. Codex Security evolved from Aardvark, an internal tool powered by GPT-5 that entered private beta in 2025. During the Codex Security beta period, OpenAI's agent scanned more than 1.2 million commits across external repositories, surfacing what OpenAI said were 792 critical findings and 10,561 high-severity findings. OpenAI reported vulnerabilities in OpenSSH, GnuTLS, GOGS, Thorium, libssh, PHP, and Chromium, resulting in 14 assigned CVEs. Codex Security's false positive rates fell more than 50% across all repositories during the beta, according to OpenAI. Over-reported severity dropped more than 90%.
Checkmarx Zero researchers demonstrated that moderately difficult vulnerabilities sometimes escaped Claude Code Security's detection. Developers could trick the agent into ignoring vulnerable code. In a full production-grade codebase scan, Checkmarx Zero found that Claude identified eight vulnerabilities, but only two were true positives. If moderately complex obfuscation defeats the scanner, the detection ceiling is lower than the headline numbers suggest. Neither Anthropic nor OpenAI has submitted detection claims to an independent third-party audit. Security leaders should treat the reported numbers as indicative, not audited.
Merritt Baer, CSO at Enkrypt AI and former Deputy CISO at AWS, told VentureBeat that the competitive scanner race compresses the window for everyone. Baer advised security teams to prioritize patches based on exploitability in their runtime context rather than CVSS scores alone, shorten the window between discovery, triage, and patch, and maintain software bill of materials visibility so they know immediately where a vulnerable component runs.
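That SBOM-visibility point is concrete enough to automate. Here is a minimal sketch, assuming your services publish CycloneDX-style JSON SBOMs to a local directory; the directory layout, file names, and the vulnerable package queried are illustrative, not a documented workflow from either lab:

```python
import json
from pathlib import Path

def services_running(component_name: str, sbom_dir: str) -> list[str]:
    """Return services whose CycloneDX SBOM lists the given component."""
    affected = []
    for sbom_path in sorted(Path(sbom_dir).glob("*.json")):
        sbom = json.loads(sbom_path.read_text())
        for component in sbom.get("components", []):
            if component.get("name") == component_name:
                # Record the service (file stem) and the exact version it ships.
                affected.append(f"{sbom_path.stem} ({component.get('version', '?')})")
                break
    return affected

# Example: find every service that bundles a vulnerable libssh build.
print(services_running("libssh", "./sboms"))
```

When a reasoning scanner or a vendor advisory names a component, a lookup like this turns "is this our problem?" into a one-line query instead of a week of ticket archaeology.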
Different methods, almost no overlap in the codebases they scanned, yet the same conclusion. Pattern-matching SAST has a ceiling, and LLM reasoning extends detection past it. When two competing labs distribute that capability at the same time, the dual-use math gets uncomfortable. Any financial institution or fintech running a commercial codebase should assume that if Claude Code Security and Codex Security can find these bugs, adversaries with API access can find them, too.
Baer put it bluntly: open-source vulnerabilities surfaced by reasoning models should be treated closer to zero-day-class discoveries, not backlog items. The window between discovery and exploitation just compressed, and most vulnerability management programs are still triaging on CVSS alone.
What the vendor responses prove
Snyk, the developer security platform used by engineering teams to find and fix vulnerabilities in code and open-source dependencies, acknowledged the technical breakthrough but argued that finding vulnerabilities has never been the hard part. Fixing them at scale, across hundreds of repositories, without breaking anything: that's the bottleneck. Snyk pointed to research showing AI-generated code is 2.74 times more likely to introduce security vulnerabilities compared to human-written code, according to Veracode's 2025 GenAI Code Security Report. The same models finding hundreds of zero-days also introduce new vulnerability classes when they write code.
Cycode CTO Ronen Slavin wrote that Claude Code Security represents a real technical advance in static analysis, but that AI models are probabilistic by nature. Slavin argued that security teams need consistent, reproducible, audit-grade results, and that a scanning capability embedded in an IDE is useful but doesn't constitute infrastructure. Slavin's position: SAST is one discipline within a much wider scope, and free scanning doesn't displace platforms that handle governance, pipeline integrity, and runtime behavior at enterprise scale.
"If code reasoning scanners from major AI labs are effectively free to enterprise customers, then static code scanning commoditizes overnight," Baer told VentureBeat. Over the next 12 months, Baer expects the budget to move toward three areas.
- Runtime and exploitability layers, including runtime protection and attack path analysis.
- AI governance and model security, including guardrails, prompt injection defenses, and agent oversight.
- Remediation automation. "The net effect is that AppSec spending probably doesn't shrink, but the center of gravity shifts away from traditional SAST licenses and toward tooling that shortens remediation cycles," Baer said.
Seven things to do before your next board meeting
- Run both scanners against a representative codebase subset. Compare Claude Code Security and Codex Security findings against your existing SAST output. Start with a single representative repository, not your entire codebase. Both tools are in research preview with access constraints that make full-estate scanning premature. The delta is your blind spot inventory.
- Build the governance framework before the pilot, not after. Baer told VentureBeat to treat either tool like a new data processor for the crown jewels, which is your source code. Baer's governance model includes a formal data-processing agreement with clear statements on training exclusion, data retention, and subprocessor use; a segmented submission pipeline so only the repos you plan to scan are transmitted; and an internal classification policy that distinguishes code that can leave your boundary from code that cannot. In interviews with more than 40 CISOs, VentureBeat found that formal governance frameworks for reasoning-based scanning tools barely exist yet. Baer flagged derived IP as the blind spot most teams haven't addressed. Can model providers retain embeddings or reasoning traces, and are those artifacts considered your intellectual property? The other gap is data residency for code, which historically was not regulated like customer data but increasingly falls under export control and national security review.
- Map what neither tool covers. Software composition analysis. Container scanning. Infrastructure-as-code. DAST. Runtime detection and response. Claude Code Security and Codex Security operate at the code-reasoning layer. Your existing stack handles everything else. That stack's pricing power is what shifted.
- Quantify the dual-use exposure. Every zero-day Anthropic and OpenAI surfaced lives in an open-source project that enterprise applications depend on. Both labs are disclosing and patching responsibly, but the window between their discovery and your adoption of those patches is exactly where attackers operate. AI security startup AISLE independently found all 12 zero-day vulnerabilities in OpenSSL's January 2026 security patch, including a stack buffer overflow (CVE-2025-15467) that is potentially remotely exploitable without valid key material. Fuzzers ran against OpenSSL for years and missed every one. Assume adversaries are running the same models against the same codebases.
- Prepare the board comparison before they ask. Claude Code Security reasons about code contextually, traces data flows, and uses multi-stage self-verification. Codex Security builds a project-specific threat model before scanning and validates findings in sandboxed environments. Each tool is in research preview and requires human approval before any patch is applied. The board needs side-by-side analysis, not a single-vendor pitch. When the conversation turns to why your existing suite missed what Anthropic found, Baer offered framing that works at the board level. Pattern-matching SAST solved a different generation of problems, Baer told VentureBeat. It was designed to detect known anti-patterns. That capability still matters and still reduces risk. But reasoning models can evaluate multi-file logic, state transitions, and developer intent, which is where many modern bugs live. Baer's board-ready summary: "We bought the right tools for the threats of the last decade; the technology just advanced."
- Track the competitive cycle. Both companies are heading toward IPOs, and enterprise security wins drive the growth narrative. When one scanner misses a blind spot, it lands on the other lab's feature roadmap within weeks. Both labs ship model updates on monthly cycles. That cadence will outrun any single vendor's release calendar. Baer said that running both is the right move: "Different models reason differently, and the delta between them can reveal bugs neither tool alone would consistently catch. In the short term, using both isn't redundancy. It's defense through diversity of reasoning systems."
- Set a 30-day pilot window. Before February 20, this test didn't exist. Run Claude Code Security and Codex Security against the same codebase and let the delta drive the procurement conversation with empirical data instead of vendor marketing. Thirty days gives you that data.
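The delta computation behind the first and last items can be mechanized. Here is a minimal sketch, assuming each scanner's findings can be exported or converted to SARIF, the common static-analysis interchange format; the file names are placeholders, and neither research preview has published a stable output format this sketch can claim to match:

```python
import json

def finding_keys(sarif_path: str) -> set[tuple[str, str, int]]:
    """Reduce a SARIF file to comparable (rule, file, line) keys."""
    with open(sarif_path) as f:
        sarif = json.load(f)
    keys = set()
    for run in sarif.get("runs", []):
        for result in run.get("results", []):
            loc = result["locations"][0]["physicalLocation"]
            keys.add((
                result.get("ruleId", "unknown"),
                loc["artifactLocation"]["uri"],
                loc.get("region", {}).get("startLine", 0),
            ))
    return keys

# Hypothetical pilot comparison: whatever the reasoning scanners flag
# that your existing SAST did not is the blind spot inventory.
#
# reasoning = finding_keys("claude.sarif") | finding_keys("codex.sarif")
# legacy = finding_keys("existing_sast.sarif")
# blind_spots = reasoning - legacy
```

Matching on rule, file, and line is deliberately coarse; it will miss findings the tools describe at different locations, but it is enough to put an empirical number in front of the board instead of a vendor slide.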
Fourteen days separated Anthropic and OpenAI. The gap between the next releases will be shorter. Attackers are watching the same calendar.