AI Disruption in Cybersecurity: Anthropic’s Game-Changer
Artificial intelligence is once again shaking up the market. Following a recent slide in major SaaS stocks in the U.S. and Israel, Anthropic's latest release has triggered a sharp reaction in the cybersecurity sector, raising concerns about vulnerabilities in the industry's business model.
Anthropic, known for the Claude chatbot, has introduced an early version of Claude Code Security. This innovative tool is specifically designed to uncover hidden vulnerabilities within software code. According to the company, this system utilizes its Claude Opus 4.6 model and analyzes code in a manner akin to a human security researcher, rather than relying solely on established patterns and rules for detection.
The tool monitors data flow within applications, detects flaws in business logic, and conducts multi-step validation. It uses AI-based self-review processes to minimize false positives, and offers automated fixes for developers to accept or reject. However, the product currently doesn’t perform runtime testing, which means it does not provide comprehensive, real-time protection.
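To make the distinction concrete, here is a minimal, hypothetical sketch (not Anthropic's code) of the kind of business-logic flaw the article describes: there is no dangerous API call for a pattern-based scanner to flag, because the bug lives entirely in the logic.

```python
# Hypothetical illustration of a business-logic flaw. A signature- or
# rule-based scanner sees no risky function here; the vulnerability is
# that an input is never validated against the business rules.

def checkout_total(price_cents: int, quantity: int) -> int:
    """Return the order total in cents (flawed version)."""
    # Flaw: quantity is unchecked. A negative quantity produces a
    # negative total -- effectively a refund the attacker never paid for.
    return price_cents * quantity


def checkout_total_fixed(price_cents: int, quantity: int) -> int:
    """Same calculation with the validation a human reviewer would demand."""
    if quantity < 1:
        raise ValueError("quantity must be at least 1")
    if price_cents < 0:
        raise ValueError("price must be non-negative")
    return price_cents * quantity


if __name__ == "__main__":
    print(checkout_total(500, -3))  # -1500: an exploitable negative total
    try:
        checkout_total_fixed(500, -3)
    except ValueError as err:
        print("rejected:", err)
```

Spotting this requires reasoning about what the code is *supposed* to do, not matching known-bad patterns, which is the behavior the article attributes to the tool.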
Anthropic tested the system on active open-source projects, uncovering more than 500 previously unknown vulnerabilities. The tool was developed over more than a year in collaboration with the company's Frontier Red Team, through participation in Capture the Flag cybersecurity competitions, and through partnerships with research institutions.
The market reacted promptly to the announcement. Shares of major cybersecurity firms such as CrowdStrike, Okta, Cloudflare, and Zscaler saw sharp declines. Israeli companies were not spared either, with JFrog plummeting 24%, Check Point dropping 4%, SentinelOne down nearly 3%, and Palo Alto Networks slipping by 1.5%.
Some investors worry that AI systems capable of autonomously scanning and fixing code could undercut traditional security-analysis tools and squeeze profit margins at firms whose products center on vulnerability detection.
Conversely, some market observers urge a more measured view. Liran Grinberg, a founding partner at venture capital firm Team8, said the market's response seemed disproportionate. "These are exaggerated reactions to the actual situation," he noted, emphasizing that several of the affected companies have limited exposure to the specific niche Anthropic has entered.
He added that while the arrival of major AI model developers in the cybersecurity domain was anticipated, overhauling enterprise-wide security infrastructure is complicated and requires deep operational expertise that cannot be acquired overnight.
Koby Sambursky, a partner at Glilot Capital, is less concerned about a significant collapse in the industry. “The expertise of cybersecurity firms is crucial,” he stated. “Large organizations won’t rely solely on a generic AI solution.”
Tomer Perry, CEO of InnoCom Group Aman, pointed out that the market has reacted almost reflexively to every new AI product recently. “The challenges in cybersecurity are persistent,” he remarked. “They are simply evolving to become more technologically driven.”
Nevertheless, industry analysts recognize that junior cybersecurity positions and startups focused on niche AI-based security solutions may face challenges if companies start leveraging general AI tools for similar internal tasks.
Anthropic’s entry into the market raises concerns regarding potential malicious use. While enhanced detection tools may pose hurdles for cybercriminals, such individuals might also seek to exploit similar AI capabilities to identify vulnerabilities themselves. Anthropic has stated that access to the tool will be restricted.
Interestingly, similar products from competitors have not provoked such extreme market reactions. OpenAI's Aardvark, launched back in October 2025, along with Microsoft's Security Copilot and Google's Security Command Center, entered the scene earlier without causing noticeable market disturbance.
“It is not merely another code-scanning tool that shapes enterprise security; it is the capability to manage risk comprehensively,” stated Itai Schwartz, co-founder and chief technology officer of the cybersecurity firm MIND. “While AI can troubleshoot issues, it does not replace the need for cybersecurity strategy, organizational accountability, or operational intricacies.”
Anthropic remains optimistic, anticipating that a considerable portion of the world’s code will be scanned by AI in the near future. For the cybersecurity sector, this prediction may signify not an end but a substantial transformation.

