
Anthropic Launches Security Tool Amid Declining Cybersecurity Stocks

Last week, cybersecurity software stocks declined following Anthropic's announcement of a new security feature for its Claude model.

According to a report from Bloomberg News on February 20, this downturn is indicative of a broader trend, where software shares have been plummeting due to concerns about competition from artificial intelligence companies.

The release of Anthropic’s tool, designed to “scan codebases for security vulnerabilities and recommend targeted software patches for human evaluation,” has notably contributed to the dip in stock prices for firms like Cloudflare, as highlighted in the report.

“There has been consistent selling in the software market, and today it’s security that is experiencing a mini flash crash due to a headline,” commented Dennis Dick, head trader at Triple D Trading.

“This market environment is alarming for investors, as changes are continuously pushing values down at the slightest hint of disruption. Caution is warranted; there was previously speculation that the software decline was exaggerated, yet the downward trend persists.”

Another wave of selloff in AI-related software occurred last summer, impacting companies like Salesforce and Workday, as reported by PYMNTS.



The Bloomberg report noted that significant selling activity has followed the introduction of new AI tools by companies such as Anthropic, Google and OpenAI. Investors are apprehensive that the advent of "vibe coding"—using AI to generate software code—could empower users to build their own applications, diminishing demand for traditional software solutions.

However, as noted by PYMNTS last year, research indicates that vibe coding is not likely to replace human software developers anytime soon.

Researchers found that agentic AI models such as Claude yield the best results when developers review their output at critical checkpoints, rather than running fully autonomous sessions.

“In the absence of these checkpoints, the models produced lengthier, less maintainable codebases and overlooked security considerations,” added PYMNTS.

These findings align with previous research on CoAct-1: Computer-Using Agents with Coding as Actions, which revealed that human interaction is crucial for guiding multi-agent software systems toward reliable outcomes.

“While vibe coding may indeed ignite a new economy, it will not do so through complete automation. Its true potential lies in transforming collaboration: Developers who manage, instruct, and refine AI will define the future of software development,” the report stated. “In doing so, coding may evolve from a focus on syntax to a more collaborative workflow where human supervision remains integral.”

