
Hackers Use AI Tools for Faster Attacks, Not New Tactics


AI Augmentation Linked to Efficiency and Scale; Core Tactics Remain Unchanged

Researchers state AI tools are enabling hackers to identify and exploit vulnerabilities more swiftly, enhancing everything from initial access to data exfiltration. (Image: Shutterstock)

Artificial intelligence (AI) tools are helping hackers speed up elements of their attacks, but the fundamentals of their operations remain largely unchanged, cybersecurity experts say.


Recent analyses of 2025 attack trends from multiple cybersecurity firms indicate that although attackers are integrating generative AI (GenAI) tools into their methods, the much-publicized technology has not led to any significant innovations in offensive strategy. “GenAI enhances speed, volume, and noise in the threat landscape—so far, that’s its main contribution,” states a new report from cybersecurity firm Sophos.

So far, AI-enhanced hacking appears to amount to an acceleration of existing tactics rather than new ones. Even so, the acceleration is measurable: “the most active AI-enabled adversaries contributed to an 89% increase in attacks year-over-year,” according to CrowdStrike.

The use of GenAI tools by criminals and nation-states, as well as the potential for these enhanced capabilities to overwhelm existing defenses, are subjects of continuous scrutiny.

Experts have noted that GenAI tools are enabling hackers to reach their objectives more efficiently. While state-sponsored attackers may operate over extended periods, many cybercriminal organizations employ rapid tactics to achieve quick results.

Where data theft is the goal, these attackers are completing their missions faster than ever. According to cybersecurity firm ReliaQuest, the average time to data exfiltration dropped dramatically, from four and a half hours in 2024 to just six minutes in 2025. The fastest breakout, meaning the transition from initial compromise to lateral movement within a victim’s network, was recorded at just four minutes last year.
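The scale of that change is easy to quantify. A quick back-of-the-envelope calculation, using only the figures ReliaQuest is quoted as reporting above:

```python
# Compare the reported average time-to-exfiltration year over year.
exfil_2024_min = 4.5 * 60  # 2024 average: four and a half hours, in minutes
exfil_2025_min = 6         # 2025 average: six minutes

speedup = exfil_2024_min / exfil_2025_min
print(f"Average exfiltration is ~{speedup:.0f}x faster in 2025 than in 2024")
```

In other words, the reported averages imply roughly a 45-fold speedup in a single year.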

When comparing 2024 to 2025, the average breakout time fell nearly a third—from 48 minutes to 34 minutes.

CrowdStrike’s analysis revealed that the average breakout time observed in its investigations decreased from 48 minutes in 2024 to 29 minutes in 2025, a reduction of roughly 40%. In one incident, responders recorded a breakout time as low as 27 seconds. “This speedup reflects adversaries’ increasing dependence on trusted credentials, direct exploitation of unmanaged exposed assets, and AI-enabled efficiencies that minimize the time needed to assess environments and target high-value assets,” said CrowdStrike.

Emerging evidence suggests that AI is also empowering less experienced attackers to operate at greater scales.

On Friday, Amazon reported a campaign that ran from January 11 to Wednesday, leading to the compromise of over 600 FortiGate next-generation firewalls across nearly 60 countries. Researchers noted that attackers employed “multiple commercial GenAI services”—not from Amazon—to streamline and scale familiar attack methods despite their limited technical expertise.

This included access to the victim’s Active Directory environment and extraction of “complete credential databases,” as well as targeting organizational backup infrastructures, which may have set the stage for a potential ransomware assault.

With the support of AI, the attackers “achieved an operational scale that would have previously necessitated a significantly larger and more skilled team,” according to Amazon.

Notably, the attackers did not exploit any zero-day or known vulnerabilities. They instead gained remote access because victims left exposed management ports open and failed to implement multifactor authentication, overlooking “fundamental security weaknesses that an unskilled actor exploited at scale,” Amazon noted, attributing the attacks to a Russian hacker or group pursuing financial gain rather than a state-sponsored entity.
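The exposure Amazon describes, management interfaces reachable from the internet, is something defenders can check for on their own infrastructure. Below is a minimal sketch of such a check; the host addresses and ports are hypothetical placeholders (RFC 5737 test addresses), and a real audit should use an authorized scanner against assets you own:

```python
import socket

# Hypothetical targets: replace with hosts and admin ports you are
# authorized to audit (e.g. HTTPS management interfaces on 443/8443).
TARGETS = [("192.0.2.10", 443), ("192.0.2.10", 8443)]

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # includes timeouts and connection refusals
        return False

for host, port in TARGETS:
    status = "EXPOSED" if port_open(host, port) else "closed/filtered"
    print(f"{host}:{port} -> {status}")
```

A reachable management port is only half of the weakness Amazon cites; the other half, missing multifactor authentication, has to be verified in the device’s configuration rather than from the network.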

Such attacks are becoming increasingly prevalent, often conducted by criminals acting as initial access brokers, accumulating credentials for resale, as noted by Allan Liska, a threat intelligence analyst at Recorded Future, in a post on the social platform Bluesky.

“On one hand, this is troubling,” he remarked regarding the campaign discovered by Amazon. “On the other, I’ve observed experienced IAB teams executing similar campaigns in this timeframe. The distinction comes from experience and team effort,” he added, explaining that these tools enable novice operators to become operational more swiftly and “serve as blueprints for anyone looking to become an IAB.”

While the volume of AI-enhanced attacks is on the rise, experts maintain that defenders honing their foundational cybersecurity practices can successfully fend off many types of attacks, even those utilizing AI.

In the recent campaign targeting FortiGate devices, for instance, Amazon mentioned that “when this actor encountered fortified environments or advanced defensive measures, they simply opted for less secure targets instead of persisting, illustrating that their advantage lies in AI-driven efficiency and scale rather than profound technical skills.”

Even when deeper expertise is available, the hacking tools and tactics employed are typically well-known ones: multiple leading AI providers have reported that nation-state entities are experimenting with their GenAI platforms to orchestrate the various phases of hacking operations (see: State Hackers Utilize Google AI as an Attack Acceleration Tool).

In the threat reports from AI vendors, “the specifics may be lacking, but we should assume that familiar tools like Mimikatz for password dumping and BloodHound for Active Directory traversal are in play—these classic tools should ideally be detectable and blockable, yet they are now being utilized in a more automated fashion,” explained Candid Wüest, a security advocate at Xorlab.
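Wüest’s point that these classic tools “should ideally be detectable and blockable” can be illustrated with a toy indicator match over process-creation log lines. The indicator strings and log format below are illustrative assumptions, not a production detection rule, which would live in an EDR or SIEM:

```python
# Toy example: flag process-creation log lines mentioning well-known
# credential-theft or AD-enumeration tooling. Illustrative only.
INDICATORS = ("mimikatz", "sekurlsa::logonpasswords", "sharphound", "bloodhound")

def flag_suspicious(log_lines):
    """Return the lines whose lowercase form contains a known indicator."""
    return [line for line in log_lines
            if any(ind in line.lower() for ind in INDICATORS)]

logs = [
    "4688 New Process: C:\\Windows\\System32\\notepad.exe",
    "4688 New Process: C:\\Temp\\mimikatz.exe sekurlsa::logonpasswords",
    "4688 New Process: C:\\Users\\svc\\SharpHound.exe -c All",
]
for hit in flag_suspicious(logs):
    print("ALERT:", hit)
```

The catch, as the automation trend suggests, is less whether such signatures exist than whether defenders can act on the alerts before a breakout measured in minutes, not hours.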

While orchestration may enhance speed and efficacy, it is not foolproof, especially against robust defenses. “Ultimately, it’s all about adhering to best practices and maintaining technical depth,” Wüest reiterated.

However, organizations that neglect the basics may find themselves targeted all the more swiftly: “not sometime later, but sooner, they will identify and exploit your vulnerabilities,” he cautioned.

The ongoing assessments of 2025 attack trends by security firms reflect the current landscape. “While it seems inevitable that GenAI will eventually evolve into fully autonomous attacks, potentially generating new attack vectors and malware in the process, we have not yet reached that point,” Sophos noted (see: Crafting Ransomware with AI for Profit? Don’t Bet on It).
