A report from the Financial Times suggests that two recent outages at Amazon Web Services (AWS) were caused by engineers allowing the company's Kiro AI coding tool to implement changes autonomously.
According to a senior AWS insider, at least two production outages in recent months were triggered by the AI tool acting independently. "The engineers allowed the AI [agent] to resolve issues without any intervention," the source said. "While the outages were minor, they were entirely foreseeable."
Amazon, however, pushed back in a statement: "This brief incident was the result of user (AWS employee) error—specifically misconfigured access controls—rather than AI. The service disruption was minimal, affecting only one service (AWS Cost Explorer, which helps customers visualize their AWS costs and usage) in just one of our two Regions in Mainland China.

“This incident did not impact our core services—such as compute, storage, and database operations. After these events, we instituted several new safety measures, including mandatory peer review for production access.”
In similar statements to the FT, Amazon claimed that its Kiro AI tool "requests authorization before taking any action," adding that the engineer involved in the incident had "broader permissions than expected." The company also emphasized that it was merely a "coincidence" that AI tools were involved.
According to Amazon, then, user error, not the AI, was at fault. But as AI coding tools become increasingly prevalent among engineers worldwide, this is unlikely to be the last instance of autonomous AI actions causing disruptions, or of companies insisting they remain in control of the technology.
While the application of agentic AI in coding appears promising, establishing clear boundaries for its use seems to be an ongoing challenge, even for a tech giant like AWS.