As AI chatbots become increasingly integrated into the workplace, a hidden issue emerges: every question typed by employees is logged and stored by these AI systems. Unfortunately, many organizations lack clear guidelines on whether or when to delete this stored data.
Multiplied across thousands of employees at countless companies, the potential for serious security vulnerabilities becomes apparent.
This insight is central to a recent analysis from Brooks Kushman, a law firm specializing in technology and intellectual property law. The firm highlights two issues that pose pressing security threats in corporate AI: the indefinite retention of data from AI interactions and insufficient controls over who, or what, can access AI systems.
The data retention issue is more widespread than many executives realize. Employees frequently upload sensitive information while using AI tools, often unaware that these files are being stored. Such data can encompass client information, financial details, legal strategies, or trade secrets. In some cases, AI platforms may use these interactions to enhance their models unless organizations actively opt out of this feature.
Consequently, Brooks Kushman points out that this creates an expanding attack surface. The more data a company retains, the greater the risk of theft, prompting regulators to demand clarity on how organizations mitigate this exposure.
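What closing that gap looks like in practice will vary by organization, but the starting point is making retention explicit rather than leaving it to platform defaults. As a purely hypothetical sketch (the field names, values, and settings below are illustrative assumptions, not any vendor's actual configuration or the firm's recommended policy), an internal retention policy for an enterprise AI deployment might be captured and enforced along these lines:

```python
# Hypothetical data-retention policy for an enterprise AI deployment.
# Field names and values are illustrative assumptions, not any vendor's real settings.
from datetime import datetime, timedelta, timezone

RETENTION_POLICY = {
    "prompt_logs_days": 30,             # chat prompts and responses purged after 30 days
    "uploaded_files_days": 7,           # uploaded documents purged after 7 days
    "allow_training_on_inputs": False,  # opt out of vendor model training on company data
}

def is_expired(stored_at: datetime, retention_days: int) -> bool:
    """Return True if a stored record has outlived its retention window."""
    return datetime.now(timezone.utc) - stored_at > timedelta(days=retention_days)

# Example: a prompt logged 45 days ago should already have been deleted.
old_record = datetime.now(timezone.utc) - timedelta(days=45)
print(is_expired(old_record, RETENTION_POLICY["prompt_logs_days"]))  # True
```

The point of writing the policy down in a machine-checkable form is that it can then be audited and enforced automatically, which is the kind of demonstrable governance regulators are increasingly asking for.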
The second pressing issue is access: determining who, or what, is allowed to use an AI system and what actions it can take. Traditional corporate software typically restricts users to specific tools and datasets. With AI, however, a single user with excessive permissions can extract data from across an organization, generate new content from it, and disseminate that output widely.
The situation becomes even more complex when the “user” is an AI agent that can operate autonomously, make decisions, and interact with other systems. According to Brooks Kushman, these agents should be regarded with the same caution as privileged employees.
“AI security is no longer solely about protecting models. It encompasses data control, access definition, evidence preservation, and ensuring accountability across complex, evolving systems,” the firm notes.
To tackle the access issue, Brooks Kushman suggests implementing Role-Based Access Control (RBAC). This framework precisely delineates what each individual and AI agent is authorized to do within a company’s systems. In an RBAC setup, a developer would have different permissions compared to a manager, while an AI agent engaged in automation would be confined to only essential systems.
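In practice, an RBAC policy can be as simple as an explicit mapping from roles to permitted actions, with AI agents treated as their own narrowly scoped role. The sketch below is a minimal, hypothetical illustration; the role names and permission strings are assumptions made for the example, not part of Brooks Kushman's recommendations or any specific product's API.

```python
# Minimal sketch of role-based access control (RBAC) covering human users and AI agents.
# Roles, permissions, and the agent role are illustrative assumptions only.

ROLE_PERMISSIONS = {
    "developer":        {"read:source", "write:source", "read:test-data"},
    "manager":          {"read:reports", "approve:requests"},
    "automation_agent": {"read:tickets", "write:ticket-status"},  # AI agent limited to essential systems
}

def is_allowed(role: str, permission: str) -> bool:
    """Return True only if the role explicitly grants the requested permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# The ticket-automation agent can update ticket status but cannot touch source code or reports.
assert is_allowed("automation_agent", "write:ticket-status")
assert not is_allowed("automation_agent", "read:source")
assert not is_allowed("automation_agent", "read:reports")
```

The design choice that matters here is default denial: a role gets nothing unless a permission is explicitly granted, which keeps an over-privileged user or autonomous agent from quietly accumulating reach across the organization.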
Legal risks are also significant. A recent ruling in United States v. Heppner determined that conversations with publicly available AI tools are not protected by attorney-client privilege. If a lawyer or executive runs sensitive legal analysis through a consumer-grade AI product, that conversation could be disclosed in court. The ruling pressures companies to use enterprise-grade AI platforms that offer formal security assurances rather than relying on free consumer tools.
Looking ahead, the scrutiny will likely intensify. The EU AI Act, numerous U.S. state privacy laws, and increased federal regulatory oversight all push towards stronger governance structures. Companies must demonstrate that they have real governance measures for their AI systems, moving beyond mere intentions. Brooks Kushman emphasizes that organizations proactive in tightening data retention policies, establishing solid access frameworks, and training employees on what information can be shared with AI will significantly improve their security posture compared to those who delay action.