Artificial intelligence (AI) is rapidly emerging as one of the most transformative technologies of our era.
According to UNCTAD, the United Nations’ trade and development agency, the global AI market is projected to reach $4.8 trillion by 2033, roughly a 25-fold increase within a decade.
ChatGPT, which popularized generative AI and large language models (LLMs), reached 100 million active users within two months of its November 2022 launch, making it the fastest-growing consumer application to date.
Nevertheless, the rapid adoption of generative AI has predominantly occurred outside of corporate environments, with IT leaders and businesses striving to catch up.
The rise of free-to-use LLMs and generative tools has created a distinct layer of technology, largely beyond the oversight of corporate IT. This phenomenon, known as shadow AI, has proliferated even as vendors of enterprise software have integrated AI features into their products.
Although shadow AI echoes the shadow IT challenges of the past two decades, its unchecked use raises a host of significant new problems.
These challenges range from privacy breaches and loss of intellectual property to security vulnerabilities and even misguided decisions made by unauthorized AI applications.
Hiding in Plain Sight
Industry analysts at Gartner report that 69% of organizations surveyed suspect or have confirmed that staff are using unauthorized AI tools. The firm further predicts that 40% of businesses will face “security or compliance incidents” caused by these tools.
Security specialists are increasingly concerned about the implications of unauthorized AI, especially as its usage expands from simple chatbots to critical applications like code development or “agentic” systems that function with minimal human oversight.
Experts warn that current IT and data security protocols are ill-equipped to detect or block the use of unsanctioned AI tools.
James Gillies, head of cyber security at Logicalis UKI, remarks, “AI has been available for about three years now, and people are becoming accustomed to identifying which platforms yield satisfactory results.”
Gillies adds, “It’s become ‘easy’ to obtain quick answers to a variety of inquiries, but the initial deliberation regarding whether to ask that question often gets overlooked or, worse, ignored.”
This behavior results in employees pasting confidential and sensitive data into AI systems, often without considering how that data will be retained, reused, or shared.
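One pragmatic control at this point is to screen prompts before they leave the network. The following is a minimal sketch of such a pre-submission filter; the redact_prompt helper and the patterns it applies are illustrative assumptions, not a production data loss prevention (DLP) engine.

```python
import re

# Illustrative patterns only; a real deployment would use a proper DLP engine.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "api_key": re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*\S+"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace likely-sensitive substrings before the prompt leaves the network."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{label.upper()}]", prompt)
    return prompt

# The email address and the key are redacted; the rest passes through untouched.
print(redact_prompt("Summarise this: contact jane.doe@example.com, api_key=sk-123"))
```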
Unless CIOs and CISOs adopt stringent controls, employees will likely continue to embrace shadow AI. At the same time, organizations must strike a careful balance between managing AI's risks and stifling innovation.
“Unlike traditional shadow IT involving unauthorized applications or cloud services, shadow AI poses distinctive risks due to how it operates and interacts with sensitive data,” cautions Findlay Whitelaw, a cybersecurity strategist at Exabeam.
He explains, “Information provided to AI applications may be preserved, reused, or transferred across jurisdictions, often eluding organizational surveillance or control. The widespread consequences of shadow AI further complicate the issue.”
Inside the Machine
For CIOs and data security professionals, addressing the challenges posed by shadow AI is a pressing priority.
Jon Collins, field CTO at GigaOm, emphasizes that “shadow AI represents a more significant concern than shadow IT because of its implications for identity, access, and infrastructure.”
He explains, “Shadow IT came with its own set of risks, such as accessing corporate data on unsecured personal devices or using services like Dropbox for file sharing, which could lead to data leaks. Each had its operational difficulties and represented ongoing challenges.”
“While shadow AI shares similar risks when limited to LLMs or chatbots, the unique risks arise from autonomous systems that operate independently and at scale. This is not merely a single user creating problems, but rather an individual unintentionally deploying a multitude of uncontrolled, autonomous operators into an environment that isn’t engineered to defend against their presence,” he warns.
Collins further points out that many organizational IT systems have vulnerabilities, including misconfigured systems and poorly managed secrets, which these agents can navigate and exploit.
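To make “poorly managed secrets” concrete: a hardcoded credential sitting in a repository is exactly the kind of foothold an autonomous agent could stumble onto. The sketch below is a toy secrets scan under that assumption; the two patterns are illustrative, and real scanners such as gitleaks or trufflehog ship far larger rule sets.

```python
import re
from pathlib import Path

# Two illustrative rules; dedicated scanners use hundreds.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID prefix
    re.compile(r"(?i)(password|token|secret)\s*=\s*['\"][^'\"]+['\"]"),
]

def scan_for_secrets(root: str):
    """Yield (path, line number) pairs where a credential-like string appears."""
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), 1):
            if any(p.search(line) for p in SECRET_PATTERNS):
                yield path, lineno

for hit in scan_for_secrets("."):
    print("possible secret at %s:%d" % hit)
```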
He notes, “The risks associated with shadow AI echo those brought by any uncontrolled technology. For CIOs, this challenge is familiar. Incorporating consumer technology into the workplace has always required balancing control with usability, but AI amplifies this dilemma due to its intricacy and the human tendency to place trust in machines.”
Issues of Trust
“Shadow AI frequently capitalizes on what is often referred to as ‘machine trust,’ the presumption that algorithms are inherently precise and trustworthy,” explains Whitelaw from Exabeam. “Despite this inherent trust, AI agents are only as accurate as the data they are trained on, potentially carrying biases, errors, or vulnerabilities that can be exploited—intentionally or inadvertently.”
Even users who trust personal AI tools, and may prefer them over corporate options, often lack insight into the underlying algorithms and the data used to train the models.
This lack of transparency can result in unforeseen negative outcomes, for which organizations may ultimately bear responsibility. Effectively managing shadow AI requires not only monitoring tools but also mapping AI usage against data and business risks, alongside robust workforce education.
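That monitoring can start simply. The sketch below flags traffic to known generative AI endpoints in proxy logs; the KNOWN_AI_DOMAINS inventory, the SANCTIONED set, and the log format are assumptions for illustration, since in practice the domain list would come from a CASB or threat-intelligence feed.

```python
from collections import Counter

# Assumed inventory of popular generative AI services; not exhaustive.
KNOWN_AI_DOMAINS = {"chat.openai.com", "api.openai.com", "claude.ai", "gemini.google.com"}
SANCTIONED = {"api.openai.com"}  # hypothetical: the one service IT has approved

def flag_shadow_ai(proxy_log_lines):
    """Count per-user requests to unsanctioned AI services.

    Each log line is assumed to look like: '<user> <destination-host>'.
    """
    hits = Counter()
    for line in proxy_log_lines:
        user, host = line.split()
        if host in KNOWN_AI_DOMAINS and host not in SANCTIONED:
            hits[(user, host)] += 1
    return hits

log = ["alice chat.openai.com", "bob api.openai.com", "alice claude.ai"]
for (user, host), count in flag_shadow_ai(log).items():
    print(f"{user} -> {host}: {count} request(s)")
```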
Ron Minis, a senior security researcher at JFrog, underscores that “to gain better visibility into the models they utilize, organizations must inspect how AI models are being employed and ensure they do not introduce malicious models into their infrastructure.”
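One concrete check along the lines Minis describes: pickle-based model files can execute arbitrary code when loaded, whereas the safetensors format carries only tensor data and a JSON header. The heuristic below, with the hypothetical helper is_risky_model_file, is a coarse sketch of that gate, not a substitute for a real model scanner.

```python
import zipfile
from pathlib import Path

def is_risky_model_file(path: str) -> bool:
    """Coarse heuristic: flag model artifacts that may contain pickled code."""
    p = Path(path)
    if p.suffix == ".safetensors":
        return False  # tensor-only format, no embedded code objects
    with open(p, "rb") as f:
        magic = f.read(2)
    # Raw pickle streams (protocol 2+) begin with the PROTO opcode 0x80;
    # PyTorch .pt/.pth checkpoints are zip archives that wrap pickles.
    return magic.startswith(b"\x80") or zipfile.is_zipfile(p)
```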
Minis warns, however, that banning AI outright is not a viable solution. “If companies attempt to prohibit AI usage within their organization, employees will simply seek out alternative methods to utilize AI without disclosing them,” he cautions.
To address shadow AI effectively, enterprises should provide employees with access to AI tools that enhance productivity, integrate smoothly with existing systems, and are user-friendly.
Similar to traditional shadow IT, shadow AI will continue to thrive if sanctioned options are perceived as limiting, overly restrictive, or unappealing.