The unsanctioned use of AI tools by developers is a serious issue.

Alex de Minaur of Australia casts a shadow as he serves to Arthur Cazaux of France during their … More
Shadow AI is illuminating. In some ways, the use of unregulated artificial intelligence services that fail to align with an organization’s IT policies and wider country-specific data governance controls might be seen as a positive i.e. it’s a case of developers and data scientists looking for new innovations to bring hitherto unexplored new efficiencies to a business.
But mostly, unsurprisingly, shadow AI (like most forms of shadow technology and bring your own device activity) is viewed as a negative, an infringement and a risk.
AI Shadow Breeding Ground
The problem today is that AI is essentially still so nascent, so embryonic and only really starting to enjoy its first wave of implementation. With many users’ exposure to AI relegated to seeing amusing image constructs built by ChatGPT and other tools (think about human plastic toy blister packs last week, cats on diving boards this week and something zanier next week for sure), we’ve yet to get to a point where widespread enterprise use of AI tools has become the norm. Although that time is arguably very soon, the current state of AI development means that some activity is being driven undercover.
The unsanctioned use of AI tools by developers is becoming a serious issue as application development continues to evolve at a rapid pace. Scott McKinnon, CSO for UK&I at Palo Alto Networks says that this means building modern, cloud‑native applications isn’t just about writing code anymore, it’s about realizing that we’re now in a delivery model that’s operating in “continuous beta mode”, such is the pressure to roll out new enterprise software services today.
“The knock-on effect is that developers are under intense pressure to be fast and reduce time to market. With this in mind, it’s not surprising that many developers are using AI tools in an effort to increase efficiency and deliver on these challenging expectations,” lamented McKinnon. “Our research suggests that enterprise generative AI traffic exploded by over 890% in 2024 – and with organisations now starting to actually use these apps – a proportion of them can be classed as high risk. Meanwhile, data loss prevention incidents tied to generative AI have more than doubled, which is a clear red flag for governance failures.
Go-Around Guardrails
Compound all these realities and it’s easy to understand why software developers might be tempted to seek ways around the organization’s AI guardrail policies and controls. In practice, this sees them plugging into services from open source large language models outside of approved platforms, using AI to generate code without oversight, or skipping data governance policies to speed up implementation. The upshot is the potential for intellectual property to be exposed through compliance slips that also compromise system security.
“It all points to one thing: if developers are to balance speed with security, they must adopt a new operational model. It must be one where clear, enforceable AI governance and oversight are embedded into the continuous delivery pipeline, not bolted on afterwards,” said McKinnon “When developers use AI tools outside of sanctioned channels, one of the most pressing concerns is supply chain integrity. When developers pull in untested or unvetted AI components, they’re introducing opaque dependencies that often carry hidden vulnerabilities.”
What are opaque software dependencies?
It’s a scary enough sounding term in and of itself, opaque software dependencies are indeed bad news. Software dependencies are essential component parts of smaller
data services, software libraries devoted to establishing database connections, a software framework that controls a user interface or a smaller module that forms a wider external third-party application in its entirety. Useful software dependencies make their DNA easy to see and can be viewed with translucent clarity; opaque software dependencies are functional, but cloudy or muddied in terms of their ability to showcase their progeny and component parts. In technical terms, opaque software application dependencies mean the developer can not “assign” them (and forge a connection to them) using a public application programming interface.
According to McKinnon, another major problem is the potential for prompt injection attacks, where bad actors manipulate the AI’s inputs to force it into behaving in unintended and dangerous ways. These types of vulnerabilities are difficult to detect and can undermine the trust and safety of AI-driven applications. When these practices go unchecked, they create new attack surfaces and increase the overall risk of cyber incidents. Organizations must get ahead of this by securing their AI development environments, vetting tools rigorously and ensuring that developers are empowered to work effectively.
The Road To Platformization
“To effectively address the risks posed by unsanctioned AI use, organisations need to move beyond fragmented tools and processes toward a unified platform approach. This means consolidating AI governance, system controls and developer workflows into a single, integrated system that offers real-time visibility. Without this, organizations struggle to keep pace with the speed and scale of modern development environments, leaving gaps that adversaries can exploit,” said McKinnon.
His vision of platformization (and the wider world of platform engineering) is argued to enable organizations to enforce consistent policies across all AI usage, detect risky behaviors early and provide developers with safe, approved AI capabilities within their existing workflows.
“This reduces friction for software developers, allowing them to work quickly without compromising on security or compliance. Instead of juggling multiple disjointed tools, organizations gain a centralized view of AI activity, making it easier to monitor, audit and respond to threats. Ultimately, a platform approach is about balance, providing the safeguards and controls necessary to reduce risk while maintaining the agility and innovation developers need,” concluded Palo Alto Networks’ McKinnon.
At its worst, shadow AI can lead to so-called model poisoning (also known as data poisoning), a scenario which application and API reliability company Cloudflare defines as when an attacker manipulates the outputs of an AI or machine learning model by changing its training data. An AI model poisoner’s goal is to force the AI model itself to produce biased or dangerous results when it starts to process is inference calculations that will ultimately provide us with AI brainpower.
According to Mitchell Johnson, chief product office of software supply chain management specialist Sonatype, “Shadow AI includes any AI application or tool that operates outside an organization’s IT or governance frameworks. Think shadow IT but with a lot more potential (and risk). It’s the digital equivalent of prospectors staking their claims in the gold rush, cutting through red tape to strike it rich in efficiency and innovation. Examples include employees using ChatGPT to draft proposals, using new AI-powered code assistants, building machine learning models on personal accounts, or automating tedious tasks with unofficial scripts.”
Johnson says it rears its head now, increasingly, due to the popularization of remote working where teams can operate outside traditional oversight and where firms have policy gaps, meaning that an organization lacks comprehensive AI governance, which leaves room for improvization.
From Out Of The Shadows
There is clearly a network system health issue associated with shadow AI; after all, it’s the first concern brought up by tech industry commentators who want to warn us about shadow IT of any kind. There are wider implications too in terms of some IT teams gaining what might be perceived to be an unfair advantage, or some developer teams introducing misplaced AI that leads to bias and hallucinations.
To borrow a meteorological trusim, shadows are typically only good news in a heatwave… and that usually means there’s a fair amount of humidity around with the potential for storms later.