F5 Tightens Screws On Data Leakage In AI Application Delivery

Posted by Adrian Bridgwater, Senior Contributor


Cloud took time. Once cloud computing had laid down its initial gambit (hinged on a promise of lower capital expenditure via a shift to service-based computing and data storage), the IT industry went through a teething period while security, scalability and service suitability headaches were ironed out. The rise of artificial intelligence is going through a similar adolescence.

Market analysis from application delivery and security platform company F5 suggests that while two-thirds of organizations are now able to demonstrate a level of “moderate AI readiness”, most lack robust governance and cross-cloud management capabilities related to performance, integration and security. The company’s latest 2025 State of AI Application Strategy Report compiles feedback from 650 global IT leaders alongside additional research carried out with 150 AI strategists, all of whom represent organizations with at least $200 million in annual revenue.

What AI Firewall Crisis?

The problem, perhaps, stems from where AI is today, as an experimental prototyping technology used for random web-centric research, for chatbot experiences inside social media apps and for amusing picture generation entertainment. If anything, our non-corporate use of AI services might be argued to be driving a premature familiarity with extremely powerful technologies that really need to be locked down inside corporate control mechanisms when deployed in the workplace.

This suggestion is perhaps borne out by F5’s estimation that, today, 71% of organizations use AI to boost security, while only 31% have deployed AI firewalls. As AI becomes core to business strategy, readiness requires more than experimentation – it demands security, scalability and alignment. F5 also reports that the average organization uses three AI models, and that the use of multiple models typically correlates with deployment in more than one computing environment or location.

Fighting Fire With Fire

As a company, F5 has been working to align its platform architecturally with the new AI era for some time now. After specific updates in this direction at the start of this year, the company is now detailing new AI-driven capabilities in the F5 Application Delivery and Security Platform. In something of a case of fighting fire (AI risk) with fire (expanded platform capabilities, including the F5 AI Gateway service to protect against data leaks), the company is also offering new functionality in its F5 BIG-IP SSL Orchestrator, a technology that works to classify and defend encrypted data in motion and block unapproved AI use.

A piece of middleware, an AI gateway works as a filtering tool that inspects and validates the prompts passing between AI applications and the large language models that serve them. Overseeing all the interactions between an AI service and a language model, an AI gateway watches potentially chaotic data interchanges and lays down the law, bringing order in pursuit of efficient usage, secure operations and responsible AI.
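To make that middleware role concrete, the sketch below shows, in plain Python, how a gateway-style filter might sit between an application and an LLM, vetting traffic in both directions before anything is forwarded. The rules and function names here are hypothetical illustrations, not F5’s actual AI Gateway interfaces.

# Illustrative sketch only: a toy "AI gateway" filter in plain Python.
# None of these names come from F5's products; they are hypothetical.
import re

# Simple, customer-defined rules the gateway checks on every prompt and response.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b"),  # card-number-like strings
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),             # embedded credentials
]

def inspect_text(text: str) -> str:
    """Return 'block' if the text trips a rule, otherwise 'allow'."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(text):
            return "block"
    return "allow"

def gateway_call(prompt: str, llm_client) -> str:
    """Sit between the application and the LLM, vetting both directions."""
    if inspect_text(prompt) == "block":
        return "[request refused by gateway policy]"
    response = llm_client(prompt)            # forward to whichever model serves the app
    if inspect_text(response) == "block":    # responses are inspected too
        return "[response withheld by gateway policy]"
    return response

if __name__ == "__main__":
    fake_llm = lambda p: f"echo: {p}"        # stand-in model so the sketch runs end to end
    print(gateway_call("Summarise our Q3 results", fake_llm))
    print(gateway_call("My api_key = sk-12345, is it valid?", fake_llm))

The point of the sketch is the positioning: every prompt and every model response passes through one choke point where policy can be applied consistently, which is the job the gateway is doing at far greater scale.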

Underlining the progress made in the company’s Application Delivery and Security Platform this year, François Locoh-Donou, F5 president and CEO, spoke to press in London this week to explain where his firm’s vision for secure operations across new AI landscapes really manifests itself. The F5 Application Delivery and Security Platform is now being more deeply engineered so that CIOs, CISOs, AI Ops users and all the engineers across modern DevOps teams working in hybrid multicloud infrastructures can manage the key infrastructure, data movement and security challenges they face.

F5 CEO: Why IT Complexity Happened

“What is really happening in the world of application delivery right now is that the organizations we work with (which are primarily large enterprises and government entities) have found system security becoming a lot more complex over the last couple of decades,” said Locoh-Donou. “That’s in large part due to the fact that companies have their cloud and datacenter estates established over a number of different service providers, so they have more than one element of infrastructure to manage. Combine this truth with the fact that modern applications are composed of multiple APIs and microservices… and you can understand why connecting one element of the total topography in any organization is now more difficult. Companies have traditionally used ‘point solutions’ to address each problem over the years and these multifaceted products themselves create a ‘ball of fire’ in terms of total systems management and application delivery.”

Instead of taking an incremental step to solve these challenges, Locoh-Donou proposes that it makes more sense to take a single platform approach to deliver and secure applications on-premises and in public cloud and out to the edge. He insists that organizations should not have to choose a different application delivery infrastructure to drive successful applications on different form factors.

Against all that backdrop, when we also add AI into the mix, things get even more challenging because these applications are inherently more distributed (they typically call on data models from multiple sources, and agentic AI adds even more dynamic behavior to that vortex as it makes agent-to-agent calls), so there is a whole new raft of security considerations, from prompt injection to AI hallucination controls and so on.

“My view on this is that AI is being deployed so rapidly that we really should have looked more closely at what happened in the first decade of cloud computing. It was only back in November 2022 that the new AI revolution started with the arrival of ChatGPT, so the speed of progression now is exponentially faster,” said Locoh-Donou. “I believe that generative AI might well be the most sensitive vulnerability that organizations have to manage now. Using an AI gateway to route traffic to the right LLM and apply policy to the AI engine (organizations using this process are able to manage which cost per token they are prepared to work with), a business can start to understand that working with AI means moving a lot of data around… so being able to take a total platform approach to managing these processes becomes fundamental.”
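As a rough illustration of the routing-plus-policy idea Locoh-Donou describes, a gateway might simply pick the cheapest model that satisfies a capability requirement and a cost-per-token ceiling. The model names and prices below are invented for the sketch; they are not F5 configuration.

# Hypothetical sketch of cost-aware model routing; names and prices are invented.
from dataclasses import dataclass

@dataclass
class ModelRoute:
    name: str
    cost_per_1k_tokens: float  # USD, illustrative
    tier: str                  # "basic" or "advanced"

ROUTES = [
    ModelRoute("small-local-model", 0.0002, "basic"),
    ModelRoute("mid-hosted-model", 0.002, "basic"),
    ModelRoute("large-frontier-model", 0.03, "advanced"),
]

def choose_route(required_tier: str, max_cost_per_1k: float) -> ModelRoute:
    """Pick the cheapest model that meets the capability tier and cost ceiling."""
    candidates = [
        r for r in ROUTES
        if (required_tier == "basic" or r.tier == "advanced")
        and r.cost_per_1k_tokens <= max_cost_per_1k
    ]
    if not candidates:
        raise ValueError("no model satisfies the policy; escalate or queue the request")
    return min(candidates, key=lambda r: r.cost_per_1k_tokens)

print(choose_route("basic", max_cost_per_1k=0.01).name)     # cheapest eligible basic model
print(choose_route("advanced", max_cost_per_1k=0.05).name)  # large-frontier-model

Trivial as it is, the example shows why the CEO frames this as a data-movement problem: the routing decision, the policy check and the cost accounting all have to happen on every request, in the path of the traffic itself.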

Alert Alarm Fatigue

For F5, Locoh-Donou says his team has been on a journey to infuse AI into the platform itself, providing AI for application delivery controller technologies. This is all about making it far easier (through natural language interfaces) for customers to securely deliver the apps they need. The company this year acquired Fletch.ai to help administrators get over the problem of “alarm fatigue”: when there are simply too many alerts, the technology points administrators to the ones they need to be aware of, namely those the F5 platform can automatically triage.

Locoh-Donou also notes that as businesses adopt AI and hybrid cloud technologies, sensitive data often moves across encrypted traffic and unapproved AI tools, creating security blind spots. Traditional security methods struggle to detect or prevent data leaks from these complex environments. He says that F5 answers this challenge with tools that allow organizations to achieve key compliance and security outcomes, such as the ability to detect, classify and stop data leaks in encrypted and AI-driven traffic in real time. The platform also tackles risks from unauthorized AI use (also known as shadow AI) and sensitive data exposure, with controls that apply consistent policies across applications, APIs and AI services to maintain security and compliance.

Intrinsic In-transit Data

Data leakage detection and prevention capabilities are coming to F5 AI Gateway this quarter. The service will be powered by technology that F5 acquired from LeakSignal, a data governance and protection specialist recognized by the National Institute of Standards and Technology for data classification, remediation and AI-driven policy enforcement on in-transit data. The new functionality examines AI prompts and responses to spot sensitive material such as personal information, then applies customer-defined policies to redact, block or log it.
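A minimal sketch of that redact/block/log pattern, again with invented classifiers and policy names rather than the actual LeakSignal or F5 interfaces, might look like this:

# Hypothetical data-leak policy sketch; patterns and actions are illustrative only.
import re

# Map each detected data class to the customer-chosen action.
POLICY = {
    "email": "redact",
    "ssn": "block",
    "phone": "log",
}

CLASSIFIERS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[- ]\d{3}[- ]\d{4}\b"),
}

def apply_policy(text: str) -> tuple[str, list[str]]:
    """Return the (possibly redacted) text plus a list of audit findings."""
    findings = []
    for data_class, pattern in CLASSIFIERS.items():
        if not pattern.search(text):
            continue
        action = POLICY[data_class]
        if action == "block":
            return "[message blocked: sensitive data detected]", [f"blocked:{data_class}"]
        if action == "redact":
            text = pattern.sub(f"[{data_class} redacted]", text)
            findings.append(f"redacted:{data_class}")
        else:  # "log": allow the text through, but record the detection
            findings.append(f"logged:{data_class}")
    return text, findings

clean, audit = apply_policy("Contact jane.doe@example.com or 555-123-4567")
print(clean)   # email redacted, phone number allowed through but logged
print(audit)

The real service classifies far more than a handful of regex-matched fields, but the shape is the same: classify what is in the prompt or response, look up the policy for that class, then redact, block or simply record it before the data leaves the network.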

With the integration and ongoing development of this AI data protection technology, F5 says it expands its ability to inspect in-transit data, applying policies to secure sensitive information before it leaves the network. This addition is promised to simplify compliance and reduce risk across hybrid and multicloud deployments.

Competitive Analysis: Application & API Delivery

Hand in hand with application delivery and security comes (from any vendor worth its salt) an equally exhaustive approach to application programming interface management and security, alongside AI gateway functionality. F5 shares space in this sector with firms including Kong, Cloudflare, Akamai and, among the major cloud hyperscalers, primarily Google Cloud, although Microsoft Azure and AWS also have fingers in the pie. Each firm has its own competencies and pricing schedules, but the more obvious differentiation manifests in how far each vendor can extend into the edge computing space and, crucially, make use of AI accelerators and intelligence boosters.

Pure-play application delivery controller competition again comes from AWS, this time alongside Barracuda, HAProxy, NetScaler, A10 Networks, Radware and (back to the hyperscalers again) Microsoft Azure. All three hyperscalers are known for their capabilities in AI inference routing, the ability to make sure an application’s resource requests are correctly matched to the requirements of any given deployment. That is not a replacement for an application delivery controller, but it is certainly another ingredient in this market’s mix.

The size of the major cloud players will naturally sit at the back of F5’s mind as it extends its platform vision; the big service providers can bundle alternatives to at least some of what F5 provides as a standalone service (even though it is a platform in and of itself), and some IT managers will inevitably find that an attractive option.

If we mix data in-transit with real-time data and the need to bring controls to every tier of application execution, there’s clearly plenty of surface area to target. What matters now is whether the “safe and securely protected AI” space grows as fast as the wider AI landscape itself.


