What Is AI Agent Washing And Why Is It A Risk To Businesses?

Posted by Bernard Marr, Contributor


You’ve heard of greenwashing and AI-washing? Well, now it seems the hype-merchants and bandwagon-jumpers with technology to sell have come up with a new (and perhaps inevitable) scam.

Analysts at Gartner report that unscrupulous vendors are increasingly engaging in “agent washing”: of the “thousands” of supposedly agentic AI products they examined, only about 130 truly lived up to the claim.

Agents are widely touted as the “next generation” of AI tools. As well as processing information and generating content (as ChatGPT does, for example), they’re also capable of taking action.

To be truly agentic, applications must be capable of completing complex tasks and long-term, goal-oriented planning with minimal human intervention. They do this by interfacing with other systems, using tools such as web browsers, or by writing and executing their own code.

So, what’s the scam? Well, according to the report, agent washing involves passing off existing automation technology, including LLM-powered chatbots and robotic process automation, as agentic, when in reality it lacks those capabilities.

So, how do we tell the difference between an AI vendor selling a truly agentic product and one who’s engaging in agent washing? And why is this sort of behavior more dangerous than it might at first seem?

Agentic Or Agent Washing?

Without understanding the difference between agentic and regular, non-agentic AI, it can be easy to fall victim to mislabeling.

Sometimes it can simply be a matter of semantics, and there might be no intention to mislead. The word “agent” is used in many contexts, despite having a precise meaning in AI-speak today.

AI customer service agents, for example, are often simply chatbots with no ability to take action beyond generating advice or connecting the user to a human to deal with more complex problems.

Gartner’s report also suggests that robotic process automation (RPA), which involves programming machines to complete tasks by executing a series of pre-determined steps, is being mislabeled by vendors as agentic AI.

RPA systems do take action (for example, automatically entering sales transactions into a ledger and updating an inventory system). But they don’t fit the criteria of AI agents because they don’t reason, plan or make decisions.
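For the technically curious, the difference is easy to see in code. Below is a minimal sketch of an RPA-style workflow. The ledger and inventory functions are hypothetical stand-ins, not any real product’s API; the point is that every step is decided in advance by a programmer, and the software never weighs options or deviates.

```python
# A minimal sketch of RPA-style automation. The ledger and inventory
# functions are hypothetical stand-ins, not a real product's API.

def post_to_ledger(amount, account):
    print(f"Ledger entry: {account} +{amount}")

def update_inventory(sku, delta):
    print(f"Inventory update: {sku} {delta:+d}")

def process_sale(sale):
    # Every step is pre-determined. The script takes action,
    # but it never reasons, plans or makes a decision.
    post_to_ledger(sale["amount"], sale["account"])
    for item in sale["items"]:
        update_inventory(item["sku"], -item["quantity"])

process_sale({
    "amount": 99.0,
    "account": "sales",
    "items": [{"sku": "SKU-123", "quantity": 2}],
})
```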

And while some LLM tools are capable of accessing and controlling external systems through APIs, this involves giving them precise instructions on how to do so. A truly agentic solution should be capable of working it out for itself, even if it has never encountered a specific API before. And if, as is likely, it finds it can’t communicate with the external system using natural language alone, it will write and execute computer code that can.
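To make that concrete, here is a deliberately simplified sketch of what such an agentic loop might look like. The llm object and its methods are hypothetical placeholders for whatever model client a vendor might use, and a production system would add sandboxing, error handling and human oversight.

```python
# A simplified, hypothetical sketch of an agentic control loop.
# `llm` is a placeholder for any large language model client; its
# methods, and the tool objects, are illustrative assumptions only.

def pursue_goal(llm, goal, known_tools, api_docs):
    # The agent breaks the goal into steps itself...
    steps = llm.plan(f"Break this goal into steps: {goal}")
    for step in steps:
        tool = llm.pick_tool(step, options=known_tools)
        if tool is not None:
            tool.run(step)  # a tool the agent already knows how to use
        else:
            # ...and for an API it has never seen before, it reads the
            # docs, writes fresh code, and executes it.
            code = llm.generate(
                f"Write Python that performs '{step}' using this API:\n"
                f"{api_docs}"
            )
            exec(code)  # a real system would sandbox this, of course
```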

Tools that claim to be agentic because they orchestrate multiple AI systems, such as marketing automation platforms and workflow automation tools, are stretching the term too, unless they can also autonomously coordinate those tools for long-term planning and decision-making.

A few more hypothetical examples: While an AI chatbot-based system can write emails on command, an agentic system might write emails, identify the best recipients for marketing purposes, send the emails out, monitor responses, and then generate follow-up emails, tailored to individual responders.

And in e-commerce, a chatbot might be great for searching a simple catalog and finding products that meet your requirements. But an agent can shop across multiple sites, comparing prices to find the best deal, before placing an order and making payment on your behalf.
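Sketched as code, that shopping example might look something like this. Every name here is a hypothetical illustration rather than a real system; what matters is that the agent pursues a goal end-to-end instead of answering a single query.

```python
# A hypothetical sketch of the shopping scenario: the agent pursues
# the goal end-to-end. All helper objects here are illustrative.

def shop_for(agent, request, sites, budget):
    offers = []
    for site in sites:
        # Search beyond a single catalog...
        offers.extend(agent.search(site, request))
    if not offers:
        return None
    # ...compare prices across everything it found...
    best = min(offers, key=lambda offer: offer.price)
    if best.price <= budget:
        # ...and act on the user's behalf, not just advise.
        agent.place_order(best)
        agent.pay(best)
    return best
```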

Without understanding this fundamental difference, it’s easy to be impressed by generative AI chatbots that appear to be carrying out tasks in an agentic manner but aren’t really as capable as they seem.

Why Is This Dangerous?

Gartner predicts that over 40 percent of agentic AI projects will be canceled by the end of 2027. That means the potential for misunderstanding, mislabeling and over-inflated expectations is a looming threat for many businesses.

The most obvious danger is that businesses and the public could be misled about the true capabilities of AI tools and apps. If they find they aren’t getting the results they expected after making investments, it could lead to a breakdown in trust between businesses and the AI industry.

Potentially, this could also erode trust in the concept of AI itself. For those of us who believe that, used properly, AI will create huge positive change, that would be a disaster.

Beyond the risks to trust and reputation, misunderstanding the capabilities and limitations of AI systems can create serious operational risks. Overconfidence in their ability to handle critical situations, from customer complaints to cyber threats, could lead to lost revenue, missed business opportunities or even legal violations.

Longer term, the practice threatens genuine AI innovation by making it harder for developers and startups working on real agentic breakthroughs to gain traction, support and funding.

Gartner believes agent washing doesn’t just lead to failed AI projects; it also undermines the efforts of the entire AI community to deliver truly useful products.

Of course, the key to avoiding falling victim to this trend lies in building AI literacy, both as individuals and by instilling it across our organizations.

This gives us the insight to distinguish between real agentic behavior and mere automation, identifying systems that can genuinely plan long-term and adapt to changing circumstances.

And vendors themselves should be held to the highest standards of transparency and accountability when it comes to honestly communicating the strengths and weaknesses of their products.


