Your AI Initiatives Will Fail If You Overlook This Component

The conversations I'm having with CIOs have changed dramatically over the past year. They used to center on digital transformation milestones and cloud migration timelines. Now they're about agents, multi-agent workflows and how to scale AI initiatives beyond proof-of-concept demos. But here's what's becoming painfully clear: Most organizations are trying to build the future of work on infrastructure that could barely accommodate yesterday's demands, let alone tomorrow's.
As a Field CTO working with organizations at various stages of their AI journey, I'm seeing a troubling pattern. Mature companies rush to implement new agentic technologies, only to discover their underlying systems were never engineered to support the data, velocity, processing requirements or security governance that agentic workflows demand. The result isn't just failed pilots; it's cost, risk and operational drag that compound over time.
The agent infrastructure reality
Agents and models feed on data, and without the right structure, network topology and foundational building blocks in place, agents sit idle, waiting for information. We're not just talking about having data; we're talking about having it in the right format, at the right time, with the right security, transparency and governance wrapped around it.
The demands of globalization make this even more complex. When you scale across geographies with bespoke data sovereignty requirements, how do you ensure repeatability and consistency when data cannot leave certain jurisdictions? Organizations that put modern infrastructure in place with the goal of scaling easily suddenly find they can onboard customers, move into new markets and launch new product offerings at a fraction of the cost and effort those moves once required.
Inaction or embracing the status quo leads to what I call infrastructure debt, and it accumulates interest faster than most CIOs anticipate.
The operational health diagnostic
I use a simple framework to assess organizational readiness: the 60-30-10 model for engineering and software development. In a healthy IT organization, around 60% of resources should focus on “move-forward” work: incremental feature adds and user experience improvements that respond to business unit requirements and customer requests. About 30% is devoted to maintaining current operations in areas like support, bug fixes and keeping existing systems functional. The last 10% should be reserved for the big transformation initiatives that have the potential to 10x the organization's impact.
When I see these ratios skew, particularly when maintenance climbs to 40% or 50% of resources, that is often a systems architecture problem masquerading as an operational issue. The extra maintenance time usually isn't a sign that your code is poorly written; it's a sign that the underlying infrastructure was never designed to support current needs, let alone future ones. The systems get stressed, things break, shortcuts are taken, and debt accumulates.
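For a back-of-the-envelope version of this diagnostic, here is a minimal sketch in Python, with hypothetical numbers and thresholds, that compares an engineering-time allocation against the 60-30-10 target and flags the maintenance skew described above.

```python
# Hypothetical engineering-time allocation, in percent of total capacity.
allocation = {
    "move_forward": 35,    # incremental features, UX improvements
    "maintenance": 50,     # support, bug fixes, keeping systems running
    "transformation": 15,  # big bets with 10x potential
}

target = {"move_forward": 60, "maintenance": 30, "transformation": 10}

def diagnose(allocation, target, maintenance_alarm=40):
    """Compare actual allocation to the 60-30-10 target and flag skew."""
    for bucket, expected in target.items():
        actual = allocation.get(bucket, 0)
        drift = actual - expected
        print(f"{bucket:>15}: {actual:3d}% (target {expected}%, drift {drift:+d})")
    if allocation.get("maintenance", 0) >= maintenance_alarm:
        print("Warning: maintenance load suggests an architecture problem, "
              "not just an operational one.")

diagnose(allocation, target)
```

The exact percentages matter less than the trend: run this against a few quarters of time-tracking data and watch which way the maintenance number moves.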
If you find yourself climbing the same hill every time you create a new capability — doing the same data transformations, rebuilding the same integrations, explaining why this application can’t leverage what you built for that one — it’s likely your foundation that needs attention.
The multi-cloud strategy evolution
Your cloud needs will change as your capabilities mature. You might use amazing AI tools in one cloud while leveraging the partnership ecosystem in another. You may go multi-cloud because different product lines have different performance requirements or because different teams have different expertise.
The key is maintaining technology alignment with more open, portable approaches. This gives you the flexibility to move between clouds as requirements change. Sometimes, there’s a proprietary technology that’s core to what you do, and you accept that as the price of doing business. But wherever possible, avoid lock-in that constrains future decisions.
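To make "portable approaches" concrete, here is a minimal sketch, assuming a simple object-storage use case, of the thin interface that keeps vendor-specific SDK calls behind one seam. The class and method names are illustrative, not any particular library's API; a cloud-backed implementation would wrap the vendor SDK behind the same two methods.

```python
from abc import ABC, abstractmethod
from pathlib import Path

class ObjectStore(ABC):
    """Thin seam between application code and any specific cloud's storage SDK."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class LocalObjectStore(ObjectStore):
    """Filesystem-backed implementation used here so the sketch runs anywhere."""

    def __init__(self, root: str) -> None:
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def put(self, key: str, data: bytes) -> None:
        (self.root / key).write_bytes(data)

    def get(self, key: str) -> bytes:
        return (self.root / key).read_bytes()

# Application code depends only on the interface, so swapping clouds becomes
# a constructor change rather than a rewrite.
store: ObjectStore = LocalObjectStore("/tmp/demo-store")
store.put("report.txt", b"quarterly numbers")
print(store.get("report.txt"))
```

The design choice is the point: the narrower the surface area your application touches, the cheaper it is to change your mind about where it runs.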
Know who you are as an organization. If you have amazing data scientists but limited Kubernetes expertise, gravitate toward managed services that let your data scientists focus on models rather than infrastructure. If your team wants to optimize every dial and parameter, choose platforms that provide that level of control. Align your cloud strategy with your internal capabilities, not with what looks impressive in vendor demos.
The data architecture imperative
Before implementing any AI initiative, you need to answer fundamental questions about your data landscape. Where does your data reside? What regulatory constraints govern its use? What security policies surround it? How difficult would it be to normalize it into a unified data platform?
Historically, data has been sawdust: the inevitable byproduct of work being performed, which then becomes a cost center where you pay an ever-increasing amount to store and protect information that grows less relevant the further you move from its time of creation. Organizations often discover they've accumulated data over decades without considering its structure or accessibility. That's acceptable when humans are processing information manually, but agents need structured, governed and accessible data streams. Today, data may be an organization's most valuable resource, and the more unique or specialized it is, the better. The time investment required to prepare your data architecture pays dividends across every subsequent AI initiative.
This isn’t just about technical capabilities — it’s about governance maturity. Can you ensure data flows seamlessly where it needs to go while maintaining security boundaries? Can you coordinate multiple agents accessing different data sources and applications without creating compliance risks? Can you even pull disparate kinds of data from all the file systems, databases and object stores into a single view?
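One way to make those questions actionable is a simple inventory that records, for each data source, where it lives, which regime governs it and how hard it would be to normalize. The sketch below is purely illustrative; the fields and example entries are hypothetical, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class DataSource:
    name: str
    system: str                # e.g. "Oracle ERP", "data lake", "file share"
    residency: str             # jurisdiction the data must stay in
    regulation: str            # e.g. "GDPR", "HIPAA", "none identified"
    classification: str        # e.g. "public", "internal", "restricted"
    normalization_effort: str  # rough sizing: "low", "medium", "high"

# Hypothetical inventory entries for illustration only.
inventory = [
    DataSource("customer_orders", "Oracle ERP", "EU", "GDPR", "restricted", "medium"),
    DataSource("support_tickets", "SaaS helpdesk", "US", "none identified", "internal", "low"),
    DataSource("clinical_notes", "on-prem file share", "DE", "GDPR", "restricted", "high"),
]

# Which sources could an agent running in a US region touch without a review?
for src in inventory:
    blocked = src.residency != "US" or src.classification == "restricted"
    print(f"{src.name}: {'needs review' if blocked else 'clear for use'}")
```

Even a spreadsheet version of this inventory answers most of the questions above; the value is in writing the answers down before the first agent ever requests the data.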
Legacy system assessment signals
Several indicators suggest your current infrastructure won’t support AI ambitions. If you’re spending increasing resources maintaining existing systems rather than building new capabilities, that’s a structural issue. If every new project requires extensive custom integration work that can’t be reused, your architecture lacks modularity.
When your sales team loses opportunities because features are “on the roadmap for next year” rather than available now, you’re paying opportunity costs for technical limitations. Jeff Bezos once said, “When the anecdotes and the data disagree, the anecdotes are usually right.” If you’re hearing stories about excessive resource allocation, missed opportunities or customer churn due to system limitations, pay attention to those signals regardless of what your dashboards indicate.
The infrastructure transformation approach
The rip-and-replace approach has burned many organizations because it assumes everything old lacks value. Modern approaches focus on componentization — addressing system elements individually while maintaining operational continuity. You can migrate functionality without losing capabilities, transitioning from old to new without creating a net loss in what you can deliver to customers.
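The article doesn't prescribe a specific pattern, but one common way to componentize a migration is a strangler-fig-style router: each capability is served by either the legacy system or its modernized replacement, and capabilities move over one at a time. A minimal sketch, with hypothetical capability names:

```python
# Illustrative strangler-fig-style routing: capabilities move from legacy to
# modern one at a time, so there is never a net loss in what customers can use.
MIGRATED = {"invoicing"}  # hypothetical set of capabilities already rebuilt

def handle_legacy(capability: str, payload: dict) -> str:
    return f"legacy system handled {capability}"

def handle_modern(capability: str, payload: dict) -> str:
    return f"cloud-native service handled {capability}"

def route(capability: str, payload: dict) -> str:
    """Send each request to whichever implementation currently owns the capability."""
    handler = handle_modern if capability in MIGRATED else handle_legacy
    return handler(capability, payload)

print(route("invoicing", {"amount": 120}))   # served by the new component
print(route("reporting", {"period": "Q3"}))  # still served by the legacy system
```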
This requires change management discipline and a graceful transition strategy. You’re balancing the introduction of new capabilities with maintaining what has been successful. Sometimes, that means a complete rewrite to take advantage of cloud-native technologies, but it requires architected migration of functionality rather than wholesale application replacement.
Preparing for agentic scale
The organizations that will succeed in the agentic era are those positioning themselves for speed, data accessibility and security without compromising any of these elements. As we move from individual models to agents to multi-agent workflows, the coordination requirements become exponentially more complex.
Having data flow seamlessly, in the right format and at the right time, becomes a make-or-break requirement. Everything needs to integrate with the lowest possible latency while maintaining security and compliance boundaries. Cloud platforms that can wrap governance envelopes around everything you're doing help diminish the risk of human error as complexity scales. Organizations that excel at this don't just keep up with the Joneses; they are the Joneses.
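As a rough illustration of what a "governance envelope" can mean in practice, here is a minimal, hypothetical sketch of a single policy checkpoint that every agent data request passes through before any data source is touched. It is a sketch of the idea, not any particular platform's API.

```python
from dataclasses import dataclass

@dataclass
class AgentRequest:
    agent_id: str
    data_source: str
    purpose: str

# Hypothetical policy table: which agents may read which sources, and for what purpose.
POLICY = {
    ("support-agent", "support_tickets"): {"summarization", "triage"},
    ("finance-agent", "customer_orders"): {"reconciliation"},
}

def authorize(req: AgentRequest) -> bool:
    """Single checkpoint every agent request passes through; denies by default
    and leaves an audit trail, so human error doesn't scale with agent count."""
    allowed_purposes = POLICY.get((req.agent_id, req.data_source), set())
    decision = req.purpose in allowed_purposes
    print(f"audit: {req.agent_id} -> {req.data_source} ({req.purpose}): "
          f"{'allow' if decision else 'deny'}")
    return decision

authorize(AgentRequest("support-agent", "support_tickets", "summarization"))  # allow
authorize(AgentRequest("support-agent", "customer_orders", "triage"))         # deny
```

The deny-by-default posture is the design choice that matters: as you add agents and data sources, the policy table grows, but the checkpoint and the audit trail stay the same.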
Build for agents, not just apps
Your staff are already using AI tools whether your organization has sanctioned them or not. They’re uploading data to external services, using models for work tasks and finding ways to be more productive. The faster you can provide them with governed, secure alternatives, the faster you can put appropriate boundaries around how these tools get used.
Don’t implement AI for the sake of having AI initiatives. Focus on the problems you’re trying to solve and the goals you need to achieve. AI is a powerful tool, but it should be applied to address real business challenges, not to check a box for your board.
The infrastructure decisions you make today determine whether your AI initiatives will scale or stall. In the agentic era, there’s no middle ground between having the right foundation and having a very expensive pile of proofs-of-concept that never delivered business value.
Speed, data and security will be the nervous system of successful AI implementations. Getting that balance right isn't just a technical challenge; it's a competitive requirement.