Enterprises want AI to move fast, but legacy processes keep slowing them down. Solving the friction around governance, transparency, and model readiness is now the real path to AI acceleration.
Enterprises are racing to adopt AI, but most run into the same problem once they move past early experiments: progress gets slower, not faster.
Technology often isn’t the primary issue. The real drag comes from the organizational systems wrapped around it — security reviews, legal checks, compliance requirements, cost controls, and development workflows that weren’t built for the speed of modern AI.
Business leaders want results. Developers want access to the best open-source and commercial models. Teams want to experiment without being blocked by uncertainty about data handling, licensing, or infrastructure. Yet each step introduces new questions about risk and governance. A model may outperform everything else a company has tested, but if no one can explain what data it was trained on, how it’s licensed, or what it will cost to run at scale, it won’t get far.
The Hidden Complexity Behind “Just Use AI”
The idea of building with AI sounds simple. In practice, once a project grows beyond a proof of concept, companies discover that their internal processes weren’t designed for it. AI forces decisions that stretch across engineering, legal, security, compliance, and finance, and those decisions aren’t easy to coordinate.
Problems surface early. Teams often pull models from public repositories without fully understanding the security or licensing implications. Security reviews that should take days can stretch into weeks. Licensing terms vary widely, and some models carry restrictions that aren’t obvious at first glance. Compliance teams raise concerns about datasets with unknown origins. Costs can spike when workloads move from testing to production. Even basic collaboration breaks down when different groups use different tools and environments.
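To make the licensing point concrete, here is a minimal sketch of what an automated pre-download check might look like, assuming models are pulled from the Hugging Face Hub. The approved-license list and the vet_model helper are hypothetical policy examples, not part of any vendor's product.

```python
# A minimal sketch of a pre-download license check against the
# Hugging Face Hub. The approved-license set is an invented example
# policy; each organization would define its own.
from huggingface_hub import HfApi

APPROVED_LICENSES = {"apache-2.0", "mit", "bsd-3-clause"}  # example policy

def vet_model(repo_id: str) -> bool:
    """Return True only if the model declares a license on the approved list."""
    info = HfApi().model_info(repo_id)
    # Hub repos expose their declared license as a "license:<id>" tag.
    licenses = {t.split(":", 1)[1] for t in (info.tags or []) if t.startswith("license:")}
    if not licenses & APPROVED_LICENSES:
        print(f"{repo_id}: license {licenses or 'undeclared'} needs legal review")
        return False
    return True

vet_model("mistralai/Mistral-7B-v0.1")  # Apache-2.0, should pass
```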
None of this is intentional. It’s a predictable result of trying to merge fast-moving open-source AI with slower, risk-sensitive enterprise structures.
Seth Clark, Anaconda’s VP of product for AI, described the gap this creates inside organizations. Many generative AI tasks can be handled with commercial frontier models, but not all. As he explained to me, “about 20% of the time, we found the use case necessitates you to take a different approach.” He emphasized that this is especially true when working with regulated data, domain-specific terminology, or large internal datasets that are too costly to move.
Why Enterprises Need a Better AI Foundation
Some organizations respond to these obstacles by tightening oversight, which slows development even more. Others relax controls to accelerate testing, which introduces new risks. The companies moving fastest have accepted that AI requires a stronger foundation: transparency, standardization, and automated governance that keeps pace with development.
This foundation starts with clear visibility into where models come from, how they’re licensed, what data they were trained on, and how they behave. From there, organizations need unified environments where teams can test, validate, and compare models without stitching together ad-hoc pipelines.
This is also where identity and governance teams play a critical role. Den Jones, founder and CEO of 909Cyber, has spent years helping enterprises find this balance. As he put it, “Most companies don’t struggle with AI because the models are bad — they struggle because their systems, identities, and data aren’t ready for it. If you can’t trace where your data comes from or enforce basic access controls, adding AI on top only makes the risk bigger. Enterprises need visibility and governance first. Once you have that foundation, AI becomes a force multiplier instead of a liability.”
Clark echoed how important visibility has become as AI systems grow more complex. Organizations need the full picture to help teams balance cost, performance, and accuracy, a balance that becomes harder to strike as model sizes and workloads grow.
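One way to picture that balancing act is a weighted score across candidate models. Everything in the sketch below is invented for illustration; real figures would come from an organization’s own benchmarks and billing data.

```python
# A minimal sketch of comparing models on equal footing with a weighted
# score. All metrics and weights here are illustrative placeholders.
CANDIDATES = {
    #                  task accuracy, p95 latency (ms), USD per 1M tokens
    "large-70b":     {"accuracy": 0.86, "latency_ms": 900, "usd_per_mtok": 4.00},
    "small-8b":      {"accuracy": 0.78, "latency_ms": 180, "usd_per_mtok": 0.40},
    "small-8b-int4": {"accuracy": 0.76, "latency_ms": 110, "usd_per_mtok": 0.25},
}
WEIGHTS = {"accuracy": 0.6, "latency_ms": -0.2, "usd_per_mtok": -0.2}

def normalized(metric: str, value: float) -> float:
    """Scale a metric to 0..1 across the candidate pool."""
    values = [m[metric] for m in CANDIDATES.values()]
    lo, hi = min(values), max(values)
    return (value - lo) / (hi - lo) if hi > lo else 0.0

def score(metrics: dict) -> float:
    # Negative weights penalize latency and cost.
    return sum(w * normalized(k, metrics[k]) for k, w in WEIGHTS.items())

best = max(CANDIDATES, key=lambda name: score(CANDIDATES[name]))
print(best, round(score(CANDIDATES[best]), 3))
```

With accuracy weighted heavily, the large model wins; shift the weights toward cost and the quantized small model comes out on top instead, which is exactly the kind of tradeoff Clark describes.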
Trying to Solve the Problem
Vendors are starting to recognize how much friction enterprises face. Anaconda’s new AI Catalyst suite is one example of how the market is responding. Instead of starting with a SaaS-first approach, the company built the platform around a VPC-first model so organizations can run workloads inside their own environments with tighter security and cost control. That alone reflects a shift in enterprise AI thinking.
AI Catalyst includes a curated catalog of vetted, secure generative AI models — each one checked for licensing, security risks, and provenance. The platform lets teams compare models on equal footing, deploy them inside private infrastructure, and choose between different quantized versions based on performance or cost needs. It’s a way to reduce the uncertainty that slows enterprise AI adoption.
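Part of the quantization choice is simple arithmetic: fewer bits per weight means a smaller memory footprint and cheaper hardware, at some cost in accuracy. A back-of-the-envelope sketch, counting weights only:

```python
# Rough weight-memory estimate for quantized variants of a hypothetical
# 70B-parameter model. Approximation only: ignores KV cache, activations,
# and framework overhead.
def weight_memory_gb(params_billion: float, bits_per_weight: int) -> float:
    # params_billion * 1e9 params * (bits / 8) bytes, expressed in GB
    return params_billion * bits_per_weight / 8

for bits in (16, 8, 4):
    print(f"{bits:>2}-bit weights: ~{weight_memory_gb(70, bits):.0f} GB")
# 16-bit ~140 GB, 8-bit ~70 GB, 4-bit ~35 GB: the gap between a
# multi-GPU cluster and a single accelerator.
```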
The platform also aligns with a growing need for something like an “AI bill of materials,” giving organizations a detailed view of what each model contains, how it was trained, and what risks or restrictions come with it. That level of transparency is becoming essential for both governance and practical decision-making.
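As a sketch of what such a record might hold, here is a hypothetical AI bill of materials modeled as a Python dataclass. The field names are illustrative, not a formal schema; emerging standards such as CycloneDX’s ML-BOM profile aim to formalize this in much more detail.

```python
# A hypothetical "AI bill of materials" record. Field names and values
# are invented for illustration, not a formal standard.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AIBillOfMaterials:
    model_id: str
    version: str
    license: str
    base_model: Optional[str]                 # upstream model, if fine-tuned
    training_data_sources: list[str] = field(default_factory=list)
    known_restrictions: list[str] = field(default_factory=list)
    last_security_scan: Optional[str] = None  # ISO date of the latest scan

bom = AIBillOfMaterials(
    model_id="acme/support-assistant",        # hypothetical internal model
    version="1.2.0",
    license="apache-2.0",
    base_model="mistralai/Mistral-7B-v0.1",
    training_data_sources=["internal-support-tickets-2023"],
    known_restrictions=["customer data: EU region only"],
    last_security_scan="2024-11-02",
)
print(bom.license, bom.base_model)
```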
Looking Ahead
AI continues to move fast, and enterprises are trying to move with it without exposing themselves to unnecessary risk. The technology is powerful, but the operational challenges behind it matter just as much. Companies that focus on reducing friction — clarifying model provenance, unifying tooling, standardizing governance, and giving teams the freedom to work inside safe boundaries — will move faster than those relying on ad-hoc experimentation.
Speed comes from structure, not shortcuts. The organizations that build the clearest, most transparent, and most efficient systems around their models will be the ones that come out ahead.
