The AI Race Is Now About Databases — Not Just Big Models

The real AI race isn’t about smarter models anymore. It’s about who can serve fast, reliable data to AI systems at scale.
When Snowflake announced its $250 million acquisition of Crunchy Data two weeks ago at its annual summit in San Francisco, it was a signal that the battle for AI dominance, which was once the domain of massive language models and advanced GPUs, was shifting straight into what many experts describe as the foundation of AI development: the database layer.
Two weeks earlier, Snowflake rival Databricks revealed its own $1 billion acquisition of Neon, another Postgres-native startup. Around the same time, American cloud-based software company Salesforce struck an $8 billion deal to buy data management provider Informatica.
Taken together, these aren’t just high-profile database deals; they’re evidence that the AI infrastructure race is moving down the stack. Sophisticated models may still grab the headlines, but the battle is increasingly about who can serve AI-ready data quickly, resiliently and at scale.
Why Databases Really Matter
Despite their limitations, AI tools often produce remarkably human-like results. But these tools, whether in the form of AI copilots, chatbots or assistants, require vast amounts of data to do that. They demand a steady flow of both structured and unstructured data — fresh, fast and accessible. As one analyst put it, “AI is stupid without access to good data.” And not just access to a treasure trove of data, but also data orchestration at machine speed.
AI chatbots can only “reason” as fast as they can access and process data. For that job, PostgreSQL — the open-source database powering many traditional web apps and enterprise systems — has become a go-to choice for companies modernizing their data infrastructure because it’s reliable, widely used and already trusted in enterprise settings. But as Spencer Kimball, cofounder and CEO of Cockroach Labs, noted in an interview, Postgres was built for batch jobs, periodic queries and peak loads measured in the thousands. AI doesn’t work like that.
“Retrofitting Postgres for real-time, agent-driven workloads exposes architectural limits. AI agents, copilots and real-time pipelines generate nonstop reads and writes. They require globally consistent data in milliseconds and expect systems to absorb failure without flinching,” Kimball said.
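A rough way to picture the contrast Kimball describes: a traditional analytics job hits the database in occasional, heavy batches, while an agent-driven workload issues a continuous stream of small reads and writes and expects each one to see fresh, consistent data. The sketch below is purely illustrative; the connection string, the orders and order_notes tables and the loop cadence are assumptions made for the example, not anything drawn from Snowflake, Neon or Cockroach Labs.

```python
import time
import psycopg2  # standard Postgres driver; connection details below are placeholders

conn = psycopg2.connect("dbname=shop user=app password=secret host=localhost")

def nightly_batch_report(cur):
    # Classic batch-style workload: one heavy query, run on a schedule.
    cur.execute("""
        SELECT region, SUM(total) AS revenue
        FROM orders
        WHERE created_at >= NOW() - INTERVAL '1 day'
        GROUP BY region
    """)
    return cur.fetchall()

def agent_step(cur, order_id, note):
    # Agent-style workload: many small reads and writes, issued continuously,
    # each expecting a fresh, consistent view of the data.
    cur.execute("SELECT status, total FROM orders WHERE id = %s", (order_id,))
    status, total = cur.fetchone()
    cur.execute(
        "INSERT INTO order_notes (order_id, note) VALUES (%s, %s)",
        (order_id, f"{note} (status={status}, total={total})"),
    )

with conn, conn.cursor() as cur:
    nightly_batch_report(cur)      # fine for periodic analytics
    for i in range(10_000):        # an agent loop touches the same tables nonstop
        agent_step(cur, order_id=i % 100 + 1, note="checked by agent")
        time.sleep(0.01)           # roughly 100 operations per second, around the clock
```

Postgres handles the first pattern comfortably; it is the second, multiplied across thousands of concurrent agents, that Kimball argues exposes its architectural limits.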
That’s why companies like Snowflake are rethinking what the modern database needs to be. For Snowflake, the answer is to embed transactional and AI-ready systems deep into its platform, giving customers a seamless way to support a new generation of intelligent workflows.
Snowflake’s Bet On Real-Time AI Workflows
At its yearly summit, which had 20,000 attendees this year, Snowflake CEO Sridhar Ramaswamy emphasized a major shift toward “workflow-native” data platforms. With tools like Openflow, the company is repositioning itself as the connective tissue between corporate data and AI-driven decisions.
The idea, according to Ramaswamy, is to make it easier for companies to pull fragmented data sources — on-prem databases, SaaS apps and unstructured streams — into unified pipelines and build real-time workflows on top of them. And Snowflake Intelligence is meant to go a step further.
By layering generative AI atop enterprise data, the platform lets non-technical employees query across their company’s information without writing a single line of SQL or needing engineering support. Ask a natural-language question, and an AI agent finds the answer, with context.
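Under the hood, tools in this category generally follow a text-to-SQL pattern: a model translates the natural-language question into a query against known table schemas, the query runs against the warehouse, and the rows come back wrapped in a plain-language answer with the query attached for context. The sketch below shows only that general pattern, not Snowflake Intelligence itself; generate_sql is a hypothetical placeholder for whatever model or service does the translation, and the orders table exists only for the example.

```python
import sqlite3  # any SQL backend would do; SQLite keeps the example self-contained

SCHEMA = "orders(id, region, total, created_at)"  # example table, assumed for the sketch

def generate_sql(question: str, schema: str) -> str:
    # Hypothetical stand-in for the model call that turns a question into SQL.
    # A real system would prompt an LLM with the schema and the question,
    # then validate the generated query before running it.
    return (
        "SELECT region, SUM(total) FROM orders "
        "WHERE created_at >= date('now', '-7 day') GROUP BY region"
    )

def answer(question: str, conn) -> str:
    sql = generate_sql(question, SCHEMA)
    rows = conn.execute(sql).fetchall()
    # Hand back the result together with the query that produced it, so the
    # person asking can see the context behind the number.
    lines = [f"{region}: {revenue}" for region, revenue in rows]
    return f"Q: {question}\nSQL: {sql}\n" + "\n".join(lines)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id, region, total, created_at)")
conn.execute("INSERT INTO orders VALUES (1, 'EMEA', 120.0, date('now'))")
print(answer("What was revenue by region last week?", conn))
```

The plumbing is simple in a toy example; the harder production problems are schema discovery, query validation and access control, which is part of why this layer of the stack is being fought over.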
Jeff Hollan, head of Cortex AI apps and agents, calls this the next frontier, noting that “the next generation of apps aren’t just data-hungry, but also reasoning-hungry.” That hunger for reasoning is why the database layer is now more critical for enterprises than ever before, Hollan said.
Artin Avanes, head of core data platform at Snowflake, sees this growing demand for platforms that can support continuous data interaction, not just batch analytics, as a reflection of an industry-wide shift that Snowflake is aiming to lead. “While faster access to data is great, that’s not what you really need,” Avanes told me in an interview. “You need systems that adapt to how decisions are made in real time.”
That shift is also fueling the rise of Cortex AI, Snowflake’s agentic AI framework, which orchestrates data at scale, automates operations and delivers business value in live environments. It’s a far cry from the situation at most organizations today, many of which are still trapped in proof-of-concept cycles, according to Vivek Raghunathan, senior vice president of engineering at Snowflake.
“Everyone’s experimenting. But very few are scaling responsibly,” he said. “Part of the challenge lies in the disconnect between what enterprises think AI will do and what it actually requires.”
As Raghunathan further explained, it’s an organizational issue. “Without clear vision and readiness at the infrastructure level,” he noted, “no amount of AI investment will deliver sustained value.” And that’s a reality many enterprise leaders are now coming to terms with.
In fact, according to Fivetran’s report on AI and data readiness, 42% of enterprises report that over half of their AI projects have been delayed, underperformed, or failed due to poor data readiness, underscoring the growing gap between AI ambition and architecture.
The Big, Bold Message
Snowflake’s acquisition of Crunchy Data, Databricks’ purchase of Neon and Salesforce’s Informatica deal are all land grabs in the AI foundation layer. They show that vendors no longer want to sit atop the data stack. They want to own it.
For enterprises, that should trigger a serious question: Who do you trust with the foundation of your AI future?
Because as AI moves from experimentation to execution, the winners won’t just be the companies that build the smartest models. They’ll be the ones that control the data stack. That means the database layer is no longer the back end of enterprise AI. It’s the front line.