Databricks, one of the top data and AI platform companies, is teaming up with OpenAI to give enterprise users a new way to build artificially intelligent agents.
The Databricks “Agent Bricks” offering, deployed on the Databricks Data Intelligence Platform, will feature models like GPT-5 packaged as enterprise-ready agents that can analyze and act on business intelligence, automate workflows, and take context, such as regulations and cybersecurity standards, into account.
Users can run LLMs at scale, getting the power of frontier models inside an ecosystem built for enterprise governance.
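To make the idea concrete, here is a minimal sketch of what calling a frontier model through an OpenAI-compatible enterprise serving endpoint might look like. Everything here is illustrative: the model name, endpoint URL, and the idea of pinning governed business context into the system message are assumptions for the sketch, not details from the announcement.

```python
# Hypothetical sketch of an enterprise agent request. The endpoint URL,
# token, and model name below are illustrative placeholders.

def build_chat_request(question: str, business_context: str) -> dict:
    """Assemble an OpenAI-style chat payload that constrains the agent
    to governed enterprise context via the system message."""
    return {
        "model": "gpt-5",  # placeholder: whatever model the endpoint serves
        "messages": [
            {
                "role": "system",
                "content": (
                    "You are an enterprise analytics agent. "
                    "Answer only from the governed context below.\n"
                    f"Context: {business_context}"
                ),
            },
            {"role": "user", "content": question},
        ],
        "temperature": 0.1,  # low temperature for consistent BI answers
    }

if __name__ == "__main__":
    payload = build_chat_request(
        "Which region grew fastest last quarter?",
        "Q3 revenue by region: EMEA +12%, AMER +8%, APAC +15%",
    )
    # Actually sending it would use an OpenAI-compatible client pointed
    # at the workspace endpoint (requires real credentials), e.g.:
    # from openai import OpenAI
    # client = OpenAI(base_url="https://<workspace>/serving-endpoints",
    #                 api_key="<workspace-token>")
    # client.chat.completions.create(**payload)
    print(payload["model"])
```

The design choice worth noting is that governance lives in the payload itself: the system message carries only approved context, so the same request shape works regardless of which underlying model the platform routes it to.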
The partnership is estimated to be worth $100 million, with a stated goal of “offering frontier AI to companies around the world.”
“We’re seeing overwhelming demand from enterprise customers looking to build AI apps and agents on their data, tailored to their unique business needs,” said Databricks Co-founder and CEO Ali Ghodsi in a press statement. “This partnership makes it easier for enterprises to securely leverage their data and OpenAI models at scale with best-in-class governance and performance.”
“Enterprise demand for frontier AI is accelerating,” added OpenAI COO Brad Lightcap. “Our partnership with Databricks brings our most advanced models to where secure enterprise data already lives, making it easier for businesses to experiment, deploy, and scale AI agents with real impact.”
Databricks Leaders on Innovation
Databricks is itself a big company, serving clients across many sectors.
At our Imagination in Action event at Stanford in September, Navin Chaddha, a managing partner at Mayfield, spoke with Databricks chairman and co-founder Ion Stoica and Databricks CTO Matei Zaharia about the data engines that power frontier AI, and about what the company is doing with vendors and other partners to expand what's available to enterprises.
Chaddha asked how Databricks does what it does, and whether the company has some "special water" that makes its people innovative.
“I think that with many of these projects… they started to get some traction,” Stoica said. “And now you get to a point (at which) some companies, some enterprises, are going to think about making a longer-term bet on using that technology, that system, and that at that point they’re asking the question, ‘Hey, what is going to happen?’”
Companies, Stoica explained, want confidence before they bet on a technology.
“They want to make sure that there is an entity which is well financed, which is going to support that project, is going to further evolve it, fix the bugs, make (everything) production-ready, and all of those things,” he said. “And so if you want to go one step beyond that and have a broader impact… someone has to support (the product) longer-term, and commit to it.”
The AI Hype Cycle
“It’s definitely a mix of, some areas are overhyped, some areas are probably underhyped,” Zaharia said, in response to Chaddha’s question about where the business community is in the AI hype cycle. “It’s taking people a while to wrap their minds around what you can do well with AI, and what you can’t do, but they’re getting a sense of it. And there are a lot of applications that are landing every day in production that are working well.”
He cited coding agents as one obvious example.
“(Agents are) actually useful for a lot of things,” Zaharia said, “but (there are) many other things as well, with documents, and just in the enterprise space, there are a lot of workloads that people are doing. And in the consumer space, there’s obviously a huge amount.”
The Timeline
“Things are evolving so fast,” Stoica pointed out, “when you ask these questions, you have to remember that ChatGPT was released less than three years ago. It’s amazing where we are today, that everyone is talking about these models – before, only the researchers were talking about these models.”
Progress in AI, he suggested, is far from finished.
“I think that … these models are very knowledgeable, but also imprecise, and hallucinating – we cannot say that (there’s been) a lot of progress on that side. I think that where you see a lot of progress is in domains and applications … you know, you have to hope that (people will still) work to improve these models to be not only powerful, in terms of the kind of answers they are going to give and solutions they are going to find, but also in terms of the accuracy, the trustfulness and reliability of the models. I think that there’s still a lot of work to be done in that area, probably also foundational work.”
Slowing Down a Bit
“For the LLMs and VLMs, it’s sort of clear that in terms of capabilities, you know, they’re kind of slowing down a little bit, but the other thing that’s very clear is, for the same level of capabilities, you can make the cost go down dramatically,” Zaharia said. “So all these things that now can only run in a data center, could run on the edge, or could become less expensive, because it’s a combination of hardware people working on it, software people, and also just training dollars spent. If you just train a small model for longer, it will get better.”
The group went on to discuss inference, bottlenecks, and other design considerations, all revolving around the kinds of innovation Databricks is pursuing.
The Ecosystem Approach
Part of what I find fascinating about the Databricks/OpenAI partnership is the idea of embedding AI agents in a platform and tool-use ecosystem that makes all of this accessible to a greater number of users. In classes where MIT students interact with experts, we've talked a lot about no-code and low-code approaches and how they democratize technology, and pilots like this provide a clearer pathway toward that kind of dynamic. Stay tuned for more.