Perplexity CEO Says Curiosity, Not Hype, Will Shape AI’s Future

Perplexity CEO Aravind Srinivas at HubSpot INBOUND 25 (Ron Schmelzer)
Speaking at HubSpot’s INBOUND 2025 conference, Aravind Srinivas, CEO of Perplexity, shared a message that cuts against today’s noisy AI hype cycle. While most AI companies race to make models more all-encompassing and humanlike, he argued the real breakthrough lies in spurring curiosity.
“Before AI, if you were curious about something, you had to gather a team, hire consultants, debate in a committee,” he said. “Now you can just ask.” For Srinivas, the value of AI is not in replacing work, but in helping people ask better questions. The most successful employees, he claims, will be the ones who enter meetings with sharper questions, not pre-baked answers.
Curiosity as Leverage
Srinivas compared curiosity to leverage in business and science. He pointed to moments in history where curiosity drove innovation. Transistors grew out of challenges to the limits of vacuum tubes. John Deere reshaped agriculture by asking if steel could replace brittle iron.
“If you aren’t hearing ‘that’s impossible’ or ‘why would you even ask that,’” he said, “you’re probably not asking hard enough questions.”
This perspective echoes comments from Satya Nadella, Microsoft’s CEO, who has emphasized that AI should help people “reason over data, not drown in it.” Where Nadella leans toward reasoning tools for enterprise scale, Srinivas takes a more personal approach, focusing on how individuals can transform the way they think inside an organization.
Why Accuracy Matters When AI Goes Wrong
Perplexity launched in December 2022, just a week after ChatGPT's debut. At the time, users found AI's mistakes entertaining, sharing screenshots of absurd outputs that went viral. Perplexity's investors told Srinivas that its drier approach, adding citations and verification to responses, made answers boring.
He disagreed: "Only an accurate answer leads to the next good question."
That view has become Perplexity’s calling card. Its product doesn’t just give responses. It cites sources and nudges users toward follow-up questions. “Curiosity doesn’t stop with an answer,” Srinivas said. “It begins there.”
Critics of LLMs point out that hallucinations remain a persistent problem. Gary Marcus, a cognitive scientist and long-time AI critic, has argued that trust in AI hinges on verifiability, not fluency. In that sense, Perplexity's model of grounded responses with visible sources aligns more with his call for reliability than with Silicon Valley's move-fast-and-break-things preference for style over substance.
Building Assistants That Actually Help
Srinivas wants more than citations. His team is building Comet, a browser assistant that watches Slack, email, and dashboards, then brings the right context into view. Instead of copying and pasting into a chatbot, the tool surfaces material as you work. He calls it a second brain, always nearby, never in the way.
Other startups are chasing similar ideas. Adept is training AI to click around inside software. Rewind wants to record every screen interaction for recall later. OpenAI has experimented with custom GPTs that handle narrow tasks. Perplexity takes a different tack: one assistant that adapts to you, not the other way around.
“You shouldn’t need prompt engineering classes to get your job done,” Srinivas said.
Competing Philosophies in the AI Race
AI companies are drifting apart in how they see the future. Some race to build bigger, faster models with broad capabilities. Perplexity is betting that reliability will win over flashy scale. Anthropic has tried to split the difference with Claude, designing it to say nothing when it isn’t sure. Critics say that makes it timid.
Ethan Mollick, a Wharton professor studying AI in business, puts it bluntly: "For students, fluency matters. For executives, reliability does." That explains why consumer chatbots, prone to hallucinations, catch fire on social media but stumble in boardrooms.
Srinivas didn’t sugarcoat it. “Beware of AIs that always tell you what you want to hear. Those aren’t assistants. Those are sycophants.”
Building Curiosity as a Culture
Behind the product pitch, Srinivas kept returning to a cultural theme. He argued that meetings should be judged less by polished slides and more by the quality of the questions raised. Too often, bureaucracy, status updates, and formatting bury curiosity. A good assistant, in his view, would clear the busywork so people can get back to the thrill of inquiry.
That idea borrows from Slack co-founder Stewart Butterfield, who once said software should reduce "work about work." Srinivas extends it: he thinks AI can revive the creative spark inside organizations, giving employees more room to think and question instead of merely executing.
The hard part will be execution. Digital assistants have been promised before, and Siri, Alexa, and Google Assistant all stumbled. Perplexity is wagering that a focus on accuracy and curiosity will keep it from the same fate. Whether that pays off depends on whether people really want more than quick answers from LLMs.