1Password And Perplexity Partner To Secure The AI Browsing Era

The integration between 1Password and Perplexity is more than a product launch—it points to a future where trust, not just intelligence, defines the value of AI.
The Internet is entering a new phase—one where agentic AI doesn’t just retrieve information but actively reasons, interprets and acts on behalf of the user. That’s the promise behind Perplexity’s new AI-powered browser, Comet. But with new capabilities come new risks. If AI is going to handle more of our online interactions, security has to be built in from the start.
That’s the rationale behind a new partnership between Perplexity and 1Password. The companies are working together to ensure that the productivity gains of AI browsing don’t come at the expense of safety.
The Next Chapter of Browsing
For decades, web browsers have acted as passive tools. Users typed in URLs, bookmarked pages, or ran searches to locate the information they needed. Comet aims to flip that model by weaving AI directly into the browsing experience. Instead of pulling up a static list of results, Comet interprets intent, synthesizes information and presents users with contextual answers and next steps.
I sat down with Anand Srinivas, VP of product and AI at 1Password, and Kyle Polley, head of security at Perplexity, to talk about the shifting technology landscape and how the partnership between the two companies addresses emerging challenges.
Kyle explained, “We’re one of the first browsers to ship with a full AI assistant and AI agent that could actually perform tasks on behalf of users. And with that comes access to things like credentials. Instead of reinventing the wheel ourselves, we wanted to partner with someone like 1Password, who has been hyper-focused on this problem since day one.”
In other words, Comet isn’t just a browser with AI features tacked on—it’s an AI-native environment that can complete real-world tasks. That difference is what makes the security conversation more urgent.
Where Security Fits In
Identity sits at the heart of digital trust. Without strong identity controls, it doesn’t matter how advanced the AI layer becomes—credentials can be stolen, accounts hijacked and data exposed.
Anand made the stakes clear. “With agentic AI, the risk becomes double or even tenfold. Credentials should never be passed directly into the model, because once they enter the context window there’s a chance they could be leaked. That’s why we built this integration so credentials remain encrypted and are only filled on behalf of the AI—never fed directly into it.”
That separation may seem like a small design choice, but it has big implications. By keeping credentials out of the AI’s reach, the risk of unintended exposure drops dramatically.
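To make that design concrete, here is a minimal, hypothetical sketch in Python of the "fill on behalf of the AI" pattern Anand describes. The names here (CredentialVault, BrowserSession, agent_login) are invented for illustration and are not 1Password's or Comet's actual interfaces; the point is only that the agent asks the vault to perform the fill and gets back a status message, so the secret never enters the model's context window.

```python
# Conceptual sketch only: illustrative names, not real 1Password or Comet APIs.
from dataclasses import dataclass


@dataclass
class Credential:
    username: str
    secret: str  # stays inside the vault layer; never serialized into a prompt


class BrowserSession:
    """Stand-in for the browser's form layer."""
    def __init__(self):
        self.fields: dict[str, str] = {}

    def set_field(self, name: str, value: str):
        self.fields[name] = value


class CredentialVault:
    """Exposes a fill action instead of the secret itself."""
    def __init__(self):
        self._store: dict[str, Credential] = {}

    def add(self, site: str, cred: Credential):
        self._store[site] = cred

    def fill(self, session: BrowserSession, site: str) -> bool:
        cred = self._store.get(site)
        if cred is None:
            return False
        # The vault writes the secret into the page directly,
        # bypassing the model entirely.
        session.set_field("username", cred.username)
        session.set_field("password", cred.secret)
        return True


def agent_login(vault: CredentialVault, session: BrowserSession, site: str) -> str:
    # The AI agent only decides *that* a login is needed; all it ever sees
    # is a status string, never the credential itself.
    return "form filled" if vault.fill(session, site) else "no credential on file"
```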
Balancing Productivity and Privacy
During our conversation, I found myself comparing the trade-offs of agentic AI to the long-running debates about virtual assistants. People worry about Alexa or Siri “listening” to them or granting access to sensitive data, yet those tools only become useful when given enough access to personal information. AI browsing raises the same dilemma. The more access the system has, the more powerful it becomes—but also the more users have to trust it.
Kyle acknowledged that tension. He said the goal was to empower users with choice: “If you don’t invoke the AI on those systems, we’re not collecting that data. We’re not seeing that data—it doesn’t leave your device. It really empowers the user to choose when they want the AI as part of the workflow and when they don’t.”
That philosophy mirrors the evolution of cloud computing. Companies once feared moving data off-premises, then came to see cloud providers as more secure than what they could run themselves; AI assistants may follow a similar arc, trusted not just for convenience but for protections stronger than individuals could build on their own.
Secure by Default: Why It Matters Now
The idea of “secure by default” has long been a guiding principle in cybersecurity, but AI raises the stakes. When humans click a bad link, the damage is limited to one action. When AI agents execute tasks on behalf of users, a single compromised credential could cascade into a chain of breaches.
Kyle stressed that this was foundational to Comet’s design: “We don’t want your credentials, we don’t want to see your data. We want to provide a really great AI assistant experience while still not even seeing your credentials or needing it.”
That principle—security without visibility—marks a shift in how companies think about trust. By embedding protections at the browsing layer, the partnership ensures security isn’t an afterthought.
Where AI Security Goes Next
For 1Password, the Comet integration is just the start. Anand said the company sees this as part of a broader move toward Extended Access Management, which aims to reduce the “access trust gap” created when employees use applications outside of enterprise single sign-on. Future capabilities could include detailed audit logs to show when and why credentials were invoked—an important step if AI assistants are going to play a bigger role in daily work.
The bigger picture is clear: AI tools won’t be judged solely on their intelligence or speed. They’ll be judged on whether users feel safe letting them act on their behalf. If trust is the new currency, then identity is the mint. Embedding strong protections into the browsing layer could set a precedent for how AI tools evolve across the digital ecosystem.
The browsing experience is being rewritten. With Comet and 1Password working together, it will hopefully be rewritten with trust at the center.