AI Teammates Are Changing The SOC Without Replacing People

Role-based AI “teammates” are emerging in SOCs, taking on repetitive tasks so human analysts can focus on higher-value work like anticipating threats and strengthening resilience.
The conversation about AI in security operations centers has long been framed around automation replacing humans. But many security leaders see a different path—one where AI works alongside analysts as a “teammate,” taking on repetitive, low-value tasks so humans can focus on higher-value work.
I recently had the opportunity to speak with Brian Murphy, founder and CEO of ReliaQuest, and James Lowry, director of information security for Signature Aviation, to get expert perspective on the issue.
From Burnout to Breathing Room
When Lowry joined Signature Aviation three years ago, his team was doing everything—detection, response, architecture, and engineering—with no automation to help shoulder the load.
He described how the company has leveraged technology to automate tier one SOC tasks and noted that about 80% of all alerts are now handled by automated response plays. “The more you digest to get that common operational picture, the faster it’s going to be and the lighter the load is going to be on my team to allow them to get back to what they really want to do anyway, right? They want to architect. They want to engineer. They want to be interfacing with the business, helping solve problems.”
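The idea of automated tier-one "response plays" can be sketched as a simple triage loop: well-understood noise is closed automatically, routine low-severity cases get a canned remediation, and anything serious goes to a human. This is an illustrative sketch only; the `Alert` fields and dispositions are invented for the example, not any vendor's actual API.

```python
# Hypothetical sketch of a tier-one response play: auto-resolve
# well-understood alerts and escalate the rest to a human analyst.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str        # e.g., "edr", "email-gateway"
    rule: str          # detection rule that fired
    severity: int      # 1 (low) to 5 (critical)
    is_duplicate: bool
    known_benign: bool

def run_response_play(alert: Alert) -> str:
    """Return the disposition for a single alert."""
    if alert.is_duplicate or alert.known_benign:
        return "auto-closed"        # noise handled without a human
    if alert.severity <= 2:
        return "auto-remediated"    # routine play runs on its own
    return "escalated"              # high-severity cases go to people

alerts = [
    Alert("edr", "hash-match", 2, False, True),
    Alert("email-gateway", "phish-link", 4, False, False),
]
dispositions = [run_response_play(a) for a in alerts]
```

In a setup like this, the roughly 80% of alerts Lowry describes would fall into the first two branches, leaving only the escalated remainder for analysts.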
Murphy views agentic AI as an “exponential multiplier” that lets security teams redirect their time to more strategic work. “It’s not taking over for that security analyst or replacing that… it’s freeing up some of their time,” he explained. By removing duplicates, false positives, and other distractions, Murphy believes AI “weaponizes” analyst knowledge for faster, more consistent execution.
That philosophy underpins ReliaQuest’s recent launch of GreyMatter Agentic Teammates, billed as the industry’s first autonomous, role-based AI agents for security operations. Built to represent specific SOC functions—such as Threat Intelligence Researchers, Detection Engineers, and Threat Hunters—these AI teammates work together and with human analysts to anticipate threats, model risk, and strengthen long-term resilience.
According to ReliaQuest, GreyMatter’s native agentic AI already performs investigations 75% faster than traditional methods with over 99% accuracy on malicious activity. The new role-based teammates are designed to extend those capabilities, giving SOC teams around-the-clock, burnout-proof coverage while allowing human experts to focus on predictive and business-aligned security work.
Why Context Matters
Both leaders pointed to the importance of AI adapting to an organization’s environment. Lowry offered an example: his CEO often visits multiple states in a day, which would normally set off “impossible travel” alerts.
“It knows that’s our CEO. It knows that this is common for him, and so it allows us not to hear that noise in the background,” he said. But if the travel were from a location far outside normal business operations, such as Thailand, it would still trigger a review.
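That suppression logic can be sketched as a per-user travel baseline: users with an established multi-region pattern only trigger a review when the sign-in location falls outside it. The user, countries, and lookup table below are invented for illustration, not drawn from Signature Aviation's actual tooling.

```python
# Hedged sketch of context-aware "impossible travel" triage.
# Maps each frequent traveler to a learned set of normal countries.
FREQUENT_TRAVELERS = {
    "ceo@example.com": {"US", "CA", "MX"},  # hypothetical baseline
}

def should_alert(user: str, country: str) -> bool:
    """True if a sign-in from this country warrants analyst review."""
    baseline = FREQUENT_TRAVELERS.get(user)
    if baseline is None:
        return True                  # no travel history: always review
    return country not in baseline   # outside the baseline still alerts

assert not should_alert("ceo@example.com", "US")  # normal travel: quiet
assert should_alert("ceo@example.com", "TH")      # Thailand: review
```

The key design point is that the rule is organization-specific: the same sign-in event is noise for one user and a genuine signal for another.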
Murphy described the same principle in broader terms. For him, AI teammates should operate within the context of each organization’s unique architecture, tools, and policies.
Trust, Transparency, and Human Oversight
AI is not flawless. “AI models do hallucinate from time to time. It’s just what they do,” Lowry acknowledged. Early deployments provided recommendations rather than taking direct action, allowing his team to validate and tune the system over time. Spot checks remain in place, especially for high-value assets.
Murphy emphasized transparency as a safeguard: “Show your work… here’s the plan, here are the steps that I took, here’s the data… here’s the conclusion I came to.” That visibility not only makes it easier to audit decisions but also helps train the AI to improve over time without losing organizational context.
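Murphy's "show your work" principle amounts to attaching an auditable record to every automated decision: the plan, the steps taken, the evidence consulted, and the conclusion reached. A minimal sketch of such a record, with a structure assumed for illustration rather than taken from any product's schema:

```python
# Minimal sketch of an auditable decision record, so an analyst
# can review how an AI teammate reached its conclusion.
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    plan: str
    steps: list[str] = field(default_factory=list)
    evidence: list[str] = field(default_factory=list)
    conclusion: str = ""

record = DecisionRecord(plan="Investigate sign-in anomaly")
record.steps.append("Queried sign-in logs for the last 24 hours")
record.evidence.append("3 logins, all from known corporate IP range")
record.conclusion = "Benign: matches user's established pattern"
```

Because the record persists, spot checks like Lowry's become a matter of reading the trail rather than reconstructing the AI's reasoning after the fact.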
The Skills Question
Some worry that handing tier-one tasks to AI could leave future analysts without the foundational skills needed for higher-level work. And by “some,” I mean me. It has occurred to me lately that we tell people to validate results from AI rather than accept them as gospel, but that advice assumes you have the institutional knowledge to know what the answer should be and the skills and experience to second-guess the AI.
Murphy likens AI teammates to an executive’s support staff: “I can’t do my job without the right teammates around me. And then my question becomes, well, what work should they not be doing? Like, when do we get time to think as humans?”
Lowry sees the concern but believes organizations will draw boundaries: “At some point we’ll draw a line in the sand and say, ‘How much are we going to allow AI to take over?’” For him, AI should handle routine, repeatable work so people can focus on developing deeper skills.
Matching the Adversary’s Speed
Adversaries are already using AI to increase their speed and scale. “When the adversaries are using it, you have two options: use it in defense or get beat,” Murphy said.
Practical Advice for CISOs
Lowry recommends a gradual approach when adopting AI in the SOC. Rather than deploying it across the most sensitive systems immediately, organizations should start with lower-risk use cases to observe how the AI behaves and refine its performance. He also stresses the importance of training AI models on an organization’s own data and keeping them within secure, internal environments to reduce risk.
Murphy’s advice centers on ensuring that automation serves the operational goals of the security team rather than fitting a vendor’s platform strategy. He emphasizes flexibility—keeping data where it already resides, applying automation where it delivers clear value, and avoiding unnecessary costs or dependencies.
Augmentation, Not Replacement
Both leaders agree that the role of AI in the SOC is to enhance human capabilities, not replace them. AI teammates can help organizations operate at the speed and scale of modern cyber threats, while freeing human analysts from repetitive tasks that cause fatigue and slow response times. The aim is to strengthen security operations by combining the efficiency of automation with the judgment and adaptability of experienced professionals.