Cybersecurity’s Fast And Furious Era Demands AI At The Core

Posted by Tony Bradley, Senior Contributor | 4 hours ago | Cybersecurity, Enterprise Tech, Innovation


Cybersecurity has always been an arms race. Defenders build stronger walls; attackers find taller ladders. But the introduction of generative and agentic AI has fundamentally altered the pace of that race. What once unfolded over weeks or months now happens in hours or even minutes.

This isn’t a gradual evolution. It’s a non-linear change, and the industry isn’t ready.

The Fast And Furious Era Of Cyber Threats

Generative AI has made it easy to create convincing phishing emails, impersonate executives or automate code exploits. Ransomware groups that once relied on patient reconnaissance can now launch high-volume, rapid-fire attacks. The result is compressed timelines: defenders no longer have days or weeks to investigate and respond. They may only have minutes.

Lior Div, co-founder and CEO of 7AI, describes this inflection point as a shift from patience to aggression: “We’re going to see the move from the low and slow to the fast and furious. They’re not coming slow and trying to sneak in.”

Michelle Abraham, research director for security and trust at IDC, highlights how attackers are already using AI to sharpen their tools: “GenAI has enabled threat actors to improve their phishing emails, making them more personal and less obvious, as well as translating them into more languages and reducing the time it takes to develop each one. Investigating each phishing email would overwhelm the SOC; the only way to improve triage efficiency is to use automation and AI as a first line of defense.”
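What "automation and AI as a first line of defense" might look like in practice is a triage layer that scores each inbound report and only escalates the suspicious ones to a human. The sketch below is purely illustrative — the heuristics, weights and threshold are assumptions for demonstration, not any vendor's actual pipeline, which would combine ML classifiers, reputation feeds and sandbox verdicts:

```python
from dataclasses import dataclass, field

@dataclass
class Email:
    sender: str
    subject: str
    body: str
    urls: list = field(default_factory=list)

# Illustrative signals only; real systems use far richer features.
SUSPICIOUS_PHRASES = ("verify your account", "urgent wire transfer", "password expires")
TRUSTED_DOMAINS = {"example.com"}  # hypothetical internal allowlist

def triage_score(email: Email) -> float:
    """Return a 0.0-1.0 phishing suspicion score using toy heuristics."""
    score = 0.0
    text = (email.subject + " " + email.body).lower()
    if any(phrase in text for phrase in SUSPICIOUS_PHRASES):
        score += 0.5
    if any(url.startswith("http://") for url in email.urls):  # unencrypted links
        score += 0.2
    if email.sender.split("@")[-1] not in TRUSTED_DOMAINS:    # unknown sender domain
        score += 0.3
    return min(score, 1.0)

def route(email: Email, escalate_above: float = 0.6) -> str:
    """First-line decision: escalate to a human analyst or auto-close."""
    return "escalate" if triage_score(email) >= escalate_above else "auto-close"
```

The design point is the division of labor Abraham describes: the machine handles the high-volume first pass so that analyst attention is spent only on what crosses the escalation threshold.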

The SOC Bottleneck

Security operations centers have become the nerve center of defense—but also its biggest bottleneck. The traditional model—tiered analysts in shifts supported by MSSPs and a patchwork of tools—was never designed for this level of speed or scale.

Analysts are drowning in alerts. Each new tool adds more signals to sift. The notion of “helping analysts work faster” misses the point when many teams don’t have enough analysts to begin with.

Div is blunt about the limits of the status quo: “There is no chance that we as industry will be able to deal with it. It’s like we’re too slow. We just can’t.”

Richard Stiennon, chief research analyst at IT-Harvest, underscores the urgency: “Proof of concepts have already been published for both AI automated attacks all the way through breach and exploit creation from published CVEs. The time for enterprises to detect and respond has to be shortened by two orders of magnitude. Only AI can do that.”

Re-Balancing The Human–Machine Partnership

The path forward is not to replace people but to re-balance the work. Machines are better at toil—repetitive, high-volume, rules-based tasks. Humans are better at strategy, creativity and nuanced problem-solving.

Agentic AI offers a way to divide that labor. Instead of only surfacing alerts, AI can investigate end-to-end, document every step, then present a conclusion for human review. That gives analysts back time and focus for higher-order threats.

As Div puts it, you still need people to do what people are best at: complex problem solving, strategic thinking and creative thinking, while toil and boring, repeatable work shifts to AI.

Trust And Transparency

Giving AI more autonomy raises valid questions of trust. How much control is too much? How do you prevent bad calls in sensitive environments?

Transparency is the answer. AI must document its work the way a good analyst would—every query run, every log reviewed, every decision point recorded—so teams can audit the process rather than accept conclusions blindly.

Div emphasizes this standard: “I’m a big believer that you have to show the work. We’re documenting what the agent did,” from VirusTotal lookups to sandbox detonations to KQL queries, including requests, responses and conclusions.
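One way to picture that kind of "show the work" record — a hypothetical sketch, not 7AI's actual format — is an append-only log where every agent action captures the request, the response and the conclusion drawn, so a reviewer can replay the investigation step by step:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AgentStep:
    """One documented action in an AI-driven investigation."""
    tool: str          # e.g. "virustotal_lookup", "sandbox_detonation", "kql_query"
    request: str       # exactly what the agent asked
    response: str      # exactly what came back
    conclusion: str    # what the agent inferred from the response
    timestamp: str = ""

    def __post_init__(self):
        # Stamp each step when it is created, in UTC for consistent audit order.
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()

class InvestigationLog:
    """Append-only record a human reviewer can audit end to end."""
    def __init__(self, case_id: str):
        self.case_id = case_id
        self.steps: list[AgentStep] = []

    def record(self, step: AgentStep) -> None:
        self.steps.append(step)

    def to_json(self) -> str:
        """Serialize the full trail for review or long-term retention."""
        return json.dumps(
            {"case_id": self.case_id, "steps": [asdict(s) for s in self.steps]},
            indent=2,
        )
```

The structure matters more than the implementation: because every step pairs its raw inputs and outputs with the conclusion drawn, a human can audit the reasoning rather than accept the verdict blindly.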

I drew an analogy to the on-screen strike zone in baseball. Human umpires still call balls and strikes in Major League Baseball games, but a digital strike zone displayed on screen shows precisely where each pitch crossed the plate. Everyone can see the data and immediately verify whether the umpire's call was right or wrong. Cybersecurity needs the same model in reverse: machine-speed decisions with human visibility to validate whether those decisions are correct.

The Skills Debate

If AI handles the basics, will the next generation of analysts lose foundational skills?

That is a valid question, and one I have been pondering more lately. But history suggests tools reshape skills rather than erase them. Calculators changed what math students memorize. GPS changed how we navigate.

In security, AI’s ability to document investigations can double as a teaching tool, accelerating how analysts learn to frame better questions, validate AI work and apply judgment at the strategic level.

The Geopolitical Imperative

Div pointed out that there’s also a harder truth to consider. While the West debates AI guardrails, adversaries are pressing forward without hesitation. State actors and criminal syndicates are already weaponizing AI to scale attacks.

His warning is clear: defenders don’t have the luxury of waiting. If attackers adopt non-linear change first, they will gain the upper hand.

Adapting To Non-Linear Change

Cybersecurity stands at a turning point. Incremental improvements won’t match the speed of AI-driven attackers. The industry must embrace non-linear change: deploy AI not as a helper for humans but as a partner that takes on repetitive work while humans focus on what only humans can do.

The question isn’t whether AI belongs in the SOC. The question is how quickly defenders can adapt before attackers leave them behind.



Forbes
