From Vibe Coding To Vibe Hacking — AI In A Hoodie

Is vibe hacking the next big cyber thing?
Artificial intelligence, or at least the AI we know from large language models and, in particular, the generative pre-trained transformer services we have become so accustomed to, has already been weaponized by threat actors. We’ve seen attack after attack against Gmail users employing AI-powered phone calls, AI is now reported to generate 51% of all spam, and deepfakes have been a cybersecurity issue for years. But just how advanced is the AI cyberattack threat and, more importantly, how close are we to fully autonomous attacks and to vibe hacking emerging from the vibe coding phenomenon?
From Vibe Coding To Vibe Hacking — The Reality Of AI In Cyberattacks
Vibe coding isn’t what a lot of people seem to think it is. I’ve seen numerous folk, many of whom should know better, describe it as a method of letting AI generate code from nothing and develop an application from scratch, without requiring any coding input from the “programmer” directing it to do so. This is, of course, nonsense, albeit nonsense seeded with more than a little reality. Vibe coding makes the life of a developer much easier, delegating some of the programming to AI based on desired outcomes, but it doesn’t negate the requirement to provide direction and demonstrate a high level of understanding. That said, LLMs and vibe coding are making leaps and bounds in producing surprisingly efficient code.
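To make that distinction concrete, here’s a minimal sketch of what the vibe coding loop actually looks like in practice, assuming the OpenAI Python SDK (v1+); the model name and the dedupe_emails task are purely illustrative. The human describes an outcome, the model drafts the code, and review and redirection remain the developer’s job.

```python
# A minimal sketch of the vibe coding loop, assuming the OpenAI Python SDK (v1+).
# The model name and the dedupe_emails task are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The developer describes the desired outcome rather than writing the code...
prompt = (
    "Write a Python function dedupe_emails(path) that reads a CSV file, "
    "removes rows with duplicate email addresses (case-insensitive), "
    "and returns the surviving rows as a list of dicts."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)

generated_code = response.choices[0].message.content
print(generated_code)

# ...but the human still has to review, test, and redirect the model.
# In practice this is iterative: run the generated code, feed errors or
# refinements back as follow-up messages, and repeat until it behaves.
```

But what about hackers using the same techniques, vibe hacking if you will, to do the same with cyberattacks, using LLMs to discover and exploit vulnerabilities reliably and with malicious impact?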
According to Michele Campobasso, a senior security researcher at Forescout, there is “no clear evidence of real threat actors” doing this. Rather, Campobasso said, “most reports link LLM use to tasks where language matters more than code, such as phishing, influence operations, contextualizing vulnerabilities, or generating boilerplate malware components.” Vibe hacking, it would seem from Campobasso’s latest analysis, has a long way to go to catch up with vibe coding.
“Between February and April 2025,” Campobasso said, “we tested over 50 AI models against four test cases drawn from industry-standard datasets and cybersecurity wargames.” The results were, to say the least, informative:
- Open-source LLMs performed poorly at all the tasks, remaining “unsuitable even for basic vulnerability research.”
- Criminal underground LLMs fared marginally better but were “hampered by usability issues, including limited access, unstable behavior, poor output formatting, and restricted context length.”
- Commercial models, perhaps unsurprisingly, did best, but even then only three of the 18 tested could produce a working exploit for the most difficult test case.
- LLMs found exploit development itself much harder than vulnerability research, with no model completing all of the tasks set.
“Attackers still cannot rely on one tool to cover the full exploitation pipeline,” Campobasso said. LLMs produced inconsistent results, with high failure rates. “Even when models completed exploit development tasks,” Campobasso said, “they required substantial user guidance.”
To conclude, Campobasso stated that we are “still far from LLMs that can autonomously generate fully functional exploits,” while the “confident tone” of the models, when incorrect, will mislead the inexperienced attackers most likely to rely upon them.
The age of vibe hacking is approaching, although not as fast as the vibe coding phenomenon would imply, and defenders should start preparing now. Luckily, this isn’t too difficult, according to Campobasso. “The fundamentals of cybersecurity remain unchanged: An AI-generated exploit is still just an exploit, and it can be detected, blocked, or mitigated by patching.”
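As one small illustration of that patching fundamental, a routine dependency audit flags a vulnerable version regardless of whether the exploit aimed at it was written by a human or an LLM. The sketch below is minimal and hedged: the advisory list is hypothetical, and a real deployment would pull fixed-version data from a feed such as the OSV database rather than hardcoding it.

```python
# A minimal sketch of "mitigated by patching": compare installed package
# versions against known-fixed versions. The ADVISORIES table below is
# hypothetical; real tooling would consume an advisory feed such as OSV.
from packaging.version import Version
from importlib.metadata import version, PackageNotFoundError

# Hypothetical advisories: package -> first version containing the fix.
ADVISORIES = {
    "requests": "2.31.0",
    "cryptography": "42.0.4",
}

for package, fixed_in in ADVISORIES.items():
    try:
        installed = Version(version(package))
    except PackageNotFoundError:
        continue  # not installed, nothing to patch
    if installed < Version(fixed_in):
        print(f"VULNERABLE: {package} {installed} < patched {fixed_in}")
    else:
        print(f"OK: {package} {installed}")
```

An AI in a hoodie, in other words, still has to get past the same patched software as every other attacker.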