Lack Of AI Oversight Increases Data Breach Risks

The Wiretap is your weekly digest of cybersecurity, internet privacy and surveillance news. To get it in your inbox, subscribe here.
The more companies adopt AI without oversight, the more they put their own security at risk. That's one of the implications of IBM's annual report on data breaches, which looks at the impact of AI for the first time this year. The tech giant found that 16% of breaches in the past year involved the use of AI tools. Additionally, 20% of organizations reported that they'd experienced a breach due to an employee using unsanctioned AI tools on company computers. Of the organizations that saw AI-related breaches, 97% didn't have any access controls in place and 63% didn't have an AI governance policy.
“The data shows that a gap between AI adoption and oversight already exists, and threat actors are starting to exploit it,” Suja Viswesan, IBM’s vice president of security, said in a statement.
The stakes are high: In the United States, the average cost per data breach has reached a record $10.22 million, even as the average cost globally has declined to $4.44 million. Healthcare remains the most expensive sector for data breaches: the average incident costs about $7.42 million, though that is a big decline from 2024’s $9.77 million figure.
Companies are also getting better at managing data breaches: the average lifecycle of a breach incident, from discovery to recovery, dropped to 241 days, compared to last year’s 258 and the 280 days IBM identified in 2020. That’s partly because more companies are discovering breaches on their own rather than hearing about them first from their attackers, and partly because more companies are using AI to monitor their networks and keep them secure.
Got a tip on surveillance or cybercrime? Get me on Signal at +1 929-512-7964.
THE BIG STORY:
How Scrubbing Your Social Media Could Backfire–And Even Hurt Your Job Prospects
Illustration by Samantha Lee for Forbes; Photos by EasyBuy4u/Getty Images; Mark Mawson/Getty Images
For college students looking for jobs or internships, the standard advice about social media has been this: Build up your professional profile on LinkedIn, but scrub other social media accounts (the ones displaying your political opinions or party antics) or just make them private.
Yet recent developments could make that playbook obsolete as students face a potential Catch-22: What they’ve said on social media can hurt them when they are job hunting. But students erasing or cloaking their public online presence could also backfire in less predictable ways.
Some prospective employers are adopting AI tools to screen social media to determine whether applicants are real, because AI has fueled an explosion of fake (or stolen) identities used by scammers. Those tools screen for things like the age of social accounts, posting and liking activity, and LinkedIn connections, which makes scrubbing your profile a riskier proposition.
Read the whole story at Forbes
Stories You Have To Read Today
Over 300 companies have been infiltrated by online scammers from North Korea pretending to work remotely from elsewhere, according to a new report from CrowdStrike.
AI search engine Perplexity is obscuring the identity of its crawlers to sidestep websites that block them, per a new Cloudflare report.
The Senate confirmed Sean Cairncross, a Republican political operative with no professional cybersecurity experience, as the new head of the Office of the National Cyber Director, which advises the President on cyber defense issues.
Hackers backed by the Russian government are attempting to break into systems at foreign embassies in Moscow, Microsoft has warned.
Senators Marsha Blackburn (R-Tenn.) and Gary Peters (D-Mich.) have introduced legislation to develop a national cybersecurity strategy for protecting federal systems from quantum computers.
Winner of the Week
Cybersecurity researchers stand to win tens of thousands of dollars if they can find security issues in popular software at the Pwn2Own contest being held this October in Ireland. The biggest prize? Meta announced last week that it is offering $1 million to any team that can find a 0-day exploit in WhatsApp.
Loser of the Week
Security researchers found major vulnerabilities in the AI coding tool Cursor that would have allowed hackers to remotely execute malicious code and bypass other protections. The vulnerabilities were patched in the latest release.