Trends

Key takeaways on the state of AI cybersecurity 2025 [Darktrace report]

AI is no longer the “future of cybersecurity.” It’s the here and now. That’s the message from Darktrace’s State of AI Cybersecurity 2025 report, which surveyed more than 1,500 cybersecurity pros across 14 countries.

What they revealed is a landscape in motion. AI is helping security teams act faster, spot more, and plug gaps before they turn into breaches. At the same time, it’s arming attackers with new tricks — speeding up phishing campaigns, tailoring ransomware, and even automating parts of their operations.

This mix of opportunity and threat means one thing for 2025: your defenses now need AI in their DNA.

Here's a closer look at what that means in practice.

AI cybersecurity quick overview for 2025

Here’s what’s shaping the day-to-day for security teams this year.

AI-powered threats’ impact is growing

If you think AI-powered threats are still on the horizon, think again. 78% of CISOs say AI-powered threats are already making a significant impact on their organization, a 5% jump from 2024. In other words, these attacks are multiplying and maturing.

AI has shifted from being a background concern to an everyday pressure point for security teams across industries. The tactics driving this change are both creative and relentless:

  • AI-enhanced phishing has moved past the obvious typos and clunky wording. 
  • Messages can be tailored to the recipient’s role, company, or even current events, making them far more believable. 
  • Attackers are using AI to churn out massive volumes of phishing attempts at a pace humans can’t match.

And let's not forget ransomware.

With AI in the mix, new strains can morph quickly to bypass detection, often learning from failed attempts to improve their success rate. They can identify high-value targets inside a network faster, making the damage more targeted and more costly.

Then there's cybercrime-as-a-service. Yes, you read that right.

This model now includes AI toolkits that let even inexperienced actors launch sophisticated campaigns. For the price of a subscription, they can automate reconnaissance, craft malicious code, and even adapt attacks mid-stream.

For businesses, the speed and scale of these AI-powered threats raise the stakes. A phishing email that once took hours to craft now takes seconds. Malware that used to need weeks of testing can be deployed and fine-tuned in real time. That compresses the window for detection and response down to minutes — sometimes less.

Why should you bother? Because the fallout from a successful attack is no longer limited to stolen data. 

We’re talking about prolonged downtime, reputational damage, compliance fines, and loss of customer trust.

Thus, treat AI-powered threats as part of your baseline risk. Build defenses that can detect unknown threats, automate responses, and learn on the fly.

Confidence is rising, but knowledge gaps stay wide

What about confidence? In 2024, 60% of CISOs said they felt unprepared to defend against AI-powered threats. In 2025, it's "only" 45%. So, on paper, things are moving in the right direction.

The real gap shows up when you ask about understanding. 

Only 42% of respondents feel fully confident in knowing what types of AI in cybersecurity are in their stack. Drill down by role and it’s even starker: CISOs lead at 60%, but security operators scrape by at 10%, and administrators hit just 14%.

Going deeper, we find that two of the top three inhibitors to defending against AI-powered threats are tied to insufficient knowledge: either not knowing how to use AI countermeasures effectively, or lacking the skills to handle AI technology in the first place.

It's hard to fight with tools you don't fully understand.

Closing that gap means more than adding another product to the stack. You need to get everyone — from the SOC floor to the executive suite — fluent in what AI in cybersecurity is doing, how it’s making decisions, and where it fits into the bigger picture of defense.

The hiring paradox: too few people, no big hiring plans

Another growing challenge? There aren’t enough people to handle the workload — and no big hiring plans on the horizon.

When security leaders talk about what’s holding them back, one answer comes up more than any other — too few people to handle the tools and alerts. It tops the list with a score of 3.10 out of 5. 

Close behind? A lack of AI know-how. But when you look at hiring plans for 2025, only 11% of organizations say they’ll add more cybersecurity staff. For executives, it’s even lower — just 8%.

But why is that?

Instead of adding more people, security leaders are investing in tools that work smarter and faster. 

64% of respondents plan to add AI-powered solutions to their security stack this year. And 88% believe using AI is critical to freeing up time for proactive work. In other words, AI is becoming the “extra team member” in the SOC — the one who:

  • never sleeps, 
  • never takes a holiday, 
  • and can handle the grunt work.

That frees humans to focus on strategy and complex threats.

It’s quite a pragmatic approach in a talent-short market: If you can’t hire enough people, make the ones you have faster, sharper, and better equipped.

Policies: plenty of talk, less action

The skills gap we’ve just talked about ties directly into policy. If teams aren’t fully confident in how AI cybersecurity is working in their stack, it’s no surprise that governance is patchy. 

The Darktrace report shows 95% of organizations are either discussing an AI policy or have already implemented one. That sounds impressive, until you see that only 45% actually have a policy in place right now.

Even fewer are putting their AI cybersecurity use under a regular microscope. Just 37% say they audit or monitor AI usage consistently. For a technology that’s evolving this quickly, that’s a thin safety net.

Where there is action, it’s mostly about controlling risk at the data and security perimeter:

  • 67% have measures to prevent unwanted exposure of corporate data when using AI.
  • 62% protect against other AI-related threats or risks.
  • 48% invest in AI-specific training for application developers.

Those steps make sense — data leakage, untested models, and untrained developers can all open up new attack surfaces. But the big gap is in making these controls part of a live and enforced policy instead of a one-off project.

Without clear rules and regular checks, AI in cybersecurity can drift into risky territory. 

Policies bring everyone onto the same page — from the SOC team experimenting with detection models to the business units deploying AI-powered customer tools.

Done right, policy is an enabler. It sets guardrails so teams can move fast without breaking trust. The organizations that close the gap between “we’re talking about it” and “we’ve nailed it” will be the ones that get full value from AI without introducing silent vulnerabilities.

Where AI cybersecurity makes the biggest defensive impact

What’s interesting is that even with policy gaps, security teams are betting big on AI. 

When asked where AI will have the most impact, the top answer by far was improving detection of new or unknown threats at 56.9%. And yes, this is where traditional tools often stumble, relying on known signatures or patterns. AI, with its ability to learn what’s “normal” in a network, can spot the weird stuff before it becomes a headline.
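The idea of learning what's "normal" and flagging deviations can be illustrated with a toy baseline model. This is a deliberately simplified sketch with made-up numbers, nowhere near the behavioral models the report describes, but it shows the core logic of anomaly-based detection:

```python
from statistics import mean, stdev

def build_baseline(samples):
    """Learn what 'normal' looks like from historical measurements,
    e.g. outbound traffic per host per hour."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values that deviate strongly from the learned norm."""
    mu, sigma = baseline
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

# Hypothetical hourly outbound traffic (MB) for one host over a quiet week
history = [120, 115, 130, 125, 118, 122, 128, 119]
baseline = build_baseline(history)

print(is_anomalous(124, baseline))  # typical traffic: False
print(is_anomalous(900, baseline))  # sudden spike worth investigating: True
```

The point is that nothing here depends on a known attack signature: a never-before-seen threat still gets flagged the moment its behavior departs from the baseline.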

Autonomous response came second at 43%. This is about AI taking direct action without waiting for a human to approve it — quarantining a device, cutting off suspicious traffic, or blocking a malicious file before it spreads.
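Conceptually, autonomous response is a tight loop from detection straight to containment. A hypothetical sketch (the alert types and actions are illustrative, not any vendor's real API):

```python
def respond(alert):
    """Map a detection to a containment action without waiting
    for a human to approve it. All names here are hypothetical."""
    if alert["kind"] == "lateral_movement":
        return f"quarantine device {alert['host']}"
    if alert["kind"] == "c2_beacon":
        return f"block traffic to {alert['dest']}"
    if alert["kind"] == "malicious_file":
        return f"isolate file {alert['path']}"
    # Anything unrecognized still goes to a person
    return "escalate to analyst"

print(respond({"kind": "lateral_movement", "host": "ws-042"}))
# quarantine device ws-042
```

The key design choice is the fallback: automation handles the well-understood cases at machine speed, while anything ambiguous is escalated rather than acted on blindly.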

Next is identifying exploitable vulnerabilities at 41.6%. Think of it as AI doing constant recon on your own environment, surfacing weak spots before an attacker does.

Accelerating threat investigation (36.7%) follows — turning days of manual log-sifting into minutes of correlation and insight. 

And while phishing and attack simulation scored lower at 13.7%, it’s still a valuable training and resilience tool when powered by AI.

The belief in this potential isn’t theoretical. 50% of respondents strongly agree that AI improves the speed and efficiency of prevention, detection, response, and recovery. Even more telling, 75% are confident AI cybersecurity solutions can defend better against AI-powered threats than traditional tools.

What ties all this together is speed. 

In a world where an AI-built phishing email or ransomware payload can adapt in seconds, defenses have to move just as fast. The big gains come when AI isn’t just spotting problems but acting on them instantly. And learning from every encounter so it’s sharper next time.

It’s why many organizations, even without perfect policies, are leaning into AI deployments. 

The return on speed, coverage, and insight is too big to ignore. But that also circles us back to the earlier points: these capabilities work best when the people using them understand what’s under the hood and have the right guardrails in place.

Top tech priorities for 2025

A year ago, most security teams were still figuring out where AI fit in. Now they're deciding how to center their entire defense around it. Three priorities are leading that shift, and they're defining what smarter, faster security will look like this year.

First, proactivity. 88% of respondents say AI tools are critical for freeing up time so their teams can be more proactive. 

We can call it an enormous change from firefighting mode, where analysts spend most of their day reacting to alerts. With AI taking on repetitive detection and triage work, humans can spend their energy on deeper investigations, threat hunting, and improving the overall security posture. It’s the difference between barely keeping up and actually getting ahead.

Then there’s the platform approach. 87% of those surveyed would rather have a single integrated platform than a patchwork of point solutions. 

It’s not hard to see why. Too many tools can mean too many dashboards, disconnected data, and extra work just to keep systems talking to each other. An AI-driven platform brings all those signals together, spotting patterns across email, cloud, network, and endpoints in one place. It’s cleaner, faster, and reduces the chances of something slipping through the cracks.

Finally, privacy. AI is powerful, but the trust factor is non-negotiable. 84% say they want solutions that don’t require sending their data out for external model training. 

That means keeping sensitive information inside the organization’s own environment while still getting the benefits of advanced detection and response. It’s a sign that the conversation around AI in cybersecurity is no longer just about capability — it’s about capability without compromise.

Put together, these three priorities sketch a clear picture of 2025’s AI cybersecurity mindset: get ahead of threats, manage everything in one intelligent system, and protect the privacy of your data along the way.

Looking ahead: cloud and network security lead the way

So where will AI cybersecurity have the biggest defensive impact in the coming years? According to Darktrace, the answer is cloud security and network security.

The numbers speak for themselves:

66% of respondents think the cloud is the top domain for AI’s future impact. The reason is simple — cloud environments are sprawling, dynamic, and critical to business operations. Applications spin up and down, workloads move between regions, and user access shifts constantly. That’s fertile ground for attackers, but also for AI, which can monitor activity in real time, spot anomalies, and shut down threats before they spread.

55% put network security next. Even with workloads in the cloud, the network is still the nervous system of any organization. It’s where lateral movement happens after an initial breach, and where AI can be a huge advantage in spotting unusual traffic flows, device behavior changes, or command-and-control activity that traditional rules-based systems might miss.

The focus on these two areas makes sense. 

They’re high-impact targets for attackers and high-value protection points for defenders. Cloud breaches can expose massive amounts of sensitive data; network compromises can quietly escalate into full-scale incidents. 

Fortunately, by putting AI in these layers, security teams gain visibility and speed — the two things you need most when defending modern, distributed infrastructure.

Last words on AI cybersecurity

Every coin has two sides, and AI is no exception.

AI is in every part of the security conversation — both as a threat and as a defense. Attackers are using it to move faster, hit harder, and stay hidden longer. Defenders are leaning on it to cut through noise, spot unknown threats, and act before damage is done.

Now we know that the gap between confidence and knowledge is still wide. Many teams believe in AI’s potential but don’t yet understand it deeply enough to use it to its fullest. And while most organizations are talking about AI policies, far fewer have them in place — and even fewer are actively auditing usage.

For leaders, the next move should be clear: close these gaps, lock in governance, and deploy AI where it can have the highest impact. Because in this era, waiting to adapt is an open invitation for the threats to get there first.

***