
Is 2025 shaping up to be the year of agentic AI?

But where there is opportunity, there is also risk.


Agentic AI offers threat actors new ways to reach sensitive data and sabotage business-critical systems for gain.

The autonomous nature of the technology could also lead to unintended consequences and potentially unsafe decision-making.

Field CTO at Trend Micro.

Agentic AI for good and bad

Does the tech match the hype? Salesforce certainly thinks so.

It describes agentic AI as a third wave of innovation, following predictive AI modelling and LLM-powered generative AI.

PwC claims the technology could add between $2.6tn and $4.4tn to global GDP annually by 2030.

With unauthorized access, threat actors can feed incorrect or biased data into the AI system to manipulate its behavior and outputs.

All of which could result in data breaches, extortion, service outages and major reputational/financial risk.

Unintentional misalignment

Yet there's more.

There are plenty of examples of unintentional misalignment to be concerned about.

Consider a self-driving car programmed to prioritize passenger safety above all else. Pursued too literally, that goal could see the vehicle brake or swerve in ways that endanger other road users: technically meeting its objective while causing real harm.

RAG risk is already here

These aren't merely theoretical risks.

Retrieval augmented generation (RAG) grounds model outputs in an organization's own data, which is why it's increasingly popular in use cases like financial analysis, patient care and online product recommendations.
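To make the moving parts concrete, here is a minimal, illustrative sketch of a RAG flow in Python. The embed, vector_db and llm objects are placeholder interfaces rather than any specific vendor's API; in a real deployment they would map onto an embedding service, a vector database and a model server.

# Illustrative RAG flow: embed the query, fetch similar documents from a
# vector store, then ask the LLM to answer using that retrieved context.
# embed(), vector_db.search() and llm.generate() are placeholders, not a
# particular product's API.
def answer_with_rag(question, vector_db, llm, embed, k=3):
    query_vector = embed(question)                 # turn the question into an embedding
    documents = vector_db.search(query_vector, k)  # nearest-neighbour lookup in the vector store
    context = "\n\n".join(doc.text for doc in documents)
    prompt = (
        "Answer the question using only the context below.\n\n"
        "Context:\n" + context + "\n\nQuestion: " + question
    )
    return llm.generate(prompt)                    # the generation step, e.g. a hosted model

The point is that every hop in this chain, the embedding service, the vector database and the model server, is another piece of infrastructure that has to be secured.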

However, the systems that power these deployments are riddled with security holes.

The same research reveals vulnerabilities in popular vector databases like Weaviate.

In the case of Ollama, this could provide threat actors with access to as many as 15,000 discrete LLMs.
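To see why that exposure matters, consider a hedged sketch of what an outsider could do against an Ollama server left reachable from the internet. Ollama's HTTP API (port 11434 by default) has no built-in authentication, so a server bound to a public interface can be enumerated and queried by anyone who can reach it. The IP address and model name below are hypothetical.

import requests

HOST = "http://203.0.113.10:11434"  # hypothetical exposed Ollama server (documentation-range IP)

# List every model the server hosts -- no credentials required.
tags = requests.get(HOST + "/api/tags", timeout=10).json()
for model in tags.get("models", []):
    print("exposed model:", model["name"])

# The same open port accepts generation requests, letting an outsider consume
# the victim's compute or probe the models directly. "llama3" is assumed to be
# one of the models listed above.
resp = requests.post(
    HOST + "/api/generate",
    json={"model": "llama3", "prompt": "Hello", "stream": False},
    timeout=60,
)
print(resp.json().get("response"))

Putting such endpoints behind authentication, network segmentation or at the very least a firewall rule removes most of this low-hanging fruit.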

Stepping back and managing risk

So how can organizations hope to get back on the front foot?

Most importantly, by approaching AI from a security-by-design perspective.

That means ensuring security leaders get a seat at the table when new projects are being discussed.

And that data protection impact assessments (DPIAs) are run before any new initiative is launched.

Finally, organizations should look to AI security leaders to help mitigate immediate cyber-related risks.


The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc.