Weaponized AI attacks targeting identities will be the greatest enterprise cybersecurity threat by 2025.
Large Language Models (LLMs) are the new power tool of choice for rogue attackers, cybercrime syndicates and nation-state attack teams.
According to a recent survey, 84% of IT and security leaders find AI-powered tradecraft more complex to identify and stop.
Deepfakes lead all other forms of adversarial AI attacks and were involved in nearly 20% of synthetic identity fraud cases.
Synthetic identity fraud is on pace to cost financial and commerce systems nearly $5 billion this year alone.
Ivanti’s recent report finds that 74% of businesses are already seeing the impact of AI-powered threats.
Adversarial AI techniques are expected to advance faster than many organizations’ existing approaches to securing endpoints.
Every security and IT team needs to treat endpoints as already compromised, find new ways to segment them, and minimize vulnerabilities at the identity level.
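In practice, minimizing vulnerability at the identity level starts with deny-by-default authorization: an assumed-compromised endpoint can only reach what its identity has been explicitly granted. A minimal Python sketch of that pattern follows; the grant table, identity names, and resource names are hypothetical placeholders, not any vendor's API.

```python
# Hypothetical grant table mapping each identity to the (resource, action)
# pairs it is explicitly allowed to use; everything else is denied.
GRANTS = {
    "svc-payroll": {("payroll-db", "read")},
    "alice": {("payroll-db", "read"), ("payroll-db", "write")},
}

def is_allowed(identity: str, resource: str, action: str) -> bool:
    """Deny by default: only explicit, narrowly scoped grants pass."""
    return (resource, action) in GRANTS.get(identity, set())

# Even if the endpoint is compromised, its identity stays narrowly scoped.
assert is_allowed("svc-payroll", "payroll-db", "read")
assert not is_allowed("svc-payroll", "payroll-db", "write")   # no write grant
assert not is_allowed("unknown-host", "payroll-db", "read")   # unknown identity
```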
The answer is not necessarily to spend more money, but to find practical ways to harden existing systems.
AI’s ability to protect identities and enforce least-privilege access will become more pronounced in 2025.
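One way AI already contributes here is anomaly detection on identity signals. The sketch below is illustrative only: it trains scikit-learn's IsolationForest on synthetic login features (hour of day, failed attempts, distance from the user's usual location) rather than real telemetry, then flags an out-of-pattern login.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
# Hypothetical per-login features:
# [hour_of_day, failed_attempts, km_from_usual_location]
baseline = np.column_stack([
    rng.normal(10, 2, 500),    # logins cluster around business hours
    rng.poisson(0.2, 500),     # occasional failed attempt
    rng.normal(5, 3, 500),     # close to the user's usual location
])
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# A 3 a.m. login with six failures from 4,200 km away scores as an outlier (-1).
print(model.predict(np.array([[3.0, 6.0, 4200.0]])))
```

A production deployment would apply the same pattern to actual identity telemetry and could gate step-up authentication or session revocation on the score.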