AI adoption is increasing rapidly, but organizations face significant security risks that need to be addressed for trust, privacy, and business continuity.
Cisco's 'State of AI Security in 2025' report emphasizes the gap between AI adoption rates and organizational readiness to secure AI systems effectively.
New threats in AI security include infrastructure attacks targeting AI frameworks like NVIDIA's Container Toolkit and Ray, as well as supply chain vulnerabilities.
Emerging AI-specific attacks like prompt injection, jailbreaking, and training data extraction pose challenges to traditional cybersecurity methods.
Attack vectors targeting AI systems include jailbreaking, indirect prompt injection, and training data poisoning, making AI systems vulnerable at various stages of their lifecycle.
Cisco's research reveals vulnerabilities in top AI models, risks in fine-tuning models, training data extraction methods, and the ease and impact of data poisoning.
AI is not only a target for cyber threats but also a tool for cybercriminals, enabling more effective attacks and personalized scams.
Best practices for securing AI systems include risk management across the AI lifecycle, using established cybersecurity practices, focusing on vulnerable areas like supply chains, and educating employees on AI security risks.
As AI adoption continues to rise, organizations need to prioritize security alongside innovation to navigate evolving security risks and opportunities in the AI landscape.