The offensive potential of AI is a growing societal concern, with evidence that it is already being used to violate security and privacy objectives.
This paper provides a systematic analysis of the heterogeneous capabilities of offensive AI, considering risks to both humans and systems.
The authors analyze 95 research papers, 38 InfoSec briefings, a user study with 549 participants, and the opinions of 12 experts to reveal overlooked ways in which AI can be used offensively.
The findings not only highlight current threats but also lay the groundwork for addressing these threats in the future.