Artificial intelligence can automate and accelerate every phase of incident handling, from detection to response. In practice, however, many AI applications underperform when applied to cybersecurity.
The two longest-established AI techniques in cybersecurity are attack (signature-based) detection and anomaly detection. Although both can reduce human workload, neither approach is entirely reliable on its own.
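To make the distinction concrete, a minimal sketch of signature-based detection is shown below. The rule names and regular expressions are toy assumptions standing in for real intrusion-detection rule sets; production systems use far richer rule languages.

```python
import re

# Hypothetical signature set: regexes matching known attack patterns
# (a toy stand-in for real IDS rules).
SIGNATURES = {
    "sql_injection": re.compile(r"(?i)union\s+select|or\s+1=1"),
    "path_traversal": re.compile(r"\.\./\.\./"),
}

def match_signatures(log_line: str) -> list[str]:
    """Return the names of all known attack signatures found in a log line."""
    return [name for name, pattern in SIGNATURES.items() if pattern.search(log_line)]

print(match_signatures("GET /search?q=1 UNION SELECT password FROM users"))
# → ['sql_injection']
print(match_signatures("GET /index.html"))
# → []
```

Signature matching is fast and precise for known attacks, but it cannot flag anything outside its rule set, which is why anomaly detection is used alongside it.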
AI can help filter false positives, prioritize alerts, detect anomalies, and flag suspicious behavior, but it is not a silver bullet for every detection problem, and it cannot operate autonomously.
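The anomaly-detection side can be illustrated with a simple statistical baseline. The sketch below flags values that deviate strongly from the mean; the failed-login counts and the z-score threshold are illustrative assumptions, and real systems use richer models and features.

```python
from statistics import mean, stdev

def zscore_anomalies(counts: list[float], threshold: float = 2.0) -> list[int]:
    """Flag indices whose value deviates from the mean by more than
    `threshold` sample standard deviations."""
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []  # no variation, nothing to flag
    return [i for i, c in enumerate(counts) if abs(c - mu) / sigma > threshold]

# Hourly failed-login counts (illustrative data): the spike at index 5
# stands out against the quiet baseline.
logins = [4, 5, 3, 6, 4, 250, 5, 4]
print(zscore_anomalies(logins))
# → [5]
```

Note the trade-off this toy example already exposes: a legitimate but unusual burst of activity would be flagged just the same, which is one source of the false positives analysts must still triage.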
Large language models (LLMs) can be helpful for routine cybersecurity tasks, such as generating detailed cyberthreat descriptions and performing an initial analysis of source code.
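One common pattern for such routine tasks is assembling alert context into a structured prompt for the model. The sketch below shows only the prompt-building step; the alert field names are assumptions, and the actual model call is omitted because it depends on whichever LLM API is in use.

```python
def build_triage_prompt(alert: dict) -> str:
    """Assemble an analyst-style prompt asking an LLM for an initial
    write-up of an alert. Field names ('type', 'src_ip', 'details')
    are hypothetical; adapt them to the local alert schema."""
    return (
        "You are assisting a SOC analyst. Summarize the likely threat, "
        "its potential impact, and suggested next steps.\n\n"
        f"Alert type: {alert['type']}\n"
        f"Source IP: {alert['src_ip']}\n"
        f"Details: {alert['details']}\n"
    )

prompt = build_triage_prompt({
    "type": "brute-force login",
    "src_ip": "203.0.113.7",
    "details": "412 failed SSH logins in 10 minutes",
})
print(prompt)
```

Keeping the prompt assembly deterministic and auditable like this also makes it easier to evaluate whether the LLM's output is actually saving analyst time.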
Available research on the performance of skilled employees aided by LLMs shows mixed results. These solutions should therefore be adopted gradually, after evaluating both the time invested and the quality of the results.