Artificial intelligence (AI) and large language models (LLMs) can make sense of large volumes of data, aiding attackers and defenders alike.
Adversarial emulation engagements require making sense of vast quantities of structured and unstructured data.
An LLM can process unstructured data and convert it into structured data.
The guardrails-ai Python library makes it possible to add guardrails that constrain an LLM's output to a specific format.
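The core idea behind such guardrails can be sketched without the library itself: parse the model's raw response as JSON, check it against an expected schema, and reject (and re-prompt) anything that does not conform. The following is a minimal standard-library sketch of that loop; the schema fields and the simulated response are assumptions for illustration, not the guardrails-ai API.

```python
import json

# Hypothetical schema: the fields we expect the LLM to emit for each host.
EXPECTED_FIELDS = {"hostname": str, "os": str, "role": str}

def validate_llm_output(raw: str) -> dict:
    """Parse an LLM response as JSON and check it against the schema.

    Raises ValueError if the output is not valid JSON or is missing a
    field -- the caller could then re-prompt the model, which is the
    loop a guardrail library automates.
    """
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"not valid JSON: {exc}") from exc
    for field, ftype in EXPECTED_FIELDS.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], ftype):
            raise ValueError(f"wrong type for field: {field}")
    return data

# Simulated LLM response (an assumption; a real run would call a model).
response = '{"hostname": "FS01", "os": "Windows Server 2019", "role": "file server"}'
record = validate_llm_output(response)
print(record["hostname"])  # FS01
```

Validated records like this can then be fed into ordinary tooling (databases, graphs, spreadsheets) rather than re-parsed from free text each time.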
Case studies show how LLMs can help identify potential targets in a network, find valuable information, crawl data stored on computers, and correlate users with their computers.
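The user-to-computer correlation step can be sketched as follows. The log lines and their format are hypothetical stand-ins for whatever event data the engagement collects; in practice an LLM could first normalize messier free-text records into this shape.

```python
import re
from collections import defaultdict

# Hypothetical logon records; real input would be event-log or
# LDAP-derived data, possibly pre-normalized by an LLM.
logon_events = [
    "alice logged on to WS01",
    "bob logged on to WS02",
    "alice logged on to WS03",
]

def correlate_users(events):
    """Build a user -> set-of-computers mapping from simple text events."""
    pattern = re.compile(r"^(\w+) logged on to (\S+)$")
    mapping = defaultdict(set)
    for line in events:
        match = pattern.match(line)
        if match:
            mapping[match.group(1)].add(match.group(2))
    return mapping

users = correlate_users(logon_events)
print(sorted(users["alice"]))  # ['WS01', 'WS03']
```

A mapping like this tells the operator which workstation to target to reach a particular user's credentials or sessions.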
LLMs can also be used to search for target systems, generate candidate passwords for password cracking, and summarize internal website content or documentation.
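Password-candidate generation is typically a matter of combining seed words with common mutations. A minimal sketch, assuming hypothetical seed words (which an LLM could suggest from internal documentation or intranet pages):

```python
from itertools import product

# Hypothetical seed words an LLM might extract from internal documentation.
base_words = ["Acme", "Falcon"]
years = ["2023", "2024"]
suffixes = ["", "!", "#"]

def candidates(words, years, suffixes):
    """Combine seed words with years and common suffixes into a wordlist."""
    return [w + y + s for w, y, s in product(words, years, suffixes)]

wordlist = candidates(base_words, years, suffixes)
print(len(wordlist))  # 12 candidates, e.g. "Acme2024!"
```

The resulting wordlist would then be handed to a cracking tool; the value the LLM adds is in proposing organization-specific seed words rather than in the combination step itself.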
LLMs have limitations: they may produce false positives or be slow when processing large amounts of data.
Further improvements may be possible by testing variations in the prompts and by investigating additional data sources to analyze.
Linear regression models, clustering, and pathfinding algorithms may be used to evaluate attack paths in a network.
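The pathfinding part of attack-path evaluation can be illustrated with a breadth-first search over a host graph. The graph below is hypothetical; in practice the edges would be derived from collected session, credential, or ACL data, and a weighted algorithm such as Dijkstra's could score paths instead of just counting hops.

```python
from collections import deque

# Hypothetical reachability graph: an edge means access on the first
# host can be used to reach the second.
edges = {
    "WS01": ["WS02", "SRV01"],
    "WS02": ["SRV01"],
    "SRV01": ["DC01"],
    "DC01": [],
}

def shortest_attack_path(graph, start, target):
    """Breadth-first search for the fewest-hop path from start to target."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # target not reachable from start

print(shortest_attack_path(edges, "WS01", "DC01"))  # ['WS01', 'SRV01', 'DC01']
```

Clustering or regression models could complement this by ranking which hosts are worth using as intermediate hops in the first place.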
On the defensive side, it is useful to monitor LDAP queries to detect whether a large amount of data is being retrieved from LDAP, since this kind of bulk collection feeds LLM-assisted analysis.
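A simple form of that monitoring is thresholding query volume per source. The query log, source addresses, and threshold below are assumptions for illustration; real input would come from domain controller telemetry.

```python
from collections import Counter

# Hypothetical per-source LDAP query log (source IP, query filter).
ldap_queries = [
    ("10.0.0.5", "(objectClass=user)"),
    ("10.0.0.5", "(objectClass=computer)"),
    ("10.0.0.5", "(objectClass=group)"),
    ("10.0.0.9", "(sAMAccountName=alice)"),
]

THRESHOLD = 2  # queries per source; tune to the environment (an assumption)

def noisy_sources(queries, threshold):
    """Flag clients issuing more LDAP queries than the threshold."""
    counts = Counter(src for src, _ in queries)
    return [src for src, n in counts.items() if n > threshold]

print(noisy_sources(ldap_queries, THRESHOLD))  # ['10.0.0.5']
```

A production detection would also weigh what is being queried (broad objectClass sweeps versus single-object lookups), not just how often.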