This article provides a practical overview of the protective measures required for different components when developing robust AI systems.
While LLM security is relatively new, challenges like prompt injection have surfaced with the increased accessibility of ML models through interfaces and APIs.
Membership Inference Attacks may not be applicable to LLMs due to massive training datasets and low training Epochs, but other attacks like data and model poisoning are still relevant.
Establishing a security threat model is essential to determine what aspects of the AI system need protection.
Various tools, including Garak, have been developed to address vulnerabilities like prompt injection, data leakage, and hallucinations in LLMs.
It's crucial to be aware of potential security risks when utilizing models like Qwen Coder within agent architectures, as vulnerabilities could lead to significant security breaches.
Model and data poisoning can result from supply chain vulnerabilities, potentially leading to harmful alterations in model outputs and remote code execution possibilities.
Verifiable ML models and tools like AI Bill of Materials (AI BOM) aid in ensuring the integrity of models before deployment, contributing to a more secure AI supply chain.
Being cautious about downloading models from platforms like HuggingFace and verifying datasets and algorithms is essential to mitigate AI supply chain risks.
Tools like model transparency and AI BOM provide ways to verify model provenance and enhance security measures in AI systems.