Amazon Bedrock Guardrails introduces new capabilities that strengthen the safety of generative AI applications and help organizations implement responsible AI policies at enterprise scale.
The new capabilities include detecting harmful multimodal content, filtering sensitive information, and preventing hallucinations across diverse foundation models.
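As an illustration of how these capabilities are configured, the sketch below creates a guardrail with content filters for harmful content and contextual grounding checks, which help flag ungrounded (hallucinated) responses. It uses the boto3 `create_guardrail` API; the guardrail name, filter types, thresholds, and messages are illustrative choices, not recommendations.

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Illustrative guardrail: harmful-content filters plus contextual grounding
# checks (grounding/relevance thresholds help flag ungrounded answers).
response = bedrock.create_guardrail(
    name="demo-safety-guardrail",  # hypothetical name
    description="Blocks harmful content and ungrounded responses",
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "VIOLENCE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        ]
    },
    contextualGroundingPolicyConfig={
        "filtersConfig": [
            {"type": "GROUNDING", "threshold": 0.75},
            {"type": "RELEVANCE", "threshold": 0.75},
        ]
    },
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't provide that response.",
)
print(response["guardrailId"], response["version"])
```

The returned guardrail ID and version are what you reference later when invoking models or evaluating content.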
Amazon Bedrock Guardrails provides configurable safety and privacy safeguards that work with foundation models available in Amazon Bedrock as well as with custom models. This helps organizations apply consistent AI safety controls across multiple models while tailoring safeguards to their own compliance requirements.
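Because guardrails can be evaluated independently of a model invocation through the ApplyGuardrail API, the same policies can be applied to custom or self-managed models. The minimal sketch below assumes an existing guardrail; the identifier and version are placeholders.

```python
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Evaluate a user prompt against an existing guardrail before sending it
# to any model (Bedrock-hosted, custom, or self-managed).
result = runtime.apply_guardrail(
    guardrailIdentifier="your-guardrail-id",  # placeholder
    guardrailVersion="DRAFT",
    source="INPUT",  # use "OUTPUT" to check a model response instead
    content=[{"text": {"text": "How do I make a dangerous substance?"}}],
)

if result["action"] == "GUARDRAIL_INTERVENED":
    # The guardrail blocked or redacted the content; return its canned output.
    print(result["outputs"][0]["text"])
else:
    print("Content passed the guardrail checks.")
```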
The service also integrates with AWS Identity and Access Management (IAM), Amazon Bedrock Agents, and Amazon Bedrock Knowledge Bases.
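When invoking a model hosted in Amazon Bedrock, a guardrail can also be attached directly to the request. The sketch below uses the Converse API; the model ID is one example, and the guardrail ID is a placeholder.

```python
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Attach the guardrail to a Converse call; Bedrock applies the configured
# policies to both the user prompt and the model's response.
response = runtime.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    messages=[{"role": "user", "content": [{"text": "Summarize our refund policy."}]}],
    guardrailConfig={
        "guardrailIdentifier": "your-guardrail-id",  # placeholder
        "guardrailVersion": "DRAFT",
        "trace": "enabled",  # include assessment details in the response
    },
)

print(response["stopReason"])  # "guardrail_intervened" if a policy fired
print(response["output"]["message"]["content"][0]["text"])
```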
New guardrail policy enhancements strengthen content protection for generative AI applications in several areas.
Multimodal toxicity detection for image content is now generally available, extending safeguards beyond text and detecting harmful image content with up to 88% accuracy.
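Below is a hedged sketch of how image content might be checked through the ApplyGuardrail API, assuming a guardrail whose content filters cover the image modality; the local file name and guardrail ID are placeholders, and the image content-block fields follow the current boto3 documentation.

```python
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Check an image (plus accompanying text) against a guardrail whose
# content filters are configured for the image modality.
with open("user_upload.png", "rb") as f:  # hypothetical local file
    image_bytes = f.read()

result = runtime.apply_guardrail(
    guardrailIdentifier="your-guardrail-id",  # placeholder
    guardrailVersion="DRAFT",
    source="INPUT",
    content=[
        {"text": {"text": "What is shown in this picture?"}},
        {"image": {"format": "png", "source": {"bytes": image_bytes}}},
    ],
)

print(result["action"])  # "GUARDRAIL_INTERVENED" if the image is flagged
```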
Amazon Bedrock Guardrails now also offers enhanced privacy protection through improved detection of personally identifiable information (PII) in user inputs.
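The sketch below adds a sensitive-information policy to an existing guardrail, assuming the boto3 `update_guardrail` API; the entity types shown are illustrative, and ANONYMIZE masks matches with placeholder tags rather than blocking the request.

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Add a sensitive-information policy to an existing guardrail. ANONYMIZE
# masks detected entities; BLOCK rejects the request outright. Note that
# update_guardrail replaces the guardrail's existing configuration, so any
# other policies you want to keep must be passed again in the same call.
bedrock.update_guardrail(
    guardrailIdentifier="your-guardrail-id",  # placeholder
    name="demo-safety-guardrail",  # hypothetical name
    sensitiveInformationPolicyConfig={
        "piiEntitiesConfig": [
            {"type": "EMAIL", "action": "ANONYMIZE"},
            {"type": "PHONE", "action": "ANONYMIZE"},
            {"type": "US_SOCIAL_SECURITY_NUMBER", "action": "BLOCK"},
        ]
    },
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't provide that response.",
)
```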
Additional features include mandatory guardrail enforcement through IAM policies and optimized policy application that improves performance while maintaining protection.
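For IAM-based enforcement, the announcement describes a condition key that ties model invocation permissions to a specific guardrail. The sketch below shows one possible policy shape using the `bedrock:GuardrailIdentifier` condition key, expressed as a Python dictionary and created with boto3; the ARN, account ID, and policy name are placeholders, and the exact policy pattern should be confirmed against the Bedrock documentation.

```python
import json
import boto3

iam = boto3.client("iam")

# Deny InvokeModel calls that do not reference the required guardrail
# (identified by its ARN) via the bedrock:GuardrailIdentifier condition key.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RequireSpecificGuardrail",
            "Effect": "Deny",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "bedrock:GuardrailIdentifier": (
                        "arn:aws:bedrock:us-east-1:111122223333:"
                        "guardrail/your-guardrail-id"  # placeholder ARN
                    )
                }
            },
        }
    ],
}

iam.create_policy(
    PolicyName="require-bedrock-guardrail",  # hypothetical policy name
    PolicyDocument=json.dumps(policy_document),
)
```

Attaching a policy like this to a role means model invocations from that role are denied unless they include the required guardrail.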
Together, these enhancements help customers maintain safety standards, streamline governance, and put responsible AI practices into operation.