The Microsoft.Extensions.AI.Evaluation.Safety package has been added to the Microsoft.Extensions.AI.Evaluation libraries to detect harmful content in AI-generated responses.
The safety evaluators, powered by the Azure AI Foundry Evaluation service, integrate seamlessly into existing evaluation workflows.
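A minimal, hedged sketch of that integration point is shown below; the ContentSafetyServiceConfiguration constructor parameters and the ToChatConfiguration() helper reflect the package's documented surface, but the exact names are worth confirming against the current release.

```csharp
using Azure.Identity;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Safety;

// Connect the safety evaluators to the Azure AI Foundry project that performs
// the content safety analysis. The subscription, resource group, and project
// values come from the setup described in the next step.
var contentSafetyConfig = new ContentSafetyServiceConfiguration(
    credential: new DefaultAzureCredential(),
    subscriptionId: "<your-subscription-id>",
    resourceGroupName: "<your-resource-group>",
    projectName: "<your-ai-project>");

// Safety evaluators consume a ChatConfiguration just like the LLM-based
// quality evaluators do, which is what lets them slot into existing workflows.
ChatConfiguration chatConfiguration = contentSafetyConfig.ToChatConfiguration();
```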
Steps to set up Azure AI Foundry for safety evaluations include creating an Azure subscription, resource group, AI hub, and project.
A C# example shows how to configure and run the safety evaluators to check AI responses for criteria such as hate and unfairness, self-harm, violence, and sexual content.
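A hedged outline of that flow follows, reusing the chatConfiguration created above; the specific EvaluateAsync overload and metric properties shown are assumptions to verify against the package.

```csharp
using System;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Safety;

// A hard-coded exchange for illustration; in practice the response would come
// from the IChatClient under test.
var userRequest = new ChatMessage(ChatRole.User, "How do I copy a file in C#?");
var modelResponse = new ChatResponse(
    new ChatMessage(ChatRole.Assistant, "Use File.Copy(source, destination)."));

// ContentHarmEvaluator bundles the hate-and-unfairness, self-harm, violence,
// and sexual-content checks; the individual evaluators (for example,
// ViolenceEvaluator) can also be used on their own.
IEvaluator safetyEvaluator = new ContentHarmEvaluator();

// chatConfiguration is the one created from ContentSafetyServiceConfiguration
// in the earlier snippet.
EvaluationResult result = await safetyEvaluator.EvaluateAsync(
    userRequest, modelResponse, chatConfiguration);

// Each harm category is reported as a metric whose interpretation can be
// inspected or asserted on.
foreach (EvaluationMetric metric in result.Metrics.Values)
{
    Console.WriteLine($"{metric.Name}: {metric.Interpretation?.Rating}");
}
```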
Running unit tests and generating reports can be done using Visual Studio, Visual Studio Code, or the .NET CLI.
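For instance, a test along these lines (a sketch assuming MSTest and the disk-based reporting storage; parameter names such as storageRootPath are taken from the libraries' docs and worth double-checking) persists its evaluation results so a report can be generated afterwards.

```csharp
using System.Linq;
using System.Threading.Tasks;
using Azure.Identity;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Reporting;
using Microsoft.Extensions.AI.Evaluation.Reporting.Storage;
using Microsoft.Extensions.AI.Evaluation.Safety;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ContentSafetyTests
{
    // Evaluation results are written under this path; the reporting tool later
    // turns them into an HTML report.
    private static readonly ReportingConfiguration s_reportingConfiguration =
        DiskBasedReportingConfiguration.Create(
            storageRootPath: @"C:\TestReports",
            evaluators: new IEvaluator[] { new ContentHarmEvaluator() },
            chatConfiguration:
                new ContentSafetyServiceConfiguration(
                    credential: new DefaultAzureCredential(),
                    subscriptionId: "<your-subscription-id>",
                    resourceGroupName: "<your-resource-group>",
                    projectName: "<your-ai-project>").ToChatConfiguration());

    [TestMethod]
    public async Task ResponseContainsNoHarmfulContent()
    {
        // Each test creates a named scenario run whose results are persisted
        // to the storage root configured above.
        await using ScenarioRun scenarioRun =
            await s_reportingConfiguration.CreateScenarioRunAsync("FileCopyQuestion");

        var userRequest = new ChatMessage(ChatRole.User, "How do I copy a file in C#?");
        var modelResponse = new ChatResponse(
            new ChatMessage(ChatRole.Assistant, "Use File.Copy(source, destination)."));

        EvaluationResult result = await scenarioRun.EvaluateAsync(userRequest, modelResponse);

        // Fail the test if any safety metric was interpreted as failing.
        Assert.IsFalse(result.Metrics.Values.Any(m => m.Interpretation?.Failed == true));
    }
}
```

After running the tests with `dotnet test` (or from Visual Studio or Visual Studio Code), the results under the storage root can be rendered to HTML with the separately shipped Microsoft.Extensions.AI.Evaluation.Console .NET tool; its `aieval report` command is the documented entry point, though the exact arguments should be confirmed against the tool's help output.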
The API usage samples cover scenarios such as evaluating the content safety of AI responses that include images and running safety and quality evaluators together.
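A condensed, hedged sketch of those two scenarios together, building on the configuration above: the CompositeEvaluator type, the ToChatConfiguration overload that accepts an existing IChatClient, and the UriContent image type are taken from the packages' documented surface, and `originalChatClient` is a placeholder for whatever IChatClient the application already uses.

```csharp
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Quality;
using Microsoft.Extensions.AI.Evaluation.Safety;

// Safety and quality evaluators implement the same IEvaluator abstraction, so
// they can be grouped into a single composite evaluation.
IEvaluator evaluators = new CompositeEvaluator(
    new ContentHarmEvaluator(),   // safety: hate/unfairness, self-harm, violence, sexual content
    new CoherenceEvaluator());    // quality: LLM-scored coherence of the response

// Quality evaluators need an LLM-backed chat client, while safety evaluators
// talk to the Azure AI Foundry Evaluation service; this overload layers the
// content safety connection over the application's existing chat client.
ChatConfiguration chatConfiguration =
    contentSafetyConfig.ToChatConfiguration(originalChatClient);

// A request that carries an image alongside text; the safety evaluators can
// assess image content as well as text.
var userRequest = new ChatMessage(
    ChatRole.User,
    new AIContent[]
    {
        new TextContent("Describe this picture."),
        new UriContent("https://example.com/picture.png", "image/png")
    });

ChatResponse modelResponse = await originalChatClient.GetResponseAsync([userRequest]);
EvaluationResult result = await evaluators.EvaluateAsync(userRequest, modelResponse, chatConfiguration);
```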
Updates to the Microsoft.Extensions.AI.Evaluation libraries include enhanced quality evaluators and improved reporting functionality.
New reporting features allow searching and filtering scenarios using tags, viewing rich metadata for metrics, and tracking historical trends.
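For example, a rough sketch of how tags might be attached, reusing the reporting setup from the test above; the `tags` and `additionalTags` parameter names are assumptions about the reporting API's shape rather than confirmed signatures.

```csharp
// Tags attached to the ReportingConfiguration apply to every scenario run it
// creates, while per-scenario tags make individual runs searchable and
// filterable in the generated report.
ReportingConfiguration reportingConfiguration =
    DiskBasedReportingConfiguration.Create(
        storageRootPath: @"C:\TestReports",
        evaluators: new IEvaluator[] { new ContentHarmEvaluator() },
        chatConfiguration: chatConfiguration,
        tags: new[] { "content-safety", "smoke-test" });   // assumed parameter name

await using ScenarioRun scenarioRun =
    await reportingConfiguration.CreateScenarioRunAsync(
        "FileCopyQuestion",
        additionalTags: new[] { "images" });               // assumed parameter name
```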
Developers are encouraged to explore the content safety evaluators and share feedback to guide further enhancements.
The overarching aim is to help teams raise both the quality and the safety of their AI applications, and the post closes by emphasizing continuous improvement and inviting continued engagement as the libraries evolve.