Federated learning (FL) enables training a shared model without transmitting private data to a central server. However, FL is vulnerable to Byzantine attacks from compromised edge devices, which degrade model performance. The proposed plugin integrates into existing FL techniques to achieve Byzantine resilience. By generating virtual data samples and evaluating model consistency scores on them, the server can filter out compromised devices while preserving the benefits of FL.
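The filtering idea can be illustrated with a minimal sketch. Everything here is an assumption for illustration, not the paper's actual method: clients are modeled as linear predictors, the virtual samples are random server-generated inputs, and the consistency score is each client's mean deviation from the median prediction, with a hypothetical cutoff `threshold`.

```python
import numpy as np

rng = np.random.default_rng(0)

def consistency_scores(client_weights, virtual_x):
    # Predictions of every client model on the server's virtual samples.
    preds = np.stack([virtual_x @ w for w in client_weights])  # (clients, samples)
    median_pred = np.median(preds, axis=0)
    # Higher score = predictions deviate more from the client consensus.
    return np.mean(np.abs(preds - median_pred), axis=1)

def filter_and_aggregate(client_weights, virtual_x, threshold=1.0):
    # Drop clients whose consistency score exceeds the (illustrative) threshold,
    # then aggregate the remaining updates by simple averaging.
    scores = consistency_scores(client_weights, virtual_x)
    kept = [w for w, s in zip(client_weights, scores) if s < threshold]
    return np.mean(kept, axis=0), scores

# Toy setup: 8 honest clients near the true weights, 2 Byzantine clients
# submitting arbitrary (large, random) weight vectors.
true_w = np.ones(4)
honest = [true_w + 0.05 * rng.standard_normal(4) for _ in range(8)]
byzantine = [10.0 * rng.standard_normal(4) for _ in range(2)]
virtual_x = rng.standard_normal((16, 4))  # server-generated virtual samples

aggregated, scores = filter_and_aggregate(honest + byzantine, virtual_x)
```

In this toy run the two Byzantine clients receive much larger scores than the honest ones, so the averaged model stays close to the true weights; a real plugin would score full model updates rather than linear weight vectors.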