<ul data-eligibleForWebStory="true">
  <li>Federated learning (FL) enables multiple data holders to train an ML model without sharing their data externally.</li>
  <li>In FL, workers update a model locally and share their gradients with a central server.</li>
  <li>Byzantine-resilient FL prevents malicious participants from derailing model convergence.</li>
  <li>Common defenses ignore outlier gradients to thwart attacks.</li>
  <li>With heterogeneous data, distinguishing outliers from legitimate gradients is challenging.</li>
  <li>A new approach, the Worker Label Alignment Loss (WoLA), aligns honest workers' gradients despite data heterogeneity.</li>
  <li>This alignment makes malicious gradients easier to identify, and WoLA outperforms existing methods in such settings.</li>
  <li>The paper provides theoretical insights and empirical evidence supporting WoLA's effectiveness.</li>
</ul>
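To illustrate the "ignore outlier gradients" defense mentioned above, here is a minimal sketch of one common robust aggregation rule, the coordinate-wise median. This is a generic example, not the WoLA method itself; the function name and the toy gradient values are invented for illustration.

```python
import numpy as np

def median_aggregate(gradients):
    """Coordinate-wise median of worker gradients.

    In each coordinate, extreme (possibly Byzantine) values are
    ignored as long as honest workers form a majority.
    """
    stacked = np.stack(gradients)  # shape: (n_workers, dim)
    return np.median(stacked, axis=0)

# Toy example: two honest workers near [1, 1], one malicious outlier.
grads = [np.array([1.0, 1.0]),
         np.array([1.1, 0.9]),
         np.array([100.0, -100.0])]  # Byzantine update
robust = median_aggregate(grads)     # → array([1.1, 0.9])
```

Note the limitation the paper targets: under heterogeneous data, honest gradients themselves spread apart, so median-style rules can mistake legitimate updates for outliers.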