This paper presents an empirical analysis of federated learning models subjected to label-flipping adversarial attacks. A range of models is considered: MLR, SVC, MLP, CNN, RNN, Random Forest, XGBoost, and LSTM. Experiments are conducted across varying percentages of adversarial clients and varying percentages of flipped labels. The study reveals substantial variation in the robustness of these models to this attack vector.
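As a concrete illustration of the threat model studied, the sketch below simulates a label-flipping attack: a chosen fraction of clients is made adversarial, and each adversarial client flips a chosen fraction of its training labels to a different class. This is a minimal sketch assuming per-client NumPy label arrays; the function names and parameters (e.g., flip_labels, adv_client_frac, flip_frac) are illustrative and not taken from the paper.

```python
import numpy as np

def flip_labels(labels, flip_frac, num_classes, rng):
    """Flip a fraction of one client's labels to a different random class."""
    labels = labels.copy()
    n_flip = int(flip_frac * len(labels))
    idx = rng.choice(len(labels), size=n_flip, replace=False)
    # Shift each selected label by a random nonzero offset so the new
    # class is guaranteed to differ from the original one.
    offsets = rng.integers(1, num_classes, size=n_flip)
    labels[idx] = (labels[idx] + offsets) % num_classes
    return labels

def poison_clients(client_labels, adv_client_frac, flip_frac, num_classes, seed=0):
    """Apply label flipping to a randomly chosen subset of clients."""
    rng = np.random.default_rng(seed)
    n_adv = int(adv_client_frac * len(client_labels))
    adv_ids = set(rng.choice(len(client_labels), size=n_adv, replace=False).tolist())
    return [
        flip_labels(y, flip_frac, num_classes, rng) if i in adv_ids else y
        for i, y in enumerate(client_labels)
    ]

# Example: 10 clients with 100 labels each over 10 classes; 30% of clients
# are adversarial, and each flips 50% of its labels.
rng = np.random.default_rng(0)
clients = [rng.integers(0, 10, size=100) for _ in range(10)]
poisoned = poison_clients(clients, adv_client_frac=0.3, flip_frac=0.5, num_classes=10)
```

Sweeping adv_client_frac and flip_frac over a grid, as the experiments described here do, then produces one poisoned federation per setting against which each model family can be trained and evaluated.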