A new language-model fine-tuning paradigm called Mask Fine-Tuning (MFT) has been introduced. MFT deliberately breaks the integrity of the model in order to improve its performance. Extensive experiments show consistent performance gains across a range of domains and backbones. MFT extends mask learning, previously used for model compression, to a more general scope.
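The core idea of mask learning referenced above can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: it assumes a binary mask over a layer's weights, where masked entries are zeroed during the forward pass. The function name `apply_weight_mask` and the toy weight matrix are hypothetical.

```python
import numpy as np

def apply_weight_mask(weights: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Zero out masked weights; mask is a binary array of the same shape."""
    return weights * mask

# Toy example: a 4x4 weight matrix with ~80% of entries kept.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
mask = (rng.random((4, 4)) > 0.2).astype(W.dtype)  # 1 = keep, 0 = drop
W_masked = apply_weight_mask(W, mask)
```

In compression-oriented mask learning, such a mask is trained to remove redundant weights; the claim here is that a learned mask can also serve as a fine-tuning mechanism in its own right.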