<ul data-eligibleForWebStory="true">
<li>Model merging aims to integrate task-specific expert models into a single unified model while preserving multi-task generalization.</li>
<li>Parameter interference between the merged models often degrades performance.</li>
<li>Resolving this interference without extra data or test-time computation remains a challenge.</li>
<li>The paper proposes minimizing interference by using only the task vectors of the linear layers (see the sketch after this list).</li>
<li>The resulting method, WUDI-Merging, eliminates interference without any additional data or rescaling coefficients.</li>
<li>Empirical evaluations across vision and language benchmarks demonstrate its effectiveness for data-free model merging.</li>
<li>WUDI-Merging surpasses baseline methods by an average of 10.9% and outperforms mainstream test-time adaptation approaches by 3.3%.</li>
<li>It delivers this performance while requiring only minimal computing resources.</li>
<li>The code for WUDI-Merging will be made publicly available soon.</li>
</ul>
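<p>To make the task-vector idea concrete, below is a minimal, hedged sketch of data-free merging for a single linear layer. It is not the paper's exact WUDI-Merging objective: the function name <code>merge_linear_layer</code>, the hyperparameters, and the specific interference loss are illustrative assumptions. The sketch only shows the general mechanism the bullets describe: optimize a merged weight update so that it interferes as little as possible with each expert's task vector, using the task vectors themselves, rather than any data, as a proxy for task-relevant input directions.</p>
<pre><code class="language-python"># Hypothetical sketch of data-free merging for one linear layer.
# Names and hyperparameters are illustrative, not taken from the paper.
import torch

def merge_linear_layer(task_vectors, n_steps=300, lr=1e-4):
    """Compute a merged weight update for a single linear layer.

    task_vectors: list of tensors tau_i = W_i - W_base, each of shape (out, in).
    The merged update tau is optimized so that replacing tau_i with tau changes
    the layer's output as little as possible on inputs lying in the row space of
    tau_i (a data-free proxy for task i's input distribution).
    """
    taus = torch.stack(task_vectors)                          # (T, out, in)
    # Initialize at simple task-vector averaging, then refine.
    tau = taus.mean(dim=0).clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([tau], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        # Interference of the merged update with task i, measured along the
        # directions spanned by tau_i's rows: (tau - tau_i) @ tau_i^T.
        diff = tau.unsqueeze(0) - taus                        # (T, out, in)
        interference = torch.einsum('toi,tpi->top', diff, taus)
        loss = (interference ** 2).mean()
        loss.backward()
        opt.step()
    return tau.detach()

# Usage sketch: W_merged = W_base + merge_linear_layer([W_1 - W_base, W_2 - W_base])
</code></pre>
<p>The key property illustrated here is that no task data and no per-task rescaling coefficients are needed: everything is derived from the task vectors of the linear layers themselves.</p>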