Model merging based on task vectors, i.e., the parameter differences between fine-tuned models and a shared base model, is an efficient way to integrate multiple task-specific models into a single multitask model without retraining.
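As context, the sketch below illustrates plain task-vector merging as described above: each task vector is the fine-tuned weights minus the base weights, and the merged model adds a scaled sum of task vectors back onto the base. The scaling coefficient `lam` and the function names are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of task-vector merging, assuming PyTorch state_dicts.
# "lam" (the scaling coefficient) is a hypothetical hyperparameter.
import torch

def task_vector(finetuned: dict, base: dict) -> dict:
    """Task vector = fine-tuned parameters minus the shared base parameters."""
    return {k: finetuned[k] - base[k] for k in base}

def merge(base: dict, task_vectors: list, lam: float = 0.3) -> dict:
    """Add the scaled sum of task vectors back onto the base model."""
    merged = {k: v.clone() for k, v in base.items()}
    for tv in task_vectors:
        for k in merged:
            merged[k] += lam * tv[k]
    return merged
```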
Recent works have tried to address conflicts between task vectors through sparsification, but their effectiveness is limited by high parameter overlap among the sparsified task vectors and by unbalanced weight distributions.
To overcome these limitations, the authors propose CABS (Conflict-Aware and Balanced Sparsification), a framework consisting of two components: Conflict-Aware Sparsification (CA) and Balanced Sparsification (BS).
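The summary names the two components but does not spell out their mechanics. The sketch below is one plausible, purely illustrative reading: conflict-aware sparsification processes task vectors sequentially and avoids positions already retained by earlier ones, while balanced sparsification keeps the top-n entries per block of m weights so the retained weights are spread evenly. The block size, n, and all function names here are assumptions, not the authors' specification.

```python
# Illustrative sketch only; the actual CA/BS procedures may differ.
import torch

def balanced_mask(tv: torch.Tensor, n: int = 2, m: int = 4) -> torch.Tensor:
    """Keep the n largest-magnitude entries in every block of m weights."""
    flat = tv.flatten()
    pad = (-flat.numel()) % m
    flat = torch.cat([flat, flat.new_zeros(pad)])        # pad to a multiple of m
    blocks = flat.abs().view(-1, m)
    idx = blocks.topk(n, dim=1).indices                  # top-n per block
    mask = torch.zeros_like(blocks).scatter_(1, idx, 1.0)
    return mask.view(-1)[: tv.numel()].view_as(tv)

def conflict_aware_sparsify(task_vectors: list) -> list:
    """Sparsify task vectors one by one, avoiding already-occupied positions."""
    occupied = torch.zeros_like(task_vectors[0])
    out = []
    for tv in task_vectors:
        free = tv * (1 - occupied)                       # drop conflicting entries
        mask = balanced_mask(free)
        out.append(tv * mask)
        occupied = torch.clamp(occupied + mask, max=1.0)
    return out
```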
Experiments demonstrate that CABS outperforms state-of-the-art merging methods across various tasks and model sizes.