- Multi-Task Learning (MTL) in shared networks can lead to negative transfer when task objectives differ.
- Pre-trained transformers adapt poorly across diverging tasks, motivating Dynamic Token Modulation and Expansion (DTME-MTL).
- DTME-MTL resolves gradient conflicts in token space, improving adaptability and reducing overfitting without duplicating network parameters (see the sketch after this list).
- Experiments show that DTME-MTL offers a scalable and efficient way to improve transformer-based MTL models.
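
To make the token-space idea concrete, here is a minimal PyTorch sketch, not the authors' implementation: the names (`TokenModulation`, `gradient_conflict`), the cosine-similarity conflict test, and all shapes and thresholds are illustrative assumptions about how conflict detection, per-task token modulation, and token expansion could fit together.

```python
# Hypothetical sketch of token-space gradient-conflict handling.
# Everything here is an assumption for illustration, not DTME-MTL's exact rule.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TokenModulation(nn.Module):
    """Per-task affine modulation of shared tokens (no backbone copies)."""

    def __init__(self, num_tasks: int, embed_dim: int):
        super().__init__()
        # One learnable scale/shift pair per task; the backbone stays shared.
        self.scale = nn.Parameter(torch.ones(num_tasks, 1, embed_dim))
        self.shift = nn.Parameter(torch.zeros(num_tasks, 1, embed_dim))

    def forward(self, tokens: torch.Tensor, task_id: int) -> torch.Tensor:
        # tokens: (batch, seq_len, embed_dim); scale/shift broadcast over
        # batch and sequence, so each task gets its own token transformation.
        return tokens * self.scale[task_id] + self.shift[task_id]


def gradient_conflict(grad_a: torch.Tensor, grad_b: torch.Tensor) -> bool:
    """Call two task gradients 'conflicting' when their cosine similarity
    in token space is negative (an assumed, common criterion)."""
    cos = F.cosine_similarity(grad_a.flatten(), grad_b.flatten(), dim=0)
    return cos.item() < 0.0


if __name__ == "__main__":
    num_tasks, embed_dim = 2, 64
    tokens = torch.randn(8, 16, embed_dim, requires_grad=True)  # shared tokens
    mod = TokenModulation(num_tasks, embed_dim)

    # Stand-in per-task losses; a real model would compute these via task heads.
    losses = [mod(tokens, t).pow(2).mean() for t in range(num_tasks)]
    grads = [torch.autograd.grad(l, tokens, retain_graph=True)[0] for l in losses]

    if gradient_conflict(grads[0], grads[1]):
        # Token expansion: give each task a few private tokens instead of
        # forcing both through the same (modulated) shared ones.
        extra_tokens = nn.Parameter(torch.zeros(num_tasks, 4, embed_dim))
        print(f"conflict -> expand with {extra_tokens.shape[1]} tokens per task")
    else:
        print("no conflict -> shared tokens with per-task modulation suffice")
```

The design intuition this sketch tries to capture: modulation reuses the shared tokens at negligible cost, while expansion adds capacity only where task gradients actually disagree, which is presumably how the method avoids duplicating network parameters; the paper's actual criterion and update rules may differ.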