The performance of large language models depends on the composition of their training data. Existing approaches for selecting data mixtures for LLMs can be expensive and suboptimal. AutoMixAlign (AMA) is a new algorithm that adaptively mixes datasets during training to balance performance across tasks. AMA outperforms standard alignment approaches and model merging methods in multitask alignment setups.
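The adaptive mixing idea can be illustrated with a minimal sketch: upweight the sampling probability of tasks whose current loss lags a per-task reference, so training attention shifts toward underperforming tasks. The update rule, the `baselines` reference losses, and the softmax temperature below are illustrative assumptions, not the actual AMA algorithm.

```python
import math
import random

def update_mixture_weights(losses, baselines, temperature=1.0):
    """Return sampling weights from per-task excess loss (loss - baseline).

    Hypothetical reweighting sketch: tasks further above their baseline
    get exponentially larger weight via a softmax over excess losses.
    """
    excess = [l - b for l, b in zip(losses, baselines)]
    exps = [math.exp(e / temperature) for e in excess]
    total = sum(exps)
    return [e / total for e in exps]

def sample_task(weights, rng=random):
    # Draw a task index proportional to its mixture weight.
    r = rng.random()
    cum = 0.0
    for i, w in enumerate(weights):
        cum += w
        if r < cum:
            return i
    return len(weights) - 1

# Example: task 1 lags its baseline the most, so it receives the
# largest share of subsequent training batches.
losses = [0.9, 1.6, 1.1]
baselines = [1.0, 1.0, 1.0]
weights = update_mixture_weights(losses, baselines)
print([round(w, 3) for w in weights])
```

In practice such weights would be recomputed periodically from held-out task losses, so the mixture tracks whichever tasks are currently hardest.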