A new method called Prompt Gradient Alignment (PGA) is proposed for improving Unsupervised Domain Adaptation (UDA) in vision-language models. PGA leverages large-scale pre-trained vision-language models to learn both domain-invariant and domain-specific features. The method aligns per-objective gradients to foster consensus between the objectives, and prevents overfitting by penalizing the norms of those gradients. Experimental results show that PGA outperforms other vision-language model adaptation methods for UDA.
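The core mechanics described above can be sketched in a few lines of PyTorch: compute per-objective gradients on shared prompt parameters, reward their agreement, and penalize their norms. This is a minimal illustration of the general idea, not the paper's actual algorithm; the placeholder losses, coefficient values, and variable names are assumptions.

```python
import torch
import torch.nn.functional as F

# Stand-in for learnable prompt tokens (the paper's actual prompts are
# token embeddings fed to a vision-language model; this is a toy tensor).
prompt = torch.randn(8, requires_grad=True)

def loss_invariant(p):
    # Placeholder for a domain-invariant objective.
    return (p ** 2).sum()

def loss_specific(p):
    # Placeholder for a domain-specific objective.
    return ((p - 1.0) ** 2).sum()

l_inv = loss_invariant(prompt)
l_spec = loss_specific(prompt)

# Per-objective gradients; create_graph=True keeps the alignment and
# norm terms differentiable so they can be optimized themselves.
g_inv, = torch.autograd.grad(l_inv, prompt, create_graph=True)
g_spec, = torch.autograd.grad(l_spec, prompt, create_graph=True)

# Alignment term: encourage the two gradients to point the same way.
align = F.cosine_similarity(g_inv, g_spec, dim=0)

# Gradient-norm penalty to discourage overfitting.
norm_penalty = g_inv.norm() + g_spec.norm()

# Illustrative trade-off coefficients (not from the paper).
lambda_align, lambda_norm = 0.1, 0.01
total = l_inv + l_spec - lambda_align * align + lambda_norm * norm_penalty
total.backward()  # gradients flow back through both extra terms
```

In a real setup the two losses would come from source-domain supervision and target-domain adaptation signals, and the alignment/penalty terms would be folded into the prompt-tuning update at each step.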