The study explores the role of embedding collapse in Prompt-Tuning for language models. Embedding priors are designed and compared with Soft and Deep Prompt-Tuning methods. The findings suggest that the priors strongly influence the positions of the tuned embeddings. The research raises questions about the significance of a single activation cluster for the generalization abilities of large language models.
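To make the setting concrete, the sketch below shows one common formulation of Soft Prompt-Tuning: trainable prompt vectors prepended to the frozen model's input embeddings, optionally initialized from an embedding prior (here, assumed to be vectors drawn from the model's token-embedding table). The class name `SoftPrompt` and the initialization scheme are illustrative assumptions, not the paper's exact implementation.

```python
from typing import Optional

import torch
import torch.nn as nn


class SoftPrompt(nn.Module):
    """Trainable prompt embeddings prepended to the input embeddings.

    prior_embeddings: optional tensor of shape (>= prompt_len, embed_dim),
    e.g. rows sampled from the frozen model's token-embedding table
    (one possible "embedding prior"); random init is used when omitted.
    """

    def __init__(self, prompt_len: int, embed_dim: int,
                 prior_embeddings: Optional[torch.Tensor] = None):
        super().__init__()
        if prior_embeddings is not None:
            # Initialize the soft prompt from the prior (assumed setup).
            init = prior_embeddings[:prompt_len].clone().float()
        else:
            init = torch.randn(prompt_len, embed_dim) * 0.02
        self.prompt = nn.Parameter(init)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, embed_dim) from the frozen model.
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        # Only self.prompt receives gradients; the base model stays frozen.
        return torch.cat([prompt, input_embeds], dim=1)
```

In this sketch, comparing priors amounts to swapping the `prior_embeddings` argument (random, vocabulary-sampled, etc.) and observing where the tuned prompt vectors end up relative to the model's embedding clusters.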