Large Language Models (LLMs), built on Transformer architectures, exhibit remarkable generalization across a wide range of tasks. However, fine-tuning LLMs for specific tasks remains resource-intensive due to their extensive parameterization. We investigate two notable phenomena related to the attention mechanism that arise during fine-tuning of LLMs. Insights from this investigation lead to a new strategy that improves fine-tuning efficiency in terms of both storage and time.