Driven by privacy protection laws and regulations, unlearning in Large Language Models (LLMs) is gaining attention. The researchers propose a metric called Memory Removal Difficulty (MRD) to quantify sample-level unlearning difficulty, and analyze the characteristics of hard-to-unlearn versus easy-to-unlearn samples in LLM unlearning. Building on this, an MRD-based weighted sampling method is proposed to optimize existing unlearning algorithms, improving both their efficiency and effectiveness.
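To make the idea of MRD-based weighted sampling concrete, the following is a minimal sketch, assuming per-sample MRD scores are already available. The helper name `mrd_weighted_loader`, the use of PyTorch's `WeightedRandomSampler`, and the weighting direction (here, lower MRD, i.e., easier-to-unlearn samples, are drawn more often) are illustrative assumptions, not the paper's specified procedure.

```python
# Hypothetical sketch: sample a forget set with probabilities derived from
# per-sample MRD scores, then feed the loader to an existing unlearning loop.
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

def mrd_weighted_loader(forget_inputs, forget_labels, mrd_scores, batch_size=8):
    """Build a DataLoader whose sampling frequency reflects MRD scores.

    mrd_scores: 1-D tensor of (assumed precomputed) MRD values, one per sample.
    """
    dataset = TensorDataset(forget_inputs, forget_labels)
    # Inverting MRD favors easy-to-unlearn samples; whether the original method
    # favors easy or hard samples is an assumption made for illustration.
    weights = 1.0 / (mrd_scores + 1e-8)
    sampler = WeightedRandomSampler(weights, num_samples=len(dataset), replacement=True)
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)

# Usage: replace the uniform forget-set loader in an existing unlearning
# algorithm (e.g., gradient ascent on the forget set) with this weighted loader.
```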