Large models have made significant progress in natural language generation tasks, but their parameter scale poses challenges for fine-tuning. Parameter-Efficient Fine-Tuning (PEFT) offers a solution: it adapts large pre-trained models to specific tasks while updating only a small fraction of their parameters. By minimizing the number of additional parameters introduced, PEFT reduces the computational resources required. This review provides an overview of PEFT, including its core principles, applications, and future research directions.
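To make the parameter savings concrete, the sketch below compares the trainable-parameter count of full fine-tuning against a LoRA-style low-rank update (one common PEFT method), where a weight matrix W is adapted as W + BA with low rank r. The matrix shapes and rank are hypothetical values chosen for illustration, not taken from the source.

```python
def full_finetune_params(d: int, k: int) -> int:
    """Parameters updated when fine-tuning a full d x k weight matrix."""
    return d * k


def lora_params(d: int, k: int, r: int) -> int:
    """Trainable parameters in a rank-r update W + B @ A,
    with B of shape (d, r) and A of shape (r, k)."""
    return d * r + r * k


# Hypothetical layer size and rank for illustration.
d, k, r = 4096, 4096, 8
full = full_finetune_params(d, k)   # 16,777,216 trainable weights
lora = lora_params(d, k, r)         # 65,536 trainable weights
print(f"full: {full}, lora: {lora}, ratio: {lora / full:.2%}")
```

For these (assumed) shapes, the low-rank update trains well under 1% of the weights the full fine-tune would, which is the core resource saving PEFT methods exploit.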