R1-style Reinforcement Learning (RL) has enhanced the reasoning capabilities of Large Language Models. Small-scale supervised fine-tuning (SFT) has a significant influence on subsequent RL but shows poor sample efficiency. An analytical framework was proposed to compare the efficiency of SFT and RL by measuring sample effect. Guided by this analysis, the Re-distillation technique was introduced, which fine-tunes pretrained models with far fewer samples and shows surprising efficiency.
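
For concreteness, here is a minimal sketch of what such a re-distillation pipeline could look like, assuming it means sampling reasoning traces from the RL-trained policy, keeping only verified ones, and using them as a small SFT set for the pretrained model. The helpers `rl_policy.generate`, `reward_fn`, and `sft_finetune` are hypothetical placeholders, not the paper's actual implementation.

```python
def redistill(rl_policy, base_model, prompts, reward_fn, k=4, budget=1000):
    """Sketch: build a small SFT set distilled from an RL-trained policy,
    then fine-tune the pretrained (base) model on it."""
    distilled = []
    for prompt in prompts:
        # Sample k candidate reasoning traces from the RL-trained policy
        # (hypothetical generate() API).
        for completion in rl_policy.generate(prompt, num_samples=k):
            # Keep only traces that the rule-based reward accepts,
            # e.g. ones reaching the correct final answer.
            if reward_fn(prompt, completion) > 0:
                distilled.append({"prompt": prompt, "completion": completion})
                break  # one verified trace per prompt keeps the set small
        if len(distilled) >= budget:
            break
    # Small-scale SFT: train the pretrained model on the distilled traces
    # (hypothetical sft_finetune() standing in for a standard SFT loop).
    return sft_finetune(base_model, distilled)
```

The design point this sketch illustrates is that the expensive RL run happens once; afterwards, a small set of verified traces from that policy may be enough to fine-tune a fresh pretrained model directly.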