LoFT is a new low-rank adaptation method that aligns optimizer dynamics with full fine-tuning. It behaves like full fine-tuning by learning weight updates in a low-rank subspace and projecting the optimizer's moment estimates into that subspace. LoFT eliminates the need to tune extra hyperparameters and narrows the performance gap between adapter-based tuning and full fine-tuning. Empirically, LoFT outperforms standard LoRA-style methods without increasing inference cost.
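To make the core idea concrete, here is a minimal, illustrative sketch of what "tracking optimizer moments in the low-rank subspace" could look like for an Adam-style update. All names (`loft_style_step`, the projection via a pseudo-inverse of `B`, the choice to update only `A`) are simplifying assumptions for exposition, not the authors' reference implementation:

```python
import torch

def loft_style_step(G, A, B, m, v, step, lr=1e-4, betas=(0.9, 0.999), eps=1e-8):
    """Hedged sketch of one low-rank step with subspace-projected Adam moments.

    G:    (d_out, d_in) gradient of the loss w.r.t. the effective weight W0 + B @ A
    A:    (r, d_in) low-rank factor being updated
    B:    (d_out, r) fixed low-rank factor spanning the update subspace
    m, v: (r, d_in) first/second Adam moments, kept in the projected subspace
    step: 1-indexed step count for bias correction
    """
    # Project the full-weight gradient into the rank-r subspace spanned by B.
    # Keeping the moments for this projected gradient (rather than for A and B
    # separately) is one way to align optimizer dynamics with full fine-tuning.
    P = torch.linalg.pinv(B)   # (r, d_out), left pseudo-inverse of B
    G_low = P @ G              # (r, d_in) gradient expressed in the subspace

    # Standard Adam moment updates, applied to the projected gradient.
    m = betas[0] * m + (1 - betas[0]) * G_low
    v = betas[1] * v + (1 - betas[1]) * G_low**2
    m_hat = m / (1 - betas[0] ** step)
    v_hat = v / (1 - betas[1] ** step)

    # Move A so the effective weight W0 + B @ A takes an Adam-like step
    # restricted to the low-rank subspace.
    A = A - lr * m_hat / (v_hat.sqrt() + eps)
    return A, m, v
```

Because the update is confined to the same low-rank factors used at inference, the adapted weight can still be merged into the base weight, which is consistent with the claim that inference cost does not increase.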