Uni-LoRA is introduced as a unified framework for parameter-efficient fine-tuning of large language models. It reconstructs the LoRA parameters of every adapted layer through a fixed projection from a single low-dimensional subspace, enabling global parameter sharing across the entire model. By this design, Uni-LoRA requires only a single trainable vector, making it a 'one-vector-only' solution. Experiments demonstrate that Uni-LoRA achieves state-of-the-art parameter efficiency and strong performance on various benchmarks.
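To make the 'one-vector-only' idea concrete, the sketch below shows how a layer's LoRA factors can be reconstructed on the fly from one shared trainable vector through a fixed projection (theta_D = P theta_d). This is a minimal illustration under stated assumptions, not the paper's exact construction: the class name `UniLoRALinear`, the random Gaussian projection, the rank, and the subspace dimension `d` are all hypothetical choices for demonstration.

```python
import torch
import torch.nn as nn

class UniLoRALinear(nn.Module):
    """Illustrative Uni-LoRA-style layer: the LoRA update is not trained
    directly; it is reconstructed from a shared low-dimensional trainable
    vector theta_d via a fixed projection P (a random Gaussian projection
    here, assumed for illustration; the paper's projection may differ)."""

    def __init__(self, base: nn.Linear, theta_d: nn.Parameter, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)             # freeze pretrained weights
        self.theta_d = theta_d                  # the single shared trainable vector
        out_f, in_f = base.weight.shape
        # D = number of LoRA parameters (A and B factors) for this layer
        D = rank * (in_f + out_f)
        d = theta_d.numel()
        # Fixed, non-trainable projection P in R^{D x d}: theta_D = P @ theta_d
        self.register_buffer("P", torch.randn(D, d) / d**0.5)
        self.rank, self.in_f, self.out_f = rank, in_f, out_f

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        theta_D = self.P @ self.theta_d         # reconstruct this layer's LoRA params
        A = theta_D[: self.rank * self.in_f].view(self.rank, self.in_f)
        B = theta_D[self.rank * self.in_f :].view(self.out_f, self.rank)
        return self.base(x) + x @ A.T @ B.T     # W x + B A x

# Usage: one trainable vector shared globally across all adapted layers.
theta_d = nn.Parameter(torch.zeros(1024))       # d = 1024 trainable parameters in total
layer = UniLoRALinear(nn.Linear(768, 768), theta_d)
y = layer(torch.randn(2, 768))
print(y.shape)  # torch.Size([2, 768])
```

Because `P` is fixed and only `theta_d` receives gradients, the trainable footprint stays at `d` parameters no matter how many layers share the vector, which is what the framework's parameter efficiency rests on.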