Adapting pre-trained foundation models to diverse downstream tasks is a core practice in artificial intelligence. Parameter-efficient fine-tuning (PEFT) methods such as LoRA have emerged as a growing research focus. This work proposes a generalization of matrix-based PEFT methods to higher-dimensional parameter spaces that preserves their structural properties. Extensive experiments on computer vision and natural language processing tasks validate the effectiveness and versatility of the approach.
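To make the matrix-based baseline concrete, below is a minimal sketch of the standard LoRA update (not the paper's higher-dimensional generalization; all names and sizes are illustrative). A frozen weight `W` is adapted as `W + (alpha/r) * B @ A`, so only the two low-rank factors `A` and `B` are trainable.

```python
import numpy as np

# Illustrative sketch of standard (matrix) LoRA; sizes are arbitrary.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 64, 4, 8

W = rng.standard_normal((d_out, d_in))    # frozen pre-trained weight
A = rng.standard_normal((r, d_in)) * 0.01 # trainable low-rank factor
B = np.zeros((d_out, r))                  # zero-init so the update starts at 0

def lora_forward(x):
    # Effective weight = frozen W plus scaled low-rank correction B @ A.
    return x @ (W + (alpha / r) * B @ A).T

x = rng.standard_normal((2, d_in))
y = lora_forward(x)

trainable = A.size + B.size   # 512 parameters
frozen = W.size               # 4096 parameters
```

With `B` initialized to zero, the adapted model exactly reproduces the pre-trained forward pass at the start of fine-tuning, while the trainable parameter count (512 here) stays far below that of full fine-tuning (4096).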