Parameter generation has emerged as a novel paradigm for neural network development, offering an alternative to conventional training by directly synthesizing high-quality model weights.
In this paper, we introduce ORAL, a novel conditional recurrent diffusion framework that addresses the scalability and controllability limitations of existing parameter-generation methods.
ORAL incorporates a conditioning mechanism that generates task-specific Low-Rank Adaptation (LoRA) parameters capable of transferring seamlessly across evolving base language models.
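For background, LoRA represents a weight update as a product of two low-rank factors; the standard formulation (the notation below is illustrative and not ORAL's own) is

$$W = W_0 + \Delta W = W_0 + BA, \qquad B \in \mathbb{R}^{d \times r},\; A \in \mathbb{R}^{r \times k},\; r \ll \min(d, k),$$

so generating LoRA parameters for a task amounts to synthesizing the factor pair $(A, B)$ rather than the full weight matrix $W$.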
Extensive experiments show that ORAL generates high-quality LoRA parameters that achieve performance comparable or superior to vanilla-trained counterparts across diverse language, vision, and multimodal tasks.