<ul data-eligibleForWebStory="true">
<li>TreeLoRA (K-D Tree of Low-Rank Adapters) is a novel approach for efficient continual learning (CL) with large pre-trained models (LPMs).</li>
<li>The approach constructs layer-wise adapters guided by hierarchical gradient similarity, updating the model online while preventing catastrophic forgetting.</li>
<li>Efficiency is crucial for CL because of the computational demands and growing parameter sizes of LPMs.</li>
<li>To reduce the computational burden of task-similarity estimation, TreeLoRA employs bandit techniques with lower confidence bounds.</li>
<li>Sparse gradient updates are used to optimize parameters, making the approach especially well suited to LPMs.</li>
<li>The approach is justified theoretically, and experiments on vision transformers (ViTs) and large language models (LLMs) across vision and natural language processing tasks demonstrate the effectiveness and efficiency of TreeLoRA.</li>
</ul>
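The bandit idea in the summary above can be illustrated with a minimal sketch. This is not TreeLoRA's implementation: the gradient sketches, subset size, and confidence-bound schedule below are illustrative assumptions. The sketch treats each previously learned task as a bandit arm, observes cheap sub-sampled similarity estimates per pull, and uses a confidence bound (an upper bound on similarity, equivalently a lower confidence bound on dissimilarity) to find the past task most similar to the new one without computing full similarities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: per-task gradient sketches for 4 previously learned
# tasks and one new task (dimensions are illustrative assumptions).
past_grads = rng.normal(size=(4, 64))
new_grad = rng.normal(size=64)

def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def pull(k):
    # One cheap "pull": estimate similarity to past task k from a
    # random 16-coordinate subset instead of the full gradient.
    idx = rng.choice(64, size=16, replace=False)
    return cosine_sim(past_grads[k, idx], new_grad[idx])

counts = np.zeros(4)  # pulls per arm
means = np.zeros(4)   # running mean similarity estimate per arm

for t in range(1, 101):
    if counts.min() == 0:
        # Initialize: pull each arm at least once.
        k = int(np.argmin(counts))
    else:
        # Confidence bound on similarity; pulling the argmax focuses
        # computation on the plausibly most similar past tasks.
        bound = means + np.sqrt(2 * np.log(t) / counts)
        k = int(np.argmax(bound))
    r = pull(k)
    counts[k] += 1
    means[k] += (r - means[k]) / counts[k]

best = int(np.argmax(means))  # index of the most similar past task
```

In a TreeLoRA-style system, the selected task would then guide where the new task's adapter is placed in the K-D tree hierarchy; here the sketch only shows the similarity-search step.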