<ul data-eligibleForWebStory="true">
<li>Researchers propose a novel approach that uses large language models (LLMs) for multi-task Bayesian optimization, leveraging experience from previously optimized tasks to make the optimization of new tasks more efficient.</li>
<li>Existing transfer approaches, such as multi-task Gaussian processes and deep kernel transfer, show only limited performance improvement at scale.</li>
<li>The new approach scales to about 1,500 tasks: an LLM is fine-tuned on high-quality solutions produced by Bayesian optimization, and the fine-tuned LLM then generates initialization points for searches on new tasks.</li>
<li>This creates a positive feedback loop: the LLM's initializations gradually improve, which improves optimization performance, which in turn yields better solutions for further fine-tuning.</li>
<li>The method was evaluated in two domains: database query optimization and antimicrobial peptide design.</li>
<li>Over time, the LLM generates solutions for new tasks with fewer oracle calls, eventually surpassing Bayesian optimization started from scratch.</li>
<li>The results underscore the value of reusing existing optimization trajectories and demonstrate the promise of LLMs for accelerating multi-task Bayesian optimization and achieving better results with fewer resources.</li>
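The feedback loop described above can be sketched in miniature. This is a hedged illustration, not the authors' implementation: the fine-tuned LLM is approximated by sampling near a memory of high-quality solutions from earlier tasks, Bayesian optimization is replaced by a simple accept-if-better local search, and appending each task's best solution to the memory stands in for the fine-tuning step. All function names and the toy 1-D objective are assumptions for illustration only.

```python
import random

def make_task(shift):
    """Toy 1-D objective; related tasks differ only in where the optimum sits."""
    return lambda x: -(x - shift) ** 2

def local_search(objective, inits, oracle_budget=30, step=0.5):
    """Stand-in for Bayesian optimization: accept-if-better local search
    that refines the best of the provided initialization points."""
    best_x = max(inits, key=objective)
    best_f = objective(best_x)
    for _ in range(oracle_budget - len(inits)):
        cand = best_x + random.uniform(-step, step)
        cand_f = objective(cand)
        if cand_f > best_f:
            best_x, best_f = cand, cand_f
    return best_x

def propose_inits(memory, n=5):
    """Stand-in for the fine-tuned LLM proposing initialization points:
    with no experience, sample at random; otherwise, sample near
    remembered solutions from earlier, related tasks."""
    if not memory:
        return [random.uniform(-10.0, 10.0) for _ in range(n)]
    return [random.choice(memory) + random.gauss(0.0, 1.0) for _ in range(n)]

random.seed(0)
memory = []                                            # "fine-tuning data"
shifts = [random.uniform(2.0, 4.0) for _ in range(8)]  # a family of related tasks
for shift in shifts:
    inits = propose_inits(memory)        # LLM-style initialization
    best = local_search(make_task(shift), inits)
    memory.append(best)                  # closes the feedback loop
```

Because all tasks share structure (their optima lie in the same region), initializations drawn from the memory start closer to each new optimum than random ones, so later tasks need fewer oracle calls to reach a good solution, mirroring the paper's reported behavior.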