Tensor networks (TNs) provide efficient representations of high-dimensional data, but identifying an optimal TN structure, a task known as tensor network structure search (TN-SS), remains challenging due to the computational expense and lack of transparency of current methods.
A novel TN-SS framework called tnLLM has been proposed; it incorporates domain information about the real-world tensor data and uses large language models (LLMs) to directly predict suitable TN structures.
The tnLLM framework includes a domain-aware prompting pipeline that guides the LLM to infer suitable TN structures from the real-world relationships among tensor modes, enabling both objective-function optimization and domain-aware explanations of the identified structures.
Experimental results show that tnLLM attains TN-SS objective-function values comparable to those of existing algorithms while requiring significantly fewer function evaluations, and that the LLM-derived domain information can accelerate the convergence of sampling-based methods while preserving their theoretical performance guarantees.