A new study proposes a flow field prediction framework based on knowledge transfer from a large language model (LLM), aiming to address the high computational cost of conventional computational fluid dynamics (CFD) solvers and the limited cross-condition transfer capability of existing deep learning models.
The framework integrates Proper Orthogonal Decomposition (POD) dimensionality reduction with fine-tuning strategies for a pretrained LLM, enabling a compressed representation of flow field features and encoding the system dynamics in state space.
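The POD compression step can be sketched with a standard SVD-based reduced-order decomposition. This is a minimal illustration of the general technique, not the paper's actual implementation; the snapshot dimensions, mode count, and function names are assumptions.

```python
import numpy as np

def pod_reduce(snapshots, n_modes):
    """Project flow-field snapshots onto the leading POD modes.

    snapshots: array of shape (n_points, n_snapshots), one column per time step.
    Returns the mean field, the spatial modes, and the reduced coefficients.
    """
    mean = snapshots.mean(axis=1, keepdims=True)
    fluctuations = snapshots - mean                   # remove the mean flow
    U, s, Vt = np.linalg.svd(fluctuations, full_matrices=False)
    modes = U[:, :n_modes]                            # dominant spatial structures
    coeffs = modes.T @ fluctuations                   # low-dimensional state trajectory
    return mean, modes, coeffs

def pod_reconstruct(mean, modes, coeffs):
    """Lift reduced coefficients back to the full flow field."""
    return mean + modes @ coeffs

# Illustrative data: 500 spatial points, 40 snapshots (values are synthetic).
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((500, 40))
mean, modes, coeffs = pod_reduce(snapshots, n_modes=10)
reconstruction = pod_reconstruct(mean, modes, coeffs)
print(coeffs.shape)   # each snapshot is now a 10-dimensional coefficient vector
```

The reduced coefficient trajectory is what a sequence model (here, the fine-tuned LLM) would be trained to predict, after which the full field is recovered through the modes.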
Fluid dynamics-oriented text templates are designed to improve predictive performance by supplying enriched contextual semantic information. As a result, the framework outperforms conventional Transformer models in few-shot learning scenarios and generalizes well across different inflow conditions and airfoil geometries.
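A fluid dynamics-oriented template of this kind might wrap the flow conditions and reduced-order state in natural language before passing them to the LLM. The field names, wording, and airfoil below are illustrative assumptions; the paper's actual prompt format is not given here.

```python
def build_prompt(mach, angle_of_attack, airfoil, pod_coeffs):
    """Hypothetical text template: embed flow conditions and POD state in prose."""
    coeff_str = ", ".join(f"{c:.4f}" for c in pod_coeffs)
    return (
        f"Flow over a {airfoil} airfoil at Mach {mach} and "
        f"{angle_of_attack} degrees angle of attack. "
        f"Current POD modal coefficients: [{coeff_str}]. "
        "Predict the modal coefficients at the next time step."
    )

# Illustrative values only.
prompt = build_prompt(0.3, 4.0, "NACA 0012", [0.8123, -0.2045, 0.0311])
print(prompt)
```

The idea is that stating the physical context in text lets the pretrained LLM exploit its semantic priors, rather than receiving an anonymous vector of numbers.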
The approach reduces prediction time to seconds while maintaining over 90% accuracy relative to traditional Navier-Stokes solvers, with potential impact on aerodynamic optimization, flow control, and other engineering applications.