LaMo is a framework that combines pre-trained language models with offline reinforcement learning, using them to improve motion control when data is limited. It has four key components: initializing the policy from a sequentially pre-trained language model, parameter-efficient LoRA fine-tuning, non-linear MLP transformations in place of linear embedding projections, and an auxiliary language prediction loss during fine-tuning. LaMo performs well on sparse-reward tasks and matches the performance of value-based methods, particularly when datasets are small.
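The following is a minimal, hypothetical PyTorch sketch of how three of these components might be wired up: a LoRA-adapted linear layer over frozen pre-trained weights, an MLP embedding that replaces the usual linear projection of states/actions into the language model's hidden space, and a combined objective with an auxiliary language prediction term. All class, function, and parameter names here (LoRALinear, MLPEmbedding, lamo_style_loss, aux_weight) are illustrative assumptions, not taken from the LaMo codebase.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank (LoRA) update."""

    def __init__(self, in_features, out_features, rank=8, alpha=16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)  # pre-trained weights stay frozen
        self.base.bias.requires_grad_(False)
        self.lora_a = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # Frozen projection plus scaled low-rank correction
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)


class MLPEmbedding(nn.Module):
    """Non-linear MLP mapping raw states/actions/returns into the model's
    hidden space, instead of a single linear projection."""

    def __init__(self, in_dim, hidden_dim, embed_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, embed_dim),
        )

    def forward(self, x):
        return self.net(x)


def lamo_style_loss(action_pred, action_target, lm_logits, lm_targets, aux_weight=0.1):
    """Action-prediction loss plus an auxiliary language prediction loss that
    keeps the backbone close to its pre-trained language behavior.
    aux_weight is an assumed hyperparameter, not a value from the paper."""
    rl_loss = nn.functional.mse_loss(action_pred, action_target)
    lang_loss = nn.functional.cross_entropy(
        lm_logits.reshape(-1, lm_logits.size(-1)), lm_targets.reshape(-1)
    )
    return rl_loss + aux_weight * lang_loss
```

In this sketch only the LoRA parameters and the MLP embeddings would be trained, which mirrors the idea of adapting a frozen pre-trained backbone with a small number of task-specific parameters.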