- Large language models (LLMs) often require guidance to perform well in complex, interactive environments.
- A new framework enhances LLM agent planning through in-context learning alone, with no weight updates.
- The framework combines atomic fact augmentation with lookahead search to improve planning.
- The agent extracts task-critical "atomic facts" from its interaction trajectories.
- These facts augment the prompts of the LLM-based components, leading to better decisions.
- Planning is performed as a depth-limited lookahead search guided by the accumulated facts and interaction history (see the sketch after this list).
- Because all learning happens in context, the agent improves its understanding and decision-making without any weight updates.
- A theoretical motivation links agent performance to the quality of the fact-based abstraction and the accuracy of the LLM-based simulation (a hedged formal sketch follows the code below).
- Empirically, the agent shows improved performance and adaptability on interactive tasks such as TextFrozenLake and ALFWorld.
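
The source describes the fact-extraction and lookahead steps only at a high level, so the following is a minimal sketch of how they might fit together. All names (`FactStore`, `lookahead_plan`), prompt formats, and the `llm` callable are illustrative assumptions, not the authors' interfaces:

```python
"""Minimal sketch of fact-augmented lookahead planning, not the authors' code.

The LLM plays three assumed roles here: fact extractor, one-step world
simulator, and leaf-state value estimator.
"""
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

LLM = Callable[[str], str]  # prompt -> completion, e.g. a wrapped API client


@dataclass
class FactStore:
    """Accumulates task-critical atomic facts distilled from trajectories."""
    facts: List[str] = field(default_factory=list)

    def update(self, llm: LLM, trajectory: List[Tuple[str, str, str]]) -> None:
        transcript = "\n".join(f"obs: {o} | act: {a} | result: {r}"
                               for o, a, r in trajectory)
        reply = llm("List short, verifiable atomic facts this trajectory "
                    f"reveals about the task, one per line:\n{transcript}")
        for line in reply.splitlines():
            fact = line.strip("- ").strip()
            if fact and fact not in self.facts:
                self.facts.append(fact)  # keep the store deduplicated

    def as_context(self) -> str:
        return "Known facts:\n" + "\n".join(f"- {f}" for f in self.facts)


def lookahead_plan(llm: LLM, state: str, actions: List[str],
                   store: FactStore, depth: int) -> str:
    """Depth-limited lookahead: simulate, score leaves, return the best action."""

    def simulate(s: str, a: str) -> str:
        # The LLM acts as a one-step world model, conditioned on the facts.
        return llm(f"{store.as_context()}\nState: {s}\nAction: {a}\n"
                   "Predict the next state:")

    def value(s: str, d: int) -> float:
        if d == 0:
            # Leaf node: ask the LLM for a scalar progress estimate.
            reply = llm(f"{store.as_context()}\nState: {s}\n"
                        "Rate progress toward the goal as a number in [0, 1]:")
            try:
                return float(reply.strip().split()[0])
            except (ValueError, IndexError):
                return 0.0
        # Internal node: expand every action and keep the best branch.
        return max(value(simulate(s, a), d - 1) for a in actions)

    return max(actions, key=lambda a: value(simulate(state, a), depth - 1))
```

One practical consequence of this structure: exhaustive expansion costs on the order of |actions|^depth LLM calls, which is presumably why the search is depth-limited and why a compact fact store, rather than the full trajectory, is used to condition each call.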
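On the theoretical side, the source says only that performance is tied to the quality of the fact-based abstraction and the accuracy of the LLM's simulation. One plausible shape such a guarantee could take, in notation that is entirely assumed here (the symbols $\varepsilon_{\mathrm{abs}}$, $\varepsilon_{\mathrm{sim}}$, $\gamma$, $R_{\max}$ are not the paper's), is a simulation-lemma-style value bound:

$$
V^{\pi^*}(s) - V^{\hat{\pi}}(s) \;\le\; \frac{2\,\varepsilon_{\mathrm{abs}}}{1-\gamma} \;+\; \frac{2\gamma R_{\max}\,\varepsilon_{\mathrm{sim}}}{(1-\gamma)^2},
$$

where $\varepsilon_{\mathrm{abs}}$ bounds the value information lost by abstracting states into atomic facts and $\varepsilon_{\mathrm{sim}}$ bounds the LLM simulator's one-step prediction error. Under this reading, better-chosen facts and a more faithful simulator each tighten the gap between the agent's policy $\hat{\pi}$ and the optimal policy $\pi^*$.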