<ul data-eligibleForWebStory="true">
<li>Autonomous agents are transforming GUI interaction by using natural language as an intermediary.</li>
<li>Supervised fine-tuning (SFT) methods for GUI agents struggle to perceive positional data accurately.</li>
<li>Reinforcement learning methods often fall short in assessing positional accuracy effectively.</li>
<li>Location Preference Optimization (LPO) is introduced to optimize interaction preferences using locational data.</li>
<li>LPO uses information entropy to predict interaction positions and introduces a dynamic location reward function based on physical distance.</li>
<li>Supported by Group Relative Preference Optimization (GRPO), LPO improves interaction precision in GUI environments.</li>
<li>Experiments show LPO achieves state-of-the-art results on both offline benchmarks and real-world online evaluations.</li>
<li>The code for LPO will be publicly available on GitHub at https://github.com/AIDC-AI/LPO.</li>
</ul>
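To illustrate the idea of a location reward that depends on physical distance, here is a minimal sketch. The exact formula is not given in the summary above; this example assumes an exponential decay over the Euclidean distance between the predicted and target click coordinates, with a hypothetical `scale` parameter controlling how quickly the reward falls off.

```python
import math

def location_reward(pred, target, scale=100.0):
    """Hypothetical distance-based reward: a perfect prediction scores 1.0,
    and the reward decays smoothly toward 0 as the predicted click point
    moves away from the target (distance measured in pixels)."""
    dx = pred[0] - target[0]
    dy = pred[1] - target[1]
    dist = math.hypot(dx, dy)  # Euclidean (physical) distance
    return math.exp(-dist / scale)
```

Under this sketch, a click exactly on the target yields reward 1.0, and predictions farther from the target are penalized continuously rather than with a hard hit-or-miss threshold, which is the intuition behind rewarding positional accuracy by distance.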