Agents in multi-agent reinforcement learning (MARL) struggle to assess the relevance of input information for cooperative tasks. In communication-limited scenarios, agents cannot access global information, which restricts their ability to collaborate. To address this, a novel cooperative MARL framework based on information selection and tacit learning is introduced. The framework enables agents to develop implicit coordination and to adaptively filter incoming information, enhancing their decision-making capabilities.