Recently, deep Multi-Agent Reinforcement Learning (MARL) has demonstrated its potential to tackle complex cooperative tasks, pushing the boundaries of AI in collaborative environments.
To enhance MARL performance, we introduce a sample reuse approach called Multi-Agent Novelty-Guided sample Reuse (MANGER).
MANGER measures the novelty of each agent's current state with a Random Network Distillation (RND) network and assigns additional update opportunities to more novel samples, so that rarely encountered experiences are reused more often during training.
Evaluations on Google Research Football scenarios and StarCraft II micromanagement tasks show that MANGER yields significant improvements in MARL effectiveness.
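As a rough illustration of the mechanism described above, the following PyTorch sketch shows how an RND predictor's error can serve as a per-state novelty score and how that score might be mapped to a number of extra update passes per sample. The network sizes, the linear novelty-to-count mapping, and names such as `RNDNovelty` and `extra_updates_from_novelty` are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of RND-based novelty scoring and novelty-proportional sample reuse.
# All architectural choices and the novelty-to-update-count mapping are assumptions
# for illustration only; they are not MANGER's published design.
import torch
import torch.nn as nn


class RNDNovelty(nn.Module):
    """Random Network Distillation: novelty is the error of a trained predictor
    against a fixed, randomly initialized target network."""

    def __init__(self, obs_dim: int, embed_dim: int = 64):
        super().__init__()
        self.target = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                    nn.Linear(128, embed_dim))
        self.predictor = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                       nn.Linear(128, embed_dim))
        for p in self.target.parameters():  # the target network stays frozen
            p.requires_grad_(False)

    def novelty(self, obs: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            target_feat = self.target(obs)
        pred_feat = self.predictor(obs)
        # Per-sample squared prediction error serves as the novelty score.
        return ((pred_feat - target_feat) ** 2).mean(dim=-1)


def extra_updates_from_novelty(novelty: torch.Tensor, max_extra: int = 3) -> torch.Tensor:
    """Map min-max normalized novelty to a per-sample count of additional updates
    (an assumed linear scheme; the actual mapping in MANGER may differ)."""
    norm = (novelty - novelty.min()) / (novelty.max() - novelty.min() + 1e-8)
    return torch.round(norm * max_extra).long()


if __name__ == "__main__":
    obs_dim = 32
    rnd = RNDNovelty(obs_dim)
    opt = torch.optim.Adam(rnd.predictor.parameters(), lr=1e-4)

    batch_obs = torch.randn(16, obs_dim)  # stand-in for a batch of agent observations
    scores = rnd.novelty(batch_obs)
    reuse_counts = extra_updates_from_novelty(scores)
    print("novelty scores:", scores.detach().numpy().round(3))
    print("extra update counts:", reuse_counts.tolist())

    # Training the predictor drives novelty down for frequently visited states,
    # so their extra-update allowance shrinks over time.
    loss = scores.mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this sketch the predictor is trained only on observed states, so states seen often become predictable (low novelty) while rarely visited states retain high prediction error and therefore receive more reuse.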