Deep reinforcement learning (RL) agents suffer from a progressive loss of neuronal activity during training, which limits their ability to adapt to new data.
Existing measures of this effect, such as the tau-dormant neuron ratio, can lose statistical power when applied to complex architectures.
A new metric, GraMa (Gradient Magnitude Neural Activity Metric), instead quantifies neuron-level learning capacity from gradient magnitudes rather than activations.
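A gradient-based score of this kind can be sketched as follows. This is a minimal illustration, not the paper's exact formula: the function name, the mean-absolute-gradient aggregation, and the layer-mean normalisation are all assumptions about the general recipe.

```python
import numpy as np

def grama_scores(layer_grads, eps=1e-8):
    """Per-neuron gradient-magnitude score (GraMa-style sketch):
    mean |gradient| of each neuron, normalised by the layer average.
    Low scores indicate neurons with little remaining learning signal."""
    mags = np.abs(layer_grads).mean(axis=0)   # mean |grad| per neuron over a batch
    return mags / (mags.mean() + eps)         # normalise by the layer mean

# (batch, neurons): neuron 0 receives no gradient at all
grads = np.array([[0.0, 0.5, 1.0],
                  [0.0, 0.3, 0.8]])
scores = grama_scores(grads)
# neurons with scores at or below a threshold tau would be flagged as gradient-dormant
```

Because the scores are normalised by the layer mean, they stay comparable across layers of different widths and gradient scales, which is the same role the layer-level normalisation plays in the tau-dormant neuron ratio.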
Resetting neurons based on GraMa (ReGraMa) has shown consistent improvements in learning performance across various deep RL algorithms and benchmarks.
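A reset step driven by such scores could look like the sketch below. The threshold value, the reinitialisation distribution, and the choice to reset only incoming weights are illustrative assumptions, not ReGraMa's published procedure.

```python
import numpy as np

def regrama_reset(weights, scores, tau=0.1, rng=None):
    """Reinitialise the incoming weights of neurons whose gradient-magnitude
    score falls at or below tau (ReGraMa-style sketch; the reset rule and
    threshold here are assumptions)."""
    rng = rng or np.random.default_rng(0)
    dormant = scores <= tau                   # boolean mask, one entry per neuron
    new_w = weights.copy()
    # re-draw the incoming weights of dormant neurons from a fresh initializer
    new_w[:, dormant] = rng.normal(0.0, 0.1, size=(weights.shape[0], dormant.sum()))
    return new_w, dormant

# toy layer: 4 inputs feeding 3 neurons; neuron 0 has a near-zero score
w = np.ones((4, 3))
s = np.array([0.0, 0.9, 2.0])
new_w, dormant = regrama_reset(w, s, tau=0.1)
# only the column of weights feeding the dormant neuron is replaced
```

In a full training loop this check would typically run periodically, recomputing the scores from recent gradients and resetting any neurons that have fallen below the threshold.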