Researchers integrate a computational neuroscience model of scale-invariant memory into deep reinforcement learning (RL) agents.
Agents built with scale-invariant memory can learn robustly across a wide range of temporal scales, unlike agents built with commonly used recurrent memory architectures such as LSTMs (a minimal sketch of the idea appears below).
This integration of computational principles from neuroscience and cognitive science improves the adaptability of deep neural networks to complex temporal dynamics.
The result mirrors some of the core properties of human learning.
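The sketch below is an illustrative, hedged example, not the authors' implementation: it shows one common way a scale-invariant memory can be built, as a bank of leaky integrators whose time constants are geometrically spaced so that all temporal scales are represented with the same relative resolution. The class name, parameter values, and the idea of feeding the flattened traces to a policy network in place of an LSTM hidden state are assumptions made for illustration.

```python
import numpy as np

class ScaleInvariantMemory:
    """Illustrative scale-invariant memory: a bank of exponentially decaying
    traces with geometrically (log-uniformly) spaced time constants."""

    def __init__(self, n_features, n_taus=16, tau_min=1.0, tau_max=1000.0):
        # Geometric spacing of time constants gives uniform coverage in log-time,
        # which is what makes the representation scale-invariant.
        self.taus = np.geomspace(tau_min, tau_max, n_taus)        # (n_taus,)
        self.decay = np.exp(-1.0 / self.taus)                     # per-step decay factors
        self.trace = np.zeros((n_taus, n_features))               # memory state

    def step(self, x):
        """Update every trace with the current observation features x
        and return a flattened feature vector for a downstream network."""
        x = np.asarray(x, dtype=float)
        self.trace = (self.decay[:, None] * self.trace
                      + (1.0 - self.decay)[:, None] * x)
        return self.trace.ravel()

# Usage sketch: the flattened traces could be concatenated with the raw
# observation and passed to an actor-critic network instead of an LSTM state.
mem = ScaleInvariantMemory(n_features=4)
obs = np.random.rand(4)
features = mem.step(obs)   # shape (16 * 4,)
```

Because each trace decays at a different rate, recent events dominate the fast traces while older events persist in the slow ones, so the same memory readout remains informative whether the task's relevant delays span tens or thousands of steps.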