This paper presents the Robust Recurrent Deep Network (R2DN), a scalable parameterization of robust recurrent neural networks.
R2DNs are constructed as a feedback interconnection of a linear time-invariant system and a 1-Lipschitz deep feedforward network, making the models stable and robust to small input perturbations by design.
The parameterization of R2DNs is similar to that of recurrent equilibrium networks (RENs), but it does not require iteratively solving an equilibrium layer at each time step, resulting in faster model evaluation and backpropagation on GPUs.
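To make the structure concrete, the following is a minimal sketch of one step of such a feedback interconnection, written in plain NumPy with hypothetical names (r2dn_step, lipschitz_mlp) and generic state-space matrices; the actual parameterization constrains these parameters to certify stability and robustness, which this sketch omits. The key point it illustrates is that the network input v_t depends only on the state and the external input, not on the network output w_t, so no equilibrium equation has to be solved.

```python
import numpy as np

def lipschitz_mlp(v, weights):
    # Stand-in for a 1-Lipschitz feedforward network: each layer rescales
    # its weight matrix to spectral norm <= 1 and applies ReLU. Both maps
    # are 1-Lipschitz, so their composition is 1-Lipschitz as well.
    for W, b in weights:
        W = W / max(1.0, np.linalg.norm(W, 2))  # enforce ||W||_2 <= 1
        v = np.maximum(W @ v + b, 0.0)
    return v

def r2dn_step(x, u, params):
    # One step of the LTI-system/1-Lipschitz-network feedback loop
    # (illustrative names and shapes only).
    A, B1, B2, C1, D12, C2, D21, D22, mlp_weights = params
    v = C1 @ x + D12 @ u           # no w -> v feedthrough: no equilibrium solve
    w = lipschitz_mlp(v, mlp_weights)
    x_next = A @ x + B1 @ w + B2 @ u
    y = C2 @ x + D21 @ w + D22 @ u
    return x_next, y
```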
Comparisons of R2DNs with RENs across a range of problems show that R2DNs achieve faster training and inference with similar test set performance, and that their training and inference times scale more favorably than those of RENs with respect to model expressivity.