Perturbative Gradient Training (PGT) is a new training paradigm introduced to address a key limitation of physical reservoir computing: the inability to backpropagate through a physical system.
PGT approximates gradient updates by applying random perturbations to the network's parameters and measuring the resulting change in loss through forward passes alone, a gradient-free approach illustrated by the sketch at the end of this section.
The feasibility of PGT was demonstrated both in simulated neural network architectures and in hardware experiments using a magnonic auto-oscillation ring as the physical reservoir.
Results indicate that PGT can achieve performance comparable to standard backpropagation in scenarios where backpropagation is impractical or impossible, suggesting a path toward integrating physical reservoirs into deeper neural network architectures and improving the energy efficiency of AI training.
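
The exact PGT update rule is not reproduced here, but the general idea of perturbation-based gradient approximation can be illustrated with a minimal zeroth-order sketch in Python. Everything below (the function name `perturbative_gradient_estimate`, the toy quadratic loss, and the hyperparameters `sigma` and `num_samples`) is an illustrative assumption, not the paper's implementation; it only shows how a gradient can be estimated from forward evaluations of a loss under small random parameter perturbations, which is the mechanism PGT relies on to avoid backpropagation.

```python
import numpy as np

def perturbative_gradient_estimate(loss_fn, params, sigma=1e-3, num_samples=20, rng=None):
    """Estimate the gradient of loss_fn at params using random perturbations
    and forward passes only (no backpropagation).

    Generic zeroth-order sketch for illustration; not the paper's exact PGT rule.
    """
    rng = np.random.default_rng() if rng is None else rng
    grad = np.zeros_like(params)
    base_loss = loss_fn(params)
    for _ in range(num_samples):
        delta = rng.standard_normal(params.shape)          # random perturbation direction
        perturbed_loss = loss_fn(params + sigma * delta)   # forward pass on perturbed parameters
        grad += (perturbed_loss - base_loss) / sigma * delta
    return grad / num_samples


# Toy usage: quadratic loss whose true gradient is 2 * (w - target)
target = np.array([1.0, -2.0, 0.5])
loss = lambda w: np.sum((w - target) ** 2)

w = np.zeros(3)
lr = 0.1
for step in range(200):
    w -= lr * perturbative_gradient_estimate(loss, w)

print("learned:", w)  # should approach [1.0, -2.0, 0.5]
```

In a physical-reservoir setting, `loss_fn` would wrap a forward pass through the hardware, so the estimator needs only measured outputs rather than access to internal gradients.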