Gradient-based learning requires neural networks to be fully differentiable, which makes it challenging to integrate non-differentiable greedy sparse recovery algorithms such as Orthogonal Matching Pursuit (OMP) and Iterative Hard Thresholding (IHT) into such networks.
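To make the obstacle concrete, the following is a minimal sketch of the classical IHT iteration x_{t+1} = H_k(x_t + mu * A^T (y - A x_t)); the step size, iteration count, and function names are illustrative choices, not taken from the paper. The hard top-k selection inside H_k is piecewise constant in its input, so no useful gradient flows through the choice of support.

```python
import torch

def hard_threshold(x, k):
    # Keep the k largest-magnitude entries, zero the rest.
    # The index selection (topk) is non-differentiable: gradients
    # cannot inform *which* entries are kept.
    idx = torch.topk(x.abs(), k).indices
    out = torch.zeros_like(x)
    out[idx] = x[idx]
    return out

def iht(A, y, k, mu=0.5, n_iter=50):
    # Classical IHT: gradient step on ||y - A x||^2 followed by hard thresholding.
    x = torch.zeros(A.shape[1])
    for _ in range(n_iter):
        x = hard_threshold(x + mu * A.T @ (y - A @ x), k)
    return x
```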
The paper proposes permutation-based variants of OMP and IHT, named Soft-OMP and Soft-IHT, which replace the non-differentiable argsort operator with soft permutation matrices derived from softsort, making these algorithms fully compatible with neural network training.
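The sketch below illustrates the softsort-style construction of a soft permutation matrix, P = softmax(-|sort(s) 1^T - 1 s^T| / tau) with a row-wise softmax; the temperature tau and the toy scores are assumptions made for illustration, and only the general construction is taken from the softsort idea referenced above.

```python
import torch

def soft_permutation(s, tau=0.1):
    # s: (n,) vector of scores; returns an (n, n) row-stochastic matrix whose
    # i-th row concentrates on the index of the i-th largest score.
    s_sorted = torch.sort(s, descending=True).values            # (n,)
    pairwise = -(s_sorted.unsqueeze(1) - s.unsqueeze(0)).abs()  # (n, n)
    return torch.softmax(pairwise / tau, dim=-1)

s = torch.tensor([0.3, 2.0, -1.1, 0.9], requires_grad=True)
P = soft_permutation(s, tau=0.05)
# As tau -> 0, P approaches the hard permutation matrix of argsort(s) in
# descending order; for tau > 0, gradients flow back to s through the softmax.
(P @ s).sum().backward()
```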
Soft-OMP and Soft-IHT are shown, both theoretically and numerically, to approximate OMP and IHT with a controllable degree of accuracy. This leads to the development of OMP-Net and IHT-Net, trainable network architectures based on these differentiable variants.
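As a hedged illustration of the idea (not the paper's exact construction), one Soft-IHT-style step can be obtained by reusing soft_permutation from the sketch above: summing its first k rows gives a soft indicator of the k largest-magnitude entries, which replaces the non-differentiable mask in IHT. Stacking such steps with trainable parameters is one way an IHT-Net-style architecture could be built.

```python
import torch

def soft_topk_mask(x, k, tau=0.05):
    # Soft 0/1 indicator of the k largest-magnitude entries of x,
    # built from the soft permutation of the previous sketch.
    P = soft_permutation(x.abs(), tau)
    return P[:k].sum(dim=0)

def soft_iht_step(x, A, y, k, mu=0.5, tau=0.05):
    g = x + mu * A.T @ (y - A @ x)        # gradient step, as in IHT
    return soft_topk_mask(g, k, tau) * g  # soft selection replaces hard thresholding
```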
Finally, by equipping the weights with structure-aware trainable parameters, the approach connects to structured sparse recovery, demonstrating its ability to extract latent sparsity patterns from data.
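A speculative sketch of how such structure-aware weights might enter the computation: a trainable vector w re-weights the selection scores before the soft top-k step, so training can bias the recovered support toward indices that are consistently active across the data. The layer name, the per-index parameterization of w, and the reuse of soft_permutation from the earlier sketch are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

class WeightedSoftIHTLayer(nn.Module):
    def __init__(self, n, k, mu=0.5, tau=0.05):
        super().__init__()
        self.w = nn.Parameter(torch.ones(n))          # structure-aware weights
        self.mu = nn.Parameter(torch.tensor(mu))      # trainable step size
        self.k, self.tau = k, tau

    def forward(self, x, A, y):
        g = x + self.mu * A.T @ (y - A @ x)           # IHT gradient step
        scores = self.w * g.abs()                     # learned re-weighting of scores
        mask = soft_permutation(scores, self.tau)[: self.k].sum(dim=0)
        return mask * g                               # soft, structure-aware selection
```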