In inverse problems, sparsity regularization constrains the solution by assuming that the unknown can be represented with only a few significant components.
In this study, a new probabilistic sparsity prior based on a mixture of degenerate Gaussians is proposed to model sparsity in a versatile manner.
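To make the prior concrete, the following sketch samples sparse vectors from a simple two-component instance: each coordinate is drawn either from a Gaussian degenerate at zero (with high probability) or from a full Gaussian. This spike-and-slab form, along with the parameter names `p_active` and `sigma`, is an illustrative assumption and not necessarily the exact mixture used in the paper.

```python
import numpy as np

def sample_sparse_prior(n, p_active=0.1, sigma=1.0, rng=None):
    """Draw one sample from a simple mixture-of-degenerate-Gaussians
    sparsity prior: each coordinate is zero with probability
    1 - p_active (a Gaussian degenerate at 0) and N(0, sigma^2)
    otherwise. Illustrative assumption, not the paper's exact prior."""
    rng = np.random.default_rng(rng)
    support = rng.random(n) < p_active   # which entries are "active"
    x = np.zeros(n)
    x[support] = rng.normal(0.0, sigma, support.sum())
    return x

x = sample_sparse_prior(100, p_active=0.1, rng=0)
print(np.count_nonzero(x))  # only a small fraction of entries are nonzero
```

Samples from such a prior are exactly sparse (most entries are identically zero), which is what distinguishes degenerate-Gaussian mixtures from, e.g., a Laplace prior that merely concentrates mass near zero.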
A neural network is designed to act as the Bayes estimator for linear inverse problems under the proposed probabilistic sparsity prior.
Both supervised and unsupervised training strategies are proposed to estimate the parameters of this neural network.
The effectiveness of the proposed approach is evaluated against common sparsity-promoting techniques such as LASSO, group LASSO, iterative hard thresholding, and sparse coding/dictionary learning.
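As a reference point for one of the baselines, a minimal iterative hard thresholding (IHT) solver for the linear model y = Ax can be sketched as follows. The step size, iteration count, and test problem are illustrative choices, not the settings used in the paper's experiments.

```python
import numpy as np

def iht(A, y, k, n_iter=200, step=None):
    """Iterative hard thresholding: gradient step on ||Ax - y||^2,
    then keep only the k largest-magnitude entries of x.
    Minimal baseline sketch with an untuned, conservative step size."""
    m, n = A.shape
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / spectral norm^2
    x = np.zeros(n)
    for _ in range(n_iter):
        x = x + step * A.T @ (y - A @ x)        # gradient step
        idx = np.argsort(np.abs(x))[:-k]        # all but the top-k entries
        x[idx] = 0.0                            # hard threshold
    return x

# Synthetic demo: recover a 3-sparse vector from 50 random measurements.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 100)) / np.sqrt(50)
x_true = np.zeros(100)
x_true[[3, 40, 77]] = [2.0, -1.5, 1.0]
y = A @ x_true
x_hat = iht(A, y, k=3)
print(np.linalg.norm(x_hat - x_true))
```

By construction, the iterate is always k-sparse, which makes IHT a natural non-convex counterpart to the convex LASSO baseline.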
Comparison results show that reconstructions obtained with the proposed approach consistently achieve lower mean-squared-error values on 1D datasets than the traditional techniques.