Learning a Gaussian Mixture for Sparsity Regularization in Inverse Problems

  • In inverse problems, sparsity regularization constrains the solution by assuming that the unknown can be represented with only a few significant components.
  • The study proposes a new probabilistic sparsity prior based on a mixture of degenerate Gaussians, which models sparsity in a flexible way (a small illustrative sketch of such a prior follows this list).
  • Under this probabilistic sparsity prior, a neural network is designed to act as the Bayes estimator for linear inverse problems.
  • Both supervised and unsupervised training strategies are proposed to estimate the parameters of this neural network.
  • The effectiveness of the approach is evaluated against common sparsity-promoting techniques such as LASSO, group LASSO, iterative hard thresholding, and sparse coding/dictionary learning (a minimal LASSO baseline is sketched after this list).
  • On 1D datasets, reconstructions from the new approach consistently achieve lower mean squared error than these traditional techniques.
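To make the prior concrete, here is a minimal sketch of one way a mixture of degenerate Gaussians can generate sparse vectors: each mixture component fixes a support pattern and places an ordinary Gaussian on those coordinates, with zero variance elsewhere, so every sample is exactly k-sparse. The uniform choice of support, the noise level sigma, and the function name are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_degenerate_gmm_prior(n, k, sigma=1.0, n_samples=1):
    """Draw samples from a mixture of degenerate Gaussians (illustrative sketch).

    Each component corresponds to one support pattern of size k: the covariance
    is sigma^2 * I on the chosen coordinates and zero elsewhere, so every
    sample is exactly k-sparse.
    """
    samples = np.zeros((n_samples, n))
    for i in range(n_samples):
        support = rng.choice(n, size=k, replace=False)        # pick a component (support pattern)
        samples[i, support] = sigma * rng.standard_normal(k)  # Gaussian on that support
    return samples

x = sample_degenerate_gmm_prior(n=64, k=5, n_samples=3)
print((np.abs(x) > 0).sum(axis=1))  # each row has exactly 5 nonzeros
```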
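For context on the baselines in the comparison, below is a small, self-contained LASSO example on a synthetic 1D linear inverse problem y = Ax + noise, using scikit-learn. The problem sizes, noise level, and regularization weight alpha are arbitrary choices for illustration, not the paper's experimental setup.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)

# Toy 1D linear inverse problem: y = A x + noise, with x k-sparse.
n, m, k = 64, 32, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)

# LASSO baseline: minimize (1/(2m)) * ||A x - y||_2^2 + alpha * ||x||_1.
lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=10_000)
lasso.fit(A, y)
x_hat = lasso.coef_

print("MSE:", np.mean((x_hat - x_true) ** 2))
```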
