Source: Arxiv

Meta-learning Representations for Learning from Multiple Annotators

  • Researchers have introduced a meta-learning approach to handle learning from multiple noisy annotators.
  • The method targets settings such as crowdsourcing, where the labels used for supervised learning come from different annotators with varying skills and biases.
  • Existing methods usually require a large amount of noisy labeled data to train accurate classifiers, and such data is not always available.
  • To mitigate data scarcity, the new approach leverages labeled data from related tasks.
  • Examples are embedded into a latent space with a neural network, and a probabilistic model over the embeddings learns a task-specific classifier while estimating each annotator's ability (a simplified sketch of this adaptation follows the list).
  • The neural network is meta-learned to maximize test classification performance after the classifier has been adapted to a small amount of labeled data with the expectation-maximization (EM) algorithm.
  • Each EM step can be computed efficiently and is differentiable, so the meta-learning gradient is backpropagated through the EM iterations into the neural network (see the second sketch below).
  • The effectiveness of the method is demonstrated using both synthetic noise and real-world crowdsourcing datasets.
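The sketch below illustrates one way such a differentiable EM adaptation could look. It is a simplified reconstruction, not the paper's implementation: the probabilistic model is assumed to combine class prototypes in the latent space with per-annotator confusion matrices, and all names (em_adapt, num_em_steps, and so on) are illustrative.

```python
# Hedged sketch: differentiable EM adaptation on embedded support examples.
# The prototype-plus-confusion-matrix model is an assumption for clarity,
# not the paper's exact probabilistic model.
import torch
import torch.nn.functional as F


def em_adapt(z, ann_labels, num_classes, num_em_steps=5, eps=1e-8):
    """Run a few differentiable EM steps on embedded support examples.

    z          : [N, D] embeddings from the meta-learned encoder
    ann_labels : [N, A] integer labels from A annotators (-1 = missing)
    Returns class prototypes [C, D] and confusion matrices [A, C, C].
    """
    C = num_classes
    mask = (ann_labels >= 0).float()                         # [N, A]
    onehot = F.one_hot(ann_labels.clamp(min=0), C).float()   # [N, A, C]
    obs = onehot * mask.unsqueeze(-1)                        # zero out missing labels

    # Initialise responsibilities from a majority vote over annotators.
    votes = obs.sum(dim=1)                                   # [N, C]
    resp = (votes + eps) / (votes + eps).sum(dim=1, keepdim=True)

    for _ in range(num_em_steps):
        # M-step: class prototypes as responsibility-weighted embedding means.
        weights = resp / (resp.sum(dim=0, keepdim=True) + eps)
        protos = weights.t() @ z                             # [C, D]

        # M-step: per-annotator confusion matrices pi[a, true, given].
        counts = torch.einsum('nc,nak->ack', resp, obs)
        conf = (counts + eps) / (counts + eps).sum(dim=2, keepdim=True)

        # E-step: posterior over true labels combines the latent-space
        # classifier (softmax over negative squared prototype distances)
        # with the annotators' likelihoods under their confusion matrices.
        logits = -torch.cdist(z, protos) ** 2                # [N, C]
        log_prior = F.log_softmax(logits, dim=1)
        log_lik = torch.einsum('nak,ack->nc', obs, torch.log(conf + eps))
        resp = F.softmax(log_prior + log_lik, dim=1)

    return protos, conf
```

Because the EM loop is unrolled with ordinary tensor operations, the returned prototypes are a differentiable function of the embeddings, which is what lets an outer meta-learning objective reach the encoder.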

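Building on that routine, a hedged sketch of the outer meta-learning loop follows: for each related training task, adapt with EM on noisy support labels, evaluate the adapted classifier on clean query labels, and backpropagate the loss through the unrolled EM steps into the shared encoder. The encoder architecture and the task dictionary fields (support_x, support_ann, query_x, query_y) are assumptions about how the related-task data might be organised, not details from the paper.

```python
# Hedged sketch: meta-training step that backpropagates through em_adapt
# (defined in the previous sketch) into a shared encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)


def meta_train_step(tasks, num_classes):
    optimizer.zero_grad()
    total_loss = 0.0
    for task in tasks:
        # Support set: raw features plus noisy multi-annotator labels.
        z_support = encoder(task['support_x'])               # [N, D]
        protos, _ = em_adapt(z_support, task['support_ann'], num_classes)

        # Query set: clean labels, used only to define the meta-objective.
        z_query = encoder(task['query_x'])                   # [M, D]
        logits = -torch.cdist(z_query, protos) ** 2
        total_loss = total_loss + F.cross_entropy(logits, task['query_y'])

    # Gradients flow through every unrolled EM iteration into the encoder.
    (total_loss / len(tasks)).backward()
    optimizer.step()
```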