techminis
A naukri.com initiative
Image Credit: Arxiv

Adversarial Resilience against Clean-Label Attacks in Realizable and Noisy Settings

  • This paper studies how to obtain stochastic-like guarantees when learning from a stream of data that mixes clean-label adversarial examples, injected at unknown points, with random noise.
  • The learner is allowed to abstain from predicting when it is uncertain, and regret is measured in terms of both misclassification and abstention errors.
  • The study corrects inaccuracies in the prior work of Goel, Hanneke, Moran, and Shetty and develops methods for the agnostic setting with random labels.
  • The paper introduces the notion of a clean-label adversary in the agnostic context and gives a theoretical analysis of a disagreement-based learner facing a clean-label adversary with noise.
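The abstention mechanism in the bullets above can be illustrated with a minimal sketch. This is our own toy example, not code from the paper: it uses a version space of one-dimensional threshold classifiers h_t(x) = 1[x ≥ t] over [0, 1], predicts only when every hypothesis consistent with the history agrees on the query point, and abstains inside the disagreement region.

```python
def predict_or_abstain(history, x):
    """Disagreement-based prediction with abstention (toy 1-D sketch).

    history: list of (xi, yi) pairs with features in [0, 1] and labels in {0, 1}.
    Returns 1 or 0 when all consistent threshold classifiers agree on x,
    and "abstain" when they disagree.
    """
    # Version space: thresholds t with h_t(xi) = 1[xi >= t] correct on history.
    lo, hi = 0.0, 1.0
    for xi, yi in history:
        if yi == 1:
            hi = min(hi, xi)   # a positive example forces t <= xi
        else:
            lo = max(lo, xi)   # a negative example forces t > xi
    # All consistent thresholds lie in (lo, hi].
    if x >= hi:
        return 1           # every consistent hypothesis predicts 1
    if x <= lo:
        return 0           # every consistent hypothesis predicts 0
    return "abstain"       # disagreement region: withhold the prediction
```

In the paper's setting the point of abstaining is that a clean-label adversary can only place points the target concept labels correctly, so disagreement among still-consistent hypotheses flags exactly the region where such points could cause mistakes; this sketch only shows the prediction rule, not the regret analysis.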
