This paper investigates the challenge of establishing stochastic-like guarantees for learning from a stream of data that includes both unknown clean-label adversarial samples and noise.
The learner is allowed to abstain from predicting when uncertain, and regret is measured in terms of misclassification and abstention errors.
The study corrects inaccuracies in the work of Goel, Hanneke, Moran, and Shetty and explores methods for the agnostic setting with random labels.
The paper formalizes the notion of a clean-label adversary in the agnostic setting and theoretically analyzes a disagreement-based learner facing such an adversary in the presence of noise.
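To illustrate the flavor of disagreement-based learning with abstention (this is a minimal sketch, not the paper's algorithm; the hypothesis class and all names are hypothetical), consider a learner that maintains the version space of hypotheses consistent with the clean-label history, predicts only when all surviving hypotheses agree, and abstains otherwise:

```python
def make_thresholds(points):
    """Hypothetical finite class: 1-D threshold classifiers h_t(x) = [x >= t]."""
    return [lambda x, t=t: int(x >= t) for t in points]

class DisagreementLearner:
    def __init__(self, hypotheses):
        self.version_space = list(hypotheses)  # hypotheses consistent so far

    def predict(self, x):
        labels = {h(x) for h in self.version_space}
        # Abstain ("?") whenever the surviving hypotheses disagree on x.
        return labels.pop() if len(labels) == 1 else "?"

    def update(self, x, y):
        # Clean-label assumption: the revealed label never contradicts the
        # target hypothesis, so discarding inconsistent hypotheses is safe.
        self.version_space = [h for h in self.version_space if h(x) == y]

learner = DisagreementLearner(make_thresholds([1, 2, 3]))
print(learner.predict(0))   # all thresholds output 0 -> predicts 0
print(learner.predict(2))   # thresholds disagree -> abstains "?"
learner.update(2, 1)        # a clean label shrinks the version space
print(learner.predict(2))   # survivors now agree -> predicts 1
```

Abstaining exactly on the disagreement region is what lets such a learner avoid being charged misclassification error on adversarially injected points; the analysis in the paper bounds how often it must abstain when the stream also contains noise.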