techminis

A naukri.com initiative

Image Credit: Arxiv

Sharp Rates in Dependent Learning Theory: Avoiding Sample Size Deflation for the Square Loss

  • This work studies statistical learning with dependent data under the square loss, for a fixed hypothesis class.
  • The objective is to identify a sharp noise interaction term, or variance proxy, for learning with dependent data.
  • The empirical risk minimizer achieves a rate that depends only on the complexity of the class and second-order statistics, termed a 'near mixing-free rate'.
  • The study combines the notion of a weakly sub-Gaussian class with mixed-tail generic chaining to derive sharp rates for various problems.
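To make the setting above concrete, here is a minimal simulation sketch (not from the paper): covariates follow an AR(1) process, so samples are dependent rather than i.i.d., and the empirical risk minimizer for the square loss over a linear class is just ordinary least squares. The AR(1) correlation `rho`, the noise level, and all dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: dependent covariates from a stationary AR(1) process.
n, d, rho = 500, 5, 0.8
X = np.zeros((n, d))
X[0] = rng.standard_normal(d)
for t in range(1, n):
    X[t] = rho * X[t - 1] + np.sqrt(1 - rho**2) * rng.standard_normal(d)

# Linear regression target with small additive noise.
w_star = rng.standard_normal(d)
y = X @ w_star + 0.1 * rng.standard_normal(n)

# Empirical risk minimizer for the square loss over the linear class:
# w_hat = argmin_w (1/n) * sum_t (y_t - <w, x_t>)^2, i.e. least squares.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Excess risk of ERM relative to the best predictor in the class.
excess_risk = np.mean((X @ (w_hat - w_star)) ** 2)
print(excess_risk)  # small, despite the dependence across time steps
```

The 'near mixing-free' phenomenon the summary describes is that, in settings like this, the rate of ERM is governed by the class complexity and second-order statistics of the data, rather than being deflated by the effective sample size under mixing.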

