Schmidt-Hieber (2020) showed the minimax optimality of deep neural networks with ReLU activation for least-squares regression estimation.
The paper extends these results to dependent data, removing the i.i.d. assumption.
The observations are now allowed to form a Markov chain with a strictly positive pseudo-spectral gap.
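For orientation, the pseudo-spectral gap is the quantity introduced by Paulin (2015); in generic notation (the symbols below are mine and not necessarily those of the paper), for a transition kernel $P$ with stationary distribution $\pi$ and adjoint $P^{*}$ in $L^{2}(\pi)$ it is
\[
  \gamma_{\mathrm{ps}} \;=\; \max_{k \ge 1} \frac{\gamma\big((P^{*})^{k} P^{k}\big)}{k},
\]
where $\gamma(\cdot)$ denotes the spectral gap of a self-adjoint operator. The standing assumption $\gamma_{\mathrm{ps}} > 0$ quantifies how fast the chain mixes and plays the role that independence plays in the i.i.d. setting.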
A more general class of machine learning problems, including least-squares regression and logistic regression, is studied.
The analysis uses PAC-Bayes oracle inequalities and a version of Bernstein's inequality due to Paulin (2015) to derive upper bounds on the estimation risk of a generalized Bayesian estimator.
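As a point of reference (the notation here is generic and not necessarily that of the paper), generalized Bayesian estimators of this type are typically built from a Gibbs posterior
\[
  \hat{\rho}_{\lambda}(d\theta) \;\propto\; \exp\{-\lambda\, r_n(\theta)\}\,\pi(d\theta),
\]
where $r_n$ is the empirical risk over the $n$ dependent observations, $\pi$ is a prior on the network parameters, and $\lambda>0$ is an inverse-temperature parameter. PAC-Bayes oracle inequalities then control the risk of the estimator by terms of the form $\inf_{\rho}\{\int r_n\,d\rho + \mathrm{KL}(\rho\,\Vert\,\pi)/\lambda\}$ plus remainders, with the Bernstein-type inequality supplying the concentration step under Markov dependence.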
For least-squares regression, the resulting bound matches Schmidt-Hieber's lower bound up to a logarithmic factor.
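For illustration only (the paper, like Schmidt-Hieber, works with more general compositional function classes), in the simplest case of a $\beta$-Hölder regression function on $[0,1]^d$ the rate in question is the classical nonparametric rate
\[
  n^{-\frac{2\beta}{2\beta+d}},
\]
attained here up to a $\log n$ factor.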
The paper also establishes a lower bound for classification with logistic loss, thereby proving that the proposed deep neural network estimator is minimax optimal in this setting as well.