An important challenge in machine learning is to predict the initial conditions under which a given neural network will be trainable.
A method for predicting the trainable regime in parameter space for deep feedforward neural networks (DNNs) is presented.
The method reconstructs the input from the activations of successive layers via a cascade of single-layer auxiliary networks. It shows promise for reducing overall training time and generalizes to residual neural networks (ResNets) and convolutional neural networks (CNNs).
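As a rough illustration of the idea, the sketch below pairs a feedforward network with single-layer linear decoders, each trained to map one layer's activations back to the previous layer's activations; chaining the decoders gives an input reconstruction whose error indicates how much input information survives at each depth. The names (ForwardNet, train_auxiliary_cascade), the linear decoders, and the MSE objective are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class ForwardNet(nn.Module):
    """Deep feedforward network that also returns its hidden activations."""
    def __init__(self, widths):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Linear(w_in, w_out) for w_in, w_out in zip(widths[:-1], widths[1:])
        )

    def forward(self, x):
        activations = [x]
        for layer in self.layers:
            x = torch.tanh(layer(x))
            activations.append(x)
        return activations


def train_auxiliary_cascade(net, x, epochs=200, lr=1e-2):
    """Train one single-layer auxiliary network per hidden layer, each mapping
    the activation of layer l back to that of layer l-1; chained together,
    they attempt to reconstruct the input from any depth (illustrative only)."""
    with torch.no_grad():
        acts = net(x)
    aux_nets, errors = [], []
    for l in range(1, len(acts)):
        aux = nn.Linear(acts[l].shape[1], acts[l - 1].shape[1])
        opt = torch.optim.Adam(aux.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            loss = nn.functional.mse_loss(aux(acts[l]), acts[l - 1])
            loss.backward()
            opt.step()
        aux_nets.append(aux)
        # Cascade the decoders to map layer l's activations back to input space.
        with torch.no_grad():
            recon = acts[l]
            for decoder in reversed(aux_nets):
                recon = decoder(recon)
            errors.append(nn.functional.mse_loss(recon, acts[0]).item())
    return aux_nets, errors


if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.randn(256, 32)                # toy input batch
    net = ForwardNet([32, 64, 64, 64, 16])  # DNN at initialization
    _, errors = train_auxiliary_cascade(net, x)
    # Low reconstruction error at a given depth suggests input information
    # is preserved there, used here as a rough proxy for trainability.
    print(errors)
```

In this toy setup, a sharp rise in reconstruction error with depth would signal an initialization where input information is lost, whereas slowly growing error would suggest a trainable regime; the paper's actual diagnostic and criteria may differ.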