This work introduces a projection-based framework for neural network training that is gradient-free and well suited to parallel learning.
Instead of traditional gradient-based optimization, the framework relies on projection operators and iterative projection algorithms; a generic sketch of this style of algorithm follows below.
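To make the idea concrete, here is a minimal sketch of a classic iterative projection scheme (von Neumann / POCS-style alternating projections) of the kind such frameworks build on. The two constraint sets, their projection formulas, and all names here are illustrative assumptions for exposition, not PJAX's actual API or algorithm.

```python
import jax.numpy as jnp

def project_hyperplane(x, a, b):
    # Orthogonal projection onto the affine set {x : a @ x == b}.
    return x - ((a @ x - b) / (a @ a)) * a

def project_ball(x, center, radius):
    # Projection onto the Euclidean ball {x : ||x - center|| <= radius}.
    d = jnp.linalg.norm(x - center)
    return jnp.where(d <= radius, x, center + radius * (x - center) / d)

# Two toy constraint sets with a nonempty intersection.
a, b = jnp.array([1.0, 2.0]), 3.0
center, radius = jnp.array([0.0, 0.0]), 1.5

x = jnp.array([4.0, -2.0])
for _ in range(100):
    # Alternate projections; the iterates converge to a feasible point.
    x = project_ball(project_hyperplane(x, a, b), center, radius)

print(x, a @ x - b, jnp.linalg.norm(x - center))
```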
Training is reformulated as a large-scale feasibility problem: finding network parameters and neuron states that jointly satisfy the local constraints induced by the network's elementary operations.
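As one hedged example of such a local constraint, consider a scalar ReLU node: its input-output pair must lie on the (nonconvex) graph {(t, relu(t))}. The sketch below projects a point onto that graph by comparing the two branches; the specific constraint sets and projections used by the framework for each operation are assumptions here, not taken from the paper.

```python
import jax.numpy as jnp

def project_relu_graph(u, v):
    # Project (u, v) onto {(t, max(t, 0)) : t real}, the local
    # constraint set induced by a scalar ReLU operation.
    # Candidate on the active branch v = u, u >= 0.
    t_pos = jnp.maximum((u + v) / 2.0, 0.0)
    d_pos = (u - t_pos) ** 2 + (v - t_pos) ** 2
    # Candidate on the inactive branch v = 0, u <= 0.
    t_neg = jnp.minimum(u, 0.0)
    d_neg = (u - t_neg) ** 2 + v ** 2
    # Keep whichever branch candidate is nearer.
    return jnp.where(d_pos <= d_neg,
                     jnp.stack([t_pos, t_pos]),
                     jnp.stack([t_neg, jnp.zeros_like(v)]))

# (1.0, -0.5) projects onto the active branch at (0.25, 0.25).
print(project_relu_graph(jnp.asarray(1.0), jnp.asarray(-0.5)))
```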
The framework is implemented as a library, PJAX, which supports GPU/TPU acceleration and offers a NumPy-like API; experiments training various architectures on standard benchmarks show promising results.