A new framework has been introduced for effectively training Polynomial Neural Networks (PNNs), which are crucial for privacy-preserving inference via Homomorphic Encryption.
It addresses the core trade-off of PNNs: low-degree polynomial activations limit model expressivity, while high-degree polynomials cause numerical instability during training.
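To see the trade-off concretely, consider approximating ReLU with a polynomial. The sketch below (NumPy; the stable range B = 5 and the degrees are illustrative assumptions, not settings from the paper) shows that a low-degree fit is coarse, while a high-degree fit tends to diverge rapidly once inputs leave the interval it was fitted on:

```python
import numpy as np
from numpy.polynomial.chebyshev import chebfit, chebval

# Approximate ReLU on an assumed stable range [-B, B], rescaled to [-1, 1]
# for a well-conditioned fit (B = 5 is illustrative, not from the paper).
B = 5.0
t = np.linspace(-1.0, 1.0, 1001)
relu = np.maximum(B * t, 0.0)

for degree in (3, 15):
    coeffs = chebfit(t, relu, degree)
    err_inside = np.max(np.abs(chebval(t, coeffs) - relu))  # fit quality in range
    outside = chebval(8.0 / B, coeffs)                      # extrapolate to x = 8
    print(f"degree {degree:2d}: max error in range = {err_inside:.3f}, "
          f"value at x = 8 -> {outside:.1f}")
```

The low-degree fit stays tame everywhere but approximates ReLU poorly; the high-degree fit is accurate in range yet explodes just outside it, which is exactly the instability the framework targets.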
The framework's key innovations are a Boundary Loss, which penalizes pre-activation values that fall outside a stable range, and Selective Gradient Clipping, which keeps gradient magnitudes in check.
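The paper's exact formulations are not reproduced here; the PyTorch sketch below shows one plausible reading, where the boundary penalty is a squared hinge on pre-activations that exceed an assumed stable range, and clipping is applied only to a selected parameter subset (here, parameters whose names match a substring, a hypothetical selection rule):

```python
import torch
import torch.nn as nn

def boundary_loss(pre_activations, bound=5.0):
    """Squared-hinge penalty on pre-activation values outside [-bound, bound].

    Assumed form: zero inside the stable range, quadratic in the excess outside.
    """
    excess = torch.relu(pre_activations.abs() - bound)
    return (excess ** 2).mean()

def selective_grad_clip(model, max_norm=1.0, selector="poly"):
    """Clip gradient norms only for parameters whose name contains `selector`
    (a hypothetical criterion; the paper's selection rule may differ)."""
    chosen = [p for name, p in model.named_parameters()
              if selector in name and p.grad is not None]
    if chosen:
        torch.nn.utils.clip_grad_norm_(chosen, max_norm)

def train_step(model, optimizer, x, y, task_loss_fn, lam=0.1):
    """One training step: add the boundary penalty to the task loss,
    then clip the selected gradients before the optimizer update.
    Assumes the model returns (logits, pre_activations); `lam` is a
    hypothetical penalty weight."""
    optimizer.zero_grad()
    logits, pre_acts = model(x)
    loss = task_loss_fn(logits, y) + lam * boundary_loss(pre_acts)
    loss.backward()
    selective_grad_clip(model, max_norm=1.0)
    optimizer.step()
    return loss.item()
```

Together, the penalty discourages inputs from drifting into the divergent region of the polynomial, while selective clipping contains the large gradients that high-degree terms can still produce.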
With these components, the framework trains PNNs with polynomial degrees up to 22, demonstrating stable training, strong performance, and compatibility with Homomorphic Encryption across multiple datasets.