Semidefinite programming (SDP) relaxation is a promising method for neural network verification, but faces a critical limitation known as interior-point vanishing when applied to deep networks.
Interior-point vanishing is the loss of strict feasibility in the SDP relaxation; without a strictly feasible point, interior-point solvers lose both numerical stability and their optimality guarantees.
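For reference, strict feasibility has a standard statement for SDPs in conic form; the notation below is generic textbook notation rather than the paper's own.

```latex
% Standard-form SDP over the cone of positive semidefinite matrices:
\[
\min_{X \in \mathbb{S}^n} \ \langle C, X \rangle
\quad \text{s.t.} \quad \langle A_i, X \rangle = b_i, \ i = 1, \dots, m,
\qquad X \succeq 0.
\]
% Strict feasibility (Slater's condition): some feasible point lies in
% the interior of the cone, i.e. is positive definite.
\[
\exists\, X \succ 0 \ \text{such that} \ \langle A_i, X \rangle = b_i \ \text{for all } i.
\]
% Interior-point vanishing means every feasible X is singular: the
% feasible set lies on a proper face of the PSD cone, so the central
% path followed by interior-point solvers does not exist.
```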
Increasing network depth makes this loss of strict feasibility more likely, creating a barrier to scaling SDP-based verification to deeper networks.
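The phenomenon can be probed numerically: maximizing the smallest eigenvalue attainable over the feasible set returns zero exactly when no interior point exists. Below is a minimal sketch in Python with cvxpy, using a hypothetical toy constraint set (not the paper's verification SDP) to show both regimes; an SDP-capable solver such as SCS is assumed to be installed.

```python
# Minimal sketch: test strict feasibility of an SDP by maximizing t
# subject to X - t*I being PSD inside the feasible set. Optimal t > 0
# means a strictly feasible (positive definite) X exists; t ~ 0 means
# the interior has vanished. Toy constraints for illustration only.
import cvxpy as cp
import numpy as np

def max_min_eigenvalue(x00, n=3):
    """Largest t such that X >= t*I, trace(X) = 1, and X[0, 0] = x00."""
    X = cp.Variable((n, n), symmetric=True)
    t = cp.Variable()
    constraints = [
        X - t * np.eye(n) >> 0,  # enforces lambda_min(X) >= t
        cp.trace(X) == 1,        # normalization keeps t bounded
        X[0, 0] == x00,          # the constraint we vary
    ]
    cp.Problem(cp.Maximize(t), constraints).solve()
    return t.value

print(max_min_eigenvalue(0.5))  # > 0: strictly feasible instance
print(max_min_eigenvalue(0.0))  # ~ 0: X[0,0] = 0 forces the first row and
                                #      column of any PSD X to zero, so every
                                #      feasible X is singular (no interior)
```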
Five solutions are proposed to improve the feasibility conditions of SDP-based verification; together they solve a significant fraction of previously unsolvable problem instances.
Analysis further reveals that certain constraints for ReLU units inherited from prior work are not merely unhelpful: they can actively harm the feasibility of the problem.
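As a concrete, hedged illustration of how such inherited constraints can behave, consider the quadratic ReLU encoding commonly adopted from the SDP relaxation of Raghunathan et al. (2018); whether these are precisely the constraints the analysis identifies is an assumption made here for exposition.

```latex
% Quadratic encoding of a ReLU unit z_j = max(0, x_j) inherited by many
% SDP relaxations (e.g., Raghunathan et al., 2018):
\[
z_j \ge 0, \qquad z_j \ge x_j, \qquad z_j^2 = x_j z_j.
\]
% Lifting v = (1, x^\top, z^\top)^\top to a moment matrix P \approx v v^\top
% and relaxing \mathrm{rank}(P) = 1 to P \succeq 0 turns these quadratic
% equalities into linear constraints tying diagonal entries of P to
% off-diagonal ones:
\[
P =
\begin{pmatrix}
1 & x^\top & z^\top \\
x & P_{xx} & P_{xz} \\
z & P_{xz}^\top & P_{zz}
\end{pmatrix} \succeq 0,
\qquad (P_{zz})_{jj} = (P_{xz})_{jj}.
\]
% Whenever such constraints force a diagonal entry to zero (e.g., a unit
% fixed inactive, z_j = 0, together with its lifted square
% (P_{zz})_{jj} = 0), positive semidefiniteness zeroes the entire
% corresponding row and column, so every feasible P is singular: one
% concrete route by which an inherited ReLU constraint can destroy
% strict feasibility.
```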
This research sheds light on the challenges of SDP-based neural network verification and proposes practical solutions to enhance its applicability to deeper networks, ultimately contributing to the development of more reliable and secure systems.