A novel framework is proposed for ownership verification of deep neural network (DNN) models for image classification tasks.
It verifies model identity without exposing the original model, making it suitable for scenarios in which an unauthorized user deploys a copied model in a cloud environment.
The framework applies a white-box adversarial attack that steers the model's output probabilities toward a chosen pattern, enabling the rightful owner to identify the model.
The proposed method, based on the iterative Fast Gradient Sign Method (FGSM) with control parameters, effectively identifies DNN models through adversarial attacks.
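To illustrate the kind of attack the method builds on, the following is a minimal sketch of iterative FGSM on a toy linear softmax classifier standing in for a DNN. The step size `alpha`, budget `eps`, and iteration count `steps` are illustrative control parameters, not the paper's actual settings.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def iterative_fgsm(x, y_target, W, b, eps=0.1, alpha=0.01, steps=40):
    """Iterative FGSM pushing a linear softmax model's output toward
    class y_target, within an L-infinity ball of radius eps."""
    x_adv = x.copy()
    onehot = np.eye(W.shape[0])[y_target]
    for _ in range(steps):
        p = softmax(W @ x_adv + b)
        # gradient of the cross-entropy loss (toward y_target) w.r.t. the input
        grad = W.T @ (p - onehot)
        # descend the loss: raise the target-class probability
        x_adv = x_adv - alpha * np.sign(grad)
        # project back into the eps-ball around the original input
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

In the verification setting, the owner would craft such perturbations and check whether a suspect model's output probabilities respond in the expected, pre-aligned way.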