Transfer-based attacks craft adversarial examples on a surrogate model and then use them to mislead other black-box models.
Non-linear layers like ReLU and max-pooling truncate the gradient during backward propagation, undermining the transferability of adversarial examples.
A novel method called the Backward Propagation Attack (BPA) is proposed to recover the gradient information truncated by these non-linear layers, strengthening the relevance between the gradient and the loss function and thereby boosting adversarial transferability.
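The truncation described above can be seen in the ReLU backward pass, whose gradient is a hard 0/1 mask. A minimal NumPy sketch below contrasts that hard mask with a smoothed surrogate gradient (a temperature-scaled sigmoid, a hypothetical stand-in for BPA's exact relaxation) that keeps non-zero gradient signal for inputs near and below zero:

```python
import numpy as np

def relu(x):
    """Standard forward ReLU."""
    return np.maximum(x, 0.0)

def relu_grad_hard(x):
    # Standard backward pass: the gradient is truncated to {0, 1},
    # so all information about negative pre-activations is discarded.
    return (x > 0).astype(float)

def relu_grad_soft(x, temperature=10.0):
    # Hypothetical smoothed surrogate gradient (sigmoid of a scaled input).
    # This illustrates the general idea of restoring gradient signal lost
    # to truncation; it is NOT the exact function used by BPA.
    return 1.0 / (1.0 + np.exp(-temperature * x))

x = np.array([-0.5, -0.01, 0.02, 1.0])
print(relu_grad_hard(x))   # hard mask: zero gradient for negative inputs
print(relu_grad_soft(x))   # soft mask: small but non-zero gradient there
```

During the attack's backward pass, the soft gradient would replace the hard mask only when computing the input perturbation; the forward pass (and hence the model's prediction) is unchanged.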
Conditional Backward Propagation of Chaos investigates the well-posedness of a backward stochastic differential equation and proves propagation of chaos in an associated interacting particle system.