Dataset distillation (DD) compresses a dataset to a fraction of its original size while preserving its rich distributional information, so that models trained on the distilled data achieve comparable accuracy at a significantly reduced computational cost.
This paper explores a new perspective on dataset distillation: embedding adversarial robustness into the distilled data, so that models trained on it maintain high accuracy while also gaining improved robustness to adversarial attacks.
The proposed method incorporates curvature regularization into the distillation process, resulting in improved accuracy and robustness compared to standard adversarial training, with lower computational overhead.
Empirical experiments demonstrate that the method generates robust distilled datasets capable of withstanding various adversarial attacks.
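The curvature regularization mentioned above can be illustrated with a finite-difference penalty on the curvature of the loss surface with respect to the input (in the style of CURE-type regularizers). The sketch below is an assumption for illustration only, not the paper's implementation: the function name `curvature_regularizer`, the step size `h`, and the gradient-sign perturbation direction are all hypothetical choices.

```python
import torch
import torch.nn.functional as F

def curvature_regularizer(model, x, y, h=1e-2):
    """Finite-difference proxy for loss-surface curvature at input x.

    Penalizes ||grad L(x + h*z) - grad L(x)||^2 along a normalized
    direction z, encouraging a locally flat (more robust) loss surface.
    Hypothetical sketch; not the paper's actual objective.
    """
    # Gradient of the loss at the clean input.
    x = x.clone().detach().requires_grad_(True)
    g = torch.autograd.grad(F.cross_entropy(model(x), y), x)[0]

    # Normalized perturbation direction (sign of the input gradient).
    z = g.sign()
    z = z / (z.flatten(1).norm(dim=1).view(-1, *[1] * (x.dim() - 1)) + 1e-12)

    # Gradient at the slightly perturbed input x + h*z.
    # create_graph=True lets the penalty backpropagate into the
    # distilled data / model parameters during training.
    x_h = (x + h * z).detach().requires_grad_(True)
    g_h = torch.autograd.grad(F.cross_entropy(model(x_h), y), x_h,
                              create_graph=True)[0]

    # Squared change in gradient: a first-order curvature estimate.
    return ((g_h - g.detach()).flatten(1).norm(dim=1) ** 2).mean()
```

Added to a distillation objective as a weighted term, such a penalty flattens the loss landscape around the synthetic samples, which is one way curvature regularization can promote adversarial robustness without running full inner-loop adversarial training.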