Traditional deep neural networks face issues like catastrophic forgetting and vulnerability to adversarial attacks.
A new approach called SHIELD (Secure Hypernetworks for Incremental Expansion and Learning Defense) is introduced to address these challenges.
SHIELD combines a hypernetwork-based continual learning approach with interval arithmetic: a single hypernetwork generates a dedicated target network for each subtask, while the shared hypernetwork weights aggregate information across all tasks.
The target models generated by SHIELD come with strict robustness guarantees: for any input lying within the defined interval ranges, no adversarial perturbation inside those intervals can change the certified output, enhancing security in continual learning.
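The interval guarantees rest on propagating input intervals through the network with interval arithmetic. The sketch below, a hypothetical illustration not taken from the paper, shows the standard interval bound propagation step for a single linear layer: an input box of radius eps around a point is mapped to an output box whose bounds provably contain every possible output for inputs in that box.

```python
import numpy as np

def linear_interval(lower, upper, W, b):
    """Propagate the input box [lower, upper] through y = W @ x + b.

    Standard interval arithmetic: the output center is the image of the
    input center, and the output radius grows by |W| times the input radius.
    """
    center = (upper + lower) / 2.0
    radius = (upper - lower) / 2.0
    out_center = W @ center + b
    out_radius = np.abs(W) @ radius
    return out_center - out_radius, out_center + out_radius

# Illustrative weights and an input point (names/shapes are assumptions).
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))
b = rng.normal(size=3)
x0 = rng.normal(size=4)
eps = 0.1

lo, hi = linear_interval(x0 - eps, x0 + eps, W, b)

# Any concrete perturbation within the eps-box stays inside the bounds,
# which is the soundness property the certification builds on.
x = x0 + rng.uniform(-eps, eps, size=4)
y = W @ x + b
assert np.all(y >= lo) and np.all(y <= hi)
```

Stacking such interval layers (with a monotone activation applied elementwise to both bounds) yields certified output ranges for a whole target network, which is how interval-based defenses turn an eps-ball of inputs into a provable statement about predictions.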