Deep neural networks are not resilient to parameter corruptions: even a single bit-flip in their parameters stored in memory can cause an accuracy drop of over 10%.
Hessian-aware training is proposed as an approach to improve resilience to bitwise corruptions in neural network parameters.
The approach promotes models with flatter loss surfaces and reduces the number of bits whose corruption leads to a significant accuracy drop.
This method works synergistically with existing hardware- and system-level defenses.
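To make the bit-flip vulnerability concrete, the following minimal sketch (not from the paper; the helper `flip_bit` is hypothetical) shows how flipping a single bit in the IEEE-754 float32 encoding of a weight can change its magnitude by many orders of magnitude, which is why one corrupted high-order exponent bit can destroy a trained model's accuracy:

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Flip one bit (0 = LSB) in the float32 encoding of x."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", x))
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
    return flipped

w = 0.5
low = flip_bit(w, 0)    # mantissa LSB: negligible change
high = flip_bit(w, 30)  # top exponent bit: 0.5 becomes 2**127
```

Flipping the mantissa's least significant bit perturbs the weight by about one part in eight million, while flipping the most significant exponent bit turns 0.5 into 2^127 (about 1.7e38), an unrecoverable corruption.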