Certifying the in-distribution (IID) generalisation of deep networks is crucial for trusting AI in high-stakes applications such as medicine and security. However, obtaining non-vacuous generalisation bounds for contemporary large models trained on small-scale data is challenging.
We establish a novel connection between learning methods based on model fusion and generalisation certificates, showing that existing fusion strategies can provide non-trivial generalisation guarantees.
As a result, models such as ViT-B and Mistral-7B trained with as few as 100 examples can now carry non-trivial generalisation guarantees, enhancing the trustworthiness of AI systems.
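The abstract does not specify which certificate is used; as background, a standard way to turn an empirical risk into a guaranteed bound on the true risk is the PAC-Bayes-kl bound (Maurer, 2004), inverted numerically. The sketch below is illustrative only — the function names, the example KL value, and the sample size are assumptions, not details from this work:

```python
import math

def kl_bernoulli(q, p):
    # KL divergence between Bernoulli(q) and Bernoulli(p), clamped for stability
    eps = 1e-12
    q = min(max(q, eps), 1 - eps)
    p = min(max(p, eps), 1 - eps)
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

def kl_inverse_upper(emp_risk, rhs):
    # Largest p >= emp_risk with kl(emp_risk || p) <= rhs, via binary search
    lo, hi = emp_risk, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if kl_bernoulli(emp_risk, mid) > rhs:
            hi = mid
        else:
            lo = mid
    return hi

def pac_bayes_kl_bound(emp_risk, kl_posterior_prior, n, delta=0.05):
    # PAC-Bayes-kl: with prob >= 1 - delta,
    #   kl(emp_risk || true_risk) <= (KL(Q||P) + ln(2 sqrt(n) / delta)) / n
    rhs = (kl_posterior_prior + math.log(2.0 * math.sqrt(n) / delta)) / n
    return kl_inverse_upper(emp_risk, rhs)

# Hypothetical numbers: 5% empirical risk, KL(Q||P) = 5 nats, n = 100 examples
bound = pac_bayes_kl_bound(0.05, 5.0, 100)
print(f"certified risk bound: {bound:.3f}")  # well below 1.0, i.e. non-vacuous
```

With n as small as 100, the bound stays non-vacuous only when the KL term is kept small — which is why methods that constrain the posterior, such as model fusion around a fixed pre-trained model, can make certification tractable in this regime.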