<ul data-eligibleForWebStory="false">
<li>Deep neural networks are often viewed as categorically different because of generalization phenomena such as benign overfitting and double descent.</li>
<li>These apparent anomalies can be understood within traditional generalization frameworks such as PAC-Bayes.</li>
<li>The concept of soft inductive biases is key to explaining neural networks' generalization: rather than hard-restricting the hypothesis space, one keeps it flexible while expressing a soft preference for simpler solutions.</li>
<li>While deep learning shares much with other model classes, it stands out in representation learning and universality.</li>
</ul>
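<p>The double-descent behavior mentioned above can be illustrated outside of deep learning. The following is a toy numpy sketch (an illustration, not taken from the source): minimum-norm least squares on fixed random ReLU features. Test error typically peaks near the interpolation threshold, where the number of features matches the number of training points, and then falls again as the model becomes heavily overparameterized.</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy double-descent illustration (hypothetical setup, not from the source):
# minimum-norm least squares on fixed random ReLU features.
n_train, n_test, d = 40, 200, 5

def make_data(n):
    X = rng.normal(size=(n, d))
    y = np.sin(X @ np.ones(d)) + 0.1 * rng.normal(size=n)
    return X, y

Xtr, ytr = make_data(n_train)
Xte, yte = make_data(n_test)

def avg_test_error(n_features, trials=20):
    """Mean test MSE of the minimum-norm interpolant over random feature draws."""
    errs = []
    for _ in range(trials):
        W = rng.normal(size=(d, n_features)) / np.sqrt(d)  # fixed random projection
        Ftr = np.maximum(Xtr @ W, 0)  # ReLU features, train
        Fte = np.maximum(Xte @ W, 0)  # ReLU features, test
        beta = np.linalg.pinv(Ftr) @ ytr  # minimum-norm least-squares solution
        errs.append(np.mean((Fte @ beta - yte) ** 2))
    return float(np.mean(errs))

# Underparameterized, at the interpolation threshold, and overparameterized:
errs = {p: avg_test_error(p) for p in (10, n_train, 400)}
# Typically errs[n_train] is by far the largest: the near-square, ill-conditioned
# feature matrix makes the interpolating solution's norm (and test error) blow up,
# while the heavily overparameterized model generalizes well again.
```

<p>The peak at the interpolation threshold, followed by improving test error with more parameters, is the "second descent" that the summary points above argue is explicable with classical tools rather than being unique to neural networks.</p>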