Adversarial vulnerability in AI models stems from a lack of physical understanding of the world. Neural networks recognize objects through statistical correlations rather than causal, physical reasoning, which leaves them open to adversarial perturbations: tiny, physically meaningless changes to an input that flip a model's prediction (a minimal sketch of such an attack follows below). Humans, by contrast, perceive objects consistently because they draw on physical priors such as gravity and light reflection; physics supplies the invariances and symmetries that make robust perception possible.

Because an AI system's learned representations exist apart from the physical manifold of its environment, this disconnect from actual physical perception is a fundamental limitation of current neural networks, not a tuning problem. The uncertainty about reality's true dimensionality and structure makes adversarial robustness harder still.

Enhancing robustness therefore calls for a physics-informed approach that draws on differential geometry and causal relationships; one simple way to encode a physical invariance as a training signal is sketched at the end of this section. Adopting such a physics-grounded framework can lead to AI systems that understand and reason about the world reliably. Getting there will require embracing uncertainty and committing to interdisciplinary research and collaboration.
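To make the vulnerability concrete, here is a minimal sketch of the fast gradient sign method (FGSM), a standard one-step adversarial attack. It is written against PyTorch and assumes `model`, `x`, and `y` are a trained classifier, an input batch with pixel values in [0, 1], and its labels; none of these names come from the original text.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """One-step FGSM adversarial perturbation.

    The perturbation direction is the sign of the loss gradient with
    respect to the input -- a purely statistical direction in pixel
    space with no physical meaning, yet often enough to flip the
    model's prediction.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that most increases the loss, then clamp
    # back to the valid pixel range (assumes inputs in [0, 1]).
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()
```

The point of the sketch is how little physics is involved: the attack never models gravity, lighting, or geometry, only the local slope of the network's loss surface.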
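And one illustrative way to push a model toward physical invariances is a consistency penalty. The sketch below, again assuming PyTorch, treats `transform` as a hypothetical callable that applies a physically irrelevant change to the input (a small rotation, a brightness shift) and penalizes any resulting change in the model's output. This is only one narrow reading of a "physics-informed approach," not the full program of differential geometry and causal modeling the text calls for.

```python
import torch
import torch.nn.functional as F

def symmetry_consistency_loss(model, x, transform):
    """Penalize representation changes under a physically irrelevant
    transform.

    If the physical scene is unchanged by `transform`, a physically
    grounded representation should be unchanged too; the MSE between
    the two outputs measures how far the model is from that invariance.
    This term would be added to the ordinary task loss during training.
    """
    z = model(x)
    z_t = model(transform(x))
    return F.mse_loss(z, z_t)
```

In practice such a term is weighted against the task loss, for example `loss = task_loss + lam * symmetry_consistency_loss(model, x, transform)` with a hypothetical coefficient `lam`.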