The rapid evolution of machine learning has led to the widespread adoption of complex "black box" models. Current efforts largely focus on explaining these models post hoc rather than on developing models that are inherently interpretable. In this position paper, we emphasize the imperative need for model interpretability and provide an experimental evaluation of hybrid learning methods that integrate symbolic knowledge into neural network predictors.