Effective explanation methods are crucial for ensuring transparency in high-stakes machine learning models. A new framework based on spectral analysis of explanation outcomes is proposed. The framework uncovers two factors of explanation quality: stability and target sensitivity. Experiments on MNIST and ImageNet demonstrate the trade-offs between these factors in popular evaluation techniques.
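To make the two quality factors concrete, the sketch below shows one plausible way they could be operationalized; it is an illustrative assumption, not the paper's actual framework. A toy linear model stands in for a real network, and its class-weight rows stand in for gradient explanations. Stability is measured spectrally, as the share of the explanation Gram matrix's spectrum captured by its top mode across repeated noisy runs; target sensitivity is measured as the dissimilarity between explanations for two different target classes. All names (`explain`, the noise level, the linear model `W`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear model: logits = W @ x. For such a model, the
# gradient explanation of class c is simply W[c] (an assumption made
# here purely for illustration).
n_classes, n_features = 3, 10
W = rng.normal(size=(n_classes, n_features))

def explain(x, target, noise=0.0):
    """Gradient-style explanation of `target`; the additive noise
    stands in for the run-to-run instability of a real method."""
    return W[target] + noise * rng.normal(size=n_features)

x = rng.normal(size=n_features)

# Stability via spectral analysis: stack repeated explanations of the
# same (input, target) pair and eigendecompose their Gram matrix. A
# spectrum dominated by one mode means the runs agree on a single
# direction, i.e. the method is stable.
E = np.stack([explain(x, target=0, noise=0.1) for _ in range(50)])
eigvals = np.linalg.eigvalsh(E.T @ E / len(E))  # ascending order
stability = eigvals[-1] / eigvals.sum()  # share of spectrum in top mode

# Target sensitivity: explanations for different target classes should
# differ; here, one minus the absolute cosine similarity between them.
e0, e1 = explain(x, 0), explain(x, 1)
cos = e0 @ e1 / (np.linalg.norm(e0) * np.linalg.norm(e1))
target_sensitivity = 1.0 - abs(cos)

print(stability, target_sensitivity)
```

A trade-off of the kind the abstract mentions is visible even in this toy: raising the noise level lowers the stability score while leaving target sensitivity largely unchanged, so an evaluation that rewards only one factor can miss failures in the other.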