Spiking Neural Networks (SNNs) are energy-efficient neural networks suited to neuromorphic hardware. Weight pruning and quantization are commonly used to improve the efficiency of SNNs on hardware with limited resources. A new one-shot post-training pruning and quantization framework, Optimal Spiking Brain Compression (OSBC), is proposed. OSBC achieves efficient SNN compression by minimizing the compression-induced loss on the spiking neurons' membrane potential, using only a small sample dataset.
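
One compact way to read this objective (a sketch only, assuming OSBC follows a layer-wise reconstruction setup in the spirit of Optimal Brain Compression; the symbols below are illustrative and not taken from the paper):

\[
\min_{\hat{W}} \; \sum_{t=1}^{T} \left\lVert W\, s[t] - \hat{W}\, s[t] \right\rVert_2^2
\quad \text{s.t. } \hat{W} \text{ is pruned and/or quantized,}
\]

where \(W\) denotes the original layer weights, \(\hat{W}\) the compressed weights, \(s[t]\) the presynaptic spikes at time step \(t\) drawn from the small calibration dataset, and \(W\, s[t]\) the input current that drives the neuron's membrane potential. Under this reading, compression is chosen so that the membrane-potential dynamics of the compressed layer stay close to those of the original layer on the calibration data.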