Spiking Neural Networks (SNNs) are highly energy-efficient during inference, making them well suited for deployment on neuromorphic hardware. This benchmark study evaluates two quantization pipelines for fixed-point computation in SNNs targeting the SpiNNaker2 chip. The first approach employs post-training quantization (PTQ) with percentile-based threshold scaling; the second uses quantization-aware training (QAT) with adaptive threshold scaling. Both achieve accurate 8-bit on-chip inference.
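
To illustrate the PTQ idea, the sketch below shows percentile-based threshold scaling for signed 8-bit weights: a clipping threshold is taken at a high percentile of the absolute weight distribution (rather than the absolute maximum), and a scale factor maps that threshold onto the int8 range. This is a minimal illustration under general PTQ assumptions; the function name, `percentile` parameter, and NumPy implementation are hypothetical and not drawn from the paper's actual SpiNNaker2 pipeline.

```python
import numpy as np

def ptq_percentile_int8(weights: np.ndarray, percentile: float = 99.9):
    """Sketch of percentile-based PTQ: clip weights at a percentile
    threshold and map them to signed 8-bit fixed-point values.
    (Hypothetical helper; names are illustrative, not the paper's API.)
    """
    # Percentile-based threshold: ignore extreme outliers so the
    # 8-bit range is spent on the bulk of the weight distribution.
    threshold = np.percentile(np.abs(weights), percentile)
    # Scale maps the clipping threshold onto the int8 maximum (127).
    scale = threshold / 127.0
    # Rescale, round to the nearest integer level, and clip to int8 range.
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

# Usage: dequantize with q * scale to approximate the original weights.
weights = np.random.randn(256, 128).astype(np.float32)
q, scale = ptq_percentile_int8(weights)
reconstructed = q.astype(np.float32) * scale
```

Clipping at a percentile instead of the true maximum trades a small amount of saturation error on outlier weights for finer resolution over the rest of the distribution, which is typically the better trade at 8 bits.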