- State-space language models such as Mamba have billions of parameters, which hinders deployment.
- SparseSSM is introduced as a training-free pruning framework for state-space architectures.
- SparseSSM extends the optimal brain surgeon (OBS) framework to state space models.
- The algorithm computes saliency scores to identify redundant parameters and guide pruning (see the sketch after this list).
- Component sensitivity analysis is used to identify where redundancy exists in the architecture.
- SparseSSM can be extended to semi-structured and structured sparsity.
- Empirical results show that 50% of SSM weights can be pruned without fine-tuning while maintaining accuracy.
- No zero-shot accuracy loss is observed with SparseSSM, setting a new benchmark for pruning Mamba-based LLMs.
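
To make the OBS-style saliency idea concrete, here is a minimal sketch of training-free, saliency-guided unstructured pruning. It assumes a diagonal Hessian approximation and uses illustrative function names (`obs_saliency`, `prune_by_saliency`); it is not SparseSSM's actual implementation, only the general OBS saliency recipe applied to a weight matrix.

```python
import numpy as np

def obs_saliency(weights: np.ndarray, hessian_diag: np.ndarray) -> np.ndarray:
    # Optimal-brain-surgeon saliency: removing weight w_i costs roughly
    # w_i^2 / (2 * [H^-1]_ii). With a diagonal Hessian, [H^-1]_ii = 1 / H_ii,
    # so the saliency simplifies to w_i^2 * H_ii / 2.
    return (weights ** 2) * hessian_diag / 2.0

def prune_by_saliency(weights: np.ndarray, hessian_diag: np.ndarray,
                      sparsity: float = 0.5) -> np.ndarray:
    # Zero out the fraction of weights with the lowest saliency (training-free).
    scores = obs_saliency(weights, hessian_diag)
    k = int(sparsity * weights.size)
    threshold = np.partition(scores.ravel(), k)[k]
    mask = scores >= threshold
    return weights * mask

# Toy usage: prune 50% of a random matrix standing in for an SSM parameter block.
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))
H_diag = rng.uniform(0.1, 1.0, size=(64, 64))  # stand-in for per-weight Hessian estimates
W_pruned = prune_by_saliency(W, H_diag, sparsity=0.5)
print(f"achieved sparsity: {np.mean(W_pruned == 0):.2f}")
```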