Explainable artificial intelligence (XAI) aims to help human decision-makers understand complex machine learning (ML) models. Logic-based XAI offers rigorous explanations, but computing them can be costly, especially for highly complex ML models. Recent work proposes distance-restricted explanations, which are rigorous within a small distance of a given input. This paper investigates algorithms for scaling up logic-based explainers to compute distance-restricted explanations for models with a large number of inputs.