- Discrete diffusion models have become a powerful approach to language modeling, rivaling auto-regressive models in training-time scaling.
- A new approach introduces particle Gibbs sampling for inference-time scaling in discrete diffusion models.
- The particle Gibbs algorithm iteratively refines full diffusion trajectories, running a Sequential Monte Carlo sweep in each iteration to improve the generated text (see the sketch after this list).
- Empirical results show that this approach outperforms prior inference-time strategies on reward-guided text generation tasks.
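
To make the loop concrete, here is a minimal toy sketch of the generic particle Gibbs structure described above: each sweep runs conditional Sequential Monte Carlo with one particle clamped to a reference trajectory, then draws a new reference from the final weighted particles. The `denoise_step` and `reward` functions are hypothetical stand-ins (a random unmasking rule and a placeholder reward), not the paper's actual denoiser or reward model.

```python
import numpy as np

VOCAB, MASK, LENGTH, STEPS = 20, 0, 8, 8
rng = np.random.default_rng(0)

def denoise_step(x, t):
    # Hypothetical stand-in for a learned discrete-diffusion denoiser:
    # unmask one randomly chosen masked position with a random token.
    x = x.copy()
    masked = np.flatnonzero(x == MASK)
    if masked.size:
        x[rng.choice(masked)] = rng.integers(1, VOCAB)
    return x

def reward(x):
    # Placeholder reward (prefers even token ids); a real setup would
    # call a reward model on the decoded text.
    return float(np.mean(x % 2 == 0))

def smc_sweep(n_particles, reference=None):
    """One conditional-SMC sweep; particle 0 replays `reference` if given."""
    particles = [np.full(LENGTH, MASK) for _ in range(n_particles)]
    trajs = [[p.copy()] for p in particles]
    w = np.full(n_particles, 1.0 / n_particles)
    for t in range(STEPS):
        # Propagate each particle one denoising step.
        for i in range(n_particles):
            if reference is not None and i == 0:
                particles[i] = reference[t + 1].copy()  # clamp reference path
            else:
                particles[i] = denoise_step(particles[i], t)
            trajs[i].append(particles[i].copy())
        # Weight particles by (exponentiated) reward.
        w = np.array([np.exp(reward(p)) for p in particles])
        w /= w.sum()
        if t < STEPS - 1:  # resample, keeping final-step weights for selection
            idx = rng.choice(n_particles, size=n_particles, p=w)
            if reference is not None:
                idx[0] = 0  # the reference particle always survives
            particles = [particles[j].copy() for j in idx]
            trajs = [[s.copy() for s in trajs[j]] for j in idx]
    return trajs, w

def particle_gibbs(n_sweeps=5, n_particles=8):
    # Iteratively refine the reference trajectory across SMC sweeps.
    ref = None
    for _ in range(n_sweeps):
        trajs, w = smc_sweep(n_particles, reference=ref)
        ref = trajs[rng.choice(n_particles, p=w)]  # new reference trajectory
    return ref[-1]

print(particle_gibbs())
```

The key design point is the clamped reference particle: because one particle always replays the previous sweep's trajectory, each sweep can only keep or improve on it under the reward weighting, which is what lets repeated sweeps refine the generation rather than restart it.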