Diffusion LLMs are a promising alternative to autoregressive LLMs: by denoising many tokens in parallel rather than generating one token at a time, they offer better runtime efficiency.
Existing diffusion LLMs, however, cannot enforce user-specified formal constraints on their outputs, which hinders their reliability for tasks that require structured results such as function calls or serialized data.
DINGO addresses this limitation with a dynamic-programming-based constrained decoding strategy for diffusion LLMs.
DINGO samples high-probability output strings while guaranteeing adherence to a user-specified regular expression, and it shows substantial improvements over unconstrained decoding on standard benchmarks.
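To make the core idea concrete, the sketch below reduces regex-constrained decoding to its simplest form: compile the regular expression into a DFA, and at each step mask out any token whose transition would leave the set of states from which an accepting state is still reachable, then pick the most probable surviving token. This is a hypothetical, greedy left-to-right illustration, not DINGO's actual dynamic program over diffusion denoising steps; the toy DFA, function names, and probabilities are all assumptions for the example.

```python
from typing import Dict, Set, List

# Assumed toy DFA for the regex a(b|c)*d over the alphabet {a, b, c, d}.
TRANS: Dict[int, Dict[str, int]] = {
    0: {"a": 1},
    1: {"b": 1, "c": 1, "d": 2},
    2: {},
}
ACCEPT: Set[int] = {2}

def live_states(trans: Dict[int, Dict[str, int]], accept: Set[int]) -> Set[int]:
    """States from which an accepting state is still reachable."""
    live = set(accept)
    changed = True
    while changed:
        changed = False
        for s, edges in trans.items():
            if s not in live and any(t in live for t in edges.values()):
                live.add(s)
                changed = True
    return live

def constrained_greedy(step_probs: List[Dict[str, float]],
                       trans: Dict[int, Dict[str, int]],
                       accept: Set[int]) -> str:
    """Greedily pick the most probable token whose DFA transition stays live."""
    live = live_states(trans, accept)
    state, out = 0, []
    for probs in step_probs:  # probs: token -> model probability at this step
        allowed = {tok: p for tok, p in probs.items()
                   if tok in trans[state] and trans[state][tok] in live}
        if not allowed:  # no token keeps the regex satisfiable; stop
            break
        tok = max(allowed, key=allowed.get)
        out.append(tok)
        state = trans[state][tok]
        if state in accept:
            break
    return "".join(out)

# Unconstrained argmax would emit "b" first and violate the regex;
# the mask forces "a", then lets the likely tokens through until "d".
steps = [
    {"a": 0.3, "b": 0.7},
    {"b": 0.6, "d": 0.4},
    {"b": 0.2, "d": 0.8},
]
print(constrained_greedy(steps, TRANS, ACCEPT))  # -> "abd"
```

The key design point the sketch shares with constrained decoding in general is the reachability precomputation: masking on valid transitions alone is not enough, since a token can be locally legal yet lead to a dead state from which no accepting completion exists.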