This study investigates the generalization behavior of neural architectures (Transformers, Graph Convolutional Networks, and LSTMs) on propositional logic tasks.
Models were trained and evaluated on generating satisfying assignments for logical formulas, a structured and interpretable setting for probing systematic generalization.
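For concreteness, a minimal sketch of the underlying task follows, assuming formulas are trees over Boolean variables with NOT/AND/OR; the representation and function names are illustrative and not taken from the study's actual code.

```python
from itertools import product

def eval_formula(formula, assignment):
    """Evaluate a propositional formula under a truth assignment.

    A formula is either a variable name (str) or a tuple:
    ("not", f), ("and", f, g), or ("or", f, g).
    """
    if isinstance(formula, str):
        return assignment[formula]
    op, *args = formula
    if op == "not":
        return not eval_formula(args[0], assignment)
    if op == "and":
        return all(eval_formula(a, assignment) for a in args)
    if op == "or":
        return any(eval_formula(a, assignment) for a in args)
    raise ValueError(f"unknown operator: {op}")

def satisfying_assignments(formula, variables):
    """Enumerate all satisfying assignments by brute force."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if eval_formula(formula, assignment):
            yield assignment

# Example: (x AND NOT y) OR z
formula = ("or", ("and", "x", ("not", "y")), "z")
for a in satisfying_assignments(formula, ["x", "y", "z"]):
    print(a)
```

A model solving this task must output an assignment like the ones this brute-force enumerator produces, given only the formula as input.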
While all models performed well in-distribution, generalization to unseen combinations of operators, particularly those involving negation, remained challenging.
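The kind of compositional split at issue can be sketched as follows. This is a hypothetical partition for illustration, reusing the tree representation above; the study's exact train/test split is not reproduced here.

```python
def operators_used(formula):
    """Collect the set of operators appearing in a formula tree."""
    if isinstance(formula, str):
        return set()
    op, *args = formula
    return {op}.union(*(operators_used(a) for a in args))

# Hypothetical split: operator pairings seen in training vs. a
# held-out pairing that combines negation with disjunction.
TRAIN_COMBOS = [{"and", "or"}, {"and", "not"}]
TEST_COMBO = {"or", "not"}

def split(formulas):
    """Assign each formula to train or test by its operator set."""
    train, test = [], []
    for f in formulas:
        ops = operators_used(f)
        if ops == TEST_COMBO:
            test.append(f)
        elif any(ops <= combo for combo in TRAIN_COMBOS):
            train.append(f)
    return train, test
```

Under a split of this shape, a model that has learned each operator systematically should handle the held-out pairing; the reported results indicate that the tested architectures often do not.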
The findings suggest persistent limitations in standard architectures' ability to learn systematic representations of logical operators, indicating a need for stronger inductive biases.