<ul data-eligibleForWebStory="true">
<li>A new energy-based transformer architecture improves the reasoning capability and robustness of AI systems.</li>
<li>The model frames "thinking as optimization," using a learned verifier to score and refine candidate predictions.</li>
<li>Unlike existing approaches, the model combines the generator and verifier effectively, yielding better generalization.</li>
<li>The model outperforms existing architectures in both pretraining and inference efficiency on reasoning tasks.</li>
<li>Researchers believe the model's scalability and generalization benefits make it promising for AI applications.</li>
</ul>
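The "thinking as optimization" idea above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the learned verifier is replaced by a hypothetical hand-written quadratic energy function, and the model "thinks" by running gradient descent on a candidate prediction until its energy is low.

```python
# Minimal sketch of "thinking as optimization" (illustrative only):
# refine a candidate prediction by descending a verifier's energy.
# The energy function here is a hypothetical stand-in for a learned
# verifier; low energy means the prediction fits the context well.

def energy(candidate, context):
    # Stand-in verifier: squared distance between candidate and context.
    return sum((c - x) ** 2 for c, x in zip(candidate, context))

def energy_grad(candidate, context):
    # Gradient of the quadratic energy with respect to the candidate.
    return [2 * (c - x) for c, x in zip(candidate, context)]

def think(context, steps=50, lr=0.1):
    # Start from an arbitrary candidate and iteratively improve it;
    # more optimization steps correspond to more "thinking".
    candidate = [0.0] * len(context)
    for _ in range(steps):
        grad = energy_grad(candidate, context)
        candidate = [c - lr * g for c, g in zip(candidate, grad)]
    return candidate

refined = think([1.0, -2.0, 3.0])
```

In this toy setup the optimizer drives the candidate toward the energy minimum, mirroring how an energy-based transformer would spend extra inference-time computation to sharpen a prediction.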