Transformers have shown promise for solving wireless communication problems via in-context learning, but current models require many layers to reach satisfactory performance, incurring high storage and computational costs.
A new approach, CHOOSE, enhances shallow Transformers for wireless symbol detection by incorporating autoregressive reasoning steps in the hidden space.
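The summary does not spell out the exact CHOOSE mechanism, but the general idea of autoregressive reasoning in the hidden space can be sketched as re-applying a single shared Transformer layer to its own hidden states for several steps before the final read-out, so effective reasoning depth grows without adding layers. The sketch below is a minimal, illustrative PyTorch example under that assumption; the class name `ShallowLoopedTransformer` and parameters such as `n_steps` are hypothetical, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ShallowLoopedTransformer(nn.Module):
    """Illustrative sketch: a one-layer Transformer whose single block is
    re-applied for several hidden-space reasoning steps before read-out."""

    def __init__(self, d_model=64, n_heads=4, n_steps=4, n_symbols=4):
        super().__init__()
        # One shared encoder layer: model depth stays at 1.
        self.block = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads,
            dim_feedforward=4 * d_model, batch_first=True)
        self.n_steps = n_steps                    # hidden-space iterations (assumed)
        self.readout = nn.Linear(d_model, n_symbols)

    def forward(self, x):
        # x: (batch, seq_len, d_model) embedded pilot/received symbols
        h = x
        for _ in range(self.n_steps):
            # Autoregressive refinement: the same weights process the
            # previous step's hidden states, mimicking extra depth.
            h = self.block(h)
        # Detect the symbol at the query (last) position.
        return self.readout(h[:, -1])


# Usage sketch: a batch of 8 in-context sequences of length 16.
model = ShallowLoopedTransformer()
logits = model(torch.randn(8, 16, 64))   # shape: (8, n_symbols)
```

Because the loop reuses one layer's weights, parameter count matches a single-layer model while the iterated computation plays the role that stacked layers would otherwise serve.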
CHOOSE significantly boosts the reasoning capacity of 1-2 layer models without increasing model depth, enabling lightweight Transformers to achieve detection performance comparable to that of deeper models.
Experimental results indicate that CHOOSE outperforms conventional shallow Transformers and approaches the performance of deep models while preserving storage and computational efficiency, making it well suited to resource-constrained mobile devices.