Researchers propose a transformer-based sign language production (SLP) framework. A pose autoencoder encodes sign poses into a compact latent space using an articulator-based disentanglement strategy. A non-autoregressive transformer decoder predicts latent representations from sentence-level text embeddings. Channel-aware regularization aligns the predicted latent distributions with the ground-truth encodings via a KL-divergence loss.
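To make the articulator-based disentanglement concrete, here is a minimal PyTorch sketch of a pose autoencoder that encodes each articulator group into its own latent channel. The group names, keypoint counts, and dimensions are illustrative assumptions (e.g., an OpenPose-style body/hands/face split), not values from the paper, and `ArticulatorAutoencoder` is a hypothetical name.

```python
import torch
import torch.nn as nn

# Assumed articulator groups and per-group keypoint counts (illustrative only).
ARTICULATORS = {"body": 8, "left_hand": 21, "right_hand": 21, "face": 70}


class ArticulatorAutoencoder(nn.Module):
    """Encodes each articulator's keypoints into a dedicated latent channel,
    so the latent dimensions stay disentangled across articulators."""

    def __init__(self, latent_per_articulator: int = 32):
        super().__init__()
        self.encoders = nn.ModuleDict({
            name: nn.Sequential(
                nn.Linear(n_kpts * 2, 128),  # (x, y) per keypoint
                nn.ReLU(),
                nn.Linear(128, latent_per_articulator),
            )
            for name, n_kpts in ARTICULATORS.items()
        })
        self.decoders = nn.ModuleDict({
            name: nn.Sequential(
                nn.Linear(latent_per_articulator, 128),
                nn.ReLU(),
                nn.Linear(128, n_kpts * 2),
            )
            for name, n_kpts in ARTICULATORS.items()
        })

    def encode(self, pose_groups: dict[str, torch.Tensor]) -> torch.Tensor:
        # Each group tensor: (batch, frames, n_kpts * 2).
        # Concatenate the per-articulator latents into one compact code.
        return torch.cat(
            [self.encoders[name](pose_groups[name]) for name in ARTICULATORS],
            dim=-1,
        )

    def decode(self, z: torch.Tensor) -> dict[str, torch.Tensor]:
        # Split the latent back into its articulator channels before decoding.
        chunks = torch.split(z, z.shape[-1] // len(ARTICULATORS), dim=-1)
        return {
            name: self.decoders[name](chunk)
            for name, chunk in zip(ARTICULATORS, chunks)
        }
```

Keeping one encoder/decoder pair per articulator means each slice of the latent code can only carry information about its own articulator, which is the essence of this kind of disentanglement.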
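The non-autoregressive decoding step can be sketched as a transformer decoder whose learned per-frame queries cross-attend to the sentence-level text embeddings, emitting the whole latent sequence in a single parallel pass. Using a fixed bank of learned frame queries is an assumption here; the paper may determine the output length differently (e.g., with a length predictor).

```python
import torch
import torch.nn as nn


class NonAutoregressiveDecoder(nn.Module):
    """Predicts an entire latent pose sequence in one forward pass:
    learned frame queries attend to text embeddings instead of
    conditioning on previously generated frames."""

    def __init__(self, d_model: int = 256, latent_dim: int = 128,
                 max_frames: int = 200, n_layers: int = 4, n_heads: int = 8):
        super().__init__()
        # One learned query per output frame (an illustrative assumption).
        self.frame_queries = nn.Parameter(torch.randn(max_frames, d_model))
        layer = nn.TransformerDecoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=n_layers)
        self.to_latent = nn.Linear(d_model, latent_dim)

    def forward(self, text_emb: torch.Tensor, n_frames: int) -> torch.Tensor:
        # text_emb: (batch, n_tokens, d_model) sentence-level embeddings.
        queries = self.frame_queries[:n_frames].unsqueeze(0)
        queries = queries.expand(text_emb.size(0), -1, -1)
        # No causal mask: all frames are decoded in parallel.
        hidden = self.decoder(tgt=queries, memory=text_emb)
        return self.to_latent(hidden)  # (batch, n_frames, latent_dim)
```

Dropping the causal mask and the frame-by-frame loop is what makes the decoder non-autoregressive: inference cost no longer scales with sequential generation steps.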
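Finally, the channel-aware KL regularization can be expressed as a closed-form KL divergence between two diagonal Gaussians, weighted per latent channel. The per-channel weight vector is an assumption about how "channel-aware" is realized (e.g., one weight per articulator channel); the function name and signature are hypothetical.

```python
import torch


def channel_aware_kl(mu_pred: torch.Tensor, logvar_pred: torch.Tensor,
                     mu_gt: torch.Tensor, logvar_gt: torch.Tensor,
                     channel_weights: torch.Tensor) -> torch.Tensor:
    """KL divergence between predicted and ground-truth diagonal Gaussians,
    weighted per latent channel.

    mu/logvar tensors: (batch, frames, latent_dim); channel_weights: (latent_dim,).
    The per-channel weighting scheme is an assumption, not the paper's exact form.
    """
    var_pred, var_gt = logvar_pred.exp(), logvar_gt.exp()
    # Closed-form KL( N(mu_pred, var_pred) || N(mu_gt, var_gt) ) per dimension.
    kl = 0.5 * (logvar_gt - logvar_pred
                + (var_pred + (mu_pred - mu_gt) ** 2) / var_gt
                - 1.0)
    return (kl * channel_weights).sum(dim=-1).mean()
```

In training, a term like this would be added to the pose reconstruction loss, pulling the decoder's predicted latent distribution toward the autoencoder's ground-truth encoding on a per-channel basis.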