A formal framework is introduced to analyze length generalization in transformers with learnable absolute positional encodings. The framework characterizes the functions that are identifiable from long inputs and proves that length generalization is possible for a wide range of problems. Experiments validate the theory as a predictor of the success and failure of length generalization across various tasks. The theory explains prior empirical observations and enables provable prediction of length generalization capabilities in transformers.