Language models have emerged as powerful predictors of the viability of biological sequences. However, in-context learning can distort the relationship between a sequence's fitness and its likelihood score. This distortion is most pronounced in sequences containing repeated motifs. The phenomenon affects transformer-based models and is mediated by a look-up operation.
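To make the quantity at issue concrete, the sketch below shows one common way a "likelihood score" is computed for a sequence under an autoregressive language model: the sum of per-token log-probabilities. The model name, the example sequences, and the use of the Hugging Face transformers API are illustrative assumptions, not details taken from this work; they merely show how a repeated motif can receive an inflated score when later occurrences are predicted from the context rather than from the model's learned notion of viability.

```python
# Minimal sketch, assuming a causal (autoregressive) protein language model
# exposed through the Hugging Face transformers API. The model name and the
# sequences below are placeholders for illustration only.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_NAME = "some-causal-protein-lm"  # hypothetical model identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def log_likelihood(sequence: str) -> float:
    """Sum of per-token log-probabilities of `sequence` under the model."""
    ids = tokenizer(sequence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Token t is predicted from tokens < t: shift logits and targets by one.
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = ids[:, 1:]
    token_ll = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return token_ll.sum().item()

# A sequence built from a repeated motif can score far above a non-repetitive
# sequence of similar length, even if both are equally fit: later copies of
# the motif are effectively looked up from the preceding context.
print(log_likelihood("MKTAYIAKQR" * 3))                  # repeated motif
print(log_likelihood("MKTAYIAKQRLDNSGVTWPEQFHCIMSERA"))  # no repeats
```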