In-context learning is a key characteristic of transformers. However, the self-attention mechanism in transformers lacks flexibility in certain tasks. To address this, linear self-attention is extended by introducing a bias matrix. The extended linear self-attention enables flexible matrix manipulations.
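A minimal sketch of the idea is given below, assuming the bias matrix is added to the unnormalised attention scores of a linear self-attention layer; the function name, matrix shapes, and the exact placement of the bias term `B` are illustrative assumptions, not the document's precise formulation.

```python
import numpy as np

def linear_self_attention(X, W_Q, W_K, W_V, B=None):
    """Linear self-attention over a token matrix X of shape (d, n).

    Standard form:   W_V X ((W_K X)^T (W_Q X))
    Bias-extended:   W_V X ((W_K X)^T (W_Q X) + B)
    The placement of B in the scores is an assumption for illustration.
    """
    Q = W_Q @ X            # queries, shape (d, n)
    K = W_K @ X            # keys,    shape (d, n)
    V = W_V @ X            # values,  shape (d, n)
    scores = K.T @ Q       # unnormalised linear attention scores, shape (n, n)
    if B is not None:
        scores = scores + B  # bias matrix extends the score computation
    return V @ scores      # output, shape (d, n)

# Toy usage: 4 tokens of dimension 3.
rng = np.random.default_rng(0)
d, n = 3, 4
X = rng.standard_normal((d, n))
W_Q, W_K, W_V = (rng.standard_normal((d, d)) for _ in range(3))
B = rng.standard_normal((n, n))   # hypothetical bias over token pairs

out_plain = linear_self_attention(X, W_Q, W_K, W_V)
out_biased = linear_self_attention(X, W_Q, W_K, W_V, B)
print(out_plain.shape, out_biased.shape)  # (3, 4) (3, 4)
```

In this sketch, setting `B = None` recovers ordinary linear self-attention, while a nonzero `B` shifts the token-to-token interaction matrix directly, which is one way the added bias can make the resulting matrix manipulations more flexible.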