Task vectors accelerate inference in in-context learning (ICL) by distilling task-specific information into a single, reusable representation.
The Linear Combination Conjecture posits that a task vector acts as a single in-context demonstration formed as a linear combination of the original demonstrations, a claim supported by both theoretical and empirical evidence.
Loss-landscape analysis shows that task vectors emerge in linear transformers trained on triplet-formatted prompts.
However, task vectors may fail to represent high-rank mappings, as observed on practical LLMs, motivating an enhancement that injects multiple task vectors into few-shot prompts.
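The two mechanisms above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's method: the demonstration hidden states, the uniform combination weights, and the injection positions are all hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: one d-dimensional hidden-state vector extracted
# from each of k in-context demonstrations.
k, d = 4, 8
demo_states = rng.normal(size=(k, d))

# Linear Combination Conjecture (sketch): a task vector is a linear
# combination of the demonstration representations. Uniform weights
# are illustrative only, not learned coefficients.
weights = np.full(k, 1.0 / k)
task_vector = weights @ demo_states      # shape (d,)

# Injection (sketch): overwrite the activation at a chosen token
# position in a zero-shot prompt's hidden states.
seq_len = 6
hidden = rng.normal(size=(seq_len, d))
hidden[-1] = task_vector                 # single-vector injection

# Multi-vector variant for high-rank mappings: inject several task
# vectors at several positions instead of a single one.
task_vectors = demo_states[:2]           # two illustrative vectors
hidden[0:2] = task_vectors
```

A single injected vector spans a rank-one subspace of the demonstration representations, which is why several vectors at distinct positions are needed when the underlying task mapping is high-rank.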