LLMs store knowledge as patterns woven through a vast, high-dimensional space, capturing relationships between pieces of information rather than memorizing discrete facts.
LLMs convert text into tokens and map each token to a vector in a space with thousands of dimensions, where vectors with similar meanings or similar relationships point in similar directions.
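A minimal sketch of that idea, using a toy vocabulary with hand-made 4-dimensional vectors (hypothetical values chosen for illustration; real models learn embeddings with thousands of dimensions): cosine similarity shows that related words point in similar directions.

```python
import numpy as np

# Toy embeddings: hypothetical 4-dimensional vectors for a few tokens.
# Real LLMs learn these embeddings, and they have thousands of dimensions.
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.9, 0.7, 0.9, 0.0]),
    "apple": np.array([0.0, 0.1, 0.1, 0.9]),
}

def cosine_similarity(a, b):
    """Measure how closely two vectors point in the same direction."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high: related meanings
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low: unrelated meanings
```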
Inside LLMs, Multi-Layer Perceptron (MLP) blocks process these vectors, enriching them with additional meaning and associations drawn from the model's stored knowledge.
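A rough sketch of one such block, with made-up sizes and random weights standing in for learned parameters: the vector is projected up to a larger hidden space, passed through a nonlinearity, projected back down, and added to the incoming vector, so its meaning is updated without changing its shape.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_hidden = 8, 32  # real models use thousands and tens of thousands

# Random weights stand in for the learned parameters of one MLP block.
W_up = rng.normal(scale=0.1, size=(d_model, d_hidden))
W_down = rng.normal(scale=0.1, size=(d_hidden, d_model))

def mlp_block(x):
    """One MLP block: project up, apply a nonlinearity, project back down,
    then add the result to the incoming vector."""
    hidden = np.maximum(0.0, x @ W_up)  # ReLU here; many models use GELU
    return x + hidden @ W_down          # same dimension as the input, updated meaning

token_vector = rng.normal(size=d_model)
print(mlp_block(token_vector).shape)  # (8,) -- shape preserved, content enriched
```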
LLMs use superposition and distributed representations to pack large amounts of information efficiently, representing many more features than they have neurons and challenging the traditional one-fact-per-neuron picture of knowledge storage.
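A small illustration of superposition under simplified assumptions (random feature directions, a sparse set of active features): many more features than dimensions can share one vector, because nearly-orthogonal directions in high-dimensional space barely interfere, so each active feature can still be read back out.

```python
import numpy as np

rng = np.random.default_rng(0)
n_dims, n_features = 200, 1000  # five times more features than dimensions

# Give each feature a random direction; in high dimensions, random
# directions are nearly orthogonal, so many features can share the space.
feature_dirs = rng.normal(size=(n_features, n_dims))
feature_dirs /= np.linalg.norm(feature_dirs, axis=1, keepdims=True)

# Encode a sparse set of active features as a single vector (superposition).
active = [3, 141, 420]
vector = feature_dirs[active].sum(axis=0)

# Reading back: active features score near 1, inactive ones near 0.
scores = feature_dirs @ vector
recovered = sorted(np.argsort(scores)[-3:].tolist())
print(recovered)  # with high probability, recovers [3, 141, 420]
```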