Researchers have developed a computing architecture for autonomous learning that resembles traditional (von Neumann) computing but operates on high-dimensional vectors rather than numbers.
The architecture includes a high-capacity memory for vectors, analogous to random-access memory (RAM) for numbers, and is inspired by models of human and animal learning.
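A minimal sketch of what "operations on high-dimensional vectors" can mean in practice, assuming one common realization of hyperdimensional computing with dense binary hypervectors: binding via XOR, bundling via majority vote, and comparison via Hamming similarity. The function names and the key–value encoding below are illustrative assumptions, not the specific architecture described here.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality; high dimension gives quasi-orthogonality

def random_hv():
    # Random dense binary hypervector: the basic "symbol" of the architecture.
    return rng.integers(0, 2, size=D, dtype=np.uint8)

def bind(a, b):
    # Bind two hypervectors (bitwise XOR); the result is dissimilar to both
    # inputs, and binding is its own inverse: bind(bind(a, b), b) == a.
    return a ^ b

def bundle(*hvs):
    # Bundle by bitwise majority vote; the result stays similar to each input.
    return (np.sum(hvs, axis=0) > len(hvs) / 2).astype(np.uint8)

def similarity(a, b):
    # Normalized Hamming similarity in [0, 1]; ~0.5 means unrelated vectors.
    return 1.0 - np.count_nonzero(a ^ b) / D

# Encode a record of key-value pairs as a single hypervector.
color, red = random_hv(), random_hv()
shape, round_ = random_hv(), random_hv()
size, big = random_hv(), random_hv()
record = bundle(bind(color, red), bind(shape, round_), bind(size, big))

# Query: unbinding the record with the key 'color' yields a noisy vector
# that is much closer to 'red' than to any unrelated hypervector.
recovered = bind(record, color)
print(similarity(recovered, red))     # well above the 0.5 chance level
print(similarity(recovered, round_))  # near 0.5 (chance level)
```

The same pattern scales to many bound pairs per record: recovery degrades gracefully with noise, which is one reason such representations are attractive for low-energy hardware.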
This approach, which draws on ideas from psychology, biology, and traditional computing, offers insights into how brains compute and could be applied to enable learning in robots.
Realizing the vision of computing with minimal material and energy usage will require further development of the mathematical theory and large-scale experiments.