Human learning relies on multiple distinct, specialized cognitive mechanisms, whereas most neural networks rely on gradient descent over a single objective function.
This work investigates whether human learners' ability to learn faster and from fewer examples than data-driven deep learning stems from their use of multiple specialized mechanisms in combination.
A study of simulated inductive human learning in tutoring environments shows that decomposing learning into multiple specialized mechanisms significantly improves data efficiency, bringing it closer to that of human learners.
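The study's specific mechanisms and tasks are not detailed here, so the following is only a minimal Python sketch of the decomposition idea under illustrative assumptions: a simulated learner split into a "how" mechanism that induces the operator behind worked examples by search, and a "when" mechanism that generalizes the skill's applicability condition. The mechanism names, the toy addition task, and the functions `induce_how`, `induce_when`, and `agent_step` are hypothetical, not the study's implementation.

```python
"""Illustrative sketch (not the study's code): a learner decomposed into
separate 'how' and 'when' mechanisms, each trained from a handful of
worked examples of a toy tutoring step."""

from dataclasses import dataclass

# --- Mechanism 1: "how"-learning --------------------------------------
# Induce the operator that explains the demonstrated answers by searching
# a small space of candidate functions (inductive search, not gradient
# descent over a single objective).
CANDIDATE_OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
}

def induce_how(demos):
    """Return the first operator consistent with every demonstration."""
    for name, fn in CANDIDATE_OPS.items():
        if all(fn(d.a, d.b) == d.answer for d in demos):
            return name
    return None

# --- Mechanism 2: "when"-learning -------------------------------------
# Learn the applicability condition by intersecting the features shared
# by all demonstrations (a simple specific-to-general generalizer).
def induce_when(demos):
    shared = set(demos[0].features)
    for d in demos[1:]:
        shared &= set(d.features)
    return shared

@dataclass
class Demo:
    a: int
    b: int
    answer: int
    features: tuple  # symbolic description of the interface state

# A few worked examples are enough for both mechanisms to converge.
demos = [
    Demo(2, 3, 5, ("field_a_filled", "field_b_filled")),
    Demo(4, 1, 5, ("field_a_filled", "field_b_filled")),
    Demo(7, 2, 9, ("field_a_filled", "field_b_filled")),
]

how = induce_how(demos)    # -> "add"
when = induce_when(demos)  # -> {"field_a_filled", "field_b_filled"}

def agent_step(a, b, features):
    """Fire the induced skill only when its learned condition holds."""
    if how is not None and when <= set(features):
        return CANDIDATE_OPS[how](a, b)
    return None  # condition not met: skip the step

print(agent_step(6, 3, ("field_a_filled", "field_b_filled")))  # 9
print(agent_step(6, 3, ("field_a_filled",)))                   # None
```

The intended point of the decomposition is that each mechanism solves a narrower induction problem, so a handful of examples constrains it far more tightly than it would constrain a single end-to-end model trained by gradient descent.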
Efforts to improve the data efficiency of machine learning should therefore consider integrating multiple specialized learning mechanisms to bridge the efficiency gap between data-driven approaches and human learning.