A new study from MIT reveals that large language models (LLMs), such as those powering popular chatbots, can perform tasks accurately without forming any coherent understanding of the world.
MIT researchers developed new metrics for testing AI models, built around deterministic finite automata (DFAs): formal systems of states and transitions, such as the intersections and turns along a driving route.
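For readers unfamiliar with the formalism, a DFA is simply a table that maps each (state, input) pair to exactly one next state. The sketch below is a toy illustration of that idea, not code from the study; the states and alphabet are hypothetical.

```python
def run_dfa(transitions, start, accepting, inputs):
    """Follow a DFA's transition table for a sequence of input symbols."""
    state = start
    for symbol in inputs:
        # Deterministic: each (state, symbol) pair has exactly one successor.
        state = transitions[(state, symbol)]
    return state in accepting

# Toy DFA over the alphabet {"a", "b"} that accepts strings
# containing an even number of "a" symbols.
transitions = {
    ("even", "a"): "odd",
    ("even", "b"): "even",
    ("odd", "a"): "even",
    ("odd", "b"): "odd",
}

print(run_dfa(transitions, "even", {"even"}, "abba"))  # → True  (two "a"s)
print(run_dfa(transitions, "even", {"even"}, "ab"))    # → False (one "a")
```

The researchers' metrics ask, in effect, whether a model trained on sequences generated by such an automaton has actually recovered its state structure, rather than merely predicting plausible next symbols.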
The study found that LLMs provided accurate driving directions in New York City but broke down when detours were introduced, revealing that they had not formed an accurate internal map of the city.
The research emphasizes that LLMs rely on prediction rather than reasoning and understanding, and that their accuracy can degrade quickly in real-world scenarios where conditions change.