Top AI researchers such as Fei-Fei Li and Yann LeCun are moving beyond large language models (LLMs) and focusing on building world models, systems intended to mimic the mental models humans form of the world.
World models aim to predict events by building internal representations of the world, much as humans do, rather than relying on the statistical relationships in text that LLMs learn from their training data.
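As a rough illustration of the distinction, the toy sketch below (an illustrative assumption of this summary, not code from either lab) learns a transition function that predicts the next state of a tiny simulated physical world from its current state; in this sense a world model forecasts how the world evolves rather than which token comes next.

```python
import numpy as np

# Hypothetical toy example: a "world model" here is a learned transition
# function for a 1-D falling ball (position, velocity), contrasted with
# an LLM's next-token prediction over text statistics.
rng = np.random.default_rng(0)

def simulate_step(state, dt=0.1, gravity=-9.8):
    """Ground-truth dynamics: a ball falling under gravity."""
    pos, vel = state
    return np.array([pos + vel * dt, vel + gravity * dt])

# Collect (state, next_state) pairs from the simulated environment.
states = rng.uniform(-1.0, 1.0, size=(1000, 2))
next_states = np.array([simulate_step(s) for s in states])

# Fit an affine transition model: next_state ≈ [state, 1] @ W.
X = np.hstack([states, np.ones((len(states), 1))])  # add bias feature
W, *_ = np.linalg.lstsq(X, next_states, rcond=None)

# The learned model can now "imagine" the future without touching the
# real environment, which is the core promise of a world model.
test_state = np.array([0.5, 0.0])
print("predicted:", np.append(test_state, 1.0) @ W)
print("actual:   ", simulate_step(test_state))
```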
Fei-Fei Li is leading the development of world models at World Labs, aiming to endow AI models with spatial intelligence to operate in 3D worlds.
Yann LeCun, Meta's chief AI scientist, is also working on world models, using video data and simulations so that models can understand the world and predict how its abstract representations change.
The goal of building world models is to give AI something akin to human mental models, so it can learn new tasks quickly and understand the physical world.
Fei-Fei Li's World Labs received an initial backing of $230 million to advance spatial intelligence in AI models for various applications including creative fields, robotics, and military uses.
Both Li and LeCun stress that building world models is difficult because spatial data is far scarcer than the text data available for training language models.
To overcome this data challenge, researchers such as Li emphasize the need for sophisticated data engineering and data synthesis to produce the complex training data that world models require.
Yann LeCun's team at Meta is training models on video data at an abstract level of representation, which simplifies prediction by tracking how the state of the world changes rather than every pixel.
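A minimal sketch of that idea, predicting in an abstract embedding space rather than in pixel space, is shown below; the architecture, layer sizes, and training loop are illustrative assumptions in the spirit of joint-embedding predictive approaches, not Meta's actual code.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a video frame (flattened pixels) to a compact embedding."""
    def __init__(self, pixels=64 * 64, dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(pixels, 256), nn.ReLU(), nn.Linear(256, dim))

    def forward(self, x):
        return self.net(x)

class Predictor(nn.Module):
    """Predicts the embedding of the next frame from the current one."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, z):
        return self.net(z)

encoder, predictor = Encoder(), Predictor()
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(predictor.parameters()), lr=1e-3
)

# Random tensors stand in for a batch of consecutive video frame pairs.
frame_t = torch.rand(32, 64 * 64)
frame_t1 = torch.rand(32, 64 * 64)

# The loss compares embeddings, not raw pixels, so the model only has to
# track the abstract state of the scene, not every pixel detail.
optimizer.zero_grad()
z_t, z_t1 = encoder(frame_t), encoder(frame_t1)
loss = nn.functional.mse_loss(predictor(z_t), z_t1.detach())
loss.backward()
optimizer.step()
print("embedding-space prediction loss:", loss.item())
```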
The push towards world models marks a shift in AI research toward systems that can understand the physical world, exhibit common sense, reason, plan, and learn new tasks rapidly.