Large Language Models (LLMs) could act as autonomous researchers and accelerate scientific discovery.
This research examines how well LLMs can identify the hidden structure of black-box systems when data is collected passively versus actively.
LLMs struggle to extract information from passive observations alone, but improve when they can actively intervene in the black-box system, which lets them refine their hypotheses and test edge cases.
Engaging LLMs directly in the intervention process helps overcome common failure modes and yields practical guidance for reverse-engineering black-box systems more effectively, as illustrated by the sketch below.
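To make the passive-versus-active contrast concrete, here is a minimal Python sketch (not from the original work) that probes a toy black-box rule in both ways. The names `black_box`, `passive_observations`, and `active_interventions`, the hidden threshold, and the binary-search heuristic that stands in for an LLM choosing each intervention are all illustrative assumptions.

```python
import random

# Hypothetical black-box system: a hidden threshold rule standing in for
# whatever unknown structure the researcher is trying to recover.
def black_box(x: float) -> int:
    return int(x > 0.37)  # the "hidden structure" to be discovered

def passive_observations(n: int) -> list[tuple[float, int]]:
    """Passive setting: inputs arrive at random; the researcher only watches."""
    xs = [random.random() for _ in range(n)]
    return [(x, black_box(x)) for x in xs]

def active_interventions(n: int) -> list[tuple[float, int]]:
    """Active setting: each query is chosen to shrink uncertainty.
    A binary search on the suspected threshold stands in for an LLM
    proposing the next most informative intervention."""
    lo, hi = 0.0, 1.0
    history = []
    for _ in range(n):
        x = (lo + hi) / 2          # pick the most informative probe
        y = black_box(x)           # intervene and observe the outcome
        history.append((x, y))
        if y:                      # refine the current hypothesis
            hi = x
        else:
            lo = x
    return history

if __name__ == "__main__":
    random.seed(0)
    passive = passive_observations(10)
    active = active_interventions(10)
    # With the same budget of ten observations, the active strategy pins
    # down the hidden threshold far more precisely than random sampling.
    print("last passive observation:", passive[-1])
    print("active estimate of threshold:", active[-1][0])
```

With ten queries, the active loop localizes the threshold to within about 1/2^10, whereas the passive sample only brackets it loosely; this is the kind of gap the findings above attribute to intervention.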