- Monitoring Large Language Model (LLM) outputs is important to prevent misuse and misalignment.
- LLMs could use steganography to hide information in seemingly innocent text (a minimal illustration of the idea follows this list).
- Research found that current LLMs struggle to hide short messages, but can do so under specific conditions, such as access to an unmonitored scratchpad.
- Despite limited steganographic capabilities, there are early indications that LLMs can perform basic encoded reasoning.
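To make the steganography concern concrete, here is a minimal sketch of one very simple scheme, an acrostic encoding in which a hidden message is carried by the first letters of successive sentences. The scheme, function names, and sentence bank are illustrative assumptions, not taken from the research summarized above.

```python
# Toy acrostic steganography: the hidden message is spelled out by the first
# letter of each sentence in an otherwise innocent-looking cover text.
# All names and the sentence bank are illustrative only.

def encode(hidden: str, sentences_by_letter: dict[str, list[str]]) -> str:
    """Build a cover text whose sentence initials spell out `hidden`."""
    cover = []
    for ch in hidden.lower():
        options = sentences_by_letter.get(ch)
        if not options:
            raise ValueError(f"no cover sentence starting with {ch!r}")
        cover.append(options[0])
    return " ".join(cover)

def decode(cover_text: str) -> str:
    """Recover the hidden message from the sentence initials."""
    sentences = [s.strip() for s in cover_text.split(".") if s.strip()]
    return "".join(s[0].lower() for s in sentences)

if __name__ == "__main__":
    bank = {
        "h": ["Having tea in the morning is pleasant."],
        "i": ["It often rains here in spring."],
    }
    cover = encode("hi", bank)
    print(cover)          # innocent-looking cover text
    print(decode(cover))  # -> "hi"
```

A monitor that only checks whether the cover text looks benign would miss the hidden payload here, which is why the monitoring concern above extends beyond surface-level content filtering.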