CoT-VLA is a method that incorporates explicit visual chain-of-thought reasoning into vision-language-action (VLA) models. It autoregressively predicts future image frames as visual goals, then generates a short action sequence to achieve those goals. CoT-VLA is a state-of-the-art 7B VLA that can both understand and generate visual and action tokens, outperforming the prior state-of-the-art VLA model by 17% on real-world manipulation tasks and 6% on simulation benchmarks.
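The two-stage inference described above can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: the token vocabularies, sequence lengths, and the `predict_next_token` stub are all invented for clarity; the real model is a 7B autoregressive transformer over interleaved visual and action tokens.

```python
# Hypothetical sketch of CoT-VLA-style two-stage inference.
# All names and sizes here are illustrative, not the paper's API.

SUBGOAL_LEN = 4   # visual tokens in the predicted goal frame (toy size)
ACTION_LEN = 3    # length of the short action chunk (toy size)

def predict_next_token(context):
    # Stand-in for the model's next-token head. A real VLA would run
    # a transformer forward pass; this toy rule just keeps the loop
    # structure runnable.
    return len(context) % 10

def cot_vla_step(obs_tokens, text_tokens):
    """One control step: first roll out visual goal tokens (the visual
    chain of thought), then action tokens conditioned on that goal."""
    context = list(text_tokens) + list(obs_tokens)

    # Stage 1: autoregressively predict a future frame as a visual goal.
    subgoal = []
    for _ in range(SUBGOAL_LEN):
        tok = predict_next_token(context)
        subgoal.append(tok)
        context.append(tok)

    # Stage 2: generate a short action sequence to achieve the goal.
    actions = []
    for _ in range(ACTION_LEN):
        tok = predict_next_token(context)
        actions.append(tok)
        context.append(tok)

    return subgoal, actions

subgoal, actions = cot_vla_step(obs_tokens=[1, 2, 3], text_tokens=[7, 8])
print(len(subgoal), len(actions))  # → 4 3
```

The key design point is that the visual goal tokens are generated *before* any actions and stay in the context, so the action head is explicitly conditioned on an imagined future observation rather than only on the current one.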