Large Language Models (LLMs) lack robust temporal intelligence, struggling to integrate reasoning about the past with prediction and generation about the future.
Existing methods target isolated temporal skills and generalize poorly to events beyond the model's knowledge cutoff or to tasks requiring creative foresight.
The Time-R1 framework is introduced to empower a moderate-sized LLM with comprehensive temporal abilities including understanding, prediction, and creative generation.
Time-R1 outperforms much larger models on future event prediction and creative scenario generation benchmarks, demonstrating that carefully engineered reinforcement learning fine-tuning can give smaller models superior temporal performance.