Large Language Models (LLMs) are vulnerable to multi-turn manipulation attacks. To address this challenge, a novel defense framework called Temporal Context Awareness (TCA) is introduced. TCA continuously analyzes semantic drift, cross-turn intention consistency, and evolving conversational patterns. Preliminary evaluations demonstrate TCA's potential to identify subtle manipulation patterns and enhance the security of conversational AI systems.
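To make the per-turn analysis concrete, the following is a minimal illustrative sketch, not the framework's actual implementation, of tracking semantic drift across conversation turns via embedding similarity. The `embed_fn` callable, the `DriftMonitor` class, and the threshold value are assumptions introduced here for illustration only.

```python
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


class DriftMonitor:
    """Tracks semantic drift across conversation turns.

    `embed_fn` is any function mapping text to a fixed-size vector
    (e.g., a sentence-embedding model); it is a placeholder here.
    """

    def __init__(self, embed_fn, drift_threshold: float = 0.35):
        self.embed_fn = embed_fn
        self.drift_threshold = drift_threshold  # illustrative value, not from the paper
        self.turn_embeddings: list[np.ndarray] = []

    def add_turn(self, user_message: str) -> dict:
        """Embed the new turn and compare it with the conversation so far."""
        emb = np.asarray(self.embed_fn(user_message), dtype=float)
        result = {"drift_from_previous": 0.0, "drift_from_start": 0.0, "flagged": False}
        if self.turn_embeddings:
            sim_prev = cosine_similarity(emb, self.turn_embeddings[-1])
            sim_start = cosine_similarity(emb, self.turn_embeddings[0])
            result["drift_from_previous"] = 1.0 - sim_prev
            result["drift_from_start"] = 1.0 - sim_start
            # Flag turns whose content has moved far from the conversation's
            # starting point -- a coarse proxy for gradual multi-turn steering.
            result["flagged"] = result["drift_from_start"] > self.drift_threshold
        self.turn_embeddings.append(emb)
        return result
```

In practice, a drift score like this would be only one signal, combined with checks on cross-turn intention consistency and other conversational patterns before any turn is flagged.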