Graph prompt tuning has emerged as a promising paradigm for transferring general graph knowledge from pre-trained models to downstream tasks. However, a backdoor attack known as Krait has been introduced, which disguises poisoned graph prompts as benign ones to evade detection. Krait efficiently embeds triggers into a small fraction of training nodes, achieving high attack success rates without sacrificing clean accuracy. The study further analyzes how Krait evades both classical and state-of-the-art defenses, and provides insights for detecting and mitigating such attacks.
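To make the general poisoning mechanism concrete, the sketch below illustrates embedding a trigger into a small fraction of training nodes and relabeling them to an attacker-chosen class. This is a minimal, hypothetical illustration of node-level backdoor poisoning; the function name `poison_nodes`, the fixed feature-pattern trigger, and all parameters are assumptions for exposition, not Krait's actual method (which operates through graph prompts).

```python
import numpy as np

def poison_nodes(features, labels, target_class,
                 poison_rate=0.01, trigger_value=1.0,
                 trigger_dims=5, seed=0):
    """Stamp a fixed feature-pattern trigger onto a small fraction of
    training nodes and flip their labels to the target class.

    Illustrative sketch only -- names and parameters are assumptions,
    not part of any real attack implementation.
    """
    rng = np.random.default_rng(seed)
    n = features.shape[0]
    k = max(1, int(n * poison_rate))          # small fraction of nodes
    idx = rng.choice(n, size=k, replace=False)

    poisoned_x = features.copy()
    poisoned_y = labels.copy()
    poisoned_x[idx, :trigger_dims] = trigger_value  # embed trigger pattern
    poisoned_y[idx] = target_class                  # flip to target label
    return poisoned_x, poisoned_y, idx
```

A model trained on the poisoned set learns to associate the trigger pattern with the target class, while accuracy on clean inputs is largely unaffected because only a small fraction of nodes is modified.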