Large Language Models (LLMs) depend heavily on their input prompts for performance. Existing prompt optimization work has focused on task-specific user prompts, largely neglecting system prompts, which apply across tasks. This study introduces bilevel system prompt optimization to produce robust and transferable system prompts. A meta-learning framework optimizes the system prompt over a variety of user prompts and tasks, improving generalization to unseen tasks.
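The bilevel idea can be sketched as an outer loop that selects a system prompt by its average performance across user prompts drawn from many tasks (the inner loop). This is a minimal illustrative sketch, not the paper's actual method: `evaluate` is a hypothetical stand-in (keyword overlap) for an LLM-based task score, and `meta_optimize` does exhaustive search over a small candidate pool rather than gradient-based or search-based optimization.

```python
def evaluate(system_prompt: str, user_prompt: str) -> int:
    # Hypothetical scoring stand-in for an LLM evaluation:
    # reward word overlap between the system and user prompts.
    return len(set(system_prompt.split()) & set(user_prompt.split()))

def meta_optimize(candidates: list[str], tasks: list[list[str]]) -> str:
    """Outer loop: choose the system prompt whose average score,
    taken over user prompts from all tasks (inner loop), is highest."""
    n_prompts = sum(len(task) for task in tasks)
    best, best_score = None, float("-inf")
    for sp in candidates:
        # Inner loop: aggregate performance across tasks' user prompts.
        score = sum(evaluate(sp, up) for task in tasks for up in task) / n_prompts
        if score > best_score:
            best, best_score = sp, score
    return best
```

A system prompt chosen this way is tied to no single task, which is what makes it plausible to transfer to unseen tasks whose user prompts resemble the training distribution.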