Teaching an AI system to admit "I don't know" intelligently helps prevent broken trust with users. Gaps in the knowledge base, weak retrieval, and overconfident models are the main reasons an AI answers when it has no grounded content to draw on. Prompt engineering strategies such as an explicit "I don't know" safeguard, confidence thresholds, and escalation to a human can improve response quality. Implementing these strategies can lead to fewer unnecessary escalations in customer support and safer interactions in healthcare bots.
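To make the confidence-threshold and escalation ideas concrete, here is a minimal Python sketch of how such a gate might look. It is an illustration under stated assumptions, not a reference implementation: the prompt wording, the 0.4/0.75 thresholds, and the `call_model` and `escalate_to_human` helpers are all hypothetical placeholders you would replace with your own LLM call and support workflow.

```python
# Sketch: gate a model's answer behind a confidence threshold, falling back to
# "I don't know" or human escalation. Thresholds and helpers are illustrative
# assumptions, not part of any specific library or API.

SYSTEM_PROMPT = (
    "Answer only from the provided context. "
    "If the context does not contain the answer, reply exactly: I don't know."
)

def route_response(question: str, context: str,
                   low: float = 0.4, high: float = 0.75) -> str:
    """Decide whether to answer, hedge, or escalate based on model confidence."""
    answer, confidence = call_model(SYSTEM_PROMPT, question, context)

    if confidence >= high:
        return answer  # confident enough to answer directly
    if confidence >= low:
        return f"{answer}\n\n(Note: I'm not fully certain; please verify.)"
    return escalate_to_human(question)  # too uncertain: hand off to a person


def call_model(system_prompt: str, question: str, context: str):
    """Placeholder for your LLM call; returns (answer_text, confidence_score).

    Confidence could come from token log-probabilities, a self-rating prompt,
    or retrieval similarity scores, depending on your stack.
    """
    raise NotImplementedError


def escalate_to_human(question: str) -> str:
    """Placeholder: open a support ticket or route the question to an agent."""
    return "I don't know. I've forwarded your question to a human agent."
```

The design choice worth noting is the two-tier threshold: mid-confidence answers are still returned but flagged for verification, so only genuinely uncertain cases reach a human, which is how the approach reduces escalations overall rather than increasing them.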