Crossmodal knowledge distillation (KD) enhances a unimodal student using a multimodal teacher model. A multi-teacher crossmodal KD framework is proposed, integrating CLIP image embeddings with WordNet-relaxed text embeddings. This approach reduces label leakage and introduces more diverse textual cues for improved knowledge transfer. The method achieves state-of-the-art or second-best results on six public datasets, demonstrating its effectiveness in crossmodal KD.
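As a rough illustration of how such a multi-teacher objective could be wired up, the following is a minimal PyTorch sketch. It assumes frozen OpenAI CLIP encoders as the two teachers, a hypernym-based reading of "WordNet-relaxed" label prompts, cosine-alignment distillation terms, and scalar loss weights `alpha`/`beta`; all of these choices are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a multi-teacher crossmodal KD loss, assuming frozen CLIP
# teachers and a hypernym-based "WordNet relaxation" of the label text.
# The projection heads, relaxation rule, and loss weights are guesses for
# illustration only, not the paper's exact recipe.
import torch
import torch.nn as nn
import torch.nn.functional as F
import clip                            # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git
from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet") once

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model.eval()                      # both teachers stay frozen

def relaxed_label_text(class_name: str) -> str:
    """Replace the exact label word with a WordNet hypernym, one plausible
    way to 'relax' the label text and reduce direct label leakage."""
    synsets = wn.synsets(class_name)
    if synsets and synsets[0].hypernyms():
        hypernym = synsets[0].hypernyms()[0].lemma_names()[0].replace("_", " ")
        return f"a photo of a {hypernym}"
    return f"a photo of a {class_name}"

class Student(nn.Module):
    """Unimodal image student with heads projecting into each teacher's space."""
    def __init__(self, feat_dim=512, clip_dim=512, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(  # stand-in for e.g. a ResNet backbone
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim))
        self.img_head = nn.Linear(feat_dim, clip_dim)  # -> CLIP image space
        self.txt_head = nn.Linear(feat_dim, clip_dim)  # -> CLIP text space
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        f = self.backbone(x)
        return self.classifier(f), self.img_head(f), self.txt_head(f)

def multi_teacher_loss(images, labels, class_names, student, alpha=1.0, beta=1.0):
    # images are assumed already CLIP-preprocessed (224x224, normalized);
    # the student consumes the same tensors here for simplicity.
    logits, s_img, s_txt = student(images)
    with torch.no_grad():
        t_img = clip_model.encode_image(images).float()           # teacher 1
        prompts = [relaxed_label_text(class_names[y]) for y in labels.tolist()]
        t_txt = clip_model.encode_text(
            clip.tokenize(prompts).to(images.device)).float()     # teacher 2
    # cosine-alignment distillation terms plus the usual supervised loss
    kd_img = 1 - F.cosine_similarity(s_img, t_img).mean()
    kd_txt = 1 - F.cosine_similarity(s_txt, t_txt).mean()
    return F.cross_entropy(logits, labels) + alpha * kd_img + beta * kd_txt
```

In the actual framework the relaxation strategy, projection heads, and loss weighting would presumably be tuned per dataset; the sketch only shows where the two teachers enter the objective and how the relaxed text keeps the exact label word out of the distillation signal.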