Source: Arxiv

Crossmodal Knowledge Distillation with WordNet-Relaxed Text Embeddings for Robust Image Classification

  • Crossmodal knowledge distillation (KD) enhances a unimodal student using a multimodal teacher model.
  • A multi-teacher crossmodal KD framework is proposed, integrating CLIP image embeddings with WordNet-relaxed text embeddings (see the sketch after this list).
  • This approach reduces label leakage and introduces more diverse textual cues for improved knowledge transfer.
  • The method achieves state-of-the-art or second-best results on six public datasets, demonstrating its effectiveness in crossmodal KD.
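
The paper is the authority on its exact training objective; the sketch below is only a hedged illustration of the general recipe the bullets describe. It assumes a frozen open_clip CLIP model serving as both the image teacher and the text teacher, NLTK's WordNet for label relaxation (swapping a class name for its first hypernym so the text teacher does not hand the student the label), and equal loss weights. The `relax_label` helper, the toy linear student, and the 0.07 temperature are illustrative choices, not the paper's.

```python
# Illustrative multi-teacher crossmodal KD sketch -- NOT the paper's code.
# Requires: torch, open_clip_torch, nltk (run nltk.download("wordnet") once).
import torch
import torch.nn as nn
import torch.nn.functional as F
import open_clip
from nltk.corpus import wordnet as wn


def relax_label(class_name: str) -> str:
    """Replace a class name with a WordNet hypernym when one exists.

    Distilling from text embeddings of the exact class names would leak
    the labels to the student; a hypernym ("dog" -> "canine") keeps
    semantic signal while weakening that shortcut. (Assumed relaxation
    rule for illustration.)
    """
    synsets = wn.synsets(class_name.replace(" ", "_"))
    if synsets and synsets[0].hypernyms():
        return synsets[0].hypernyms()[0].lemma_names()[0].replace("_", " ")
    return class_name  # fall back to the original name


# Frozen CLIP acts as both teachers.
clip_model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="openai"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")
clip_model.eval()

class_names = ["golden retriever", "tabby cat", "sports car"]
relaxed = [relax_label(c) for c in class_names]
with torch.no_grad():
    text_teacher = clip_model.encode_text(
        tokenizer([f"a photo of a {c}" for c in relaxed])
    )
    text_teacher = F.normalize(text_teacher, dim=-1)  # (num_classes, 512)

# Unimodal student: any image backbone projecting into CLIP's embedding
# space; a single linear layer here purely for brevity.
student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 512))
classifier = nn.Linear(512, len(class_names))
opt = torch.optim.AdamW(
    [*student.parameters(), *classifier.parameters()], lr=1e-4
)


def kd_step(images: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """One step: cross-entropy on labels + distillation from both teachers."""
    with torch.no_grad():
        img_teacher = F.normalize(clip_model.encode_image(images), dim=-1)
    feats = F.normalize(student(images), dim=-1)
    # Teacher 1: match CLIP image embeddings (cosine feature distillation).
    loss_img = 1.0 - (feats * img_teacher).sum(-1).mean()
    # Teacher 2: match the teacher's similarities to relaxed-text embeddings.
    student_logits = feats @ text_teacher.t() / 0.07
    teacher_logits = img_teacher @ text_teacher.t() / 0.07
    loss_txt = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    )
    loss_ce = F.cross_entropy(classifier(student(images)), labels)
    loss = loss_ce + loss_img + loss_txt  # equal weights are an assumption
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss
```

With real data you would feed `preprocess`-ed image batches through `kd_step`; the point to notice is that the text teacher sees "canine", not "golden retriever", so the student cannot simply read the labels off the text embeddings.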
