Pre-Training Graph Contrastive Masked Autoencoders are Strong Distillers for EEG

  • Researchers have introduced a Unified Pre-trained Graph Contrastive Masked Autoencoder Distiller, EEG-DisGCMAE, which improves low-density EEG classification by leveraging abundant unlabeled high-density EEG data alongside limited labeled low-density EEG data.
  • The approach integrates graph contrastive pre-training with graph masked autoencoder pre-training and introduces a graph topology distillation loss that transfers knowledge from teacher models trained on high-density data to lightweight student models trained on low-density data (a hedged sketch of these ideas follows the list).
  • The method effectively addresses missing electrodes through contrastive distillation, and it has been validated across four classification tasks using clinical EEG datasets.
  • The research paper and source code can be accessed at arXiv:2411.19230v2 and https://github.com/weixinxu666/EEG_DisGCMAE, respectively.
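
To make the pre-training and distillation points above concrete, here is a minimal sketch, not the authors' implementation: it assumes PyTorch, assumes the low-density montage is a subset of the high-density electrodes (the `shared_idx` correspondence), and uses a generic InfoNCE contrastive term, a masked-reconstruction term, and a similarity-matrix matching loss as illustrative stand-ins for the paper's actual objectives. The real code is in the repository linked above.

```python
# Hedged sketch (not the authors' implementation) of the two ingredients named
# in the summary: a combined contrastive + masked-autoencoder pre-training
# objective, and a graph topology distillation loss between a high-density
# teacher and a low-density student.
import torch
import torch.nn.functional as F


def topology_distillation_loss(teacher_feats, student_feats, shared_idx):
    """Align pairwise electrode-similarity structure between teacher and student.

    teacher_feats: (n_high, d_t) node embeddings from the high-density teacher.
    student_feats: (n_low, d_s) node embeddings from the low-density student.
    shared_idx:    indices of teacher electrodes that also exist in the
                   low-density montage (assumed correspondence, same order
                   as student_feats).
    """
    # Restrict the teacher to the electrodes the student also sees.
    t = F.normalize(teacher_feats[shared_idx], dim=-1)
    s = F.normalize(student_feats, dim=-1)

    # Cosine-similarity "topology" matrices over the shared electrodes.
    t_topo = t @ t.t()
    s_topo = s @ s.t()

    # Match the student's relational structure to the teacher's.
    return F.mse_loss(s_topo, t_topo)


def pretraining_objective(z1, z2, recon, target, mask,
                          temperature=0.2, lambda_mae=1.0):
    """One plausible way to unify the two pre-training styles: an InfoNCE
    contrastive term over two augmented graph views plus a masked-node
    reconstruction term."""
    # Contrastive term: z1, z2 are (n_nodes, d) embeddings of two views.
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature
    labels = torch.arange(z1.size(0), device=z1.device)
    contrastive = F.cross_entropy(logits, labels)

    # Masked-autoencoder term: reconstruct features only at masked nodes.
    mae = F.mse_loss(recon[mask], target[mask])
    return contrastive + lambda_mae * mae
```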
