Source: Arxiv

Multi-task parallelism for robust pre-training of graph foundation models on multi-source, multi-fidelity atomistic modeling data

  • Graph foundation models built on graph neural networks are increasingly used for atomistic modeling, where pre-training must handle multi-source, multi-fidelity data.
  • Recent studies employ multi-task learning: shared layers first process atomistic structures regardless of their source, then route them to separate decoding heads for dataset-specific predictions (see the first sketch after this list).
  • A new multi-task parallelism method is proposed that distributes each head across computing resources with GPU acceleration, implemented in the open-source HydraGNN architecture (see the second sketch below).
  • The resulting model was pre-trained on over 24 million structures from five datasets and tested on the Perlmutter, Aurora, and Frontier supercomputers, demonstrating efficient scaling on heterogeneous computing architectures.
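To make the shared-trunk, multi-head design concrete, here is a minimal sketch in PyTorch Geometric. It is not HydraGNN's actual API; the class name, layer choices, and single-output heads are illustrative assumptions.

```python
# Minimal sketch (not HydraGNN's API): shared message-passing layers
# with one decoding head per data source/fidelity.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, global_mean_pool

class MultiTaskGNN(nn.Module):
    def __init__(self, num_node_features: int, hidden: int, num_heads: int):
        super().__init__()
        # Shared trunk: processes structures regardless of their source.
        self.conv1 = GCNConv(num_node_features, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        # One decoding head per dataset (e.g., scalar energy prediction).
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, 1) for _ in range(num_heads)]
        )

    def forward(self, x, edge_index, batch, source_id: int):
        h = torch.relu(self.conv1(x, edge_index))
        h = torch.relu(self.conv2(h, edge_index))
        h = global_mean_pool(h, batch)      # graph-level embedding
        return self.heads[source_id](h)     # route to source-specific head
```

Routing on `source_id` keeps the trunk shared across all data while each head specializes to one dataset's labels and fidelity.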

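One hypothetical way to realize the multi-task parallelism described above: assign each decoding head to its own rank, replicate the shared trunk, and synchronize only trunk gradients. This builds on the `MultiTaskGNN` sketch and assumes an initialized torch.distributed process group; the paper's actual scheme in HydraGNN may differ.

```python
# Hypothetical multi-task-parallel training step (not the paper's code):
# rank i trains head i on batches from dataset i; only shared-trunk
# gradients are averaged across ranks, head gradients stay local.
import torch
import torch.distributed as dist

def train_step(model, batch, optimizer):
    rank = dist.get_rank()                  # rank i owns head i
    out = model(batch.x, batch.edge_index, batch.batch, source_id=rank)
    loss = torch.nn.functional.mse_loss(out.squeeze(-1), batch.y)
    optimizer.zero_grad()
    loss.backward()
    world = dist.get_world_size()
    for name, p in model.named_parameters():
        # Sync only the shared trunk; "heads.*" parameters stay rank-local.
        if not name.startswith("heads") and p.grad is not None:
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
            p.grad /= world
    optimizer.step()
    return loss.item()
```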