Backward Propagation is here to stay Part 3 (AI 2024)

  • Researchers have used integer arithmetic for both forward and backward propagation when fine-tuning language models such as BERT, saving memory and computation.
  • The metric performance of fine-tuning a 16-bit integer BERT matches both the 16-bit and 32-bit floating-point baselines; with an 8-bit integer data type, integer fine-tuning loses an average of 3.1 points relative to the FP32 baseline (a minimal sketch of an integer forward and backward pass follows this list).
  • A new efficient sparse training method with completely sparse forward and backward passes has been proposed for deep neural networks; it trains faster and reduces memory usage.
  • The proposed sparse training algorithm accelerates training by up to an order of magnitude compared to previous methods (a companion sketch of a sparse forward and backward pass also follows).
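
To make the integer fine-tuning idea concrete, here is a minimal NumPy sketch, not the paper's code: a single linear layer whose forward matmul and backward gradient matmuls both run on int8 operands with int32 accumulation. The symmetric per-tensor quantization scheme and the function names (quantize, int_linear_forward, int_linear_backward) are illustrative assumptions.

# Hedged sketch: int8 forward and backward passes for one linear layer.
import numpy as np

def quantize(x, bits=8):
    """Symmetric per-tensor quantization: returns int8 values and a float scale."""
    qmax = 2 ** (bits - 1) - 1                      # 127 for int8
    scale = np.max(np.abs(x)) / qmax + 1e-12        # avoid division by zero
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def int_linear_forward(x, w):
    """y = x @ w.T computed with int8 operands and int32 accumulation."""
    qx, sx = quantize(x)
    qw, sw = quantize(w)
    acc = qx.astype(np.int32) @ qw.astype(np.int32).T      # integer matmul
    return acc.astype(np.float32) * (sx * sw), (qx, sx, qw, sw)

def int_linear_backward(grad_y, cache):
    """Gradients w.r.t. x and w, with the matmuls again done in integers."""
    qx, sx, qw, sw = cache
    qg, sg = quantize(grad_y)
    grad_x = (qg.astype(np.int32) @ qw.astype(np.int32)).astype(np.float32) * (sg * sw)
    grad_w = (qg.astype(np.int32).T @ qx.astype(np.int32)).astype(np.float32) * (sg * sx)
    return grad_x, grad_w

# Toy usage: batch of 4, input size 16, output size 8.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 16)).astype(np.float32)
w = rng.standard_normal((8, 16)).astype(np.float32)
y, cache = int_linear_forward(x, w)
grad_x, grad_w = int_linear_backward(np.ones_like(y), cache)
print(y.shape, grad_x.shape, grad_w.shape)   # (4, 8) (4, 16) (8, 16)

In a fine-tuning setup these integer kernels would stand in for the float matmuls inside each transformer layer; the toy shapes above are placeholders for BERT's actual hidden dimensions.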
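
For the sparse training items, the sketch below illustrates only the generic idea of completely sparse forward and backward passes using a fixed binary weight mask. It is an assumption-laden illustration, not the proposed algorithm, whose details are not given in this summary.

# Hedged sketch: a fixed mask keeps ~10% of the weights; masked weights neither
# contribute to the forward matmul nor receive gradient updates.
import numpy as np

rng = np.random.default_rng(0)
in_dim, out_dim, density = 16, 8, 0.1

w = rng.standard_normal((out_dim, in_dim)).astype(np.float32)
mask = (rng.random((out_dim, in_dim)) < density).astype(np.float32)  # ~10% nonzero

def forward(x, w, mask):
    return x @ (w * mask).T                 # sparse weights in the forward pass

def backward(x, grad_y, mask):
    grad_w = grad_y.T @ x                   # dense gradient of the matmul
    return grad_w * mask                    # keep only gradients of active weights

# One toy SGD step on a random regression batch.
x = rng.standard_normal((4, in_dim)).astype(np.float32)
target = rng.standard_normal((4, out_dim)).astype(np.float32)
y = forward(x, w, mask)
grad_y = 2 * (y - target) / y.size          # gradient of mean-squared error
w -= 0.1 * backward(x, grad_y, mask)
print(f"active weights: {int(mask.sum())} / {mask.size}")

A real sparse training method would typically also decide which weights stay active as training proceeds; the fixed mask here only shows how sparsity can cut work in both the forward and backward pass.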
