techminis

A naukri.com initiative
Source: Medium

Image Credit: Medium

32 is the Magic Number: Deep Learning’s Perfect Batch Size Revealed!

  • While large batch sizes improve computational parallelism, they can degrade model performance.
  • Yann LeCun suggests that a batch size of 32 is optimal for model training and performance.
  • A recent study found that batch sizes between 2 and 32 outperform batch sizes in the thousands.
  • Smaller batch sizes enable more frequent gradient updates, resulting in more stable and reliable training.
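The trade-off in the bullets above can be sketched with a minimal pure-Python mini-batch SGD loop. This is an illustrative example, not code from the study: the function name `minibatch_sgd` and the toy `y = 3x` dataset are assumptions made for demonstration. It shows that, for a fixed dataset, a smaller batch size produces more gradient updates per epoch, which is the mechanism behind the fourth bullet.

```python
import random

def minibatch_sgd(data, batch_size, lr=0.1, epochs=1):
    """Fit y = w*x with mini-batch SGD; return (w, num_updates).

    For a fixed dataset, a smaller batch_size means more
    gradient updates per epoch.
    """
    w = 0.0
    num_updates = 0
    for _ in range(epochs):
        random.shuffle(data)
        for i in range(0, len(data), batch_size):
            batch = data[i:i + batch_size]
            # gradient of mean squared error (w*x - y)^2 w.r.t. w
            grad = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
            w -= lr * grad
            num_updates += 1
    return w, num_updates

# hypothetical toy data: 64 samples of y = 3x, with x in [0, 1)
data = [(i / 64, 3.0 * i / 64) for i in range(64)]

_, updates_small = minibatch_sgd(list(data), batch_size=2)
_, updates_large = minibatch_sgd(list(data), batch_size=32)
# 64 samples: batch_size=2 gives 32 updates per epoch, batch_size=32 gives 2
```

With 64 samples, batch size 2 performs 32 weight updates per epoch while batch size 32 performs only 2; the more frequent (if noisier) updates are what the summarized study credits for more stable training at small batch sizes.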
