Gradient Boosting Machine Explained in Detail

  • Gradient boosting is a boosting-based ensemble technique that builds many weak learners, typically shallow decision trees.
  • In gradient boosting, each subsequent model fits pseudo-residuals rather than the raw errors of the previous model.
  • Each new tree is trained on the negative gradient of the loss function with respect to the current ensemble's predictions, which is exactly what defines those pseudo-residuals.
  • The gradient boosting algorithm was introduced by Jerome H. Friedman in 1999 and is widely used today.
  • There are numerous variations of the gradient boosting algorithm, including GBM, XGBoost, LightGBM, and CatBoost.
  • In gradient boosting, trees are constructed sequentially, meaning each tree is built based on information from previously built trees.
  • The gradient boosting algorithm proceeds in four steps: initializing the model, computing pseudo-residuals, fitting the next tree, and updating the ensemble (see the sketch after this list).
  • An important part of the gradient boosting method is regularization by shrinkage: each new tree's contribution is scaled by a learning rate in the update rule.
  • The article walks through the gradient boosting algorithm step by step for a regression problem, including a code implementation in Python.
  • The quality of the regressor should be evaluated on a held-out test set, not on the training data (see the evaluation example below).
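
A minimal sketch of that four-step loop for regression with squared-error loss, where the pseudo-residuals reduce to y - F(x). This is not the article's code; the function names and hyperparameter values (fit_gbm, n_trees, learning_rate) are illustrative assumptions:

    # Minimal gradient boosting for regression with squared-error loss.
    # Names like fit_gbm, n_trees, learning_rate are illustrative only.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def fit_gbm(X, y, n_trees=100, learning_rate=0.1, max_depth=3):
        # Step 1: initialize with a constant prediction; the mean of y
        # minimizes squared-error loss.
        f0 = float(np.mean(y))
        pred = np.full(len(y), f0)
        trees = []
        for _ in range(n_trees):
            # Step 2: pseudo-residuals = negative gradient of the loss
            # w.r.t. the current predictions (y - F(x) for squared error).
            residuals = y - pred
            # Step 3: fit the next tree to the pseudo-residuals.
            tree = DecisionTreeRegressor(max_depth=max_depth)
            tree.fit(X, residuals)
            # Step 4: update the ensemble, shrinking each tree's
            # contribution by the learning rate.
            pred += learning_rate * tree.predict(X)
            trees.append(tree)
        return f0, learning_rate, trees

    def predict_gbm(model, X):
        f0, learning_rate, trees = model
        pred = np.full(X.shape[0], f0)
        for tree in trees:
            pred += learning_rate * tree.predict(X)
        return pred

Swapping the residual computation for the gradient of a different differentiable loss generalizes the same loop, which is how GBM variants support losses beyond squared error.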

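And a hedged example of test-set evaluation using scikit-learn's GradientBoostingRegressor; the synthetic dataset and hyperparameters below are assumptions for illustration, not the article's setup:

    # Illustrative test-set evaluation; the dataset and hyperparameters
    # are assumptions, not taken from the article.
    from sklearn.datasets import make_regression
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import train_test_split

    X, y = make_regression(n_samples=1000, n_features=10, noise=10.0, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    model = GradientBoostingRegressor(n_estimators=200, learning_rate=0.1, max_depth=3)
    model.fit(X_train, y_train)

    # Score on the held-out test set, never on the training data.
    mse = mean_squared_error(y_test, model.predict(X_test))
    print(f"Test MSE: {mse:.2f}")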