techminis

A naukri.com initiative

Image Credit: Arxiv

LEAD: Iterative Data Selection for Efficient LLM Instruction Tuning

  • Instruction tuning has been crucial for enhancing large language models (LLMs), but current iterative data selection methods are computationally intensive.
  • A new framework called LEAD has been proposed to address this by accurately estimating sample utility within the standard training loop.
  • LEAD utilizes Instance-Level Dynamic Uncertainty (IDU) to combine various factors for utility estimation and employs a two-stage selection strategy for efficiency.
  • Experiments demonstrate that LEAD outperforms existing methods, improving model performance by 6.1%-10.8% while using only 2.5% of the training data and reducing training time significantly.
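The summary names two mechanisms but gives no formulas: an Instance-Level Dynamic Uncertainty (IDU) score per sample, and a two-stage selection strategy. The sketch below is a minimal illustration of that general pattern, not the paper's method: it assumes IDU can be approximated by an exponential moving average of per-sample training loss, and that the two stages are "pick high-utility groups, then pick top samples within them". All function names, the EMA form, and the clustering step are hypothetical.

```python
import numpy as np

def update_idu(prev_idu, losses, alpha=0.6):
    """Hypothetical IDU update: an exponential moving average of
    per-sample training loss, used as a stand-in utility signal.
    The paper's actual IDU combines several factors not shown here."""
    return alpha * losses + (1 - alpha) * prev_idu

def two_stage_select(idu, cluster_ids, n_clusters_keep, frac_keep):
    """Sketch of a two-stage selection:
    Stage 1 -- keep the clusters with the highest mean IDU.
    Stage 2 -- within those clusters, keep the top fraction of samples."""
    clusters = np.unique(cluster_ids)
    means = np.array([idu[cluster_ids == c].mean() for c in clusters])
    kept_clusters = clusters[np.argsort(means)[::-1][:n_clusters_keep]]
    # Candidate samples are those belonging to a kept cluster.
    candidates = np.flatnonzero(np.isin(cluster_ids, kept_clusters))
    k = max(1, int(frac_keep * candidates.size))
    top = candidates[np.argsort(idu[candidates])[::-1][:k]]
    return np.sort(top)
```

In a training loop, `update_idu` would run on losses the model already computes each step (matching the summary's claim of utility estimation "within the standard training loop"), and `two_stage_select` would periodically shrink the active training set, e.g. down to a small fraction of the data.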
