techminis

A naukri.com initiative

Image Credit: Arxiv

LIFT the Veil for the Truth: Principal Weights Emerge after Rank Reduction for Reasoning-Focused Supervised Fine-Tuning

  • Supervised fine-tuning of Large Language Models (LLMs) on high-quality datasets can improve their reasoning capabilities.
  • Full fine-tuning (Full FT) is powerful but computationally expensive and prone to overfitting, especially when data is limited.
  • Sparse fine-tuning, which updates only a small subset of important model parameters, strikes a balance between efficiency and effectiveness.
  • A new method, Low-rank Informed Sparse Fine-Tuning (LIFT), uses rank reduction to identify "Principal Weights" that are crucial for reasoning; updating only these weights outperforms Full FT on reasoning tasks.
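The core idea can be sketched in a few lines: low-rank-approximate a weight matrix, then treat the entries with the largest magnitude in that approximation as the weights worth updating. The sketch below is a minimal illustration of that selection step, not the authors' implementation; the function name, rank, and sparsity level are all illustrative assumptions.

```python
import numpy as np

def lift_mask(W, rank=2, sparsity=0.05):
    """Hypothetical helper sketching LIFT-style selection: build a rank-r
    approximation of W via SVD, then keep the entries whose magnitude in
    the approximation is largest ("Principal Weights") as an update mask."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    W_lr = (U[:, :rank] * S[:rank]) @ Vt[:rank]          # rank-r approximation
    k = max(1, int(sparsity * W.size))                    # number of weights to update
    thresh = np.partition(np.abs(W_lr).ravel(), -k)[-k]   # k-th largest magnitude
    return np.abs(W_lr) >= thresh                         # boolean update mask

# During sparse fine-tuning, only the masked weights would receive updates
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
mask = lift_mask(W, rank=4, sparsity=0.05)
grad = rng.standard_normal(W.shape)
W_new = W - 0.01 * grad * mask    # ~5% of entries change; the rest stay fixed
```

This keeps the memory and compute profile of sparse fine-tuning while using the low-rank structure, rather than raw weight magnitude, to decide which parameters matter.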
