Source: Analyticsindiamag

New Method Customises LLMs in Seconds, Beats Tuning: Research

  • Researchers from multiple universities have introduced Drag-and-Drop (DnD) LLMs, a method for customising large language models in seconds.
  • DnD generates task-specific LoRA adapters from prompts, offering faster and more accurate results than traditional methods.
  • The approach combines a frozen text encoder with a hyper-convolutional decoder that generates adapter weights efficiently (a rough sketch of this idea follows the list).
  • DnD collapses the conventional 'data→gradients→weights' loop into a single forward step, challenging the necessity of gradient descent for model specialization.
  • Compared with conventional LoRA fine-tuning via gradient descent, DnD produces task-specific parameters up to 12,000 times faster and achieves performance gains of up to 30%.
  • DnD significantly enhances accuracy on various datasets including ARC-e, BoolQ, HumanEval, GSM8K, and Math-Vision.
  • The method generalizes well across different domains and model sizes, demonstrating improved accuracy even on datasets it wasn't specifically trained on.

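The sketch below illustrates the general idea behind a prompt-to-weights hyper-network of this kind: a frozen text encoder turns task prompts into embeddings, and a convolutional decoder maps those embeddings to LoRA adapter matrices in a single forward pass, with no per-task gradient descent. It is a minimal PyTorch illustration; the class name PromptToLoRA, the layer sizes, and the decoder layout are assumptions for exposition, not the researchers' actual implementation.

```python
import torch
import torch.nn as nn


class PromptToLoRA(nn.Module):
    """Sketch of a prompt-to-weights hyper-network in the spirit of DnD.

    A frozen text encoder (not shown) turns a batch of task prompts into token
    embeddings; this decoder maps those embeddings to flattened LoRA adapter
    matrices in one forward pass. All sizes and names here are illustrative.
    """

    def __init__(self, enc_dim=768, rank=8, hidden=512, n_layers=4, target_dim=1024):
        super().__init__()
        self.rank, self.target_dim, self.n_layers = rank, target_dim, n_layers
        out_per_layer = 2 * rank * target_dim       # one A and one B matrix per adapted layer

        # "Hyper-convolutional" decoder: 1-D convolutions over the token axis of the
        # prompt embeddings, then a linear head emitting the flattened adapter weights.
        self.decoder = nn.Sequential(
            nn.Conv1d(enc_dim, hidden, kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1),
            nn.GELU(),
            nn.AdaptiveAvgPool1d(1),                # pool over prompt tokens
            nn.Flatten(),
            nn.Linear(hidden, n_layers * out_per_layer),
        )

    @torch.no_grad()
    def generate_adapters(self, prompt_embeddings):
        """prompt_embeddings: (batch, tokens, enc_dim) from the frozen text encoder."""
        x = prompt_embeddings.transpose(1, 2)       # (batch, enc_dim, tokens) for Conv1d
        flat = self.decoder(x)                      # (batch, n_layers * 2 * rank * target_dim)
        flat = flat.view(-1, self.n_layers, 2, self.rank, self.target_dim)
        return flat[:, :, 0], flat[:, :, 1]         # LoRA A and B per adapted layer


# Hypothetical usage: random tensors stand in for a frozen encoder's output.
hyper = PromptToLoRA()
prompts = torch.randn(2, 32, 768)                   # 2 tasks, 32 prompt tokens each
A, B = hyper.generate_adapters(prompts)
print(A.shape, B.shape)                              # torch.Size([2, 4, 8, 1024]) for each
```

The point of the sketch is the control flow rather than the architecture details: generating the adapter weights is a single inference call, which is what replaces the usual data → gradients → weights loop described above.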