Source: Arxiv

Knowledge Injection via Prompt Distillation

  • Large language models (LLMs) often need to incorporate new knowledge not present in their pre-training data.
  • Retrieval-augmented generation (RAG) is the industry standard for knowledge injection, but fine-tuning has not achieved comparable success.
  • The paper proposes a fine-tuning technique called prompt distillation that learns the new knowledge and matches the performance of RAG.
  • Prompt distillation generates question-answer pairs about the new knowledge and trains a student model to mimic the output distributions of a teacher model that receives the new knowledge in its prompt (see the sketch after this list).
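
The training signal can be sketched in a few lines of PyTorch: the teacher answers each generated question with the new knowledge prepended to its prompt, and the student is trained to match the teacher's token-level output distribution on the answer without seeing that knowledge. The snippet below is a minimal sketch under those assumptions; the model ("gpt2"), the single hand-written QA pair, and the plain KL-divergence loop are illustrative stand-ins, not the paper's exact recipe.

```python
# Minimal sketch of prompt distillation with a Hugging Face causal LM.
# The model name, QA pair, and hyperparameters are illustrative only.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for any causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
teacher = AutoModelForCausalLM.from_pretrained(model_name).eval()
student = AutoModelForCausalLM.from_pretrained(model_name)  # copy being fine-tuned

new_knowledge = "Acme Corp appointed Jane Doe as CEO in March 2025."
question = "Q: Who is the CEO of Acme Corp?\nA:"
answer = " Jane Doe became CEO of Acme Corp in March 2025."

# Teacher sees the new knowledge in its prompt; the student does not.
teacher_prompt = new_knowledge + "\n" + question
student_prompt = question

def answer_logits(model, prompt, answer, grad=True):
    """Return the model's logits over the answer tokens, given a prompt."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    answer_ids = tokenizer(answer, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, answer_ids], dim=1)
    ctx = torch.enable_grad() if grad else torch.no_grad()
    with ctx:
        logits = model(input_ids).logits
    # Logits at position i predict token i+1, so shift back by one.
    start = prompt_ids.shape[1] - 1
    return logits[:, start:start + answer_ids.shape[1], :]

optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)

for step in range(3):  # a few illustrative distillation steps
    t_logits = answer_logits(teacher, teacher_prompt, answer, grad=False)
    s_logits = answer_logits(student, student_prompt, answer, grad=True)

    # KL divergence from the teacher's answer-token distribution to the
    # student's, so the student reproduces the knowledge without the prompt.
    loss = F.kl_div(
        F.log_softmax(s_logits, dim=-1),
        F.log_softmax(t_logits, dim=-1),
        reduction="batchmean",
        log_target=True,
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: KL loss = {loss.item():.4f}")
```

In a full setup the QA pairs would be generated automatically from the new documents rather than hand-written, and the student is typically a parameter-efficient copy of the teacher, but the objective stays the same: minimize the divergence between the teacher's and student's answer distributions.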
