Image Credit: Arxiv

Revisiting Diffusion Models: From Generative Pre-training to One-Step Generation

  • Diffusion distillation reduces sampling cost but can degrade the student model's performance.
  • Incorporating a GAN objective in diffusion distillation can improve results, though the mechanism is not fully understood.
  • Mismatched step sizes and parameter counts between teacher and student models can hinder convergence during distillation.
  • A standalone GAN objective can convert a diffusion model into an efficient one-step generator, with no distillation loss required (see the first sketch after this list).
  • Diffusion training is proposed as a form of generative pre-training that prepares models for lightweight GAN fine-tuning.
  • A one-step generation model was created by fine-tuning a pre-trained diffusion model with 85% of its parameters frozen.
  • The one-step model achieved strong performance with only 0.2M training images, and near-SOTA results with 5M images.
  • A frequency-domain analysis was presented to explain the one-step generative capability acquired during diffusion training (a generic version is given in the second sketch below).
  • Overall, the study provides a new perspective on diffusion training, emphasizing its role as a powerful generative pre-training process.
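
The recipe in the bullets above lends itself to a compact sketch. The PyTorch code below is a hypothetical illustration, not the authors' implementation: tiny placeholder networks stand in for the pre-trained diffusion backbone and the discriminator, roughly 85% of the generator's parameters are frozen, and the rest are fine-tuned with a plain non-saturating GAN loss so that a single forward pass maps noise to a sample.

```python
# Hypothetical sketch: fine-tune a pre-trained diffusion backbone into a
# one-step generator with a standalone GAN objective, keeping ~85% of the
# generator's parameters frozen. Networks, data, and the freeze heuristic
# are placeholders, not the paper's actual setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-ins for a diffusion UNet (generator) and a GAN discriminator.
generator = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.SiLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)
# In practice the generator would load a diffusion checkpoint, e.g.
# generator.load_state_dict(torch.load("diffusion_ckpt.pt")).
discriminator = nn.Sequential(
    nn.Conv2d(3, 64, 3, stride=2), nn.SiLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
)

# Freeze roughly 85% of the generator's parameters (a crude list-prefix
# heuristic; the summary does not specify which layers are frozen).
params = list(generator.parameters())
for p in params[: int(0.85 * len(params))]:
    p.requires_grad_(False)

opt_g = torch.optim.Adam([p for p in params if p.requires_grad], lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

for step in range(100):                  # stand-in training loop
    real = torch.randn(8, 3, 32, 32)     # placeholder for real images
    noise = torch.randn_like(real)
    fake = generator(noise)              # one forward pass = one-step sample

    # Discriminator step: non-saturating GAN loss, no distillation term.
    d_loss = (F.softplus(-discriminator(real)).mean()
              + F.softplus(discriminator(fake.detach())).mean())
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: the standalone GAN objective from the summary.
    g_loss = F.softplus(-discriminator(fake)).mean()
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```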
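
The frequency-domain analysis mentioned in the bullets can be illustrated generically. The second sketch below is an assumption about the style of analysis, not the paper's exact procedure: it computes a radially averaged power spectrum via the 2-D FFT, a common way to compare how well generated images reproduce the low- and high-frequency content of real ones.

```python
# Generic frequency-domain probe: radially averaged power spectrum of a
# 2-D image via the FFT. Illustrative only; the paper's analysis may differ.
import numpy as np

def radial_power_spectrum(img: np.ndarray) -> np.ndarray:
    """Mean spectral power at each integer radius from the DC component."""
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return sums / counts

# Comparing the spectra of real and generated images shows which frequency
# bands a one-step generator captures.
real = np.random.rand(64, 64)   # placeholder for a real grayscale image
fake = np.random.rand(64, 64)   # placeholder for a generated image
print(radial_power_spectrum(real)[:5])
print(radial_power_spectrum(fake)[:5])
```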
