Source: Arxiv
Mind the Gap: A Practical Attack on GGUF Quantization

  • Post-training quantization is the standard approach for memory-efficient deployment of large language models.
  • Recent work has shown that rounding-based quantization schemes such as GGUF pose security risks.
  • The paper introduces an attack on GGUF quantization that exploits quantization errors to construct models that appear benign in full precision but behave maliciously once quantized.
  • The attack was demonstrated to be effective on three popular large language models across various attack scenarios.
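The exploitable property behind such attacks is that round-to-nearest quantization maps an entire interval of full-precision weights onto the same quantized values, leaving an attacker slack to tamper with weights without changing the quantized model. A minimal sketch of this idea, using a hypothetical uniform rounding scheme (not the actual GGUF formats) and made-up weight values:

```python
import numpy as np

def quantize(w, scale):
    """Round-to-nearest uniform quantization (simplified stand-in for a
    GGUF-style scheme). Returns integer codes and dequantized weights."""
    q = np.round(w / scale)
    return q, q * scale

scale = 0.1
# Hypothetical full-precision weights of a "benign" model.
w_benign = np.array([0.23, -0.41, 0.07])

# Any weight that stays within half a quantization step (scale / 2) of the
# same grid point maps to identical integer codes -- that interval is the
# attacker's freedom to alter full-precision behavior.
w_tampered = np.array([0.21, -0.44, 0.09])

q_benign, _ = quantize(w_benign, scale)
q_tampered, _ = quantize(w_tampered, scale)

# Different full-precision weights, identical quantized model.
assert not np.array_equal(w_benign, w_tampered)
assert np.array_equal(q_benign, q_tampered)
```

The attack described in the paper optimizes within these rounding intervals so that the full-precision model passes inspection while the quantized deployment carries the malicious behavior.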
