Source: arXiv

SnatchML: Hijacking ML models without Training Access

  • Model hijacking can cause significant accountability and security risks.
  • SnatchML is a training-free model hijacking attack that operates entirely at inference time.
  • SnatchML leverages the over-parameterization of ML models to repurpose them for tasks beyond the one they were trained for (see the sketch after this list).
  • The study proposes countermeasures to mitigate the risks of model hijacking.
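
Below is a minimal sketch of how a training-free, inference-time hijack of this kind might look, assuming the attacker can only query a victim model's embeddings. The ResNet-18 stand-in, the nearest-neighbor mapping, and all names (victim, embed, hijack_predict) are illustrative assumptions, not the paper's exact procedure.

import numpy as np
import torch
import torchvision.models as models

# Stand-in victim: in practice this would be the deployed, over-parameterized
# model the attacker can only query (no weights access, no training needed).
victim = models.resnet18(weights=None)
victim.fc = torch.nn.Identity()  # expose penultimate-layer embeddings
victim.eval()

@torch.no_grad()
def embed(x: torch.Tensor) -> np.ndarray:
    """Query the victim at inference time; no gradients, no parameter updates."""
    return victim(x).cpu().numpy()

def hijack_predict(query: torch.Tensor,
                   ref_embeddings: np.ndarray,
                   ref_labels: np.ndarray) -> int:
    """Assign the hijacking-task label of the nearest reference embedding."""
    q = embed(query)                                    # shape (1, d)
    dists = np.linalg.norm(ref_embeddings - q, axis=1)  # distance to each reference
    return int(ref_labels[np.argmin(dists)])

# The attacker embeds a handful of labeled samples for the *hijacking* task
# (random tensors stand in for real images here) ...
refs = torch.randn(10, 3, 224, 224)
ref_emb = embed(refs)
ref_lbl = np.arange(10)  # e.g., one identity per reference sample

# ... then resolves new queries purely through latent-space similarity.
print(hijack_predict(torch.randn(1, 3, 224, 224), ref_emb, ref_lbl))

The point the bullets hint at: because the victim model is over-parameterized, its internal representations can carry information beyond the task it was trained for, so a simple similarity rule applied at inference time can repurpose them without touching any weights.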
