Model hijacking poses significant accountability and security risks.
SnatchML is a training-free model hijacking attack that operates at inference time.
SnatchML leverages the over-parameterization of ML models to perform tasks beyond the one the model was trained for.
The study proposes countermeasures to mitigate the risks of model hijacking.
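A minimal sketch of what a training-free, inference-time hijack could look like, under assumptions not spelled out in these highlights: the attacker only queries the victim model for latent representations (the `victim_embed` helper below is hypothetical) and classifies samples of a different, hijacking task by nearest-centroid matching in that embedding space, without training or modifying the victim model.

```python
# Sketch only (not the paper's exact method): reuse a victim model's
# embeddings to solve a task it was never trained for, with no training step.
import numpy as np

def victim_embed(x: np.ndarray) -> np.ndarray:
    """Placeholder for the victim model's latent representation of input x."""
    rng = np.random.default_rng(abs(hash(x.tobytes())) % (2**32))
    return rng.standard_normal(128)  # assumed 128-d embedding

def build_centroids(samples_per_class: dict[int, list[np.ndarray]]) -> dict[int, np.ndarray]:
    """Average the victim's embeddings of a few labeled hijacking-task samples."""
    return {c: np.mean([victim_embed(x) for x in xs], axis=0)
            for c, xs in samples_per_class.items()}

def hijack_predict(x: np.ndarray, centroids: dict[int, np.ndarray]) -> int:
    """Assign a hijacking-task label by nearest centroid in embedding space."""
    e = victim_embed(x)
    return min(centroids, key=lambda c: np.linalg.norm(e - centroids[c]))
```

The point of the sketch is the threat model: because over-parameterized models encode more information than their original task requires, an adversary with only inference access can repurpose those representations for an unintended task.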