Differential Privacy (DP) is used to protect sensitive personal information in location trajectories, but balancing utility and privacy is difficult.
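As background (this is the standard definition, not specific to this study): a randomized mechanism $\mathcal{M}$ satisfies $(\varepsilon, \delta)$-DP if, for all neighboring datasets $D, D'$ differing in one record and all measurable outcome sets $S$,

$$\Pr[\mathcal{M}(D) \in S] \le e^{\varepsilon} \, \Pr[\mathcal{M}(D') \in S] + \delta.$$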
Deep learning-based generative models have been used to create synthetic trajectories, but they lack formal privacy guarantees and rely on conditional information.
A study evaluated the utility cost of enforcing DP in these models across two datasets and eleven utility metrics.
The study examined the impact of DP-SGD (differentially private stochastic gradient descent) on generative models and proposed a novel DP mechanism for conditional generation with formal guarantees.
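For context, DP-SGD enforces DP during training by clipping each example's gradient to a fixed L2 norm and adding calibrated Gaussian noise before the parameter update. The following is a minimal sketch of one such step on a toy logistic-regression model; the function name and hyperparameter values are illustrative assumptions, not the study's configuration:

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD step for logistic regression (illustrative hyperparameters,
    not the study's configuration)."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(y)
    # Per-example gradients of the logistic loss, shape (n, d).
    p = 1.0 / (1.0 + np.exp(-X @ w))
    grads = (p - y)[:, None] * X
    # Clip each example's gradient to L2 norm <= clip_norm.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads *= np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Sum, add Gaussian noise scaled to the clipping norm, then average.
    noisy_sum = grads.sum(axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=w.shape)
    return w - lr * noisy_sum / n
```

In practice, the per-step noise is tracked across the whole training run with a privacy accountant (e.g., the moments accountant) to obtain the final $(\varepsilon, \delta)$ guarantee; that bookkeeping is omitted here.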
Diffusion, VAE, and GAN model types were compared for their effect on the utility-privacy trade-off.
Results indicate that DP-SGD significantly degrades performance, although some utility is preserved for large datasets.
The proposed DP mechanism enhances training stability, especially for GANs and smaller datasets.
Diffusion models achieve the best utility without privacy guarantees, but GANs perform best under DP-SGD.
These findings suggest that the best-performing non-private model may not be the best choice once formal guarantees are required.
DP trajectory generation remains challenging, and formal guarantees are currently feasible mainly for large datasets and specific use cases.