MIT researchers have developed a framework, based on the privacy metric PAC Privacy, for protecting the sensitive data used to train AI models.
The new version of the PAC Privacy framework is more computationally efficient and sacrifices less accuracy for a given level of privacy.
The researchers also created a four-step template that can privatize nearly any algorithm without access to its inner workings, treating it as a black box (sketched below).
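A minimal sketch of what such a black-box template could look like in code, assuming a subsample-measure-add-noise loop; the function name pac_privatize, the half-sampling scheme, and the trial count are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def pac_privatize(algorithm, data, n_trials=200, seed=0):
    """Hypothetical four-step black-box privatization loop.

    1) Subsample the dataset many times.
    2) Run the unmodified algorithm on each subsample.
    3) Estimate how much the outputs vary.
    4) Release the true output plus noise scaled to that variation.
    """
    rng = np.random.default_rng(seed)
    outputs = []
    for _ in range(n_trials):
        idx = rng.choice(len(data), size=len(data) // 2, replace=False)
        outputs.append(np.asarray(algorithm(data[idx])))
    sigma = np.stack(outputs).std(axis=0)  # per-coordinate output spread
    release = np.asarray(algorithm(data), dtype=float)
    # A multiplier on sigma that depends on the target privacy level is
    # omitted here for brevity.
    return release + rng.normal(scale=sigma, size=release.shape)

# Example: privatize a simple mean query over a toy dataset.
data = np.random.default_rng(1).normal(size=(1000, 3))
print(pac_privatize(lambda d: d.mean(axis=0), data))
```

The algorithm is only ever called, never inspected, which is what makes the template applicable to many algorithms without modification.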
The team demonstrated that stable algorithms, whose predictions remain consistent even when their training data is slightly modified, are easier to privatize with their method.
The framework uses PAC Privacy to estimate the minimal amount of noise needed to protect an AI model's training data, enhancing privacy with minimal loss of utility.
Unlike the original, which adds uniform (isotropic) noise, the new variant of PAC Privacy estimates anisotropic noise tailored to the characteristics of the data, reducing the total noise added while maintaining the same level of privacy.
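A toy comparison of the two noise schemes, assuming (for illustration only) that isotropic noise must be scaled to the most variable output direction while anisotropic noise can be matched to each axis; the simulated outputs and scales are made up for the demo:

```python
import numpy as np

# Simulated algorithm outputs across subsampled datasets, where one
# coordinate is far more stable than the other.
rng = np.random.default_rng(1)
outputs = rng.normal(loc=[5.0, 5.0], scale=[2.0, 0.05], size=(500, 2))

per_axis = outputs.std(axis=0)
iso_sigma = np.full(2, per_axis.max())   # isotropic: worst case everywhere
aniso_sigma = per_axis                   # anisotropic: matched per axis

print("isotropic noise std:  ", iso_sigma.round(3))     # ~[2.0, 2.0]
print("anisotropic noise std:", aniso_sigma.round(3))   # ~[2.0, 0.05]
# The anisotropic noise barely perturbs the stable coordinate, so the
# released output stays more accurate at the same level of protection.
```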
According to the research, more stable algorithms exhibit less variance in their outputs, so less noise is needed to privatize them.
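A small demonstration of that link, measuring output spread across random half-samples; the two estimators, the sample size, and the trial count are arbitrary choices for the demo:

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.standard_normal(1000)

def output_spread(algorithm, data, n_trials=300):
    """Std of the algorithm's output across random half-samples of the data."""
    halves = (data[rng.choice(len(data), len(data) // 2, replace=False)]
              for _ in range(n_trials))
    return np.std([algorithm(h) for h in halves])

print("mean (stable):  ", output_spread(np.mean, data))  # small spread
print("max  (unstable):", output_spread(np.max, data))   # larger spread
# The noise added for privatization scales with this spread, so the
# stable estimator pays a much smaller accuracy cost.
```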
The researchers next aim to co-design algorithms with PAC Privacy so that stability, security, and robustness are built in from the outset.
The study showed that the new variant of PAC Privacy requires fewer trials to estimate the noise, and that algorithms privatized with it withstood state-of-the-art attacks in simulations.
As Xiangyao Yu highlighted, the research marks a step toward automated, efficient private data analytics that does not require analyzing each query individually.