This study investigates how effectively differential privacy protects machine learning models from privacy attacks on their training data.
The study introduces a reconstruction attack against state-of-the-art differentially private random forests.
Experimental results suggest that while differential privacy reduces the success of reconstruction attacks, forests that are robust to these attacks tend to have lower predictive performance.
Based on these findings, the study offers practical recommendations for building differentially private random forests that are more resilient to reconstruction attacks while maintaining predictive performance.
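To illustrate the core mechanism behind differential privacy, the sketch below shows the standard Laplace mechanism, which adds calibrated noise to a query answer so that the output satisfies epsilon-differential privacy. This is a generic illustration, not the specific construction used in the study; the function name and parameters are assumptions for demonstration purposes. A smaller privacy budget (epsilon) means more noise and stronger privacy, mirroring the privacy/utility trade-off the study observes in random forests.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a noisy query answer satisfying epsilon-differential privacy.

    Adds Laplace(0, sensitivity / epsilon) noise to the true value.
    A smaller epsilon yields a larger noise scale, hence stronger privacy
    but a less accurate released answer.
    """
    rng = np.random.default_rng() if rng is None else rng
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: a counting query (sensitivity 1) answered under two budgets.
rng = np.random.default_rng(0)
strict = laplace_mechanism(42, sensitivity=1.0, epsilon=0.1, rng=rng)  # heavily perturbed
loose = laplace_mechanism(42, sensitivity=1.0, epsilon=10.0, rng=rng)  # close to 42
```

The same trade-off appears when privatizing tree ensembles: splitting decisions and leaf counts are perturbed under a fixed total budget, so tightening privacy (lowering epsilon) degrades the forest's predictive accuracy.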