Machine learning models can unintentionally reveal confidential information about their training data, for example through membership inference attacks (MIAs).
New methods are introduced to efficiently assess the vulnerability of tree-based models to MIAs: analyzing hyperparameter choices before training and examining model structure after training.
These approaches do not guarantee model safety, but they act as a hierarchical filter that reduces the number of models requiring extensive MIA evaluation, as sketched below.
Disclosure-risk rankings of hyperparameter combinations are consistent across datasets, allowing high-risk models to be identified before training.
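As an illustration of how such a hierarchical filter could be organised (a minimal sketch only: the `triage` function, its parameters, and the staging shown here are hypothetical, not the implementation used in the study), candidate configurations might be screened as follows:

```python
# Minimal sketch of a hierarchical privacy-risk filter for tree-based models.
# A pre-training rule screens hyperparameter combinations, a post-training
# structural check screens fitted models, and only the remainder are sent
# for a full (expensive) membership-inference evaluation.
from sklearn.ensemble import RandomForestClassifier

def triage(candidate_params, X, y, pretraining_rule, structural_check):
    """Split candidate hyperparameter combinations into three groups:
    rejected before training, cleared after a cheap structural check,
    and still needing a full MIA evaluation."""
    rejected, cleared, needs_full_mia = [], [], []
    for params in candidate_params:
        if pretraining_rule(params):          # stage 1: hyperparameters only
            rejected.append(params)
            continue
        model = RandomForestClassifier(**params).fit(X, y)
        if structural_check(model):           # stage 2: fitted-model structure
            needs_full_mia.append((params, model))
        else:
            cleared.append((params, model))
    return rejected, cleared, needs_full_mia
```

Concrete, illustrative choices for the pre-training rule and the structural check are sketched after the key findings below.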
Hyperparameters can be analyzed before training to avoid risky configurations.
Simple human-interpretable rules can be developed to identify potentially high-risk models before training.
Structural metrics can serve as indicators of MIA vulnerability after model training (see the sketch below).
Hyperparameter-based risk-prediction rules identify vulnerable combinations with high accuracy, without requiring model training.
Model accuracy does not necessarily correspond to privacy risk, indicating that models can be optimized for performance and privacy simultaneously.
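To make the pre-training rules and post-training structural metrics above concrete, here is a minimal sketch; the thresholds and the particular metrics (depth, number of leaves, leaf sizes) are illustrative assumptions rather than the rules and indicators derived in the study.

```python
# Illustrative examples of (a) a human-interpretable pre-training rule over
# hyperparameters and (b) structural metrics read from a fitted scikit-learn
# decision tree. Thresholds are placeholders, not the study's derived values.
from sklearn.tree import DecisionTreeClassifier

def risky_hyperparameters(params):
    """Pre-training rule: unbounded depth combined with tiny leaves is
    flagged as high disclosure risk."""
    return (params.get("max_depth") is None
            and params.get("min_samples_leaf", 1) <= 2)

def structural_metrics(fitted_tree: DecisionTreeClassifier):
    """Post-training indicators: deeper trees with many small leaves
    are more likely to have memorised individual records."""
    t = fitted_tree.tree_
    leaf_mask = t.children_left == -1            # leaves have no children
    leaf_sizes = t.n_node_samples[leaf_mask]
    return {
        "depth": fitted_tree.get_depth(),
        "n_leaves": fitted_tree.get_n_leaves(),
        "min_leaf_size": int(leaf_sizes.min()),
        "mean_leaf_size": float(leaf_sizes.mean()),
    }
```

A configuration flagged by `risky_hyperparameters` would be discarded or revised before training, while a fitted model whose structural metrics cross chosen thresholds would be passed to the full MIA evaluation stage sketched earlier.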