As the use of predictive machine learning algorithms increases, ensuring fairness across sensitive groups is crucial.
Proxy-sensitive attributes have been proposed as a way to enforce fairness when complete sensitive group information is unavailable.
This work studies the use of proxy-sensitive attributes for multiaccuracy and multicalibration, deriving bounds on the resulting fairness violations and demonstrating mitigation strategies.
Experiments on real-world datasets show that approximate multiaccuracy and multicalibration can be achieved even when sensitive group data is missing or incomplete.
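For context, the following is a rough sketch of the standard notions these terms refer to (background definitions in the style of Hébert-Johnson et al., 2018 and Kim et al., 2019, not stated in this abstract): for a predictor $f$, a distribution $\mathcal{D}$ over examples $(x, y)$, and a class $\mathcal{C}$ of group-indicator functions,

$$\text{multiaccuracy:}\quad \bigl|\,\mathbb{E}_{(x,y)\sim\mathcal{D}}\bigl[c(x)\,(f(x)-y)\bigr]\bigr| \le \alpha \quad \text{for all } c \in \mathcal{C},$$

$$\text{multicalibration:}\quad \bigl|\,\mathbb{E}_{(x,y)\sim\mathcal{D}}\bigl[c(x)\,(f(x)-y)\;\big|\;f(x)=v\bigr]\bigr| \le \alpha \quad \text{for all } c \in \mathcal{C} \text{ and prediction values } v,$$

where $\alpha$ is the tolerated violation; proxy-sensitive attributes stand in for the group indicators $c$ when the true sensitive groups are missing or incomplete.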