Fairness through Unawareness (FtU) aims to prevent discrimination against demographic groups by excluding group membership from the features a model uses for decisions or predictions.
Critics in the machine learning literature argue that FtU alone may not ensure fairness, and that using additional features, protected attributes included, typically improves prediction accuracy for all groups.
The paper demonstrates that FtU can reduce algorithmic discrimination without sacrificing accuracy. This aligns with Model Multiplicity: the observation that many distinct models often achieve near-identical accuracy on the same task, so a comparably accurate model that ignores protected attributes frequently exists.
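As a rough illustration of this claim, the following is a minimal sketch on synthetic data, not the paper's experiments: the dataset, the logistic regression model, and the demographic-parity metric are all illustrative assumptions. It compares an "aware" model that sees a binary protected attribute with an "unaware" FtU model that drops it:

```python
# Minimal sketch: compare an "aware" model (uses the protected attribute A)
# with an "unaware" (FtU) model on synthetic data. Dataset, model, and
# fairness metric are illustrative choices, not the paper's setup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 5000

# Binary protected attribute A and two legitimate features, one of which
# is mildly correlated with A.
A = rng.integers(0, 2, n)
X1 = rng.normal(A * 0.5, 1.0, n)  # correlated with A
X2 = rng.normal(0.0, 1.0, n)      # independent of A

# Outcome depends only on the legitimate features, not directly on A.
logits = 1.2 * X1 + 0.8 * X2
Y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_aware = np.column_stack([X1, X2, A])  # includes protected attribute
X_unaware = np.column_stack([X1, X2])   # FtU: protected attribute dropped

idx_train, idx_test = train_test_split(np.arange(n), test_size=0.3,
                                       random_state=0)

def fit_eval(X):
    model = LogisticRegression().fit(X[idx_train], Y[idx_train])
    pred = model.predict(X[idx_test])
    acc = accuracy_score(Y[idx_test], pred)
    # Demographic parity difference: gap in positive-prediction rates
    # between the two groups defined by A.
    a_test = A[idx_test]
    dpd = abs(pred[a_test == 1].mean() - pred[a_test == 0].mean())
    return acc, dpd

for name, X in [("aware", X_aware), ("unaware (FtU)", X_unaware)]:
    acc, dpd = fit_eval(X)
    print(f"{name:>14}: accuracy={acc:.3f}, demographic parity diff={dpd:.3f}")
```

The quantity of interest is whether accuracy degrades when the protected attribute is dropped; under Model Multiplicity, it often does not.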
The study shows how FtU can lead to more equitable policies in practical applications and argues that any use of protected attributes in predictive models should be explicitly justified.