Fairness-aware learning aims to mitigate discrimination against specific protected social groups. However, the way training and test data are sampled can affect the reliability of reported fairness metrics. FairMatch, a post-processing method, uses propensity score matching to both evaluate and mitigate bias. Experimental results show that FairMatch improves fairness evaluation and mitigation without sacrificing predictive performance.
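To make the core idea concrete, the sketch below illustrates propensity score matching applied to fairness evaluation in general terms: group membership is treated as the "treatment", a propensity model estimates P(group | covariates), and model outcomes are compared across matched pairs. This is a minimal illustration of the generic technique, not the FairMatch implementation; the function name `matched_outcome_gap`, the 1-nearest-neighbor matching rule, and all variable names are assumptions introduced here.

```python
# Minimal sketch: propensity-score matching for fairness evaluation.
# Illustrates the generic technique only; not the FairMatch method.
import numpy as np
from sklearn.linear_model import LogisticRegression

def matched_outcome_gap(X, group, y_pred):
    """Estimate a between-group outcome gap on propensity-matched pairs.

    X      : (n, d) feature matrix (protected attribute excluded)
    group  : (n,) binary protected-group indicator
    y_pred : (n,) predictions of the model under evaluation
    """
    # 1. Estimate propensity scores: P(group = 1 | X).
    ps = LogisticRegression(max_iter=1000).fit(X, group).predict_proba(X)[:, 1]

    members = np.where(group == 1)[0]
    others = np.where(group == 0)[0]

    # 2. For each protected-group member, find the non-member with the
    #    closest propensity score (1-nearest-neighbor matching).
    gaps = []
    for i in members:
        j = others[np.argmin(np.abs(ps[others] - ps[i]))]
        gaps.append(y_pred[i] - y_pred[j])

    # 3. The mean gap over matched pairs is a covariate-adjusted
    #    estimate of the disparity in model outcomes between groups.
    return float(np.mean(gaps))

# Usage on synthetic data (for illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
group = rng.integers(0, 2, size=200)
y_pred = rng.random(200)
print(matched_outcome_gap(X, group, y_pred))
```

Matching on propensity scores rather than comparing raw group averages controls for covariate differences between groups, so the reported gap reflects disparity among comparable individuals; this is what makes matching-based evaluation less sensitive to how the data were sampled.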