ML systems continue to harm marginalized people despite efforts to mitigate biases. Building models that identify biased language, rather than attempting to remove biases outright, is a more feasible approach. ML's limitations in identifying bias include the contextual nature of bias and its potential to privilege or oppress different communities in different ways. Expanding ML approaches to bias and fairness is necessary to address these limitations and to promote mixed-methods investigation.