- Groundbreaking study uncovers biases in AI text detection tools that affect equity in academic publishing.
- Research reveals the challenges AI-driven text detection tools pose for non-native English speakers and for various academic disciplines.
- Study emphasizes the need for ethical frameworks governing the use of AI technologies in scholarly publishing.
- Findings highlight inconsistent accuracy rates in popular AI detection tools such as GPTZero and DetectGPT.
- AI-assisted writing complicates detection, making it difficult for algorithms to distinguish human-written from AI-enhanced content.
- Even the more accurate AI detection tools exhibit biases against non-native English speakers and marginalized academic disciplines.
- Misclassification of work as AI-generated may deter scholars from publishing, undermining equity in knowledge distribution.
- Urgent call to pivot from detection-based approaches toward responsible use of large language models in publishing.
- Research advocates inclusive frameworks to address these disparities and safeguard scholarly communication.
- AI's impact on fairness and access in publishing calls for critical discussion and ethical consideration.