Existing methods for the zero-shot detection of machine-generated text are dominated by three statistical quantities: log-likelihood, log-rank, and entropy.
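For reference, the sketch below illustrates how these three per-token quantities are typically computed from a causal language model's conditional distributions. It is a minimal illustration, assuming a Hugging Face model (GPT-2 here is an arbitrary choice of scoring model) and an illustrative helper name; it is not the method described in this work.

    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    # Assumption: GPT-2 serves as the scoring model; any causal LM would do.
    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

    def token_statistics(text):
        ids = tok(text, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits[:, :-1]        # predictions for tokens 1..T-1
        targets = ids[:, 1:]                          # tokens actually observed
        log_probs = torch.log_softmax(logits, dim=-1)
        # Log-likelihood: log p(x_t | x_<t) of each observed token.
        ll = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
        # Log-rank: log of the observed token's rank in the sorted conditional distribution.
        ranks = (log_probs > ll.unsqueeze(-1)).sum(-1) + 1
        log_rank = torch.log(ranks.float())
        # Entropy of each conditional distribution p(. | x_<t).
        entropy = -(log_probs.exp() * log_probs).sum(-1)
        return ll.mean().item(), log_rank.mean().item(), entropy.mean().item()

Averaging these statistics over a passage and thresholding the result is the usual zero-shot detection recipe built on these quantities.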
A new method for detecting machine-generated text is introduced, targeting a defect in the way decoding strategies normalize conditional probability measures.
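For concreteness, the following minimal sketch shows one familiar instance of such normalization: nucleus (top-p) sampling truncates each conditional distribution and renormalizes the surviving mass. This is only an illustration of the kind of decoding behaviour in question, not the detection method itself.

    import numpy as np

    def nucleus_renormalize(probs, p=0.9):
        # Sort the conditional distribution, keep the smallest prefix whose
        # cumulative mass reaches p, zero out the tail, and renormalize.
        order = np.argsort(probs)[::-1]
        sorted_p = probs[order]
        cutoff = np.searchsorted(np.cumsum(sorted_p), p) + 1
        truncated = np.zeros_like(probs)
        truncated[order[:cutoff]] = sorted_p[:cutoff]
        return truncated / truncated.sum()

After truncation, each surviving token is assigned strictly more mass than the model's original conditional probability gave it, which is the sort of systematic distortion a detector can exploit.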
The new method is theoretically rigorous, easy to explain, and conceptually distinct from existing methods.
The new detector performs at least comparably to state-of-the-art text detectors and, in some cases, strongly outperforms them.