An established failure mode for machine learning models occurs when the same features are equally likely to belong to class 0 and class 1, i.e., when the class-conditional distributions overlap. Standard neural network architectures such as MLPs and CNNs are not equipped to handle this ambiguity.
A simple activation function, quantile activation (QACT), is proposed to address this issue. Rather than a fixed point value, QACT outputs the relative quantile of each sample within its context distribution, improving generalization across distortions compared to conventional classifiers.
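One way to read "relative quantile in a context distribution" is as an empirical-quantile transform of each neuron's pre-activation. The sketch below assumes the current mini-batch serves as the context distribution and zero-centres the output; both choices are assumptions for illustration, not details taken from the source.

```python
import numpy as np

def quantile_activation(pre_act: np.ndarray) -> np.ndarray:
    """Sketch of a quantile-style activation.

    Assumption: the mini-batch is the 'context distribution'. For each
    feature, a sample's pre-activation is mapped to its empirical
    quantile within the batch, then shifted to (-0.5, 0.5) so the
    output is zero-centred (a design choice for this sketch).
    """
    batch_size = pre_act.shape[0]
    # Rank of each sample per feature: 0 (smallest) .. batch_size-1 (largest)
    ranks = pre_act.argsort(axis=0).argsort(axis=0)
    # Midpoint empirical quantile in (0, 1), then zero-centre
    quantiles = (ranks + 0.5) / batch_size
    return quantiles - 0.5

x = np.array([[1.0, 5.0],
              [2.0, 3.0],
              [3.0, 1.0]])
out = quantile_activation(x)
# Each column is a monotone, bounded re-encoding of the input column:
# the output depends only on where a value falls among its batch peers.
```

Because the output depends only on a sample's rank among its peers, a distortion that shifts or rescales all pre-activations leaves the activation unchanged, which is one intuition for the claimed robustness.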