Learning systems often need to produce accurate predictions in specific regions of their domain, while accuracy in other regions is less critical.
Selective matching losses address this by building the loss from an increasing link function over the score domain, with the link made steeper in high-sensitivity regions so that errors there are emphasized.
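As a concrete sketch (our notation, not necessarily the construction used in the source), the scalar matching loss induced by an increasing link function $\phi$ compares a predicted score $\hat{s}$ with a target score $s$ via

$$\ell_\phi(\hat{s}, s) \;=\; \int_{s}^{\hat{s}} \bigl(\phi(z) - \phi(s)\bigr)\, dz,$$

which is nonnegative because $\phi$ is increasing. Choosing $\phi$ with a large derivative on a designated high-sensitivity interval and a small derivative elsewhere makes score errors that fall inside, or cross, that interval substantially more expensive.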
The resulting loss asymmetry distinguishes regions of high and low importance and drives the model toward more accurate predictions in the high-sensitivity regions.
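The following minimal numerical sketch illustrates this asymmetry, assuming a piecewise-linear link whose slope is 10 inside a high-sensitivity interval $[0, 1]$ and 1 elsewhere; the interval, the slopes, and all function names are illustrative assumptions rather than details taken from the source.

```python
import numpy as np

def selective_link(z, lo=0.0, hi=1.0, steep=10.0, flat=1.0):
    """Increasing, piecewise-linear link: slope `steep` inside the
    high-sensitivity interval [lo, hi], slope `flat` outside it."""
    z = np.asarray(z, dtype=float)
    below = np.minimum(z, lo)
    inside = np.clip(z, lo, hi)
    above = np.maximum(z, hi)
    return flat * (below - lo) + steep * (inside - lo) + flat * (above - hi)

def matching_loss(pred_score, target_score, n=2001):
    """Scalar matching loss: integral of link(z) - link(target_score) from
    the target score to the predicted score (trapezoidal approximation)."""
    zs = np.linspace(target_score, pred_score, n)
    vals = selective_link(zs) - selective_link(target_score)
    dz = zs[1] - zs[0]
    return float(np.sum((vals[:-1] + vals[1:]) * 0.5 * dz))

# Equal-magnitude errors around the target score 0.0: the error that moves
# into the high-sensitivity interval [0, 1] costs about 10x more than the
# error that moves away from it.
print(matching_loss(0.5, 0.0))   # ~1.25
print(matching_loss(-0.5, 0.0))  # ~0.125
```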
Multiplying selective scalar losses by composite Softmax functions then yields multidimensional selective losses, which are advantageous in a variety of applications.
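One plausible way to make the multidimensional case concrete, continuing the sketch above (it reuses numpy and matching_loss), is to multiply per-coordinate selective scalar losses by Softmax weights of the target scores and sum them; this particular combination is an assumption made for illustration, not necessarily the composite construction of the source.

```python
def softmax(scores):
    """Numerically stable Softmax over a 1-D score vector."""
    z = np.asarray(scores, dtype=float)
    e = np.exp(z - z.max())
    return e / e.sum()

def multidim_selective_loss(pred_scores, target_scores):
    """Illustrative multidimensional selective loss: per-coordinate selective
    matching losses multiplied by Softmax weights of the target scores."""
    w = softmax(target_scores)
    per_coord = np.array([matching_loss(p, t)
                          for p, t in zip(pred_scores, target_scores)])
    return float(np.dot(w, per_coord))

# Coordinates with large Softmax weight, and coordinates whose errors land in
# the high-sensitivity interval, dominate the overall loss.
print(multidim_selective_loss([0.6, -1.0, -2.4], [0.1, -1.2, -2.0]))
```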