The research paper introduces novel loss formulations, RG$^2$ and RG$^\times$, to address the computational overhead and scalability issues associated with Softmax (SM) Loss in ranking tasks.
The RG$^2$ Loss and RG$^\times$ Loss are derived through Taylor expansions of the SM Loss and reveal connections between different ranking loss paradigms.
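As an illustrative sketch of the general idea (not the paper's exact derivation), consider a softmax ranking loss for a positive item $i$ over scores $s_1, \dots, s_n$, and expand the exponential to second order:

$$\ell_{\mathrm{SM}}(i) = \log \sum_{j=1}^{n} e^{\,s_j - s_i}, \qquad e^{x} \approx 1 + x + \tfrac{x^2}{2}.$$

Substituting the expansion yields a surrogate built from first- and second-order ranking gaps $s_j - s_i$:

$$\ell_{\mathrm{SM}}(i) \approx \log\!\Big( n + \sum_{j} (s_j - s_i) + \tfrac{1}{2}\sum_{j} (s_j - s_i)^2 \Big),$$

which is quadratic in the score gaps inside the logarithm. Quadratic surrogates of this form avoid the full softmax normalization and are amenable to closed-form alternating updates.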
The proposed losses are integrated with the Alternating Least Squares (ALS) optimization method to provide convergence rate analyses and generalization guarantees.
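To illustrate why quadratic loss surrogates pair naturally with ALS, here is a minimal, hypothetical sketch of alternating least squares on a plain squared-error matrix-factorization objective. This is our own toy example, not the paper's RG$^2$/RG$^\times$ algorithm; all names, shapes, and hyperparameters are assumptions.

```python
import numpy as np

def als(R, k=4, reg=0.1, iters=20, seed=0):
    """Minimal ALS for min_{U,V} ||R - U V^T||_F^2 + reg (||U||_F^2 + ||V||_F^2).

    Each subproblem is quadratic in one factor with the other fixed,
    so it has a closed-form (regularized least-squares) solution.
    """
    rng = np.random.default_rng(seed)
    m, n = R.shape
    U = rng.normal(scale=0.1, size=(m, k))
    V = rng.normal(scale=0.1, size=(n, k))
    I = reg * np.eye(k)
    for _ in range(iters):
        # Closed-form update for U with V fixed, then V with U fixed.
        U = R @ V @ np.linalg.inv(V.T @ V + I)
        V = R.T @ U @ np.linalg.inv(U.T @ U + I)
    return U, V

rng = np.random.default_rng(1)
R = rng.random((30, 20))            # toy dense "interaction" matrix
U, V = als(R)
err = np.linalg.norm(R - U @ V.T)   # residual of the rank-k fit
```

The key point is structural: because each subproblem is quadratic, every ALS step is a small $k \times k$ linear solve rather than an iterative pass over a softmax normalizer, which is the kind of efficiency the proposed losses are designed to enable.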
Empirical evaluations on real-world datasets show that the proposed losses match or exceed the ranking performance of SM Loss while converging significantly faster.
The framework contributes theoretical insights and efficient tools for the similarity learning community, suitable for tasks requiring a balance between ranking quality and computational efficiency.