International Conference on Machine Learning (ICML 2010). Winner of the best student paper award.
We present a theoretical analysis of supervised ranking, providing necessary and sufficient conditions for the asymptotic consistency of algorithms based on minimizing a surrogate loss function. We show that many commonly used surrogate losses are inconsistent; surprisingly, this inconsistency persists even in low-noise settings. We then present a new value-regularized linear loss, establish its consistency under reasonable restrictions on noise, and show that it outperforms conventional ranking losses in a collaborative filtering experiment.
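As a rough sketch of the surrogate-loss setup the paper analyzes (not the value-regularized linear loss itself, which is not reproduced here), the hypothetical Python snippet below fits a linear scoring function by gradient descent on a pairwise logistic surrogate. Consistency asks whether driving such a surrogate risk to its minimum also drives the true ranking risk to its minimum as the sample grows; all names and data here are illustrative assumptions.

```python
import numpy as np

def pairwise_logistic_loss_grad(w, X, y):
    """Pairwise logistic surrogate for ranking: for each pair (i, j)
    with y[i] > y[j], penalize log(1 + exp(-(s_i - s_j))), where
    s = X @ w are the scores of a linear ranking function."""
    s = X @ w
    loss, grad = 0.0, np.zeros_like(w)
    n = len(y)
    for i in range(n):
        for j in range(n):
            if y[i] > y[j]:
                margin = s[i] - s[j]
                loss += np.log1p(np.exp(-margin))
                # d/dm log(1 + exp(-m)) = -1 / (1 + exp(m))
                coef = -1.0 / (1.0 + np.exp(margin))
                grad += coef * (X[i] - X[j])
    return loss, grad

# Toy data standing in for item features and noisy relevance labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=20)

# Plain gradient descent on the surrogate risk.
w = np.zeros(5)
for _ in range(200):
    loss, grad = pairwise_logistic_loss_grad(w, X, y)
    w -= 0.01 * grad

print("final surrogate loss:", loss)
```

The paper's negative results concern exactly this kind of pipeline: minimizing a convex surrogate of the pairwise disagreement loss need not recover the optimal ranking unless the surrogate satisfies the consistency conditions the paper establishes.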