Wednesday, 2:00–3:30 PM
Chair: Ravi Kumar

Classification-Enhanced Ranking

Paul N. Bennett, Krysta Svore, Susan Dumais

Many have speculated that classifying web pages can improve a search engine’s ranking of results relevant to a query. Intuitively, results should be more relevant when they match the class of a query. We present a simple framework for classification-enhanced ranking that uses clicks in combination with the classification of web pages to derive a class distribution for the query. We then define a variety of features that capture the match between the class distributions of a web page and a query, the ambiguity of a query, and the coverage of a retrieved result relative to a query’s set of classes. Experimental results demonstrate that a ranker learned with these features significantly improves ranking over a competitive baseline. Furthermore, our methodology is agnostic with respect to the classification space and is therefore general enough to serve as a mechanism for deriving query classes for a variety of orthogonal taxonomies.
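
As a rough illustration of the pipeline described in the abstract, the sketch below derives a query class distribution from the class distributions of clicked pages and then computes simple match, ambiguity, and coverage style features for a retrieved page. The class names, click-log format, and specific feature formulas are assumptions made for this example; the paper’s exact feature definitions may differ.

```python
# Illustrative sketch only: derive a query class distribution from clicks and
# compute match / ambiguity / coverage style features for a candidate page.
from collections import defaultdict
import math

def query_class_distribution(clicked_pages):
    """clicked_pages: list of (click_count, {class: probability}) pairs."""
    agg, total = defaultdict(float), 0.0
    for clicks, page_dist in clicked_pages:
        for cls, p in page_dist.items():
            agg[cls] += clicks * p
            total += clicks * p
    return {cls: v / total for cls, v in agg.items()} if total else {}

def class_match(query_dist, page_dist):
    """Dot-product match between the query and page class distributions."""
    return sum(p * page_dist.get(cls, 0.0) for cls, p in query_dist.items())

def query_ambiguity(query_dist):
    """Entropy of the query class distribution: higher means a more ambiguous query."""
    return -sum(p * math.log(p, 2) for p in query_dist.values() if p > 0)

def class_coverage(query_dist, page_dist, threshold=0.1):
    """Fraction of the query's prominent classes that are also prominent in the page."""
    q_classes = {c for c, p in query_dist.items() if p >= threshold}
    p_classes = {c for c, p in page_dist.items() if p >= threshold}
    return len(q_classes & p_classes) / len(q_classes) if q_classes else 0.0

# Example: a query whose clicks mostly land on "Sports" pages.
clicks = [(12, {"Sports": 0.9, "News": 0.1}), (3, {"News": 0.7, "Sports": 0.3})]
q_dist = query_class_distribution(clicks)
page = {"Sports": 0.8, "Shopping": 0.2}
features = [class_match(q_dist, page), query_ambiguity(q_dist), class_coverage(q_dist, page)]
```

Features of this kind would then be added to the standard feature set of a learned ranker.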

Ranking Specialization for Web Search: A Divide-and-Conquer Approach by Using Topical RankSVM

Jiang Bian, Xin Li, Fan Li, Zhaohui Zheng, Hongyuan Zha

Many ranking algorithms applying machine learning techniques have been proposed for information retrieval and Web search. However, most of the existing approaches do not explicitly take into account the fact that queries vary significantly in terms of ranking characteristics and call for different treatment by the ranking models. In this paper, we apply a divide-and-conquer framework for ranking specialization, i.e., learning multiple ranking models by addressing query difference. We first generate query representations by aggregating ranking features through pseudo feedback, and employ unsupervised clustering methods to identify a set of ranking-sensitive query topics based on training queries. To learn multiple ranking models for the respective ranking-sensitive query topics, we define a global loss function by combining the ranking risks of all query topics, and we propose a unified SVM-based learning process to minimize the global loss. Moreover, we employ an ensemble approach to generate the ranking result for each test query by applying the set of ranking models of the most appropriate query topics. We conduct experiments using a benchmark dataset for learning ranking functions as well as a dataset from a commercial search engine. Experimental results show that our proposed approach can significantly improve ranking performance over existing single-model approaches as well as straightforward local ranking approaches, and that the automatically identified ranking-sensitive topics are more useful for enhancing ranking performance than pre-defined query categorizations.
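
One way to picture the divide-and-conquer approach: represent each training query by aggregating ranking features over its pseudo-relevant documents, cluster those representations into ranking-sensitive topics, learn one ranking model per topic, and blend the models of the nearest topics at test time. The sketch below is a simplified stand-in under assumed data layouts, using scikit-learn’s KMeans plus a pairwise LinearSVC in place of the paper’s unified RankSVM formulation, and training each topic’s model independently rather than minimizing a single global loss.

```python
# Simplified sketch of ranking specialization: topic discovery + per-topic rankers.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def query_representation(doc_features, top_k=10):
    """Aggregate ranking features of the top pseudo-relevant documents (rows of a 2-D array)."""
    return doc_features[:top_k].mean(axis=0)

def pairwise_examples(doc_features, labels):
    """Turn graded relevance labels into pairwise preference examples (RankSVM style)."""
    X, y = [], []
    for i in range(len(labels)):
        for j in range(len(labels)):
            if labels[i] > labels[j]:
                X.append(doc_features[i] - doc_features[j]); y.append(1)
                X.append(doc_features[j] - doc_features[i]); y.append(-1)
    return np.array(X), np.array(y)

def train_topical_rankers(queries, n_topics=5):
    """queries: list of (doc_features, labels) per training query.
    Assumes every topic receives at least one query with preference pairs."""
    reps = np.array([query_representation(f) for f, _ in queries])
    topics = KMeans(n_clusters=n_topics, n_init=10).fit(reps)
    rankers = []
    for t in range(n_topics):
        Xs, ys = [], []
        for (f, labels), assign in zip(queries, topics.labels_):
            if assign == t:
                X, y = pairwise_examples(f, labels)
                if len(y):
                    Xs.append(X); ys.append(y)
        rankers.append(LinearSVC().fit(np.vstack(Xs), np.concatenate(ys)))
    return topics, rankers

def ensemble_scores(doc_features, topics, rankers):
    """Score a test query's documents with a blend of per-topic rankers."""
    rep = query_representation(doc_features).reshape(1, -1)
    dist = topics.transform(rep).ravel()           # distance to each topic centroid
    weights = np.exp(-dist) / np.exp(-dist).sum()  # closer topics get larger weights
    return sum(w * r.decision_function(doc_features) for w, r in zip(weights, rankers))
```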

Generalized Distances between Rankings

Ravi Kumar, Sergei Vassilvitskii

Spearman’s footrule and Kendall’s tau are two well-established distances between rankings. However, they fail to take into account two concepts crucial to evaluating a result set in information retrieval: element relevance and positional information. That is, changing the rank of a highly relevant document should incur a higher penalty than changing the rank of an irrelevant document; similar logic holds for the top positions of the result list versus the bottom of the order. In this work, we extend both of these metrics with position and element weights, and show that a variant of the Diaconis-Graham inequality still holds: the two weighted measures remain within a constant factor of each other for all permutations. We then generalize the element weights to a distance metric between elements. For example, in search evaluation, swapping the order of two nearly duplicate results should incur little penalty, even if both are highly relevant and appear at the top of the list. We extend the rank similarity metrics to this more general case, and show that they again remain within a constant factor of each other. We conclude by conducting simple experiments on web search data with the proposed measures. Our experiments show that the weighted generalizations are more robust and consistent with each other than their unweighted counterparts.
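
For intuition, here is a simplified element-weighted variant of both distances; the paper’s generalized definitions, which also incorporate position weights and element similarities, differ in the details.

```python
# Simplified illustration of element-weighted rank distances (one natural weighting,
# not the paper's exact generalized definitions).

def weighted_footrule(rank_a, rank_b, weight):
    """Sum over elements of weight(e) * |position in rank_a - position in rank_b|."""
    pos_a = {e: i for i, e in enumerate(rank_a)}
    pos_b = {e: i for i, e in enumerate(rank_b)}
    return sum(weight[e] * abs(pos_a[e] - pos_b[e]) for e in pos_a)

def weighted_kendall(rank_a, rank_b, weight):
    """Sum of weight(x) * weight(y) over pairs ordered differently by the two rankings."""
    pos_b = {e: i for i, e in enumerate(rank_b)}
    total = 0.0
    for i in range(len(rank_a)):
        for j in range(i + 1, len(rank_a)):
            x, y = rank_a[i], rank_a[j]
            if pos_b[x] > pos_b[y]:  # the pair (x, y) is inverted in rank_b
                total += weight[x] * weight[y]
    return total

# Swapping two heavily weighted (relevant) results costs more than swapping two light ones.
w = {"d1": 3.0, "d2": 3.0, "d3": 0.5, "d4": 0.5}
print(weighted_kendall(["d1", "d2", "d3", "d4"], ["d2", "d1", "d3", "d4"], w))  # 9.0
print(weighted_kendall(["d1", "d2", "d3", "d4"], ["d1", "d2", "d4", "d3"], w))  # 0.25
```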
