Thursday, 1:30–3:00 PM
Chair: Panayiotis Tsaparas

Money, Glory and Cheap Talk: Analyzing Strategic Behavior of Contestants in Simultaneous Crowdsourcing Contests on TopCoder.com

Nikolay Archak

Crowdsourcing is a new Web phenomenon, in which a firm takes a function once performed in-house and outsources it to a crowd, usually in the form of an open contest. Designing efficient crowdsourcing mechanisms is not possible without a deep understanding of the incentives and strategic choices of all participants. This paper presents an empirical analysis of the determinants of individual performance in multiple simultaneous crowdsourcing contests using a unique dataset for the world’s largest competitive software development portal: TopCoder.com. Special attention is given to studying the effects of the reputation system currently used by TopCoder.com on the behavior of contestants. We find that individual-specific traits, together with the project payment and the number of project requirements, are significant predictors of final project quality. Furthermore, we find significant evidence of strategic behavior by contestants. High-rated contestants face tougher competition from their opponents in the competition phase of the contest. In order to soften the competition, they move first in the registration phase of the contest, signing up early for particular projects. Although registration in TopCoder contests is non-binding, it deters entry of opponents into the same contest; our lower-bound estimate shows that this strategy generates a significant surplus gain for high-rated contestants. We conjecture that the reputation + cheap talk mechanism employed by TopCoder has a positive effect on the allocative efficiency of the simultaneous all-pay contests and should be considered for adoption in other crowdsourcing platforms.

Factorizing Personalized Markov Chains for Next-Basket Recommendation

Steffen Rendle, Christoph Freudenthaler, Lars Schmidt-Thieme

Recommender systems are an important component of many websites. Two of the most popular approaches are based on matrix factorization (MF) and Markov chains (MC). MF methods learn the general taste of a user by factorizing the matrix over observed user-item preferences. On the other hand, MC methods model sequential behavior by learning a transition graph over items that is used to predict the next action based on the recent actions of a user. In this paper, we present a method bringing both approaches together. Our method is based on personalized transition graphs over underlying Markov chains. That means a separate transition matrix is learned for each user—thus in total the method uses a transition cube. As the observations for estimating the transitions are usually very limited, our method factorizes the transition cube with a pairwise interaction model, which is a special case of the Tucker Decomposition. We show that our factorized personalized MC (FPMC) model subsumes both a common Markov chain and the normal matrix factorization model. For learning the model parameters, we introduce an adaptation of the Bayesian Personalized Ranking (BPR) framework for sequential basket data. Empirically, we show that our FPMC model outperforms both the common matrix factorization and the unpersonalized MC model, each learned with and without factorization.
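The pairwise-interaction scoring that the abstract describes can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the embedding names (`V_UI`, `V_IU`, `V_IL`, `V_LI`), dimensions, and random initialization are all assumptions, and the user/last-item term is dropped on the common observation that it cancels under BPR-style ranking.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 5, 8, 4

# Pairwise-interaction factors (a special case of the Tucker Decomposition):
# user->item, item->user, item->last-item, last-item->item embeddings.
# All names and sizes here are illustrative assumptions.
V_UI = rng.normal(scale=0.1, size=(n_users, k))
V_IU = rng.normal(scale=0.1, size=(n_items, k))
V_IL = rng.normal(scale=0.1, size=(n_items, k))
V_LI = rng.normal(scale=0.1, size=(n_items, k))

def fpmc_scores(u, prev_basket):
    """Score every item i for user u given the previous basket B:
    x(u, i | B) = <V_UI[u], V_IU[i]> + mean_{l in B} <V_IL[i], V_LI[l]>.
    The first term is the MF-style general taste; the second is the
    factorized Markov-chain transition effect."""
    mf_part = V_IU @ V_UI[u]                         # general taste of user u
    mc_part = V_IL @ V_LI[prev_basket].mean(axis=0)  # sequential effect
    return mf_part + mc_part

scores = fpmc_scores(u=2, prev_basket=[1, 5])
top_item = int(np.argmax(scores))  # next-basket recommendation for user 2
```

Because each user contributes only a few baskets, the factors are shared across the transition cube, which is what lets the model estimate per-user transitions from sparse data.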

A Contextual Bandit Approach to Personalized News Article Recommendation

Lihong Li, Wei Chu, John Langford, Robert Schapire

Personalized web services strive to adapt their services (advertisements, news articles, etc.) to individual users by making use of both content and user information. Despite a few recent advances, this problem remains challenging for at least two reasons. First, web services feature dynamically changing pools of content, rendering traditional collaborative filtering methods inapplicable. Second, the scale of most web services of practical interest calls for solutions that are both fast in learning and computationally cheap. In this work, we model the personalized recommendation of news articles as a contextual bandit problem, a principled approach in which a learning algorithm sequentially selects articles to serve users based on contextual information about the user and articles, while simultaneously adapting its article-selection strategy based on user-click feedback. The contributions of this work are three-fold. First, we propose a novel, general contextual bandit algorithm that is computationally efficient and well motivated from information theory. Second, we argue that any bandit algorithm can be reliably evaluated offline using previously recorded random traffic. Finally, using this offline evaluation method, we successfully applied our new algorithm to a commercial product dataset containing over 33 million events, resulting in a 12.5% click lift compared to a standard context-free bandit algorithm, an advantage that grows as data becomes scarcer.
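The offline evaluation idea from recorded random traffic can be sketched as a replay loop: because the logging policy chose arms uniformly at random, an event counts toward the candidate policy's estimate only when the policy would have chosen the arm that was actually shown. This is a minimal sketch under that assumption; the function names and event format are illustrative, not the paper's code.

```python
def replay_evaluate(policy, logged_events):
    """Offline replay evaluation of a bandit policy.

    logged_events: iterable of (context, logged_arm, reward) tuples,
    assumed to be collected under uniformly random arm selection.
    An event is used only when `policy` picks the same arm the logging
    policy displayed; matched events also extend the policy's history,
    so the policy can keep learning during replay.
    Returns the estimated per-impression reward (e.g. CTR).
    """
    matched, total_reward = 0, 0.0
    history = []  # events the candidate policy has "experienced"
    for context, logged_arm, reward in logged_events:
        if policy(context, history) == logged_arm:
            matched += 1
            total_reward += reward
            history.append((context, logged_arm, reward))
    return total_reward / matched if matched else 0.0

# Usage with a trivial (hypothetical) policy that always serves arm 0:
events = [(None, 0, 1), (None, 1, 0), (None, 0, 0)]
ctr = replay_evaluate(lambda ctx, hist: 0, events)  # matches 2 events, 1 click
```

The appeal of this estimator is that it needs no simulator: any bandit algorithm, contextual or not, can be compared on the same logged random traffic.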

