Incentivized bandit learning with self-reinforcing user preferences
In this paper, we investigate a new multi-armed bandit (MAB) online learning model that captures two real-world phenomena in many recommender systems: (i) the learning agent cannot pull the arms by itself and thus has to offer payments to users to incentivize arm-pulling indirectly; and (ii) if users with specific arm preferences are well rewarded, they induce a “self-reinforcing” effect in the sense that they attract more users with similar arm preferences. Beyond the classical exploration-exploitation tradeoff, a key feature of this new MAB model is the need to balance reward and incentive payment. The goal of the agent is to minimize the cumulative regret over a fixed time horizon T with a low total payment. Our contributions in this paper are twofold: (i) We propose a new MAB model with random arm selection that captures the relationship between users’ self-reinforcing preferences and incentives; and (ii) We leverage the properties of a multi-color Pólya urn model with nonlinear feedback to propose two MAB policies, termed “AtLeast-n Explore-Then-Commit” and “UCB-List.” We prove that both policies achieve O(log T) expected regret with O(log T) expected payment over the time horizon T. We conduct numerical simulations to verify the performance of these two policies and study their robustness under various settings.
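To make the self-reinforcing arm-selection dynamic concrete, the following is a minimal Python sketch of a multi-color Pólya urn with nonlinear feedback. The feedback exponent `gamma` and the function names are illustrative assumptions, not the paper's exact formulation: each arriving user's preferred arm is drawn with probability proportional to f(n_i) = n_i^gamma, and serving that user adds a "ball" of the chosen color, so well-rewarded arms attract more users with similar preferences.

```python
import numpy as np

rng = np.random.default_rng(0)

def polya_urn_draw(counts, gamma=1.2):
    """Sample an arm index with probability proportional to f(n_i) = n_i**gamma.

    gamma > 1 gives super-linear (winner-take-all) feedback; gamma < 1
    gives sub-linear feedback that keeps arm popularity more balanced.
    """
    weights = counts.astype(float) ** gamma
    return rng.choice(len(counts), p=weights / weights.sum())

# Toy dynamic: each arriving user's preference follows the urn; serving an
# arm adds a ball of that color, so popular arms attract more similar users.
K, T = 3, 1000
counts = np.ones(K)            # one initial ball per color
for t in range(T):
    arm = polya_urn_draw(counts)
    counts[arm] += 1

print(counts / counts.sum())   # with gamma > 1, mass concentrates on one arm
```

Running this with gamma > 1 typically concentrates almost all arrivals on a single arm, which illustrates why an incentivizing policy must seed early pulls of the optimal arm before the urn dynamic locks in on a suboptimal one.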