A General Framework to Analyze Stochastic Linear Bandit

arXiv preprint arXiv:2002.05152

In this paper, we study the well-known stochastic linear bandit problem, in which a decision-maker sequentially chooses among a set of given actions in $\mathbb{R}^d$, observes a noisy linear reward, and aims to maximize her cumulative expected reward over a horizon of length $T$. We first introduce a general family of algorithms for the problem and prove that they achieve the best-known performance guarantees (i.e., they are rate optimal). Our second contribution is to show that several well-known algorithms for the problem, such as optimism in the face of uncertainty linear bandit (OFUL), Thompson sampling (TS), and OLS Bandit (a variant of $\epsilon$-greedy), are special cases of our family of algorithms. We therefore obtain a unified proof of rate optimality for all of these algorithms, in both Bayesian and frequentist settings. Our unified technique also yields a number of new results, such as poly-logarithmic (in $T$) regret bounds for OFUL and TS under a generalized gap assumption and a margin condition, as in Goldenshluger and Zeevi (2013). A key component of our analysis is a new notion of uncertainty complexity, which directly captures the complexity of uncertainty in the action sets and which we show is connected to the regret of any policy.
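To make the setting concrete, the interaction protocol can be sketched as a minimal simulation with a uniform-random placeholder policy; the Gaussian noise model, the fixed action set, and all names below are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, T = 3, 5, 1000                   # dimension, number of actions, horizon
theta = rng.normal(size=d)             # unknown reward parameter (simulated here)
actions = rng.normal(size=(K, d))      # given action set in R^d
best_mean = (actions @ theta).max()    # expected reward of the best fixed action

regret = 0.0
for t in range(T):
    x = actions[rng.integers(K)]       # placeholder policy: choose uniformly at random
    reward = x @ theta + rng.normal()  # observe noisy linear reward
    regret += best_mean - x @ theta    # accumulate expected (pseudo-)regret
```

A bandit algorithm replaces the uniform-random choice with a data-driven rule and aims to keep `regret` sublinear in $T$.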

Our third, and most important, contribution, from both theoretical and practical points of view, is the introduction of a new rate-optimal algorithm called Sieved-Greedy (SG), which combines insights from uncertainty complexity with a new (and general) notion of optimism in expectation. Specifically, SG works by filtering out the actions with relatively low uncertainty and then choosing greedily among the remaining actions. Our empirical simulations show that SG significantly outperforms existing benchmarks by combining the best attributes of the greedy and OFUL algorithms.
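A rough illustration of the sieve-then-greedy idea follows. This is not the paper's exact algorithm: the ridge estimate, the confidence-width uncertainty measure $\|x\|_{V^{-1}}$, and the threshold parameter `kappa` are all assumptions made for this sketch:

```python
import numpy as np

def sieved_greedy_step(X, V, theta_hat, beta, kappa=0.5):
    """One round of a Sieved-Greedy-style rule (illustrative sketch).

    X         : (K, d) array of candidate action vectors.
    V         : (d, d) regularized Gram matrix, e.g. sum_s x_s x_s^T + lambda*I.
    theta_hat : (d,) current estimate of the unknown parameter.
    beta      : confidence-width scale (hypothetical choice).
    kappa     : sieve threshold in (0, 1] (hypothetical parameter).
    """
    V_inv = np.linalg.inv(V)
    # Uncertainty of each action: its confidence width beta * ||x||_{V^{-1}}.
    widths = beta * np.sqrt(np.einsum('kd,de,ke->k', X, V_inv, X))
    # Sieve: discard actions whose uncertainty is relatively low, i.e. keep
    # only those within a factor kappa of the largest uncertainty.
    candidates = np.flatnonzero(widths >= kappa * widths.max())
    # Greedy choice among the surviving actions.
    return candidates[np.argmax(X[candidates] @ theta_hat)]
```

With a small `kappa` the sieve keeps almost everything and the rule behaves like pure greedy; with `kappa` near 1 only the most uncertain actions survive, which pushes the rule toward exploration.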