Mostly Exploration-Free Algorithms for Contextual Bandits

Bibliographic Details
Published in: arXiv.org (Apr 19, 2020), p. n/a
Main Author: Bastani, Hamsa
Other Authors: Bayati, Mohsen; Khosravi, Khashayar
Published:
Cornell University Library, arXiv.org
Online Access: Citation/Abstract
Full text outside of ProQuest
Description
Abstract: The contextual bandit literature has traditionally focused on algorithms that address the exploration-exploitation tradeoff. In particular, greedy algorithms that exploit current estimates without any exploration may be sub-optimal in general. However, exploration-free greedy algorithms are desirable in practical settings where exploration may be costly or unethical (e.g., clinical trials). Surprisingly, we find that a simple greedy algorithm can be rate optimal (achieves asymptotically optimal regret) if there is sufficient randomness in the observed contexts (covariates). We prove that this is always the case for a two-armed bandit under a general class of context distributions that satisfy a condition we term covariate diversity. Furthermore, even absent this condition, we show that a greedy algorithm can be rate optimal with positive probability. Thus, standard bandit algorithms may unnecessarily explore. Motivated by these results, we introduce Greedy-First, a new algorithm that uses only observed contexts and rewards to determine whether to follow a greedy algorithm or to explore. We prove that this algorithm is rate optimal without any additional assumptions on the context distribution or the number of arms. Extensive simulations demonstrate that Greedy-First successfully reduces exploration and outperforms existing (exploration-based) contextual bandit algorithms such as Thompson sampling or upper confidence bound (UCB).
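To illustrate the setting the abstract describes, here is a minimal sketch (not the paper's implementation) of an exploration-free greedy algorithm on a two-armed linear contextual bandit. All dimensions, noise levels, and the use of ridge-regularized least-squares estimates are illustrative assumptions; the point is that with diverse (here, Gaussian) contexts, the purely greedy arm choice can still learn both arms' parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-armed linear bandit: reward of arm k is x @ beta[k] + noise.
d, T = 3, 2000
beta = [rng.normal(size=d), rng.normal(size=d)]  # unknown true arm parameters

# Per-arm sufficient statistics for ridge-regularized least squares.
A = [np.eye(d) for _ in range(2)]    # X^T X + I
b = [np.zeros(d) for _ in range(2)]  # X^T y

regret = 0.0
for t in range(T):
    x = rng.normal(size=d)  # diverse contexts supply "free" exploration
    est = [np.linalg.solve(A[k], b[k]) for k in range(2)]
    k = int(np.argmax([x @ est[j] for j in range(2)]))  # greedy: no exploration bonus
    reward = x @ beta[k] + 0.1 * rng.normal()
    A[k] += np.outer(x, x)
    b[k] += reward * x
    regret += max(x @ beta[0], x @ beta[1]) - x @ beta[k]

print(regret / T)  # average per-round regret
```

Under covariate diversity the contexts themselves randomize which arm looks best early on, so both arms' estimates keep improving without any deliberate exploration; in runs like this the average per-round regret shrinks as T grows.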
ISSN:2331-8422
Source: Engineering Database