Mostly Exploration-Free Algorithms for Contextual Bandits

Bibliographic Details
Published in: arXiv.org (Apr 19, 2020)
Main Author: Bastani, Hamsa
Other Authors: Bayati, Mohsen, Khosravi, Khashayar
Published: Cornell University Library, arXiv.org
Description
Abstract: The contextual bandit literature has traditionally focused on algorithms that address the exploration-exploitation tradeoff. In particular, greedy algorithms that exploit current estimates without any exploration may be sub-optimal in general. However, exploration-free greedy algorithms are desirable in practical settings where exploration may be costly or unethical (e.g., clinical trials). Surprisingly, we find that a simple greedy algorithm can be rate optimal (achieves asymptotically optimal regret) if there is sufficient randomness in the observed contexts (covariates). We prove that this is always the case for a two-armed bandit under a general class of context distributions that satisfy a condition we term covariate diversity. Furthermore, even absent this condition, we show that a greedy algorithm can be rate optimal with positive probability. Thus, standard bandit algorithms may unnecessarily explore. Motivated by these results, we introduce Greedy-First, a new algorithm that uses only observed contexts and rewards to determine whether to follow a greedy algorithm or to explore. We prove that this algorithm is rate optimal without any additional assumptions on the context distribution or the number of arms. Extensive simulations demonstrate that Greedy-First successfully reduces exploration and outperforms existing (exploration-based) contextual bandit algorithms such as Thompson sampling or upper confidence bound (UCB).
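To make the abstract's central object concrete, the following is a minimal sketch (not the authors' exact algorithm) of an exploration-free greedy linear contextual bandit: each arm keeps a ridge-regression estimate of its reward model, and every round the arm with the highest predicted reward for the observed context is pulled, with no added exploration. The function name, the simulated two-arm environment, and all parameters below are illustrative assumptions.

```python
import numpy as np

def greedy_linear_bandit(contexts, reward_fn, n_arms, d, ridge=1.0):
    """Exploration-free greedy bandit (illustrative sketch).

    Each arm k keeps a ridge-regression estimate beta_hat[k]; every
    round we pull the arm whose estimate predicts the highest reward
    for the current context -- pure exploitation, no exploration.
    """
    A = [ridge * np.eye(d) for _ in range(n_arms)]  # per-arm Gram matrices
    b = [np.zeros(d) for _ in range(n_arms)]        # per-arm response vectors
    arms_pulled = []
    for x in contexts:
        # Current ridge estimates for every arm.
        beta_hat = [np.linalg.solve(A[k], b[k]) for k in range(n_arms)]
        # Greedy choice: highest predicted reward under current estimates.
        k = int(np.argmax([x @ beta_hat[j] for j in range(n_arms)]))
        r = reward_fn(k, x)
        # Standard least-squares update for the pulled arm only.
        A[k] += np.outer(x, x)
        b[k] += r * x
        arms_pulled.append(k)
    return arms_pulled

# Illustrative two-armed environment with diverse (standard normal)
# contexts, loosely in the spirit of the covariate-diversity condition.
rng = np.random.default_rng(0)
true_beta = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
contexts = rng.normal(size=(500, 2))
reward_fn = lambda k, x: x @ true_beta[k] + 0.1 * rng.normal()
arms = greedy_linear_bandit(contexts, reward_fn, n_arms=2, d=2)
```

Note how diverse contexts drive the learning here: even though the policy never explores on purpose, contexts pointing in different directions make each arm's greedy region non-trivial, so every arm keeps accumulating data.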
ISSN: 2331-8422
Source: Engineering Database