Mostly Exploration-Free Algorithms for Contextual Bandits

Bibliographic details
Published in: arXiv.org (Apr 19, 2020), p. n/a
Main author: Bastani, Hamsa
Other authors: Bayati, Mohsen; Khosravi, Khashayar
Publisher: Cornell University Library, arXiv.org
Subjects: Medical research, Algorithms, Exploration, Computer simulation, Greedy algorithms
Online access: Citation/Abstract
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 2071664005
003 UK-CbPIL
022 |a 2331-8422 
035 |a 2071664005 
045 0 |b d20200419 
100 1 |a Bastani, Hamsa 
245 1 |a Mostly Exploration-Free Algorithms for Contextual Bandits 
260 |b Cornell University Library, arXiv.org  |c Apr 19, 2020 
513 |a Working Paper 
520 3 |a The contextual bandit literature has traditionally focused on algorithms that address the exploration-exploitation tradeoff. In particular, greedy algorithms that exploit current estimates without any exploration may be sub-optimal in general. However, exploration-free greedy algorithms are desirable in practical settings where exploration may be costly or unethical (e.g., clinical trials). Surprisingly, we find that a simple greedy algorithm can be rate optimal (achieves asymptotically optimal regret) if there is sufficient randomness in the observed contexts (covariates). We prove that this is always the case for a two-armed bandit under a general class of context distributions that satisfy a condition we term covariate diversity. Furthermore, even absent this condition, we show that a greedy algorithm can be rate optimal with positive probability. Thus, standard bandit algorithms may unnecessarily explore. Motivated by these results, we introduce Greedy-First, a new algorithm that uses only observed contexts and rewards to determine whether to follow a greedy algorithm or to explore. We prove that this algorithm is rate optimal without any additional assumptions on the context distribution or the number of arms. Extensive simulations demonstrate that Greedy-First successfully reduces exploration and outperforms existing (exploration-based) contextual bandit algorithms such as Thompson sampling or upper confidence bound (UCB). 
653 |a Medical research 
653 |a Algorithms 
653 |a Exploration 
653 |a Computer simulation 
653 |a Greedy algorithms 
700 1 |a Bayati, Mohsen 
700 1 |a Khosravi, Khashayar 
773 0 |t arXiv.org  |g (Apr 19, 2020), p. n/a 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/2071664005/abstract/embedded/ZKJTFFSVAI7CB62C?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u http://arxiv.org/abs/1704.09011
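
The abstract above describes a greedy (exploration-free) policy for linear contextual bandits: at each round the learner observes a context, pulls the arm whose current reward estimate is highest, and updates that arm's estimate from the observed reward. As a rough sketch only, assuming linear rewards and per-arm ridge-regression estimates (the class name, method names, and ridge parameter below are illustrative and not taken from the paper):

# Minimal sketch of a greedy (exploration-free) linear contextual bandit.
# Assumes rewards are linear in the context, r = x . beta_k + noise, for each
# arm k; all names and the ridge parameter are illustrative assumptions.
import numpy as np

class GreedyLinearBandit:
    def __init__(self, n_arms, dim, ridge=1.0):
        self.A = [ridge * np.eye(dim) for _ in range(n_arms)]   # X^T X + ridge * I, per arm
        self.b = [np.zeros(dim) for _ in range(n_arms)]         # X^T r, per arm
        self.beta = [np.zeros(dim) for _ in range(n_arms)]      # current coefficient estimates

    def choose(self, x):
        # Pure exploitation: pick the arm with the highest estimated reward.
        estimates = [x @ beta for beta in self.beta]
        return int(np.argmax(estimates))

    def update(self, arm, x, reward):
        # Update only the pulled arm's ridge-regression estimate.
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x
        self.beta[arm] = np.linalg.solve(self.A[arm], self.b[arm])

# Example round: observe a context, act greedily, update with the observed reward.
bandit = GreedyLinearBandit(n_arms=2, dim=3)
x_t = np.random.randn(3)
arm = bandit.choose(x_t)
bandit.update(arm, x_t, reward=1.0)

The paper's Greedy-First algorithm additionally uses the observed contexts and rewards to decide whether to keep following such a greedy policy or to fall back to an exploration-based method such as Thompson sampling or UCB; that decision rule is not sketched here.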