Mostly Exploration-Free Algorithms for Contextual Bandits

Bibliographic Details
Published in: arXiv.org (Apr 19, 2020)
Main Author: Bastani, Hamsa
Other Authors: Bayati, Mohsen; Khosravi, Khashayar
Published by: Cornell University Library, arXiv.org
Online Access: Citation/Abstract; full text outside of ProQuest
Description
Abstract: The contextual bandit literature has traditionally focused on algorithms that address the exploration-exploitation tradeoff. In particular, greedy algorithms that exploit current estimates without any exploration may be sub-optimal in general. However, exploration-free greedy algorithms are desirable in practical settings where exploration may be costly or unethical (e.g., clinical trials). Surprisingly, we find that a simple greedy algorithm can be rate optimal (achieves asymptotically optimal regret) if there is sufficient randomness in the observed contexts (covariates). We prove that this is always the case for a two-armed bandit under a general class of context distributions that satisfy a condition we term covariate diversity. Furthermore, even absent this condition, we show that a greedy algorithm can be rate optimal with positive probability. Thus, standard bandit algorithms may unnecessarily explore. Motivated by these results, we introduce Greedy-First, a new algorithm that uses only observed contexts and rewards to determine whether to follow a greedy algorithm or to explore. We prove that this algorithm is rate optimal without any additional assumptions on the context distribution or the number of arms. Extensive simulations demonstrate that Greedy-First successfully reduces exploration and outperforms existing (exploration-based) contextual bandit algorithms such as Thompson sampling or upper confidence bound (UCB).
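The exploration-free greedy policy discussed in the abstract can be illustrated with a minimal sketch: a linear contextual bandit where each arm's reward parameters are estimated by regularized least squares, and at every round the arm with the highest predicted reward is pulled with no added exploration. All names here (`greedy_bandit`, `reward_fn`, the regularization choice) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np


def greedy_bandit(contexts, reward_fn, n_arms, d, seed=0):
    """Purely greedy linear contextual bandit (illustrative sketch).

    At each round, estimate each arm's parameter vector by
    ridge-regularized least squares and pull the arm whose estimate
    predicts the highest reward -- no exploration bonus of any kind.
    """
    rng = np.random.default_rng(seed)
    # Per-arm sufficient statistics: A_k = I + sum x x^T, b_k = sum r x
    A = [np.eye(d) for _ in range(n_arms)]
    b = [np.zeros(d) for _ in range(n_arms)]
    total_reward = 0.0
    for x in contexts:
        # Current least-squares estimates theta_k = A_k^{-1} b_k
        theta_hat = [np.linalg.solve(A[k], b[k]) for k in range(n_arms)]
        # Greedy choice: exploit the current estimates, never explore
        arm = int(np.argmax([x @ th for th in theta_hat]))
        r = reward_fn(arm, x, rng)
        A[arm] += np.outer(x, x)
        b[arm] += r * x
        total_reward += r
    return total_reward
```

Drawing contexts from a distribution with full support around the origin (e.g., a standard Gaussian) mimics the covariate-diversity condition under which the paper shows greedy can be rate optimal: diverse contexts generate free "natural exploration," so every arm's estimate keeps improving even though the policy only exploits.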
ISSN: 2331-8422
Source: Engineering Database