
Robust Control of the Multi-Armed Bandit Problem

21 Pages. Posted: 17 Sep 2013. Last revised: 15 Jul 2014.

Felipe Caro

University of California, Los Angeles - Anderson School of Management

Aparupa Das Gupta

University of California, Los Angeles (UCLA) - Decisions, Operations, and Technology Management (DOTM) Area

Date Written: July 1, 2014

Abstract

We study a robust model of the multi-armed bandit (MAB) problem in which the transition probabilities are ambiguous and belong to subsets of the probability simplex. We characterize the optimal policy as a project-by-project retirement policy, but we show that the arms become dependent, so the Gittins index is not optimal. We propose a Lagrangian index policy that is computationally equivalent to evaluating the indices of a non-robust MAB. For a project selection problem we find that it performs near-optimally.
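The abstract notes that the proposed Lagrangian index policy is computationally equivalent to evaluating the indices of a non-robust MAB. As background, the sketch below shows the classical (non-robust) Gittins index computed via the retirement formulation: the index of a state is the smallest retirement reward M at which stopping is optimal there. This is a minimal illustration of the standard calibration technique, not the paper's robust algorithm; the function name, the bisection tolerances, and the example chain are all illustrative choices.

```python
import numpy as np

def gittins_index(r, P, beta, s, tol=1e-6):
    """Gittins index of state s for a single arm with reward vector r,
    transition matrix P, and discount beta, via the retirement formulation:
    bisect on the retirement reward M and test whether continuing from s
    still beats retiring."""
    n = len(r)
    # One step then retiring gives r[s] + beta*M, so the index is at least
    # r[s]/(1-beta); running forever in the best state bounds it above.
    lo, hi = r[s] / (1 - beta), max(r) / (1 - beta)
    while hi - lo > tol:
        M = 0.5 * (lo + hi)
        # Value iteration for the M-retirement problem: V = max(M, r + beta*P V)
        V = np.full(n, M)
        for _ in range(5000):
            V_new = np.maximum(M, r + beta * (P @ V))
            if np.max(np.abs(V_new - V)) < 1e-10:
                break
            V = V_new
        # If continuing from s strictly beats retiring, the index exceeds M
        if r[s] + beta * (P[s] @ V) > M + 1e-9:
            lo = M
        else:
            hi = M
    return 0.5 * (lo + hi)
```

For example, an absorbing state with per-period reward 1 under discount 0.9 has index 1/(1-0.9) = 10, while an absorbing zero-reward state has index 0.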

Suggested Citation

Caro, Felipe and Das Gupta, Aparupa, Robust Control of the Multi-Armed Bandit Problem (July 1, 2014). Available at SSRN: https://ssrn.com/abstract=2326583 or http://dx.doi.org/10.2139/ssrn.2326583

Felipe Caro

University of California, Los Angeles - Anderson School of Management (email)

110 Westwood Plaza
Los Angeles, CA 90095-1481
United States

Aparupa Das Gupta (Contact Author)

University of California, Los Angeles (UCLA) - Decisions, Operations, and Technology Management (DOTM) Area (email)

Los Angeles, CA
United States

Paper statistics

Downloads: 77 · Rank: 264,960 · Abstract Views: 694