An Approximation Approach for Response Adaptive Clinical Trial Design
54 Pages · Posted: 1 Aug 2018 · Last revised: 15 Jun 2019
Date Written: June 13, 2019
Multi-armed bandit (MAB) problems, typically modeled as Markov decision processes (MDPs), exemplify the learning-versus-earning tradeoff. An area that has motivated theoretical research in MAB designs is the study of clinical trials, where the application of such designs has the potential to significantly improve patient outcomes. However, for many practical problems of interest, the state space is intractably large, rendering exact approaches to solving MDPs impractical. In particular, settings that require multiple simultaneous allocations lead to an expanded state and action-outcome space, necessitating the use of approximation approaches. We propose a novel approximation approach that combines the strengths of multiple methods: grid-based state discretization, value function approximation, and techniques for a computationally efficient implementation. The hallmark of our approach is an accurate approximation of the value function that combines linear interpolation with bounds on the interpolated value, together with the addition of a learning component to the objective function. Computational analysis on relevant datasets shows that our approach outperforms existing heuristics (e.g., the greedy and upper confidence bound families of algorithms) as well as a popular Lagrangian-based approximation method; we find that the average regret improves by up to 58.3% (95% CI = 58.1%–58.4%). A retrospective implementation on a recently conducted phase 3 clinical trial shows that our design could have reduced the number of failures by 17.03% (95% CI = 17.02%–17.05%) relative to the randomized control design used in that trial. Our proposed approach makes it practically feasible for trial administrators and regulators to implement Bayesian response-adaptive designs in large clinical trials, with potentially significant gains.
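The core idea of the value function approximation described above — evaluating a value function stored on a discretized state grid via linear interpolation, then constraining the result with known bounds — can be sketched as follows. This is a minimal one-dimensional illustration, not the paper's implementation; the function name, the grid, and the bound values are all hypothetical.

```python
import numpy as np

def interpolate_value(grid, values, x, lower, upper):
    """Illustrative sketch: linearly interpolate a value function stored
    on a 1-D state grid, then clamp the interpolated value to [lower, upper]
    (e.g., a myopic lower bound and a perfect-information upper bound).
    All names and values here are hypothetical, not from the paper."""
    v = np.interp(x, grid, values)          # linear interpolation on the grid
    return float(np.clip(v, lower, upper))  # enforce the bounds

# Hypothetical grid of states and their approximate values:
grid = [0.0, 0.5, 1.0]
values = [0.0, 0.8, 1.0]

print(interpolate_value(grid, values, 0.25, 0.0, 1.0))   # interpolated: 0.4
print(interpolate_value(grid, values, 0.25, 0.0, 0.35))  # clamped to upper bound: 0.35
```

In higher dimensions (e.g., posterior success/failure counts across several arms), the same idea applies with multilinear interpolation over a multidimensional grid; the bounds keep the approximation from drifting outside what the value function can attain.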
Keywords: Adaptive Clinical Trials, Markov Decision Process, Grid-Based Approximation, Adaptive Sampling, Approximate Dynamic Programming