Learning Hidden Action Principal-Agent Models
52 Pages Posted: 6 Aug 2019
Date Written: August 2, 2019
Principal-agent models are essential to the analysis of incentive contracts. Despite their prevalence, little attention has been directed toward estimating agent models from historical contracts, even though such estimates can be valuable for assessing counterfactuals or designing incentives in practice. In this paper, we propose an estimator for a general class of principal-agent models in which agent actions are hidden -- a salient feature of many agency problems. We show that the estimator is statistically consistent but that computing it is NP-hard. While the estimator can be expressed exactly as an integer program, the resulting formulation scales poorly in the size of the data because its linear programming relaxations are weak. We present an approximate estimator that bypasses this deficiency, show it to be asymptotically optimal, and characterize its approximation error under additional conditions. To solve the approximate formulation, we propose a statistical column generation algorithm that uses non-parametric hypothesis tests to identify variables to introduce into the model. We show that our solution algorithm preserves statistical consistency, and present a bound on the expected iteration count as a function of the Type I error rate. Numerical results show that the algorithm dramatically reduces the computational time required to obtain competitive parameter estimates. Lastly, we evaluate the predictive performance of the estimator using empirical data related to a class of widely implemented Medicare contracts.
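The abstract's statistical column generation idea -- admitting a candidate variable only when a non-parametric hypothesis test flags its sampled reduced cost as significantly negative -- can be illustrated with a minimal sketch. Everything here is hypothetical and not the paper's actual algorithm: the candidate columns, the `sample_reduced_cost` oracle, and the use of a one-sided sign test are stand-ins chosen for self-containment.

```python
import math
import random

def sign_test_pvalue(samples, threshold=0.0):
    """One-sided sign test: p-value for rejecting H0 (median >= threshold)
    in favor of H1 (median < threshold), using the binomial tail."""
    n = len(samples)
    k = sum(1 for s in samples if s < threshold)
    # Under H0's boundary, each sample falls below the threshold w.p. 1/2.
    return sum(math.comb(n, i) for i in range(k, n + 1)) / 2 ** n

def statistical_column_generation(candidates, sample_reduced_cost,
                                  alpha=0.05, n_samples=50, seed=0):
    """Hypothetical sketch: introduce a candidate column into the model
    only when the test says its reduced cost is significantly negative
    at level `alpha` (the Type I error rate mentioned in the abstract)."""
    rng = random.Random(seed)
    chosen = []
    for col in candidates:
        draws = [sample_reduced_cost(col, rng) for _ in range(n_samples)]
        if sign_test_pvalue(draws) <= alpha:
            chosen.append(col)  # statistically promising column: add it
    return chosen

# Toy usage: one column with genuinely negative reduced cost, one without.
def noisy_reduced_cost(col, rng):
    mean = -1.0 if col == "improving" else 1.0
    return rng.gauss(mean, 0.5)

selected = statistical_column_generation(["improving", "neutral"],
                                         noisy_reduced_cost)
```

The hypothesis test acts as a noise-aware pricing rule: instead of adding any column whose single sampled reduced cost is negative, it demands statistical evidence, which controls how often spurious columns enter the model.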
Keywords: principal-agent problems, estimation, inverse optimization, integer programming