A Divide and Conquer Algorithm for Exploiting Policy Function Monotonicity

37 Pages · Posted: 29 Jan 2015 · Last revised: 6 Jul 2017

Grey Gordon

Federal Reserve Banks - Federal Reserve Bank of Richmond

Shi Qiu

Indiana University Bloomington, Department of Economics, Students

There are 2 versions of this paper

Date Written: January 27, 2015

Abstract

A divide-and-conquer algorithm for exploiting policy function monotonicity is proposed and analyzed. To compute a discrete problem with n states and n choices, the algorithm requires at most 5n log2(n) function evaluations and so is O(n log2 n). In contrast, existing methods for non-concave problems require n^2 evaluations in the worst case and so are O(n^2). The algorithm holds great promise for discrete choice models, where non-concavities naturally arise. In one such example, the sovereign default model of Arellano (2008), the algorithm is six times faster than the best existing method when n = 100 and 50 times faster when n = 1000. Moreover, if concavity is assumed, the algorithm combined with the method of Heer and Maußner (2005) requires fewer than 18n evaluations and so is O(n).
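For readers who want to see the idea concretely, below is a minimal sketch (not the authors' code) of a divide-and-conquer grid search that exploits policy monotonicity, written in Python. It assumes a generic objective W(i, j) for state index i and choice index j whose argmax is nondecreasing in i; all function and variable names are illustrative only.

```python
import numpy as np

def dac_monotone_argmax(W, n):
    """Divide-and-conquer grid search exploiting policy monotonicity.

    W(i, j) -> float is the objective at state i and choice j (0-indexed,
    both on grids of size n). Assumes g(i) = argmax_j W(i, j) is
    nondecreasing in i. Returns arrays of optimal choices and values.
    """
    g = np.empty(n, dtype=int)   # optimal choice index for each state
    v = np.empty(n)              # optimal value for each state

    def solve(i, lo, hi):
        # Brute-force search for state i, but only over choices in [lo, hi].
        best_j, best_w = lo, W(i, lo)
        for j in range(lo + 1, hi + 1):
            w = W(i, j)
            if w > best_w:
                best_j, best_w = j, w
        g[i], v[i] = best_j, best_w

    def recurse(i_lo, i_hi):
        # Invariant: g[i_lo] and g[i_hi] are already known, so the midpoint's
        # optimal choice must lie in [g[i_lo], g[i_hi]] by monotonicity.
        if i_hi - i_lo <= 1:
            return
        m = (i_lo + i_hi) // 2
        solve(m, g[i_lo], g[i_hi])
        recurse(i_lo, m)
        recurse(m, i_hi)

    solve(0, 0, n - 1)             # endpoints use the full choice grid
    if n > 1:
        solve(n - 1, g[0], n - 1)
    recurse(0, n - 1)
    return g, v
```

In a value function iteration, W(i, j) would typically be the period return from state i under choice j plus the discounted continuation value at j, and the routine above would be called once per iteration; because each midpoint only searches between the optimal choices at the bracketing states, the total work grows roughly as n log2(n) rather than n^2.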

Keywords: Grid search, monotone policies, value function iteration

JEL Classification: C61, C63, C88

Suggested Citation

Gordon, Grey and Qiu, Shi, A Divide and Conquer Algorithm for Exploiting Policy Function Monotonicity (January 27, 2015). Available at SSRN: https://ssrn.com/abstract=2556345 or http://dx.doi.org/10.2139/ssrn.2556345

Grey Gordon (Contact Author)

Federal Reserve Banks - Federal Reserve Bank of Richmond

P.O. Box 27622
Richmond, VA 23261
United States

Shi Qiu

Indiana University Bloomington, Department of Economics, Students

Bloomington, IN
United States

Paper statistics: 81 Downloads · 608 Abstract Views · Rank: 182,078