Deep Reinforced Learning Enables Solving Discrete-Choice Life Cycle Models to Analyze Social Security Reforms

11 Pages · Posted: 11 Nov 2020 · Publication Status: Under Review

Abstract

Discrete-choice life cycle models can be used to estimate, for example, how social security reforms change the employment rate: solving for an individual's optimal employment choices over the life course yields an estimate of a reform's effect on employment. Life cycle models have mostly been solved with dynamic programming, which is not feasible when the state space is large, as is often the case in a realistic life cycle model. Solving such models requires approximate methods, such as reinforcement learning algorithms. We compare how well a deep reinforcement learning algorithm, ACKTR, and dynamic programming solve a relatively simple life cycle model. We find that average utility is almost the same under both algorithms, although the details of the best policies they find differ to a degree. In the baseline model, which represents the current Finnish social security scheme, reinforcement learning yields essentially as good results as dynamic programming. We then analyze a straightforward social security reform and find that the employment changes implied by the reform are almost the same under both methods. Our results suggest that reinforcement learning algorithms are of significant value in analyzing complex life cycle models.
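To make the dynamic-programming baseline concrete, the following is a minimal sketch of backward induction for a toy discrete-choice life cycle model. All names and parameter values (wage, benefit level, disutility of work, discount factor) are illustrative assumptions for exposition, not the paper's actual model of the Finnish social security scheme.

```python
import math

# Illustrative parameters (assumptions, not the paper's calibration)
T = 40                  # number of working-life periods
wage = 1.0              # net income when employed
benefit = 0.4           # net income when not employed (social security)
work_disutility = 0.15  # per-period utility cost of working
beta = 0.96             # discount factor

def utility(consumption, working):
    """Per-period utility: log consumption minus disutility of work."""
    return math.log(consumption) - (work_disutility if working else 0.0)

def solve_backward_induction():
    """Solve the discrete choice (work vs. not work) by backward induction.

    V[t] is the value of entering period t; policy[t] is 1 if working
    is optimal in period t, else 0.
    """
    V = [0.0] * (T + 1)   # terminal value V[T] = 0
    policy = [0] * T
    for t in reversed(range(T)):
        v_work = utility(wage, True) + beta * V[t + 1]
        v_idle = utility(benefit, False) + beta * V[t + 1]
        if v_work >= v_idle:
            policy[t], V[t] = 1, v_work
        else:
            policy[t], V[t] = 0, v_idle
    return V, policy
```

With these illustrative parameters working dominates in every period, since the wage premium over the benefit outweighs the disutility of work; a reform that changes `benefit` or taxation shifts the policy, which is the comparative exercise the paper carries out at much larger state-space scale.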

Suggested Citation

Tanskanen, Antti J., Deep Reinforced Learning Enables Solving Discrete-Choice Life Cycle Models to Analyze Social Security Reforms. SSHO-D-20-00807, Available at SSRN: https://ssrn.com/abstract=3727986 or http://dx.doi.org/10.2139/ssrn.3727986

Antti J. Tanskanen (Contact Author)

Confederation of Finnish Industries ( email )

P.O. Box 30
Helsinki, FI-00131
Finland
