Kernel Regularized Least Squares: Moving Beyond Linearity and Additivity Without Sacrificing Interpretability
Massachusetts Institute of Technology (MIT) - Department of Political Science
Massachusetts Institute of Technology (MIT)
March 26, 2013
MIT Political Science Department Research Paper No. 2012-8
We propose the use of Kernel Regularized Least Squares (KRLS) for social science modeling and inference problems. KRLS borrows from machine learning methods designed to solve regression and classification problems without relying on linearity or additivity assumptions. The method constructs a flexible hypothesis space that uses kernels as radial basis functions and finds the best-fitting surface in this space by minimizing a complexity-penalized least squares problem. We argue that the method is well-suited for social science inquiry because it avoids strong parametric assumptions, yet allows interpretation in ways analogous to generalized linear models while also permitting more complex interpretation to examine non-linearities and heterogeneous effects. We also extend the method in several directions to make it more effective for social inquiry, by (1) deriving estimators for the pointwise marginal effects and their variances, (2) establishing unbiasedness, consistency, and asymptotic normality of the KRLS estimator under fairly general conditions, (3) proposing and justifying a simple automated rule for choosing the kernel bandwidth, and (4) providing companion software. We illustrate the use of the method through several simulations and a real-data example.
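The abstract describes the core of KRLS: fit the target as a weighted sum of Gaussian (radial basis) kernels centered at the observed points, choosing the weights by a complexity-penalized least squares problem, and then read off pointwise marginal effects by differentiating the fitted surface. A minimal sketch of that idea follows; it is an illustration, not the authors' companion software, and the bandwidth default `sigma2 = d` (the number of input dimensions) is an assumption standing in for the paper's automated bandwidth rule:

```python
import numpy as np

def gaussian_kernel(X, Z, sigma2):
    """Gaussian kernel matrix k(x, z) = exp(-||x - z||^2 / sigma2)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma2)

def krls_fit(X, y, lam=0.1, sigma2=None):
    """Solve min_c (y - Kc)'(y - Kc) + lam * c'Kc, i.e. c = (K + lam*I)^{-1} y."""
    n, d = X.shape
    if sigma2 is None:
        sigma2 = d  # placeholder for the paper's automated bandwidth choice
    K = gaussian_kernel(X, X, sigma2)
    c = np.linalg.solve(K + lam * np.eye(n), y)
    return {"X": X, "c": c, "sigma2": sigma2}

def krls_predict(model, Xnew):
    """Fitted surface: yhat(x) = sum_i c_i k(x, x_i)."""
    K = gaussian_kernel(Xnew, model["X"], model["sigma2"])
    return K @ model["c"]

def krls_pwmfx(model, Xnew):
    """Pointwise marginal effects: d yhat / d x_d at each row of Xnew,
    from the closed-form derivative of the Gaussian kernel."""
    X, c, s2 = model["X"], model["c"], model["sigma2"]
    K = gaussian_kernel(Xnew, X, s2)          # (m, n)
    diff = Xnew[:, None, :] - X[None, :, :]   # (m, n, d)
    return (-2.0 / s2) * np.einsum("mn,mnd,n->md", K, diff, c)
```

Averaging `krls_pwmfx` over the sample gives a quantity readable like a GLM coefficient, while its pointwise variation reveals the non-linearities and heterogeneous effects the abstract mentions.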
Number of Pages in PDF File: 38
Keywords: regression, classification, machine learning, prediction
JEL Classification: C14, C21
Date posted: April 25, 2012; Last revised: March 27, 2013