Interpreting Linear Beta Coefficients Alongside Feature Importances in Machine Learning
9 Pages Posted: 29 Mar 2021
Date Written: March 1, 2021
Abstract
Machine-learning regression models lack the interpretability of their conventional linear counterparts. Tree- and forest-based models offer feature importances: a vector of nonnegative weights, summing to one, that indicates the relative impact of each predictive variable on the model's output. This brief note describes how to interpret the beta coefficients of the corresponding linear model so that they can be compared directly with feature importances in machine learning.
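As a minimal sketch of the kind of comparison the note describes — the specific rescaling used here (normalizing the absolute values of standardized betas so they sum to one, mirroring the unit-sum property of feature importances) is an illustrative assumption, not necessarily the paper's exact method:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

# Synthetic data for illustration only.
X, y = make_regression(n_samples=500, n_features=4, noise=0.1, random_state=0)

# Standardize predictors and response so the betas are scale-free
# (i.e., standardized "beta coefficients").
Xs = StandardScaler().fit_transform(X)
ys = (y - y.mean()) / y.std()

betas = LinearRegression().fit(Xs, ys).coef_
# Assumed rescaling: absolute standardized betas normalized to sum to one.
beta_shares = np.abs(betas) / np.abs(betas).sum()

# Random-forest feature importances are nonnegative and sum to one.
importances = RandomForestRegressor(random_state=0).fit(X, y).feature_importances_

print(beta_shares)
print(importances)
```

Both vectors are then on the same footing — nonnegative, unit-sum measures of each predictor's relative weight — and can be placed side by side.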
Keywords: machine learning, feature importances, linear regression, beta coefficients
JEL Classification: C18, C33
Suggested Citation:
Chen, James Ming, Interpreting Linear Beta Coefficients Alongside Feature Importances in Machine Learning (March 1, 2021). Available at SSRN: https://ssrn.com/abstract=3795099 or http://dx.doi.org/10.2139/ssrn.3795099