Debiasing Treatment Effect Estimation for Privacy-Protected Data: A Model Auditing and Calibration Approach

67 Pages
Posted: 20 Sep 2023
Last revised: 25 Sep 2023

Ta-Wei Huang

Harvard Business School

Eva Ascarza

Harvard Business School

Date Written: September 18, 2023

Abstract

Data-driven targeted interventions have become a powerful tool for organizations to optimize business outcomes by utilizing individual-level data from experiments. A key element of this process is the estimation of Conditional Average Treatment Effects (CATE), which enables organizations to effectively identify differences in customer sensitivities to interventions. However, with the growing importance of data privacy, organizations are increasingly adopting Local Differential Privacy (LDP) – a privacy-preserving method that injects calibrated noise into individual records during the data collection process. Despite its privacy-protection benefits, we show that LDP can significantly compromise the predictive accuracy of CATE models and introduce biases, thereby undermining the effectiveness of targeted interventions. To overcome this challenge, we introduce a Model Auditing and Calibration approach that improves CATE predictions while preserving privacy protections. Built on recent advancements in cross-fitting, gradient boosting, and multi-calibration, our method improves model accuracy by iteratively correcting errors in the CATE predictions without the need for data denoising. As a result, we can improve CATE predictions while maintaining the same level of privacy protection. Furthermore, we develop a novel local learning with global optimization approach to mitigate the bias introduced by LDP noise and overfitting during the error correction process. Our methodology, validated with simulation analyses and two real-world marketing experiments, demonstrates superior predictive accuracy and targeting performance compared to existing methods and alternative benchmarks. Our approach empowers organizations to deliver more precise targeted interventions while complying with privacy regulations and concerns.
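To make the two ingredients described above concrete, the following is a minimal, self-contained sketch, not the authors' estimator. It assumes a Laplace mechanism with epsilon = 1 for the LDP perturbation, a simple T-learner for the initial CATE fit, and a single gradient-boosting correction round on a held-out fold (cross-fitting) using an inverse-propensity pseudo-outcome with a known experimental propensity of 0.5. All names (t_learner, cate_corrected, and so on) are hypothetical.

```python
# Illustrative sketch only: LDP noise injection via the Laplace mechanism,
# then one round of boosting-style error correction on held-out residuals.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 3))
T = rng.integers(0, 2, size=n)                    # randomized treatment
tau = 1.0 + X[:, 0]                               # true heterogeneous effect
y = X.sum(axis=1) + tau * T + rng.normal(size=n)  # outcome

# LDP step: each record is perturbed before collection. Laplace noise
# with scale sensitivity/epsilon (epsilon = 1, sensitivity taken as 1).
epsilon = 1.0
X_ldp = X + rng.laplace(scale=1.0 / epsilon, size=X.shape)

# Cross-fitting: fit the CATE model on one fold, audit it on the other.
idx_fit, idx_aud = train_test_split(np.arange(n), test_size=0.5,
                                    random_state=0)

def t_learner(X_tr, T_tr, y_tr):
    """Simple T-learner: separate outcome models per treatment arm."""
    m1 = GradientBoostingRegressor().fit(X_tr[T_tr == 1], y_tr[T_tr == 1])
    m0 = GradientBoostingRegressor().fit(X_tr[T_tr == 0], y_tr[T_tr == 0])
    return lambda X_new: m1.predict(X_new) - m0.predict(X_new)

cate = t_learner(X_ldp[idx_fit], T[idx_fit], y[idx_fit])

# Audit step: on the held-out fold, form a pseudo-outcome whose
# expectation equals the CATE (IPW transform, known propensity 0.5),
# then fit a corrector to the residuals, as in gradient boosting.
Xa, Ta, ya = X_ldp[idx_aud], T[idx_aud], y[idx_aud]
pseudo = ya * (Ta - 0.5) / 0.25                   # unbiased CATE signal
residual = pseudo - cate(Xa)
corrector = GradientBoostingRegressor(max_depth=2).fit(Xa, residual)

def cate_corrected(X_new):
    return cate(X_new) + corrector.predict(X_new)

print("MSE before correction:", np.mean((cate(X_ldp) - tau) ** 2))
print("MSE after correction: ", np.mean((cate_corrected(X_ldp) - tau) ** 2))
```

Note that the paper's actual method goes further than this sketch: it applies multi-calibration rather than a single residual fit, and adds a local learning with global optimization step to control the bias and overfitting that LDP noise introduces into the correction process.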

Keywords: targeted intervention, conditional average treatment effect estimation, differential privacy, model calibration, gradient boosting

JEL Classification: C51, C52, C53, C93, M31

Suggested Citation

Huang, Ta-Wei and Ascarza, Eva, Debiasing Treatment Effect Estimation for Privacy-Protected Data: A Model Auditing and Calibration Approach (September 18, 2023). Available at SSRN: https://ssrn.com/abstract=4575240 or http://dx.doi.org/10.2139/ssrn.4575240

Ta-Wei Huang (Contact Author)

Harvard Business School

Boston, MA 02163
United States

Eva Ascarza

Harvard Business School

Soldiers Field
Boston, MA 02163
United States

HOME PAGE: http://evaascarza.com

Paper statistics

Downloads: 31
Abstract Views: 92