Stochastic Algorithmic Differentiation of (Expectations of) Discontinuous Functions (Indicator Functions)
38 Pages. Posted: 19 Nov 2018. Last revised: 1 Feb 2021.
Date Written: November 14, 2018
In this paper, we present a method for the accurate estimation of the derivative (i.e., sensitivity) of expectations of functions involving an indicator function. The method modifies a (stochastic) algorithmic differentiation by replacing the derivative of the indicator function with a suitable operator.
We show that this operator can be split into a conditional expectation operator and a density. This allows different or improved numerical approximation methods, e.g., regression, to be used for each of the two operators.
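The decomposition can be illustrated on a toy model. The sketch below (an assumed example, not the paper's implementation) takes X(θ) = θ + σZ and estimates d/dθ E[1{X(θ) > K}] as the product of a density estimate at the discontinuity and a conditional expectation of the pathwise derivative near it; the local average stands in for the regression mentioned above, and the bandwidth h is a tuning parameter chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy model: X(theta) = theta + sigma * Z, Z ~ N(0,1).
# Target sensitivity: d/dtheta E[ 1_{X(theta) > K} ] = density of X at K.
theta, sigma, K = 0.0, 1.0, 0.5
n = 200_000

Z = rng.standard_normal(n)
X = theta + sigma * Z
dX_dtheta = np.ones(n)              # pathwise derivative dX/dtheta = 1

# Decomposition: sensitivity = density(X = K) * E[ dX/dtheta | X = K ].
# Both factors are estimated separately on the same simulated paths.
h = 0.1                             # bandwidth (illustrative choice)
near = np.abs(X - K) < h            # paths in a band around the discontinuity

density_at_K = near.mean() / (2 * h)     # simple histogram density estimate
cond_exp = dX_dtheta[near].mean()        # local average as a stand-in for regression

sensitivity = density_at_K * cond_exp

# Analytic reference for this toy model: phi((K - theta)/sigma) / sigma
ref = np.exp(-0.5 * ((K - theta) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
print(sensitivity, ref)
```

Because the two factors are estimated independently, each can be replaced by a better estimator (e.g., a proper kernel density estimate, or a regression of dX/dθ on X) without touching the other.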
The method improves on the approach presented in [Risk Magazine, April 2018], [Quantitative Finance, Vol. 19, No. 6, 2019], and [Journal of Computational Finance, Vol. 22, No. 4, 2019].
The finite difference approximation of a partial derivative of a Monte-Carlo integral of a discontinuous function is known to exhibit a large Monte-Carlo error. The issue is evident, since the Monte-Carlo approximation of a discontinuous function is just a finite sum of discontinuous functions and as such is not even differentiable.
The algorithmic differentiation of a discontinuous function is likewise problematic. A natural approach is to replace the discontinuous function by a continuous approximation. This is equivalent to replacing the path-wise automatic differentiation with a (local) finite difference approximation.
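The equivalence of smoothing and a local finite difference can be seen directly: smoothing the indicator 1{x > K} with a ramp of width 2ε gives a function whose derivative is the kernel 1{|x − K| < ε}/(2ε), i.e., a central finite difference applied path-wise. A minimal sketch, assuming the same toy model X(θ) = θ + Z with Z standard normal:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy model: P(theta) = E[ 1_{theta + Z > K} ], Z ~ N(0,1).
# Pathwise AD of the raw indicator yields 0 almost surely, so the
# sensitivity estimator collapses. Replacing the indicator by a ramp of
# width 2*eps makes it differentiable; its derivative is the finite
# difference kernel 1_{|x - K| < eps} / (2*eps), applied path by path.
theta, K, eps = 0.0, 0.5, 0.1
n = 200_000

Z = rng.standard_normal(n)
X = theta + Z

raw_pathwise = np.zeros(n)                             # d/dx 1_{x > K} = 0 a.e.
smoothed_pathwise = (np.abs(X - K) < eps) / (2 * eps)  # ramp derivative

print(raw_pathwise.mean())       # 0.0 -- the broken estimator
print(smoothed_pathwise.mean())  # close to the density phi(K - theta)
```

The smoothed estimator is consistent but carries the usual bias/variance trade-off in ε, which motivates the operator decomposition above instead of plain smoothing.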
The decoupling of the integration of the Dirac delta from the remaining conditional expectation, as introduced here, results in a variance reduction. The method can be implemented by a local modification of the algorithmic differentiation.
Keywords: Algorithmic Differentiation, Automatic Differentiation, Adjoint Automatic Differentiation, Monte Carlo Simulation, Indicator Function, Object Oriented Implementation, Variance Reduction
JEL Classification: C15, G13, C63