Adjoints and Automatic (Algorithmic) Differentiation in Computational Finance
25 Pages. Posted: 2 May 2011. Last revised: 12 Sep 2011
Date Written: September 12, 2011
Abstract
Two of the most important areas in computational finance, the computation of Greeks and model calibration, rely on the efficient and accurate computation of a large number of sensitivities. This paper gives an overview of adjoint and automatic differentiation (AD), also known as algorithmic differentiation, techniques for calculating these sensitivities. Compared with finite-difference approximation, this approach can reduce the computational cost by several orders of magnitude, with sensitivities accurate up to machine precision. Adjoint AD (AAD) can be applied in conjunction with any analytical or numerical pricing method (finite differences, Monte Carlo, etc.), preserving the numerical properties of the original method. Examples and a literature survey are included.
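To illustrate the idea behind the adjoint (reverse-mode) techniques surveyed in the paper, the following is a minimal, self-contained sketch, not the paper's own code: a tape-based `Var` class with just enough operations to price a Black-Scholes call, so that one backward sweep yields several Greeks (delta, vega, rho) at once, matching the analytic values to machine precision. All names (`Var`, `backward`, `bs_call`) are illustrative assumptions.

```python
import math

class Var:
    """Node in the computational graph; records parents and local partials."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # tuples of (parent node, d(self)/d(parent))
        self.adjoint = 0.0

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))
    __radd__ = __add__

    def __sub__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value - other.value, ((self, 1.0), (other, -1.0)))

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))
    __rmul__ = __mul__

    def __truediv__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value / other.value,
                   ((self, 1.0 / other.value),
                    (other, -self.value / other.value ** 2)))

    def __neg__(self):
        return Var(-self.value, ((self, -1.0),))

def v_exp(x):
    return Var(math.exp(x.value), ((x, math.exp(x.value)),))

def v_log(x):
    return Var(math.log(x.value), ((x, 1.0 / x.value),))

def v_sqrt(x):
    return Var(math.sqrt(x.value), ((x, 0.5 / math.sqrt(x.value)),))

def v_ncdf(x):
    """Standard normal CDF; its derivative is the normal pdf."""
    pdf = math.exp(-0.5 * x.value ** 2) / math.sqrt(2 * math.pi)
    return Var(0.5 * (1 + math.erf(x.value / math.sqrt(2))), ((x, pdf),))

def backward(output):
    """One reverse sweep accumulates dOutput/dInput into every .adjoint."""
    order, seen = [], set()
    def visit(node):               # depth-first topological ordering
        if id(node) not in seen:
            seen.add(id(node))
            for parent, _ in node.parents:
                visit(parent)
            order.append(node)
    visit(output)
    output.adjoint = 1.0
    for node in reversed(order):
        for parent, local in node.parents:
            parent.adjoint += node.adjoint * local

def bs_call(S, K, r, sigma, T):
    """Black-Scholes call price built from Var operations."""
    sqT = v_sqrt(T)
    d1 = (v_log(S / K) + (r + sigma * sigma * 0.5) * T) / (sigma * sqT)
    d2 = d1 - sigma * sqT
    return S * v_ncdf(d1) - K * v_exp(-r * T) * v_ncdf(d2)

S, K, r, sigma, T = Var(100.0), Var(100.0), Var(0.05), Var(0.2), Var(1.0)
price = bs_call(S, K, r, sigma, T)
backward(price)   # all sensitivities from a single adjoint pass
print(f"price={price.value:.6f} delta={S.adjoint:.6f} "
      f"vega={sigma.adjoint:.6f} rho={r.adjoint:.6f}")
```

Note how the cost of the backward sweep is a small constant multiple of one price evaluation, independent of the number of inputs, whereas central finite differences would need two repricings per sensitivity; this is the efficiency gap the abstract refers to.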
Keywords: Adjoint, Automatic Differentiation, Algorithmic Differentiation, Monte Carlo, Greeks, Calibration, computational efficiency
JEL Classification: C15, C61, C63, G12, G13