Thomas Norman Dam
In today’s competitive environment, speed, efficiency and accuracy are key to the success of an organisation. Faster calibration of rough volatility models would allow quicker risk calculation, pricing, and decision-making. Here, Thomas Norman Dam, Senior Analyst at Danske Bank, shows us how to do that using neural networks.
In this article, I will describe how neural networks can be used to quickly calibrate rough volatility models. This is highly desirable, since it would allow financial institutions to use better models for pricing and for calculating risk. An example of such a model is the rough Bergomi model introduced by Christian Bayer, Peter Friz and Jim Gatheral. In this model, the price process (St) has the following evolution under the risk-free measure

dSt = St √(vt) dBt,
vt = ξ0(t) exp( η √(2α+1) ∫₀ᵗ (t−s)^α dZs − (η²/2) t^(2α+1) ),

where (Bs) and (Zs) are standard Brownian motions with correlation ρ, α ∈ (−1/2, 0) governs the roughness of the volatility, and ξ0(·) is the initial forward variance curve. This model is remarkably consistent with financial time series, but all calculations need to be done via Monte Carlo simulation because (vs) is non-Markovian. The speed and accuracy of these Monte Carlo simulations have been vastly improved over the last two years, but model calibration via Monte Carlo methods is still too slow for practical use.
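To make the role of Monte Carlo concrete, here is a minimal NumPy sketch that simulates terminal prices in the rough Bergomi model with a simple left-point discretisation of the Volterra integral. The function name and parameter values are my own illustrative choices; a serious implementation would use a more accurate discretisation such as the hybrid scheme of Bennedsen, Lunde and Pakkanen.

```python
import numpy as np

def simulate_rough_bergomi(xi0=0.04, alpha=-0.43, eta=1.9, rho=-0.9,
                           T=1.0, n_steps=100, n_paths=20_000, seed=42):
    """Simulate S_T in the rough Bergomi model (S_0 = 1) with a crude
    left-point discretisation of the Volterra integral."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    t_grid = np.arange(n_steps + 1) * dt

    # Correlated Brownian increments: dB = rho*dZ + sqrt(1-rho^2)*dZ_perp
    dZ = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    dB = rho * dZ + np.sqrt(1.0 - rho**2) * rng.normal(0.0, np.sqrt(dt),
                                                       size=(n_paths, n_steps))

    # Volterra process sqrt(2*alpha+1) * int_0^t (t-s)^alpha dZ_s,
    # approximated by a left-point Riemann sum (biased near the kernel
    # singularity, hence the case for the hybrid scheme).
    W = np.zeros((n_paths, n_steps + 1))
    c = np.sqrt(2.0 * alpha + 1.0)
    for i in range(1, n_steps + 1):
        kernel = c * (((i - np.arange(i)) * dt) ** alpha)  # (t_i - t_j)^alpha
        W[:, i] = dZ[:, :i] @ kernel

    # Variance process, then a log-Euler step for the price
    v = xi0 * np.exp(eta * W - 0.5 * eta**2 * t_grid ** (2.0 * alpha + 1.0))
    log_S = np.cumsum(np.sqrt(v[:, :-1]) * dB - 0.5 * v[:, :-1] * dt, axis=1)
    return np.exp(log_S[:, -1])

# Example: Monte Carlo price of a one-year at-the-money call
S_T = simulate_rough_bergomi()
print("ATM call price:", np.maximum(S_T - 1.0, 0.0).mean())
```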
In a recent paper by Christian Bayer and Benjamin Stemper, the authors calibrate the rough Bergomi model to a volatility surface in just 36 milliseconds on a home PC. To quickly explain the setup, we let μ = (α,η,ρ) be the vector of free parameters, M be the moneyness of a call option, and T be the expiry of a call option. The procedure goes in three steps:
1. Simulate the model by Monte Carlo across a large grid of inputs (μ, M, T) and record the resulting implied volatilities.
2. Train a neural network on these samples to approximate the pricing map (μ, M, T) ↦ implied volatility.
3. Calibrate μ to the observed market surface by solving a least-squares problem, with the trained network standing in for the Monte Carlo pricer.
Note that the first two steps can be done in advance, while the third step is quite fast thanks to adjoint differentiation.
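As a minimal sketch of the second and third steps, the following PyTorch snippet trains a small fully connected network on pre-computed samples and then calibrates μ by gradient descent through the network. The architecture, hyper-parameters and starting point are my own illustrative choices, not those of Bayer and Stemper, and I substitute plain gradient descent for the paper’s least-squares solver; the key point is that gradients with respect to μ come from backpropagation.

```python
import torch
import torch.nn as nn

# Step 2: a small network approximating (alpha, eta, rho, M, T) -> implied vol.
# The architecture is illustrative, not the one used by Bayer & Stemper.
net = nn.Sequential(
    nn.Linear(5, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

def train(net, X, y, epochs=2000, lr=1e-3):
    """X: (n, 5) rows of (alpha, eta, rho, M, T); y: (n, 1) implied vols
    generated offline by Monte Carlo (step 1)."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(net(X), y).backward()
        opt.step()

def calibrate(net, grid, market_vols, steps=500, lr=1e-2):
    """Step 3: fit mu = (alpha, eta, rho) to an observed vol surface.
    grid: (k, 2) rows of (M, T); market_vols: (k, 1). Gradients w.r.t. mu
    come from backpropagation through the trained network, which is why
    this step is so fast."""
    mu = torch.tensor([-0.4, 1.5, -0.7], requires_grad=True)  # starting guess
    opt = torch.optim.Adam([mu], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        X = torch.cat([mu.expand(grid.shape[0], 3), grid], dim=1)
        ((net(X) - market_vols) ** 2).mean().backward()
        opt.step()
    return mu.detach()
```

The training inputs X and targets y would come from the offline Monte Carlo step, for instance by inverting Black-Scholes on prices produced by a simulator like the one sketched above.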
While the above procedure works with some success, it is worth mentioning that a lot of work remains to be done. First and foremost, we need to find a better network architecture than the one presented by Bayer and Stemper, as the fit they obtain is far from perfect. For example, one could experiment with convolutional layers and different activation functions.
Another problem is that different choices of μ can give rise to more or less the same volatility surface, which raises the question of how μ should be chosen. Recent research shows that if ρ is not negative enough, then (St) has no second moment, which is problematic for the precision of Monte Carlo simulations. In fact, one must have ρ ≤ -√(1-1/m), otherwise the m’th moment will not exist. One solution is to use a Bayesian framework to identify a probable region for μ and then pick ρ as small as possible within this region.
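To put numbers on this bound (a quick check, assuming the condition as stated above):

```python
import numpy as np

# rho <= -sqrt(1 - 1/m): the least negative correlation under which
# the m'th moment of S_t still exists.
for m in (2, 3, 4):
    print(f"m = {m}: rho <= {-np.sqrt(1.0 - 1.0 / m):.3f}")
# m = 2: rho <= -0.707
# m = 3: rho <= -0.816
# m = 4: rho <= -0.866
```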
In conclusion, many problems remain open, but a solid foundation for using neural networks in the calibration of rough volatility models is already available. Making rough volatility models usable in practice will be a game changer for the industry, given their superior accuracy.