Big Data & Risk Management in Financial Markets (Part I)


I. Overview

We have seen how the interdisciplinary use of big data has affected many sectors. Examples include detecting contagion spreading (Culotta, 2010), predicting the success of music albums (Dhar and Chang, 2009), and forecasting presidential elections (Tumasjan et al., 2010).

In financial markets, sentiment analysis is probably the best-known application of machine learning techniques to big datasets (Bollen et al., 2011). In spite of all the hype, though, risk management is still an exception: new information and new technology stacks have not brought as many benefits to risk management as they have to trading, for instance.

Risk is indeed usually addressed from an operational perspective, from a customer-relationship angle, or specifically for fraud prevention and credit scoring. Applications strictly related to financial markets, however, are still not widespread, mainly because of the following problem: in theory, more information should entail a higher degree of accuracy, while in practice it (also) increases system complexity exponentially, making it very hard to identify and analyze in a timely fashion the unstructured data that might be extremely valuable in such fast-paced environments.

The likelihood (and the risk) of a systemic network failure is then multiplied by the increasing interconnection of markets. More and more data can help central institutions and regulators predict the symptoms of a future crisis in real time, and act early enough to prevent or at least weaken it.

The complexity introduced into the market has made common pricing techniques obsolete and slow to react, and it requires a more comprehensive and detailed pricing approach than the simple net discounted value of a derivative's legs. This is the reason why banks and financial institutions need (but struggle) to simulate a single portfolio hundreds of thousands of times, and why an accurate, fast forecast is considered a breakthrough achievement.

Two areas in particular have been disrupted by the increasing amount of data available: simulations and forecasting. I am going to discuss simulation in this post, after an introduction to (benchmark) traditional risk management tools, while I will talk about forecasting in the second part of the post.

II. Traditional Risk Management

Traditionally, there are a few techniques every risk manager has in their toolbox.

Value-at-Risk (VaR), for example, has been used for decades to assess market risk. In a nutshell, VaR is a statistical technique that measures the potential loss of a portfolio at a given confidence level and within a fixed time frame.

There are also extensions of this tool, known as coherent risk measure alternatives. The conditional VaR (CVaR), or expected shortfall, computes the expected return of the portfolio in the worst scenarios beyond a certain probability level. The entropic VaR (EVaR) (Ahmadi-Javid, 2011) instead represents an upper bound for both VaR and CVaR, and its dual representation is related to the concept of relative entropy.
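
For concreteness, the two main measures can be written as follows for a loss variable L and a confidence level α, say 99% (the notation is my own, and the CVaR identity holds for continuous loss distributions):

$$\mathrm{VaR}_{\alpha}(L) = \inf\{\ell \in \mathbb{R} : P(L \le \ell) \ge \alpha\}, \qquad \mathrm{CVaR}_{\alpha}(L) = \mathbb{E}\big[\,L \mid L \ge \mathrm{VaR}_{\alpha}(L)\,\big]$$

In words, VaR is the loss threshold exceeded only with probability 1 − α, while CVaR averages the losses beyond that threshold.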

The VaR is computed through:

  • Historical method;
  • Delta-Normal method;
  • Monte Carlo simulation.

The historical method simply sorts historical returns in ascending order and reads the loss at the desired percentile directly from that empirical distribution.
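
A minimal sketch of this method in Python (the function name and the simulated daily returns are mine, purely for illustration):

```python
import numpy as np

def historical_var(returns, confidence=0.99):
    """Historical VaR: sort past returns and read off the loss at the chosen percentile."""
    returns = np.asarray(returns)
    # The (1 - confidence) quantile of returns marks the cutoff of the worst outcomes;
    # VaR is reported as a positive loss figure.
    return -np.percentile(returns, 100 * (1 - confidence))

# Toy usage with simulated daily returns (illustrative numbers only)
rng = np.random.default_rng(0)
daily_returns = rng.normal(0.0005, 0.01, size=1000)
print(historical_var(daily_returns, confidence=0.99))
```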

The Delta-Normal technique computes the historical means, variances, and correlations of the risk factors, and finally obtains the portfolio risk by combining the linear exposures to those factors with their covariance matrix (Jorion, 2006).
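
A sketch of the calculation, assuming normally distributed returns and a hypothetical two-asset portfolio with made-up numbers:

```python
import numpy as np
from scipy.stats import norm

def delta_normal_var(weights, cov_matrix, portfolio_value, confidence=0.99, horizon_days=1):
    """Delta-Normal VaR: portfolio volatility from linear exposures and the covariance matrix."""
    weights = np.asarray(weights)
    port_sigma = np.sqrt(weights @ cov_matrix @ weights)  # portfolio standard deviation
    z = norm.ppf(confidence)                              # e.g. about 2.33 at 99%
    return portfolio_value * z * port_sigma * np.sqrt(horizon_days)

# Hypothetical two-asset portfolio (daily return covariances, illustrative only)
w = [0.6, 0.4]
cov = np.array([[0.00010, 0.00002],
                [0.00002, 0.00025]])
print(delta_normal_var(w, cov, portfolio_value=1_000_000))
```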

The last method, i.e., the Monte Carlo simulation, specifies a model for the stock price/return trajectories, then runs a multitude of simulated trials, and finally aggregates the results obtained. Monte Carlo is therefore a repeated-sampling algorithm that can be used to solve any problem that may be stated through a probabilistic lens.
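
A compact sketch of Monte Carlo VaR, assuming a geometric Brownian motion price model (the model choice and all parameter values are mine, for illustration only):

```python
import numpy as np

def monte_carlo_var(s0, mu, sigma, horizon_days=10, n_paths=100_000, confidence=0.99, seed=42):
    """Monte Carlo VaR under an assumed geometric Brownian motion for the underlying."""
    rng = np.random.default_rng(seed)
    dt = horizon_days / 252
    # Simulate terminal prices in one step: S_T = S_0 * exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z)
    z = rng.standard_normal(n_paths)
    s_t = s0 * np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
    pnl = s_t - s0
    # VaR is the loss at the (1 - confidence) percentile of the simulated P&L distribution
    return -np.percentile(pnl, 100 * (1 - confidence))

print(monte_carlo_var(s0=100.0, mu=0.05, sigma=0.2))
```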

Another set of tools traditionally employed is the XVA approach. Counterparty credit risk is nowadays more difficult to assess, and it is essential to use new approaches such as the X-Value Adjustment (XVA), a framework that includes the credit valuation adjustment (CVA), the debt valuation adjustment (DVA), and the funding valuation adjustment (FVA), which capture respectively the risk of the counterparty, the risk of the entity itself, and the market value of the funding cost of the instrument (Hull and White, 2014; Smith, 2015).

The intuitive interpretation of the CVA is that it represents the market value of counterparty credit risk, and it is obtained as the difference between the value of the risk-free portfolio and the value of the portfolio that embeds a potential counterparty default.
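
A common textbook-style discretization makes this concrete (the notation below is my own, not taken from the cited papers): with recovery rate R, discounted expected exposure EE*(t_i), and cumulative default probability PD(t),

$$\mathrm{CVA} \approx (1 - R)\sum_{i=1}^{n} \mathrm{EE}^{*}(t_i)\,\big[PD(t_i) - PD(t_{i-1})\big]$$

i.e., the loss given default, weighted period by period by the exposure expected at that date and the probability that the counterparty defaults in that interval.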

The DVA is instead defined in Smith (2015) as the expected loss if the bank itself defaults, in contrast to the CVA, which represents the expected loss in case of a counterparty default.

Finally, the FVA represents the difference between funding costs and benefits, or alternatively the difference between the value of a portfolio of uncollateralized transactions calculated with the risk-free rate and the value calculated with the bank's average funding cost (Hull, 2015).

The problem with these adjustments is that they require a huge amount of computational power to be calculated effectively (Green, 2015; Veldhoen and De Prins, 2014).

III. How Can Big Data Help?

Veldhoen and De Prins (2014) claim that different data affect different risks with a distinctive intensity. The following table summarizes their findings, rating the impact of each big data characteristic on each risk from 1 (strongest impact) to 4 (weakest benefit); each cell is evaluated independently of the others:

Impact of big data features on risk management (Veldhoen and De Prins, 2014).

The availability of those data cannot solve every single problem, and in fact big data poses as many technical challenges as opportunities for organizations and regulators (Hassani and Silva, 2015). Examples of issues are the lack of talent, technical problems related to hypotheses, testing, and models, and hardware/software challenges. Silver (2013) identifies as the main challenge the increase in the noise-to-signal ratio, to the detriment of the actual predictive power of the additional data.

Forecasting techniques then have to be able to filter out that noise and leave the model with only the variables and data that matter, and at the same time to provide accurate out-of-sample forecasts without abusing a large number of predictors (Einav and Levin, 2013). In addition, according to Varian (2014), conventional statistical techniques face two further issues when big data enter the equation: a higher degree of data manipulation is required, because every data problem is exponentially amplified, and large datasets allow for relationships other than linear ones.

Hence, it is important to study models that prevent over-fitting and that are able to manage large datasets efficiently. In general, simpler models work better for out-of-sample forecasts, and excessive complexity should be avoided.
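
As a minimal illustration of this point (the use of scikit-learn's cross-validated Lasso here is my choice, not something the cited authors prescribe), a penalized regression keeps only the predictors that matter and can then be checked on a held-out sample:

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split

# Synthetic example: 200 candidate predictors, but only 5 actually drive the target
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 200))
beta = np.zeros(200)
beta[:5] = [1.5, -2.0, 1.0, 0.5, -1.0]
y = X @ beta + rng.standard_normal(500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# A cross-validated L1 penalty shrinks irrelevant coefficients to exactly zero
model = LassoCV(cv=5).fit(X_train, y_train)
print("non-zero predictors kept:", int(np.sum(model.coef_ != 0)))
print("out-of-sample R^2:", round(model.score(X_test, y_test), 3))
```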

IV. Big Data Simulation

Scenario simulations over huge amounts of data allow risk concentrations to be identified efficiently and new market developments to be reacted to more quickly. In particular, Monte Carlo simulation is a powerful and flexible tool, and the challenge is finding the optimal number of paths to balance speed and accuracy. Higher accuracy is achieved by running a larger number of simulations, but this has always been bounded by processing speed as well as machine memory. Even though a set of techniques has been used to handle this burden, the only scalable solution lies in splitting the data across many different workers.
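
The speed/accuracy tension comes from the standard convergence rate of Monte Carlo estimators: for an estimate built from N independent paths whose per-path outcomes have standard deviation σ, the standard error is

$$\mathrm{SE}(\hat{\theta}_N) = \frac{\sigma}{\sqrt{N}},$$

so cutting the error by a factor of ten requires roughly a hundred times more paths, which is exactly why processing speed and memory become binding constraints.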

Luckily, parallel computing is gaining popularity, and many algorithms have been developed in the last few years for making it less expensive (Scott et al., 2013).

In particular, two main approaches may be used to relieve a single machine of a heavy data burden:

  • Dividing the work across different cores on the same chip;
  • Dividing it across different machines.

In the first case, the splitting can be done on a multi-core CPU or on a parallel GPU (Scott et al., 2013). In either case, three problems arise that make these methods cumbersome to use: the difficulty of writing the splitting configuration, the absence of any benefit for memory, and the difficulty of abstraction.

The second alternative is instead much more scalable: dividing data across different machines increases processing power and efficiency, although it comes at a higher cost. A solution along these lines has been proposed by Scott et al. (2013) under the name of consensus Monte Carlo: this model runs a separate Monte Carlo algorithm on each machine, and then averages the individual draws across machines. The final outcome resembles a single set of Monte Carlo simulations run on a single machine for a long time.
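
A stripped-down sketch of the idea follows (this is my illustration of the averaging step under toy normal sub-posteriors, not the authors' reference implementation, which weights draws by their precision):

```python
import numpy as np
from multiprocessing import Pool

def worker_draws(args):
    """Each worker runs its own Monte Carlo sampler on a shard of the data.
    Here the 'sampler' is just a placeholder: draws from a normal sub-posterior."""
    seed, shard_mean, shard_std, n_draws = args
    rng = np.random.default_rng(seed)
    return rng.normal(shard_mean, shard_std, size=n_draws)

if __name__ == "__main__":
    n_workers, n_draws = 4, 10_000
    # Hypothetical sub-posterior parameters for each data shard (illustrative only)
    jobs = [(i, 0.02 + 0.001 * i, 0.05, n_draws) for i in range(n_workers)]
    with Pool(n_workers) as pool:
        all_draws = pool.map(worker_draws, jobs)
    # Consensus step: combine the i-th draw of every worker (a simple average here;
    # the original algorithm uses precision-weighted averages)
    consensus = np.mean(np.vstack(all_draws), axis=0)
    print(consensus.mean(), consensus.std())
```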

The future of Monte Carlo methods presents many possible developments. According to Kroese et al. (2014), at least three different directions can be pursued:

  • quasi-Monte Carlo;
  • rare events;
  • spatial processes.

Quasi-Monte Carlo uses quasi-random (low-discrepancy) number sequences, especially in multi-dimensional integration problems; rare-event simulation instead uses variance reduction techniques to estimate the probability of events that rarely happen; and finally, spatial processes are difficult to approximate because of the lack of independence between the simulations themselves, so convergence is only achievable through an enormous number of simulations.
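
As a small taste of the first direction (the scipy.stats.qmc module and the toy integrand below are my choices, not tools mentioned by Kroese et al.), low-discrepancy Sobol points typically estimate a multi-dimensional integral with a smaller error than the same number of pseudo-random points:

```python
import numpy as np
from scipy.stats import qmc

# Estimate E[f(U)] for U uniform on the 5-dimensional unit cube,
# with f(x) = prod(x); the true value is 0.5**5 = 0.03125
dim, m = 5, 12                      # 2**12 = 4096 points
f = lambda x: np.prod(x, axis=1)

# Pseudo-random (plain Monte Carlo) estimate
rng = np.random.default_rng(0)
mc_est = f(rng.random((2**m, dim))).mean()

# Quasi-random (scrambled Sobol) estimate
sobol = qmc.Sobol(d=dim, scramble=True, seed=0)
qmc_est = f(sobol.random_base2(m=m)).mean()

true_value = 0.5**dim
print(abs(mc_est - true_value), abs(qmc_est - true_value))
```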

References

  1. Ahmadi-Javid, A. (2011). “Entropic Value-at-Risk: A New Coherent Risk Measure”. Journal of Optimization Theory and Applications 155(3): 1105–1123.
  2. Bollen, J., Mao, H., Zeng, X. (2011). “Twitter mood predicts the stock market”. Journal of Computational Science Volume 2 (1): 1–8.
  3. Culotta, A. (2010). “Towards detecting influenza epidemics by analysing Twitter messages”. Proceedings of the First Workshop on Social Media Analytics: 115–122.
  4. Dhar, V., Chang, E. A. (2009). “Does Chatter Matter? The Impact of User-Generated Content on Music Sales”. Journal of Interactive Marketing Volume 23 (4): 300–307.
  5. Einav, L., Levin, J. D. (2013). “The Data Revolution and Economic Analysis”. Working Paper №19035, National Bureau of Economic Research.
  6. Green, A. (2015). “XVA: Credit, Funding and Capital Valuation Adjustments”. Wiley, 1st edition.
  7. Hassani, H., Silva, E. S. (2015). “Forecasting with Big Data: A Review”. Annals of Data Science 2 (1): 5–19.
  8. Hull, J. (2015). “Risk Management and Financial Institutions”. Wiley, 4th edition.
  9. Hull, J., White, A. (2014). “Valuing Derivatives: Funding Value Adjustments and Fair Value”. Financial Analysts Journal, Vol. 70 (3): 46–56.
  10. Jorion, P. (2006). “Value at Risk: The New Benchmark for Managing Financial Risk”. (3rd ed.). McGraw-Hill.
  11. Kroese, D. P., Brereton, T., Taimre, T., Botev, Z. I. (2014). “Why the Monte Carlo method is so important today”. WIREs Computational Statistics 6: 386–392.
  12. Scott, S. L., Blocker, A. W., Bonassi, F. V., Chipman, H., George, E., McCulloch, R. (2013). “Bayes and big data: The consensus Monte Carlo algorithm”. EFaBBayes 250 conference 16.
  13. Silver, N. (2013). “The Signal and the Noise: The Art and Science of Prediction”. Penguin Books, Australia.
  14. Smith, D. J. (2015). “Understanding CVA, DVA, and FVA: Examples of Interest Rate Swap Valuation”. Available at SSRN: http://ssrn.com/abstract=2510970.
  15. Tumasjan, A., Sprenger, T. O., Sandner, P. G., Welpe, I. M. (2010). “Predicting Elections with Twitter: What 140 Characters Reveal about Political Sentiment”. Proceedings of the Fourth International AAAI Conference on Weblogs and Social Media: 178–185.
  16. Varian, H. R. (2014). “Big Data: New Tricks for Econometrics.” Journal of Economic Perspectives, 28 (2): 3–28.
  17. Veldhoen, A., De Prins, S. (2014). “Applying Big Data to Risk Management”. Avantage Reply Report.

Note: I have written an extended technical survey on big data and risk management for the Montreal Institute of Structured Finance and Derivatives (2016), on the basis of which this post was created. If you want to read the complete work, the full pdf is available here.