Being Experimental in Risk – MRM

Navendu Sharma
Published in Risk Breakfast · Nov 11, 2019

Risk management today is rooted in statistics: the methods are fixed while the outcomes are probabilistic, conditional on assumptions. Much of the haze around quantification is removed by having established frameworks, processes and consistent approaches for determining parameters.

For example, in a VaR model, fitting asset prices (assumed to follow a stochastic process) to a lognormal distribution yields probability estimates. The approach is so well known, and its process so robust, that any deviation in a portfolio of instruments can easily be singled out and remedied.
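As a minimal sketch of that parametric picture (not any particular production model), assume daily log returns are normally distributed and use a simulated price path; a one-day 99% VaR can then be read straight off the fitted distribution. The position value and confidence level below are illustrative.

```python
import numpy as np
from scipy.stats import norm

# Illustrative parametric (lognormal) VaR: fit mean and volatility of daily
# log returns, then read the loss quantile from the fitted distribution.
rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(rng.normal(0.0002, 0.01, 500)))  # simulated price path

log_returns = np.diff(np.log(prices))
mu, sigma = log_returns.mean(), log_returns.std(ddof=1)

confidence = 0.99
position_value = 1_000_000  # hypothetical portfolio value

# 1-day VaR: the loss not exceeded with 99% probability under the fitted lognormal.
var_1d = position_value * (1 - np.exp(mu + sigma * norm.ppf(1 - confidence)))
print(f"1-day 99% parametric VaR: {var_1d:,.0f}")
```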

Alternatively, the other commonly followed approach is to choose, or sometimes construct, your own time series, derive a returns vector from it, multiply it by present-day sensitivities and sort the resulting P&L vector. All of this is done to reduce unpredictability.
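A minimal sketch of that sensitivity-based, historical-simulation style of VaR, with hypothetical risk factors and made-up sensitivities: each historical day's factor moves are applied to today's book, the P&L vector is sorted, and the VaR is read off the tail.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical daily returns for three risk factors over a 250-day window.
factor_returns = rng.normal(0.0, [0.01, 0.008, 0.015], size=(250, 3))

# Present-day sensitivities (P&L per unit factor move), e.g. delta-style exposures.
sensitivities = np.array([2_000_000, -750_000, 1_200_000])

# Simulated P&L vector: apply each historical day's factor moves to today's book.
pnl = factor_returns @ sensitivities

# Sort the P&L vector and read the 1% tail to get a 99% 1-day VaR.
pnl_sorted = np.sort(pnl)                      # worst losses first
var_99 = -pnl_sorted[int(0.01 * len(pnl))]
print(f"1-day 99% historical-simulation VaR: {var_99:,.0f}")
```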

Human behaviour, just like markets, is randomised with a drift factor. The question is: do we deviate from a standardised framework and take an unbeaten path to prove a point, chase alpha, or suit a hunch we are so sure of?

There could be experimentation based on practical assumptions or proven theory, but most of us would hesitate because of the repercussions of a failed adventure, i.e. non-compliance from an audit fallout that can destroy your whole career. The costs of adventure are indeed huge, but it can also reap fortunes if the strategy succeeds.

So do we eat that forbidden apple?

Or do we stop all experimentation?

Do we adopt specific research which may not be widely used in order to realise that alpha?

Then how do we create new, better methods?

All these questions depend on your conviction in that piece of research, and on whether it works at a point in time or through the cycle. Frameworks like GARCH for volatility estimation are taken to work with almost any segment of data, whereas a method adopted to prove a favorable point of view would be rejected outright as biased and suitable only for a particular segment.
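As a sketch of what such a through-the-cycle volatility estimate looks like, the recursion below filters a GARCH(1,1) conditional variance over a stand-in return series. The omega, alpha and beta values are illustrative assumptions; in practice they would be fitted by maximum likelihood (for instance with a dedicated package) rather than hard-coded.

```python
import numpy as np

def garch11_volatility(returns, omega=1e-6, alpha=0.08, beta=0.90):
    """Filter a conditional variance series through a GARCH(1,1) recursion:
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}.
    Parameters here are illustrative; in practice they are fitted by MLE."""
    sigma2 = np.empty_like(returns)
    sigma2[0] = returns.var()          # initialise at the sample variance
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return np.sqrt(sigma2)

rng = np.random.default_rng(2)
r = rng.normal(0, 0.01, 1000)          # stand-in daily return series
vol = garch11_volatility(r)
print(f"latest conditional volatility estimate: {vol[-1]:.4%}")
```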

Another one I remember is modelling out-of-sample data for a Principal Component Analysis by constraining the residuals and choosing your factors carefully. This can be thought of as a preferential approach that suits your data by adopting only those algorithms that give you a favorable outcome.
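To make that concern concrete, here is a hedged sketch on hypothetical data: the number of retained factors comes from a pre-set explained-variance rule, and the residual is checked out of sample, rather than the factors being tuned until the residual looks favorable. The panel, split point and 95% threshold are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical panel: 500 daily moves across 8 curve tenors, driven by 2 latent factors.
latent = rng.normal(size=(500, 2))
loadings = rng.normal(size=(2, 8))
X = latent @ loadings + 0.1 * rng.normal(size=(500, 8))

# In-sample / out-of-sample split, so the residual check is not done on the fitting data.
train, test = X[:400], X[400:]

# PCA via SVD of the centred training data.
mean = train.mean(axis=0)
_, s, vt = np.linalg.svd(train - mean, full_matrices=False)
explained = s**2 / np.sum(s**2)

# Keep just enough components to explain 95% of variance (a pre-set rule, not a tuned one).
k = int(np.searchsorted(np.cumsum(explained), 0.95) + 1)
components = vt[:k]

# Out-of-sample residual: reconstruct the test data from the retained factors.
scores = (test - mean) @ components.T
residual = test - mean - scores @ components
print(f"retained {k} factors, out-of-sample residual RMS: {np.sqrt((residual**2).mean()):.4f}")
```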

Thus model documentation becomes necessary, where segmentation, statistical assumptions, variable selection and data quality all become important. With that part of the execution taken care of, we can better understand the risks involved in our offbeat approach (and leave it open to further scrutiny by validators and auditors).

Of course, you cannot steamroll with just a gut feeling.
