ERM Flashcards — Part 4 FINAL

Risk Assessment

JJ
16 min read · Aug 30, 2017

Final versions: [Pt.1] [Pt.2] [Pt.3] [Pt.4] [Pt.5] [Pt.6]

  • [20] Black Monday: October 1987, S&P 500 index fell by over 20%.
  • [20] Differences between actual financial data and a normal distribution (3): 1. more narrowly peaked; 2. fatter tails (leptokurtic); 3. returns are heteroscedastic
  • [20] Methods of defining Extreme Value (2): 1. max value in a set of n losses (block maximum); 2. all losses exceeding a certain threshold
  • [20] Main results of extreme value theory (2): 1. distribution of block maxima is approximately described by the GEV family of distributions if n is sufficiently large; 2. tail of the distribution above a threshold u can be approximated for large values of u by the GPD
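The two ways of extracting extremes feed the two main results above (block maxima into the GEV, threshold exceedances into the GPD). A minimal sketch — the loss data, block size, and threshold are all hypothetical:

```python
import random

random.seed(0)
# Hypothetical daily loss data: 1000 standard exponential draws.
losses = [random.expovariate(1.0) for _ in range(1000)]

# Method 1: block maxima — split the data into blocks of n, keep each block's max.
n = 50
block_maxima = [max(losses[i:i + n]) for i in range(0, len(losses), n)]

# Method 2: peaks over threshold — keep all losses exceeding a threshold u,
# recorded as excesses over u (the quantity the GPD approximates).
u = 3.0
exceedances = [x - u for x in losses if x > u]
```

The block-maxima series discards everything but one point per block, while the threshold approach keeps every large loss — the trade-off noted in the limitations below.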
  • [20] Parameters of the GEV family: 1. alpha (location); 2. beta (scale); 3. gamma (shape)
  • [20] Distributions in the GEV family (3): 1. Gumbel (gamma=0); 2. Weibull (gamma<0); 3. Frechet (gamma>0)
  • [20] Gumbel GEV characteristics: Tail falls exponentially
  • [20] Weibull GEV characteristics: Finite upper bound (absolute max); can fit reinsured losses and natural phenomena such as temperature, wind speed, age of human population
  • [20] Frechet GEV characteristics: Has a lower bound and heavy tails (with infinite variance if gamma>0.5) that follow a power law (this is typically the distribution used to model extreme financial loss events)
  • [20] Underlying distributions for Weibull GEV (3): Distributions with finite upper limits; 1. beta; 2. uniform; 3. triangular
  • [20] Underlying distributions for Gumbel GEV (6): Light tailed distribution with finite moments of all orders; 1. chi-squared; 2. exponential; 3. gamma; 4. log-normal; 5. normal; 6. Weibull
  • [20] Underlying distributions for Frechet GEV (5): Heavy tailed distributions whose higher moments can be infinite; 1. Burr; 2. F; 3. log-gamma; 4. Pareto; 5. t
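The three GEV cases differ only in the shape parameter gamma; a sketch of the GEV CDF under the location/scale/shape parameterization above (function and variable names are mine):

```python
import math

def gev_cdf(x, alpha=0.0, beta=1.0, gamma=0.0):
    """GEV CDF: Gumbel when gamma == 0, Weibull when gamma < 0, Frechet when gamma > 0."""
    z = (x - alpha) / beta
    if gamma == 0.0:
        return math.exp(-math.exp(-z))          # Gumbel: exponentially falling tail
    t = 1.0 + gamma * z
    if t <= 0.0:
        # Outside the support: below the Frechet lower bound -> 0,
        # above the Weibull upper bound -> 1.
        return 0.0 if gamma > 0 else 1.0
    return math.exp(-t ** (-1.0 / gamma))
```

Evaluating the CDF beyond the bound confirms the characteristics above: a Frechet (`gamma > 0`) has zero probability below its lower bound, while a Weibull (`gamma < 0`) places all probability below its finite maximum.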
  • [20] Return level approach for GEV: Select the max observation in each block
  • [20] Return period approach for GEV: Count the observations in each block that exceed some set level
  • [20] Trade-off considerations for block size in return level approach for GEV: Granularity versus variance of estimates
  • [20] Key limitations of the GEV approach (2): 1. a lot of data is ignored; 2. choice of block size is subjective
  • [20] Parameters for GPD: 1. beta (scale); 2. gamma (shape)
  • [20] Characteristics of the GPD (by different gammas): 1. lower bound of zero in all cases; 2. finite upper bound when gamma<0; 3. becomes a (heavy-tailed) Pareto distribution when gamma>0
  • [20] Asymptotic property for GEV/GPD: When the maxima of a distribution converge to a GEV distribution, the excess distribution converges to a GPD with the same shape parameter
  • [20] Memory-less property of exponential distributions: Expected future waiting time for an event to occur is independent of the waiting time already elapsed
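The memoryless property is easy to verify numerically from the exponential survival function (the rate and waiting times below are arbitrary):

```python
import math

rate = 0.5  # arbitrary hazard rate

def surv(t):
    """Survival function of an exponential distribution: P(X > t)."""
    return math.exp(-rate * t)

s, t = 2.0, 3.0
# Memoryless: P(X > s + t | X > s) = P(X > t), whatever time s has already elapsed.
conditional = surv(s + t) / surv(s)
```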
  • [20] Threshold selection for GPD: Determine the lowest threshold above which the mean excess function is linear in u (typically around 90–95th percentile of complete distribution)
  • [21] Stochastic uncertainty: Uncertainty arising from the randomness of a finite set of observations
  • [21] Ways to allow for parameter uncertainty through dynamic simulations (2): 1. use multivariate normal distribution (if covariance matrix is available); 2. use joint distribution for parameters (simulated by repeatedly re-fitting parameters to sets of data points each modeled using the original data set)
  • [21] Requirements for actuarial models (9): 1. model must be valid, sufficiently rigorous for its purposes; 2. adequately documented; 3. model points reflect the distribution of business being modeled; 4. components of model allow for all features of the business being modeled that could affect advice given; 5. input parameter values should be appropriate to the business; 6. workings of the model should be easy to appreciate and communicate; 7. outputs should be capable of independent verification; 8. capable of subsequent development; 9. not overly complex
  • [21] Requirements for actuarial models for ERM (6): Generic requirements plus; 1. amenable to an analysis of the impact of parameter uncertainty; 2. simulations should exhibit behaviors consistent with past events; 3. shortcomings of the model should be clearly stated; 4. reflect dynamics of the organization, now and in the future; 5. be comprehensive across all important well-defined risks; 6. produce outcomes that are balanced
  • [21] Sweeting’s model requirements (5): 1. be appropriate for purpose; 2. be robust; 3. be as simple as possible, while meeting its purpose; 4. be developed over time; 5. sometimes be avoided when more importance is attached to other activities
  • [21] Uses of ERM models (8): 1. price products or services; 2. assess economic value of company; 3. estimate possible volatility of future profits and earnings; 4. determine capital adequacy requirements; 5. project future capital or solvency position; 6. assess effect of risk management and mitigation techniques on profits and capital requirements; 7. assess effect of strategic decisions; 8. evaluate projects
  • [21] Steps in developing and applying model (10): 1. specify purpose; 2. collect data, group and modify as necessary; 3. choose model form, identify parameters; 4. select appropriate time period; 5. estimate parameters and correlations; 6. ensure acceptable goodness of fit; 7. ensure model is able to project all required outputs; 8. run model using estimated variables or stochastic simulations; 9. output results in appropriate format; 10. perform sensitivity tests
  • [21] Considerations in discussing model outputs (5): 1. corporate risk policy; 2. corporate risk appetite; 3. corporate risk preferences; 4. risk preferences and other investment opportunities of company owners; 5. qualitative factors, judgment and intuition
  • [22] Characteristics of actual returns on individual equities (4): 1. returns are rarely independent and identically distributed; 2. volatility appears to vary over time; 3. extreme returns appear in clusters; 4. return series are leptokurtic (heavy-tailed, non-symmetric)
  • [22] Characteristics of actual returns on equity portfolios (5): 1. correlations exist between returns of different equities at the same point in time; 2. correlations between different series vary over time; 3. multivariate returns show little evidence of cross-correlation (between time periods); 4. multivariate series of absolute or squared returns do show strong evidence of cross-correlation; 5. extreme returns in one series often coincide with extreme returns in several other series
  • [22] Characteristics of equity returns over a long time period (2): 1. volatility clustering is less marked; 2. returns appear to be more iid and less heavy-tailed (expected from the Central Limit Theorem)
  • [22] Excess kurtosis of daily financial data: Kurtosis measures “peakedness” of a distribution (kurtosis for daily financial data appears to be higher than that of a normal distribution, which means more of the variance is due to infrequent extreme variations)
  • [22] Steps to model portfolio returns using the multivariate normal distribution (6): 1. decide on frequency of calculation (daily, weekly); 2. decide on timeframe of historical data to use (volume versus relevance); 3. for each asset class, choose total return index; 4. calculate log-returns; 5. calculate average returns and variance of each asset class and co-variance between classes; 6. simulate series of returns with same characteristics based on a multivariate normal distribution
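Steps 5–6 can be sketched for two asset classes with a hand-rolled 2x2 Cholesky factorization (all parameter values are hypothetical, standing in for estimates from steps 1–5):

```python
import math, random

random.seed(1)
# Hypothetical estimates from historical total return indices:
# mean log-returns, volatilities and correlation for two asset classes.
mu = [0.0003, 0.0002]
sigma = [0.010, 0.006]
rho = 0.4

# Step 6: simulate correlated log-returns (bivariate normal via Cholesky).
sims = []
for _ in range(10000):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    r1 = mu[0] + sigma[0] * z1
    r2 = mu[1] + sigma[1] * (rho * z1 + math.sqrt(1 - rho ** 2) * z2)
    sims.append((r1, r2))

# Check that the simulated series reproduces the input correlation.
n = len(sims)
m1 = sum(r[0] for r in sims) / n
m2 = sum(r[1] for r in sims) / n
sd1 = (sum((r[0] - m1) ** 2 for r in sims) / n) ** 0.5
sd2 = (sum((r[1] - m2) ** 2 for r in sims) / n) ** 0.5
corr = sum((r[0] - m1) * (r[1] - m2) for r in sims) / (n * sd1 * sd2)
```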
  • [22] Steps to model portfolio returns using PCA (10): 1.–5. same as using multivariate normal distribution; 6. derive matrix of deviations from average returns; 7. derive principal components that explain a sufficiently high proportion of the deviations; 8. project this number of independent normally distributed random variables using associated eigenvalues as variances; 9. weight projected series of deviations with eigenvectors; 10. add these weighted projected deviations to expected returns from each asset class
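For two asset classes the eigen-decomposition in steps 7–9 has a closed form; a sketch on a hypothetical 2x2 covariance matrix of deviations (step 6's output):

```python
import math

# Hypothetical covariance matrix of deviations: [[a, b], [b, c]].
a, b, c = 4.0, 1.2, 3.0

# Step 7: eigenvalues of a symmetric 2x2 matrix = the principal component variances.
tr, det = a + c, a * c - b * b
disc = math.sqrt(tr * tr / 4 - det)
lam1, lam2 = tr / 2 + disc, tr / 2 - disc   # lam1 >= lam2

# Proportion of total deviation variance explained by the first component.
explained = lam1 / (lam1 + lam2)

# Eigenvector for lam1, normalised — used in step 9 to weight the projections.
v = (b, lam1 - a)
norm = math.hypot(*v)
v = (v[0] / norm, v[1] / norm)
```

With these illustrative numbers the first component explains roughly 69% of the deviations; in practice enough components are retained to pass the chosen threshold, each projected as an independent normal with variance equal to its eigenvalue (step 8).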
  • [22] Factors reflected by credit spread (4): 1. expected probability of default; 2. expected loss given default; 3. uncertainty around expected probability of default and LGD; 4. liquidity premium
  • [22] Common measures of credit spread (3): 1. nominal spread; 2. static spread; 3. option-adjusted spread
  • [22] Nominal measure of credit spread: Difference between the gross redemption yields of risky and risk-free bonds
  • [22] Static measure of credit spread: Addition to the risk-free rate at which discounted cashflows from a risky bond will equate to its price
  • [22] Option-adjusted measure of credit spread: Static measure plus stochastic modeling to allow for any options embedded in the bond
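The static measure lends itself to a simple root-finding sketch: find the flat addition to the risk-free rate that reprices the bond (the bond, price, and risk-free rate below are hypothetical):

```python
# Hypothetical 3-year bond: annual coupons of 5 on 100 nominal, priced at 98.
cashflows = [(1, 5.0), (2, 5.0), (3, 105.0)]   # (time in years, cashflow)
price = 98.0
risk_free = 0.03

def pv(spread):
    """Discounted value of the cashflows at risk-free rate plus a flat spread."""
    return sum(cf / (1 + risk_free + spread) ** t for t, cf in cashflows)

# Bisection: pv() is decreasing in the spread, so shrink the bracket until
# the discounted cashflows equate to the market price.
lo, hi = 0.0, 0.20
for _ in range(100):
    mid = (lo + hi) / 2
    if pv(mid) > price:
        lo = mid
    else:
        hi = mid
spread = (lo + hi) / 2
```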
  • [22] Reasons why observed credit spreads tend to be higher than can be justified by historical bond defaults (6): 1. higher volatility of returns relative to risk-free asset (credit beta); 2. higher uncertainty of potential future returns; 3. greater skewness of potential future returns on corporate debt (more significant downside); 4. lower liquidity of corporate debt; 5. lower marketability of corporate debt (and higher costs of trade); 6. differences in taxation
  • [22] Features of a good benchmark for market risk (13): 1. unambiguous; 2. investable and trackable; 3. measurable on a reasonably frequent basis; 4. appropriate for investor’s objectives; 5. reflective of current investment opinion; 6. specified in advance; (7. contain high proportion of assets held in portfolio; 8. have a similar investment style to portfolio; 9. low turnover of constituents; 10. investable position sizes; 11. strong correlation with portfolio; 12. low correlation with market-benchmark; 13. variability of portfolio relative to benchmark is lower than portfolio relative to market)
  • [22] Market strategic risk: Risk of poor performance of benchmark used to judge manager’s performance, relative to liability-based benchmark
  • [22] Active risk: Risk of poor performance of manager’s actual portfolio relative to strategic benchmark
  • [22] Active return: Difference between return on actual portfolio and return on benchmark
  • [22] Brennan-Schwartz model assumptions (4): 1. changes in short term rates vary in line with steepness of the yield curve (differential between long and short term rates); 2. volatility of short term rates varies in proportion to the most recent level of short term rates; 3. changes in long term rates vary in proportion to the square of the level of long term rates and are also influenced by short term rates; 4. volatility of long term rates varies in proportion to the most recent level of long term rates
  • [22] Methods of modeling contagion risk (2): 1. consider contagion as a feedback risk (some serial correlation exists that can be modeled), though in theory this should be eliminated through arbitrage and can thus be ignored; 2. fit a t-copula with situation dependent correlations
  • [22] Methods of modeling market returns (5): 1. historical simulations (bootstrapping); 2. forward-looking data-based approaches (multivariate normal distribution); 3. forward-looking factor-based approaches (PCA); (If data is sufficient 4. multivariate distribution other than normal; 5. combining non-normal marginal distributions using a copula)
  • [22] Methods of modeling interest rate risk (3): 1. single factor models (for short term single interest rates); 2. two-factor models (Brennan-Schwartz); 3. PCA (for deviations from expected returns)
  • [23] Events that could be counted as default (4): 1. payment due is missed; 2. a financial ratio moves above or below a certain level; 3. legal proceedings start against credit issuer; 4. present value of assets falls below liabilities
  • [23] Components of default risk assessment (3): 1. probability of default; 2. loss on default; 3. level and nature of interactions between various credit exposures and other risks in portfolio
  • [23] Methods of strengthening recoveries (2): 1. requiring collateral; 2. requiring third party guarantees
  • [23] Actions taken to minimize credit risk (3): 1. diversify credit exposure across a number of counterparties; 2. monitor exposures regularly; 3. take immediate action when default occurs
  • [23] Sources of information used to assess credit risk (4): 1. credit issuer (rating agencies); 2. counterparty; 3. publicly available data; 4. proprietary databases
  • [23] Factors considered in qualitative credit models (7): 1. nature of contractual obligation; 2. level and nature of any security; 3. nature of the borrower; 4. economic indicators; 5. financial ratios; 6. face-to-face meetings with credit issuer or counterparty; 7. how risks change over time
  • [23] Key advantage of qualitative credit models: A wide range of subjective factors can be incorporated into the assessment
  • [23] Disadvantages of qualitative credit models (4): 1. excessive subjectivity; 2. lack of consistency between ratings; 3. meaning of subjective ratings may change over the economic cycle; 4. ratings may fail to respond to changes in the economy or circumstances of the counterparty (reluctance to change rating)
  • [23] Examples of quantitative credit models (5): 1. credit-scoring; 2. structural (firm value); 3. reduced-form; 4. credit-portfolio; 5. credit-exposure
  • [23] Credit scoring models: Forecast the likelihood of a counterparty defaulting at a particular point in time given certain fundamental information about the counterparty (empirical models, expert models)
  • [23] Structural models (firm-value): Estimate the likelihood of default using market information such as the company’s share price and volatility of its share price (Merton, KMV)
  • [23] Reduced form models: Model default as a statistical process that typically depends on economic variables (credit migration models)
  • [23] Credit portfolio models: Estimate credit exposure across several counterparties, may allow for diversification effects of uncorrelated creditors (multivariate structural, multivariate credit migration)
  • [23] Credit exposure models: Estimate credit exposure for more complex circumstances, like when derivatives and guarantees are present (Monte Carlo)
  • [23] Common difficulties of quantitative credit models (4): 1. lack of publicly available data on default experience; 2. skewness of the distribution of credit losses; 3. correlation of defaults between counterparties; 4. model risk
  • [23] Merton model: Structural model using option pricing theory along with equity share price volatility to derive information on the value of company assets (and hence debt); Equity shares can be considered a call option on company assets, then the value of debt is equal to value of a risk-free bond less the value of a put option on company’s assets
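A sketch of the put-call decomposition using Black-Scholes put pricing (the firm's figures are hypothetical; `norm_cdf` is built on the error function):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def merton_debt_value(V, D, r, sigma, T):
    """Merton: risky zero-coupon debt = risk-free bond minus a put on firm
    assets V with strike equal to the debt's face value D."""
    d1 = (math.log(V / D) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    put = D * math.exp(-r * T) * norm_cdf(-d2) - V * norm_cdf(-d1)
    return D * math.exp(-r * T) - put

# Hypothetical firm: assets 120, debt face 100, 5% risk-free rate,
# 25% asset volatility, 1-year zero-coupon debt.
debt = merton_debt_value(120.0, 100.0, 0.05, 0.25, 1.0)

# Implied credit spread: continuously compounded yield on the risky debt
# less the risk-free rate — available even for an unquoted bond.
credit_spread = -math.log(debt / 100.0) / 1.0 - 0.05
```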
  • [23] Key advantage of the Merton model for credit risk: Estimates an appropriate credit spread for a bond, even when the bond is unquoted
  • [23] Disadvantages of using the Merton model for credit risk (7): Unrealistic assumptions 1. markets are frictionless; 2. risk-free rate is deterministic; 3. asset value follows a log-normal random walk with fixed rate of growth and fixed volatility; 4. asset value is an observable traded security; 5. bond is a zero-coupon bond with only one default opportunity; 6. default results in liquidation; (7. Merton model results can be affected significantly by changes in market sentiment, absent any real changes to company prospects)
  • [23] Distance to default: Used in the KMV model, it is the number of standard deviations that the company assets have to fall in value before breaching a threshold
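One common simplified form of the calculation, assuming log-normal asset values (the figures below are hypothetical; the full KMV treatment is more involved):

```python
import math

def distance_to_default(assets, asset_vol, default_point, mu=0.0, T=1.0):
    """Simplified distance to default: number of standard deviations the
    (log-normal) asset value sits above the default threshold over horizon T."""
    drift = (mu - 0.5 * asset_vol ** 2) * T
    return (math.log(assets / default_point) + drift) / (asset_vol * math.sqrt(T))

# Hypothetical firm: assets 120, asset volatility 25%, default point 100.
dd = distance_to_default(120.0, 0.25, 100.0)
```

As expected, the distance to default rises with the asset value and falls as volatility increases — in KMV the distance is then mapped to an expected default frequency via an empirical database.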
  • [23] Advantages of KMV model over Merton model (3): 1. coupon-paying bonds can be modeled; 2. more complex liability structures can be accommodated (KMV uses average coupon and overall gearing level); 3. starting asset values are not assumed to be observable and are instead derived from the value of the company’s equity shares
  • [23] Credit migration modeling process (3): 1. historical data is used to determine the probability that a company rated something at the start of the year will be rated something at the end of the year (recorded in rating probability matrices); 2. matrices are applied repeatedly to a counterparty’s current rating to estimate the likelihood of each possible rating in each future year; 3. using the probability of default for a company of a given rating, the model estimates the chance of default in each future year
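Steps 1–3 amount to repeatedly applying the transition matrix to a rating distribution; a sketch with a hypothetical three-state (A, B, Default) matrix:

```python
# Hypothetical one-year rating transition matrix over states (A, B, Default).
# Rows sum to 1; Default is absorbing.
P = [
    [0.90, 0.08, 0.02],
    [0.10, 0.80, 0.10],
    [0.00, 0.00, 1.00],
]

def step(dist, P):
    """Apply the one-year transition matrix once to a rating distribution."""
    return [sum(dist[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]

# Step 2: start as an A-rated counterparty and project two years forward.
dist = [1.0, 0.0, 0.0]
for _ in range(2):
    dist = step(dist, P)

# Step 3: the mass in the absorbing state is the cumulative default probability.
prob_default_by_year2 = dist[2]
```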
  • [23] Advantages of credit migration models (2): 1. volatile equity markets should not overly impact results; 2. does not rely on publicly traded share information
  • [23] Assumptions of credit migration models (2): 1. credit migration process follows a time-homogeneous Markov chain; 2. there exists a credit rating that reflects the company’s default likelihood through the business cycle (rather than in the current environment)
  • [23] Disadvantages of credit migration models (7): 1. incorrect time-homogeneity assumption (recently downgraded company is more likely to be downgraded again); 2. assumes default probabilities for each rating in each future year can be estimated; 3. assumes likelihood of default can be determined solely by credit rating; 4. low number of distinct credit ratings results in a low level of granularity in default estimates; 5. rankings of organizations by different credit rating agencies do not always coincide; 6. not all companies have obtained a credit rating; 7. ratings are sometimes unavailable (withdrawn)
  • [23] CreditMetrics approach: Estimates variance of bond’s future value in one year’s time by combining default probabilities from credit migration models with estimated future values at each rating
  • [23] Assumptions of the multivariate CreditMetrics model (5): 1. each credit rating has an associated probability of default; 2. change in rating is a function of change in the company’s asset value and volatility of the asset value; 3. value of assets behaves log-normally; 4. correlation between asset values can be estimated from the correlation between corresponding equity values; 5. equity returns can be modeled using country-specific indices and independent firm-specific volatility
  • [23] Examples of credit portfolio models (5): 1. multivariate structural; 2. multivariate credit-migration (or financial) models; 3. econometric or actuarial models; 4. common shock models; 5. time-until-default (or survival) models
  • [23] Multivariate structural models: Multivariate KMV, model asset values by using correlation matrices or copulas
  • [23] Multivariate credit migration models: Multivariate CreditMetrics, assumes equity returns can be modeled using country-specific indices and firm specific volatility
  • [23] Econometric or actuarial models for credit risk: Estimates default rate of firms using external or empirical data
  • [23] Common shock models for credit risk: Determine probability of no defaults by assuming each bond defaults in line with a Poisson process, and considering shocks that cause the default of one or more of the bonds in the portfolio
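The no-default probability is straightforward when independent Poisson intensities add; a sketch with one common shock that defaults every bond at once (all intensities hypothetical):

```python
import math

# Hypothetical intensities: idiosyncratic Poisson default rates for three bonds,
# plus a common shock that defaults all of them simultaneously.
idiosyncratic = [0.02, 0.03, 0.05]
common = 0.01
T = 1.0

# No defaults over [0, T] requires no idiosyncratic event for any bond and no
# common shock; the processes are independent Poisson, so intensities add.
p_no_defaults = math.exp(-(sum(idiosyncratic) + common) * T)
```

Richer versions use several shocks, each defaulting a different subset of the portfolio, but the same independence argument gives the joint survival probability.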
  • [23] Time until default models for credit risk: Survival CDFs are linked by a suitably parameterized copula function to estimate the aggregate default rate for the bond portfolio
  • [23] Common measures of recovery (2): 1. price after default (short term measure); 2. ultimate recovery (larger, long term measure)
  • [23] Factors affecting the likely loss on default (6): 1. seniority of debt; 2. availability of collateral; 3. nature of industry; 4. point in economic cycle; 5. legal jurisdiction; 6. rights and actions of other creditors
  • [24] Reasons for increased interest in active management of operational risks (7): 1. advent of ERM; 2. new regulatory capital requirements; 3. increasing emphasis on sophisticated quantitative models for other types of risk; 4. operational risks have no inherent upside potential; 5. main driver behind many cases of major financial disasters; 6. inter-linked with credit and market risk and is particularly important to minimize during already stressed market conditions; 7. may otherwise be treated differently in different areas of the company
  • [24] Benefits of effective operational risk management (4): 1. minimizes impact of reputational damage (distinctive benefit); 2. minimizes day-to-day losses and reduces potential for more extreme incidents; 3. improves company’s ability to meet business objectives (reduces time spent on crisis management); 4. strengthens overall ERM process and framework
  • [24] Initial findings from analyzing publicly available data on operational risks (4): 1. distribution of operational losses is skewed to the right (large number of small losses, few large losses); 2. loss severities have a heavy tailed distribution; 3. losses occur randomly in time; 4. loss frequency may vary considerably over time
  • [24] Features of operational risk data that make applying statistical methods difficult (2): 1. low volume of data; 2. data may have a cyclical component and/or depend on current economic conditions
  • [24] Bottom up model for assessing operational risks: Estimates operational risk capital by starting the analysis at a low level of detail and then aggregating the results to obtain an overall assessment
  • [24] Top down model for assessing operational risks: Use readily-available data and fairly simple calculations to give a general picture of operational risks
  • [24] Steps in using scenario analysis to assess operational risks (5): 1. group risk exposures into broad categories; 2. for each group, develop a plausible adverse scenario; 3. for each scenario, calculate the consequences of the risk event; 4. calculate total costs for all scenarios; 5. assessments of frequency and severity can be displayed on a risk map
  • [24] Benefits of using scenario analysis to assess operational risk (5): 1. capture opinions, concerns and experience of risk managers; 2. not rely heavily on availability, accuracy and relevance of historical data; 3. provide opportunity to identify hard to predict, high impact events (black swans); 4. identify and improve understanding of cause and effect relationships; 5. reduce risk reward arbitrage opportunities
  • [24] Main advantage of bottom up models for operational risks: Give more robust picture of a company’s overall risk profile
  • [24] Limitations of bottom up models for operational risks (3): 1. difficult to break down reported aggregate losses into their constituent components; 2. there may be little robust internal historic data, especially for low probability and high severity events; 3. differences between companies mean that application of external data is difficult
  • [24] Basel AMA for operational risk assessment: Under the Basel advanced measurement approach, operational risk is assessed using internal models and scenario analysis, 1-year holding period and a 99.9% confidence interval
  • [24] Areas requiring credible data to use Basel AMA for operational risks (3): 1. internal data on repetitive, high frequency losses over a three to five year period; 2. external data on non-repetitive, low frequency losses; 3. suitable stress scenarios to consider
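A frequency-severity Monte Carlo sketch of the 99.9%/1-year calculation (Poisson loss counts, lognormal severities; every parameter value is illustrative, not calibrated):

```python
import math, random

random.seed(42)
# Illustrative frequency-severity model: Poisson annual loss counts,
# lognormal loss severities (all parameters hypothetical).
freq_mean, sev_mu, sev_sigma = 20.0, 9.0, 1.2
n_sims = 20000

def poisson(lam):
    """Sample a Poisson count (Knuth's multiplication method, fine for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Simulate one year of aggregate operational losses per trial.
annual_losses = sorted(
    sum(random.lognormvariate(sev_mu, sev_sigma) for _ in range(poisson(freq_mean)))
    for _ in range(n_sims)
)

# AMA-style capital: 99.9th percentile of the simulated annual loss distribution.
capital = annual_losses[int(0.999 * n_sims)]
```

The heavy-tailed severity means the 99.9th percentile sits far above the median annual loss — which is why the credible-data requirements above emphasize external data on rare, large losses.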
  • [24] Advantage of the implied capital model for operational risks: Simple and forward looking
  • [24] Limitations of the implied capital model for operational risks (3): 1. total risk capital needs to be estimated; 2. inter-relationships between different types of risk are ignored; 3. does not capture cause and effect scenarios
  • [24] Advantage of income volatility model over implied capital model for operational risks: There is better data availability for total income volatility than for total risk capital needed
  • [24] Limitations of the income volatility model for operational risks (2): 1. ignores rapid evolution of companies and industries; 2. does not capture the softer measures of risk (opportunity cost, reputation) by focusing on income rather than value
  • [24] Advantage of CAPM over the income volatility model for operational risks: Includes both the aggregate effect of specific risk events and the softer issues
  • [24] Limitations of CAPM for operational risks (4): 1. no information is provided on losses due to specific risks; 2. level of operational risk capital is unaffected by any controls put in place (little motivation to improve risk management process); 3. tail end risks are not thoroughly accounted for; 4. does not help anticipate incidents
  • [24] Stages in a comprehensive operational risk management process (5): 1. risk policy and organization; 2. risk identification and assessment; 3. capital allocation and performance measurement; 4. risk mitigation and control; 5. risk transfer and finance
  • [24] Components of a comprehensive operational risk management policy (6): 1. principles for operational risk management; 2. definitions and taxonomy; 3. objectives and goals; 4. processes and tools; 5. organizational structure as it applies to operational risk management; 6. roles and responsibilities of different business areas
  • [24] Purpose of loss incident database for operational risks: Help company learn lessons from past loss incidents, analyze trends and support analysis of root causes and mitigation strategies
  • [24] Controls self assessment: Internal analysis of key risks and their controls and management implications
  • [24] Examples of top down models for operational risks (4): 1. implied capital; 2. income volatility; 3. economic pricing models (CAPM); 4. analogue models (use data from similar companies)
  • [24] Factor based approach for assessing operational risks: Assume losses are related to the volume of transactions and apply a weighting to the actual or expected volume of transactions (Basel indicator and standardized approaches)
  • [25] Funding liquidity risk: Risk of money markets not being able to supply funding to a business when required
  • [25] Market liquidity risk: Lack of capacity in the market to handle asset transactions at a time when the deal is required
  • [25] Reasons why quantitative techniques are usually not applied to liquidity risk (2): 1. historic data on liquidity crises is limited; 2. degree and nature of every organization’s exposure to liquidity risk is different
  • [25] Sources of cash inflows (3): 1. revenues/income generated by assets; 2. proceeds from sale of assets; 3. drawings upon sources of liquidity (issue of new debt or equity)
  • [25] Scenarios that should be considered when assessing liquidity risk (7): 1. rising interest rates; 2. ratings downgrade; 3. large operational loss; 4. large single insurance claim occurrence; 5. loss of control over a key distribution channel; 6. impaired capital markets; 7. sudden termination of a large reinsurance contract
  • [25] Level risk (or underwriting risk) in demographic risk: Risk that the particular underlying population’s loss costs are not as expected over the immediate future, due to shortcomings in the underwriting process
  • [25] Reserving risk in demographic risk (3): 1. volatility risk (randomness over a finite number of policies); 2. catastrophe risk; 3. trend (or cycle) risk (future long term changes)
  • [25] Method of modeling volatility risk: Probabilistically or stochastically assuming some underlying statistical process
  • [25] Methods of modeling catastrophe risk (2): 1. scenario analysis; 2. more complex dependencies can be modeled by copulas
  • [25] Key distinctions between life and non-life insurance risk (3): 1. non-life policies may experience more than one claim; 2. non-life policies move through different states over their lifetime; 3. claim severity may also need to be modeled
  • [25] Methods of assessing liquidity risk (2): 1. scenario analysis (when and why expected cash outflows might exceed inflows); 2. stress testing (examine effect on liquidity of an extreme event)
