Management of Market Risk - Modern Banking


Recall that, from the mid-1980s, as major investment and commercial banks rapidly expanded into trading assets, new management techniques for market risk were needed, and as a result a great deal of academic and practitioner attention has been devoted to improving the management of market risk. An offshoot of this research has been the development of new methods to manage credit risk, especially at the aggregate or portfolio level. For example, JP Morgan’s Riskmetrics was published in 1994, and outlined the bank’s approach to the management of market risk. Similar principles were developed for the management of aggregate credit risk, and the outcome was Creditmetrics, produced in 1997. For this reason, this section begins with a review of the relatively new approaches for managing market risk, followed by a discussion of credit risk management techniques.

Once readers acquire a general knowledge of key terms in the context of market risk, it is reasonably straightforward to apply the same ideas to credit risk, though credit risk, as will be seen, presents its own unique set of problems.

The central components of a market risk management system are RAROC (risk adjusted return on capital) and value at risk (VaR). RAROC is used to manage risk related to different business units within a bank, but is also employed to evaluate performance. VaR focuses solely on giving banks a number, which, in principle, they use to ensure they have sufficient capital to cover their market risk exposure. In practice, the limitations of VaR make it necessary to apply other techniques, such as scenario analysis and stress tests.

Risk Adjusted Return on Capital

Bankers Trust introduced RAROC in the late 1970s, to assess the amount of credit risk embedded in all areas of the bank. By measuring the risk of the credit portfolio, the bank could decide on how much capital should be set aside to ensure that the exposure of its depositors was limited, for a given probability of loss. It was subsequently expanded to include all the business units at Bankers Trust, and other major banks adopted either RAROC or some variant of it. The difference between RAROC and the more traditional measures such as return on assets (ROA) or return on equity (ROE) is that the latter two measures do not adjust for the differences in degree of risk for related activities within the bank.

RAROC, or risk adjusted return on capital, is defined as:

Position’s Return Adjusted for Risk ÷ Total Capital

Position’s Return: usually measured as (revenue – cost – expected losses), adjusted for risk (volatility)
Capital: the total capital (equity plus other sources of external finance)

Other, related measures of RAROC are used, though it is increasingly the sector standard. A bank wants to know the return on a position (e.g. a foreign exchange position, or a portfolio of loans or equity). RAROC measures the risk inherent in each activity, product or portfolio. The risk factor is assigned by looking at the volatility of the assets’ price – usually based on historical data. After each asset is assigned a risk factor, capital is allocated to it. For example, a trader is assigned a risk adjusted amount of capital, based on the risk factor for the type of assets being traded.

Using RAROC, capital is assigned to a trader, division or centre, on a risk adjusted basis. The profitability of the product/centre is measured by returns against capital employed. If a unit is assigned X amount of capital and returns are unexpectedly low, then the capital allocation is inefficient and therefore, costly for the firm. An attraction of RAROC is that it can be employed for any type of risk, from credit risk to market risk.

A bank’s overall capital will depend on some measure of volatility; if looking at the bank as a whole, then the volatility of the bank’s stock market value is used. Capital allocations to the individual business units will depend on the extent to which each unit contributes to the bank’s overall risk, and on how closely correlated the unit’s earnings are with those of the bank as a whole. If it is not possible to price the asset, or marking to market is irregular, then the volatility of earnings is one alternative that can be used.

Some units will have a volatility of market value that moves inversely with the rest of the firm, and this will lower the total amount of equity capital to be set aside. For example, suppose the bank is universal, and owns a liquidation subsidiary which deals with insolvent banks. Its market value is likely to be negatively correlated with the rest of the bank.

Once computed, RAROC is compared against a benchmark or hurdle rate. The hurdle rate can be measured in different ways. If it is defined as the cost of equity (the shareholders’ minimum required rate of return), then provided a business unit’s RAROC is greater than the cost of equity, shareholders are getting value for their investment; if it is less, the unit is reducing shareholder value. For example, if the cost of equity is 15% before tax, then if RAROC > 15%, the unit is adding value. The hurdle rate may also be more broadly defined as a bank’s weighted average cost of funds, including equity.
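The comparison against the hurdle rate can be sketched as follows. All figures, unit names and the 15% hurdle are invented purely for illustration; a real RAROC system would also risk-adjust the numerator for volatility.

```python
# Illustrative RAROC calculation for two hypothetical business units.

def raroc(revenue, cost, expected_loss, capital):
    """Risk adjusted return on capital: risk-adjusted return / capital."""
    return (revenue - cost - expected_loss) / capital

hurdle = 0.15  # cost of equity (pre-tax), as in the text's example

units = {  # invented figures, in $m
    "fx_trading": dict(revenue=120.0, cost=60.0, expected_loss=10.0, capital=250.0),
    "loan_book":  dict(revenue=90.0,  cost=40.0, expected_loss=25.0, capital=300.0),
}

for name, u in units.items():
    r = raroc(**u)
    verdict = "adds value" if r > hurdle else "reduces shareholder value"
    print(f"{name}: RAROC = {r:.1%} -> {verdict}")
```

Here fx_trading earns 50/250 = 20% > 15% and so adds value, while the loan book earns roughly 8.3% and falls short of the hurdle.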

To compute RAROC, it is essential to have measures of the following.

  1. Risk. There are two dimensions to risk: expected loss and unexpected loss. Expected loss is the mean or average loss expected from a given portfolio. Suppose a bank makes ‘‘home’’ loans to finance house repairs. Then, based on past defaults on these types of loans, the bank can compute an expected loss based on an average percentage of defaults over a long time period. The risk premium charged on the loan, plus fees, should be enough to cover expected losses. These losses are reported on a bank’s balance sheet, and its operating earnings should be enough to cover them. A bank will set aside reserves to cover expected losses. A bank also sets aside capital as a buffer against unexpected losses, which, for home improvement (or any other type of) loans, are measured by the volatility (or standard deviation) of credit losses. For a trading portfolio, it will be the volatility of returns, i.e. the standard deviation of returns. The figure illustrates the difference between expected and unexpected loss and the relationship between variance and unexpected loss.
Expected Loss and Unexpected Loss (Variance).

  2. Confidence intervals. Capital is set aside as a buffer for unexpected losses, but there is the question of how much capital should be set aside. Usually, a bank estimates the amount of capital needed to ensure its solvency based on a 95% or 99% confidence interval. Suppose the 99% level of confidence is chosen. Then each business unit is assigned enough capital, on a risk adjusted basis, to cover losses in all but 1 out of 100 outcomes. Investment banks may opt to use a less restrictive confidence interval of 95% (covering losses in all but 5 out of 100 outcomes) if most of their business involves assets which are marked to market on a daily basis, so they can quickly react to any sudden falls in portfolio values.
  3. Time Horizon for Measuring Risk Exposure. Ideally, the risk measured would be based on a 5 or 10-year time horizon, but there are problems obtaining the necessary data. Usually there is a trade-off between the choice of confidence interval and the time horizon. An investment bank may use a less restrictive confidence interval but a short holding period (days), because it can unwind its positions fairly quickly. A traditional bank engaged primarily in lending will normally set a time horizon of a year for both expected and unexpected losses, recognising that loans cannot be unwound quickly. Since it cannot react quickly, it sets a higher confidence interval of 99%.[32] Note that if RAROC is being used to compare different units in the bank, the same time horizon will have to be used.
  4. Probability Distribution of Potential Outcomes. It is also necessary to know the probability distribution of potential outcomes, such as the probability of default, or the probability of loss on a portfolio. The prices of traded assets are usually assumed to follow a normal distribution (a bell-shaped curve), though many experts question the validity of this assumption. Furthermore, loan losses are highly skewed, with a long downside tail, as can be observed in the distribution of credit returns shown in the figure.

[32] Some major US commercial banks use a confidence interval of 99.97%. In this case, enough capital is assigned, on a risk adjusted basis to each business unit, to cover losses in all but 3 out of 10 000 outcomes.
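The expected/unexpected loss split and the confidence-interval capital buffer can be sketched as follows. The loss history is simulated rather than real, and the distribution and parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical annual loss rates (fraction of the portfolio) on a book
# of home-improvement loans over 30 past years -- invented data.
loss_rates = rng.beta(2, 60, size=30)

portfolio = 1_000.0  # portfolio size in $m

expected_loss = portfolio * loss_rates.mean()         # covered by reserves/pricing
unexpected_loss = portfolio * loss_rates.std(ddof=1)  # one std dev, covered by capital

# Capital buffer at a 99% confidence level: the loss level exceeded in
# only 1 year out of 100, estimated here from the empirical quantile.
loss_99 = portfolio * np.quantile(loss_rates, 0.99)
capital_99 = loss_99 - expected_loss

print(f"expected loss:   ${expected_loss:.1f}m")
print(f"unexpected loss: ${unexpected_loss:.1f}m (one std dev)")
print(f"99% capital buffer above reserves: ${capital_99:.1f}m")
```

With only 30 observations the empirical 99% quantile is a crude estimate; in practice a fitted loss distribution would be used.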

Comparison of Distribution of Market Returns and Credit Returns.


The skew on the loss side is due to defaults, indicating that there is a large likelihood of earning quite small returns, together with a very small chance of very large losses. If a bank has a large portfolio of loans, these two possibilities explain why the distribution is skewed. The figure shows the contrast between normally distributed market returns and the skewed distribution of credit returns. However, if different distributions are allowed for, then it is not possible to compare one business unit against another.
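A rough sketch of why credit returns are skewed: many small interest gains, with a small chance of large default losses. The portfolio size, default probability, loss given default and interest rate below are all invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n_loans, pd_, lgd = 500, 0.02, 0.6   # loans, default probability, loss given default

# Simulate one year of returns on the loan book many times: 5% interest
# income on each performing loan, a loss of LGD on each defaulting loan.
sims = 100_000
defaults = rng.binomial(n_loans, pd_, size=sims)
returns = 0.05 * (n_loans - defaults) - lgd * defaults

# Sample skewness of the return distribution (negative => long loss tail).
z = (returns - returns.mean()) / returns.std()
skew = (z ** 3).mean()
print(f"skewness of credit returns: {skew:+.2f}")
```

Because losses enter only through the (right-skewed) default count, the return distribution inherits a left skew: frequent modest gains, rare large losses.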

RAROC has its limitations. First, the risk factor for each category is assigned according to the historic volatility of its market price, using something between the past two to three months and a year of data. There is no guarantee that the past is a good predictor of the present or future. Second, it is less accurate when applied to untraded assets such as loans, some of which are difficult to price. The choice of the hurdle rate or benchmark is another issue.

If a single hurdle rate is used, then it is at odds with the standard capital asset pricing model (CAPM), where the cost of each activity reflects its systematic risk, or the covariance of the operation with the value of the market portfolio – the βs in standard CAPM. Furthermore, if RAROC is used as an internal measure, there are no data to compute the covariances.

This means the returns on the activity being screened are considered independently of the structure of returns for the bank. Any correlation between activities, whether positive or negative, is ignored.

To summarise, a RAROC measure can assess the areas to which a bank should allocate more resources, and those from which it should divest. RAROC is also used to measure performance across a diverse set of business units within a bank, so that different parts of the business can be compared. However, the problems mentioned above mean RAROC is a somewhat arbitrary rule of thumb, not ideally suited to complex financial institutions. On the other hand, making some adjustment for risk is better than ignoring it.

Market Risk and Value at Risk

The VaR model is used to measure a bank’s market risk, and it therefore serves a different purpose from RAROC. It has since been adapted to measure credit risk, which is briefly reviewed in the next section.

Though VaR was originally used as an internal measure by banks, it assumed even greater importance after the 1996 market risk amendment to the 1988 Basel agreement – regulators encouraged banks to use VaR.

The distinguishing feature of VaR is the emphasis on losses arising as a result of the volatility of assets, as opposed to the volatility of earnings. The first comprehensive model developed was JP Morgan’s Riskmetrics.

The basic formula is:

VaRx = Vx × dV/dP × δPt    (VI)


Vx: the market value of portfolio x
dV/dP: the sensitivity to price movement per dollar market value
δPt: the adverse price movement (in interest rates, exchange rates, equity prices or commodity prices) over time t

Time t may be a day (daily earnings at risk or DEAR), a month, etc. Under the Basel market risk agreement, the time interval is 10 days.
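A simple numeric illustration of equation (VI), using an invented bond position in which the price sensitivity per dollar of market value is proxied by modified duration (a common simplification, assumed here, not stated in the text).

```python
# Equation (VI): VaR_x = V_x * (dV/dP) * dP_t, for a hypothetical bond position.

V_x = 100_000_000      # market value of the position, $100m (invented)
mod_duration = 5.0     # modified duration in years (invented sensitivity proxy)
adverse_move = 0.0025  # adverse 25 basis point rise in yields over horizon t

# Sensitivity per dollar of market value times the adverse price movement:
var_x = V_x * mod_duration * adverse_move
print(f"VaR = ${var_x:,.0f}")   # $1,250,000
```

A 25bp adverse yield move on a $100m position with duration 5 thus puts $1.25m at risk over the chosen horizon.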

Value at risk estimates the likely or expected maximum amount that could be lost on a bank’s portfolio as a result of changes in risk factors, i.e. the prices of underlying assets over a specific time horizon, within a statistical confidence interval. VaR models of market risk focus on four underlying instruments, and their corresponding prices: bonds (interest rates at different maturities), currencies (exchange rates), equity (stock market prices) and commodities (prices of commodities such as oil, wheat or pork bellies). The principal concern is with unexpected changes in prices or price volatility, which affects the value of the portfolio(s).

VaR answers the question: how much can a portfolio lose with x% probability over a stated time horizon? If the daily VaR is $46 million at a 95% confidence level, then on an average of 5 out of every 100 trading days (a 5% probability), daily losses will be $46 million or more. The exact size of the loss on each of these 5 days is unknown – only that it will be at least $46 million.

Or, more conservatively, if the daily VaR measure for a portfolio is €25 million at a 99% confidence level, then on average, daily losses will be €25 million or more on 1 out of every 100 trading days. If a 10-day VaR measure is €200 million at the 99% confidence level, then on average, in 1 out of every 100 10-day trading periods, the losses over the 10-day period will be at least €200 million.

Any VaR computation involves several critical assumptions.

  1. How often it is computed, that is, daily, monthly, quarterly, etc.
  2. Identification of the position or portfolio affected by market risk.
  3. The risk factors affecting the market positions. The four risk factors singled out are: interest rates (for different term structures/maturities), exchange rates, equity prices and commodity prices.
  4. The confidence interval. The confidence interval chosen is usually 99% (as required by Basel) and one-tailed, since VaR is only concerned with possible losses and not gains. At the 99% level, a loss exceeding VaR should occur on 1 in 100 trading days, or 2 to 3 days a year. The choice of 99% rather than 95% is the more risk averse or conservative approach. However, there is a trade-off: at 99%, less historical data (if a historical database is being used – see below) are available to determine the cut-off point.
  5. The holding period. The choice of holding period will depend on the objective of the exercise. Banks with liquid trading books will be concerned with daily returns, and hence the daily VaR or daily earnings at risk, DEAR. Pension and investment funds may want to use a month. The Basel Committee specifies 10 working days, reasoning that a financial institution may take more than 10 days to liquidate its holdings.
  6. Choice of the frequency distribution. Recall this issue was raised when RAROC was discussed. The options for VaR include the following.
    a. Non-Parametric Method. This method uses historical simulations of past risk factor returns, but makes no assumption about how they are distributed. It is known as a full valuation model because it includes every type of dependency, linear and non-linear, between the portfolio value and the risk factors. Basel requires that the historical data used date back at least one year.

      In the non-parametric approach, the researcher must specify the period to be covered, and the frequency, e.g. daily, monthly or annually. It is assumed that the contents of the portfolio are unchanged over the period, and the daily return (loss or gain) is determined.

      These are ranked from worst loss to best gain. Based on a chosen tolerance level, the loss is determined. If the frequency chosen is 2 years or 730 days, and the tolerance threshold is 10%, then the threshold bites at the 73rd worst daily loss, and VaR is the amount of this loss.

      A low tolerance threshold is more conservative and implies a larger loss and bigger VaR.

    b. Parametric Method. Use of a variance–covariance or delta normal approach, which was the method selected by Riskmetrics. Risk factor returns are assumed to follow a certain parametric distribution, usually a multivariate normal distribution. It is a partial valuation model because it can only account for linear dependencies (deltas) and ignores non-linear factors, for example, bond convexities or option gammas. This is why it is sometimes called the correlation or ‘‘delta-var’’ variation.

      If this frequency distribution is chosen, then VaR is estimated using equation (VI), which specifies portfolio risk as a linear combination of parameters, such as volatility or correlation. It provides an accurate VaR measure if the underlying portfolio is largely linear (e.g. traditional assets and linear derivatives), but is less accurate if non-linear derivatives are present.

      Banks that use variance–covariance analysis normally make some allowances for non-linearities. The Basel Amendment requires that non-linearities arising from option positions be taken into account.

      In approaches a and b, a data window must be specified, that is, how far back the historical distribution will go. The Basel Committee requires at least a year’s worth of data. Generally, the longer the data run, the better, but often data do not exist except for a few countries, and it is more likely the distribution will change over the sample period. In approach b, there is also the question of how the variances and covariances of the risk factor returns are computed.

    c. Monte Carlo Approach. Another full valuation approach, involving multiple simulations using random numbers to generate a distribution of returns. Distributional assumptions on the risk factors (e.g. commodity prices, interest rates, equity prices or currency rates) are imposed – these can be normal or other distributions. If a parametric approach is taken, the parameters of the distributions are estimated, then thousands of simulations are run, which produce different outcomes depending on the distributions used.

      The non-parametric approach uses bootstrapping, where the random realisations of the risk factor returns are obtained through iterations of the historical returns. In either approach, pricing methodology is used to calculate the value of a portfolio. Unlike approaches a and b, the number of portfolio return realisations is much greater, and the VaR estimates are derived from them. The Monte Carlo approach is usually rejected because it involves a large number of computations, which present practical problems if traders are computing VaR once, or several times, a day. Computation costs are high, too.
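The three approaches can be compared on the same return history. The two years of daily returns below are simulated (and normal by construction), so all three estimates should roughly agree; on real, fat-tailed data they typically diverge.

```python
import numpy as np

rng = np.random.default_rng(42)
# Two years (730 days) of hypothetical daily portfolio returns, in $m.
returns = rng.normal(0.0, 10.0, size=730)
conf = 0.99

# (a) Non-parametric / historical simulation: rank the outcomes and read
# off the empirical 1% quantile -- no distributional assumption.
var_hist = -np.quantile(returns, 1 - conf)

# (b) Parametric (variance-covariance): assume normality and scale the
# standard deviation by the one-tailed 99% z value (about 2.326).
z99 = 2.326
var_param = z99 * returns.std(ddof=1) - returns.mean()

# (c) Monte Carlo: fit the distribution's parameters, simulate many more
# outcomes, then read the quantile off the simulated distribution.
sims = rng.normal(returns.mean(), returns.std(ddof=1), size=100_000)
var_mc = -np.quantile(sims, 1 - conf)

for name, v in [("historical", var_hist), ("parametric", var_param),
                ("monte carlo", var_mc)]:
    print(f"{name:>11} 99% daily VaR: ${v:.1f}m")
```

All three land near the true 99% quantile of a N(0, 10) distribution, about $23m, illustrating that the methods differ in assumptions and cost rather than in what they estimate.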

VaR, Portfolios and Market Risk

It is possible to show simple applications of VaR for individual trading positions involving two currencies or equities. However, banks compute VaR for large portfolios of equities, bonds, currencies and commodities. Management will want an aggregate number showing the potential value at risk for the bank’s entire trading position. This aggregate VaR is not just a simple sum of the individual positions because they can be positively or negatively correlated with each other, which will raise or reduce the overall VaR. The components of any portfolio are sensitive to certain fundamental risks, the so-called ‘‘Greeks’’.

These are as follows.

Delta or Absolute Price Risk: the risk that the price of the underlying asset will change (e.g. the stock or commodity price, exchange rate or interest rate). The delta risk is the effect of a change in the value of an underlying instrument on the value of the portfolio.

Gamma or Convexity Risk: the rate of change in the delta itself, or the change in the delta for a one point move in the underlying price. It allows for situations where there is a non-linear relationship between the price of the underlying instrument and the value of the portfolio.

Vega or Volatility Risk: this risk applies when an option is involved, or a product has characteristics similar to an option. It is the sensitivity of the option price for a given change in the value of volatility. An increase in volatility of the underlying asset makes the option more valuable. Therefore, if the market’s view of the volatility of the underlying instrument changes, so too will the value of the option.

Rho or Discount Risk: this risk applies primarily to derivatives, or any product which is valued using a discount rate, i.e. the value is determined by discounting expected future cash flows at a risk-free rate. If the risk-free rate changes, so too does the value of the derivative.

Theta or Time Decay Risk: the time value of the option, i.e. the change in the value of a portfolio because of the passage of time. For example, the time value of an option rises with the length of time to expiry.

To arrive at a VaR, the components of the portfolio are disaggregated according to the above risk factors (if they apply), netted out, then aggregated together.

Suppose a bank computes daily earnings at risk for its foreign exchange, bond and equity positions. Then it will end up with an interest rate DEAR, a foreign exchange DEAR and an equity DEAR. These will be summarised on a spreadsheet, and if the bank operates in more than one country, their respective DEARs are reported too. Assume the bank is headquartered in Canada but also operates in the USA and the UK. Then a simplified version of the spreadsheet will look like the one in the table. The interest rate column is highly simplified, for ease of exposition. Normally the interest rate risk would appear for a number of time buckets, with a column for each bucket. ‘‘Portfolio effects’’ is another name for benefits arising from diversification, which will depend on the degree to which various markets and assets are correlated with each other. There are two to account for. The first is the diversification effect arising from having a portfolio of currency, bonds and equity in one country. The other allows for the effects of holding bonds, foreign exchange and equity in more than one country. The portfolio/diversification effects will be calculated in a separate matrix and depend on numerous intercorrelations. In the table, it has been assumed that the diversification effects allow a total of $30 million to be deducted from the summed DEAR, giving a total DEAR of $45 million.
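The aggregation step can be sketched with the usual portfolio volatility rule, total = sqrt(d′Cd). The stand-alone DEARs and the correlation matrix below are invented, so the portfolio effect does not match the $30 million assumed in the table.

```python
import numpy as np

# Hypothetical stand-alone DEARs ($m) for three risk categories:
# interest rate, foreign exchange and equity positions.
dear = np.array([30.0, 25.0, 20.0])   # sums to $75m undiversified

# Assumed correlation matrix between the three risk factors (invented).
corr = np.array([
    [1.0, 0.3, 0.2],
    [0.3, 1.0, 0.4],
    [0.2, 0.4, 1.0],
])

# Aggregate DEAR: sqrt(d' C d), the standard portfolio volatility rule.
total = float(np.sqrt(dear @ corr @ dear))
print(f"summed DEAR:      ${dear.sum():.0f}m")
print(f"aggregate DEAR:   ${total:.1f}m")
print(f"portfolio effect: ${dear.sum() - total:.1f}m")
```

Any correlations below 1 make the aggregate DEAR smaller than the simple sum; negative correlations would shrink it further.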

To show how VaR is reported by banks, the figures from Merrill Lynch’s Annual Report are provided. Merrill Lynch differentiates between trading and non-trading VaR, as can be seen from the table.

A Hypothetical Daily Earnings at Risk for a Canadian bank

Merrill Lynch: value at risk ($m)

Problems with the VaR Approach

Danielsson (2000, 2002) has been one of the most vociferous critics of value at risk, as discussed below. Other authors have voiced similar concerns. The first problem with VaR is that it does not give the precise amount that will be lost.

For example, if a bank reports a daily VaR of $1 million at the 99th percentile, it means that losses in excess of $1 million would be expected to occur 1% of the time. However, VaR gives no indication of how far it will be exceeded – the loss could be $2.5 million, $450 million or $1 billion – there is no upper bound on what can be lost. Statistically, rather than giving the entire tail, it is giving an arbitrary point in the tail.

Second, the simpler VaR models depend on the assumption that financial returns are normally distributed and uncorrelated. Empirical studies have shown that these assumptions may not hold, contributing to an inaccurate VaR measure of market risk.

Anecdotal evidence and remarks from traders suggest it is also possible to manipulate VaR by up to a factor of five. A trader might be told to lower VaR because it is too high. By understating VaR while leaving the true exposure unchanged, the bank can increase the amount of risk taken, and expected profit.

VaR does not give a probability of bank failure, only losses that arise from a bank’s exposure to market risk. On the other hand, it was never meant to. It is only a measure of the bank’s exposure, reflecting the increased trading activities of many banks.

If all traders are employing roughly the same model, then the measure designed to contain market risk creates liquidity risk. This point was illustrated by Dunbar (2000), commenting on the 1998 Russian crisis. Market risk had been modelled using VaR, based on a period of relatively stable data, because for the previous five years (with the exception of the Asian crisis, which was largely confined to the Far East) volatility on the relevant markets had been low. Financial institutions, conforming to regulations, employed roughly the same market risk models. The default by Russia on its external loans caused the prices of some assets to become quite volatile, which breached the risk limits set by VaR-type models.

There was a flight from volatile to stable assets, which exaggerated the downward price spiral, resulting in reduced liquidity. Hence if all banks employ a similar VaR, it can actually escalate the crisis.

The above example also illustrates that statistical relationships applied to VaR which hold during a period of relative stability often break down and cannot be used during a crisis.

While there may be little in the way of correlation between asset prices in periods of stability, in a crisis, all asset values tend to move together. This means any portfolio/diversification effects will disappear.

Variations in the model assumptions with respect to the holding period, confidence interval and data window will cause different risk estimates (Beder, 1995). Likewise, Danielsson (2000) demonstrates that VaR models lack robustness, that is, the VaR forecasts across different assets are unreliable. To illustrate, Danielsson employs a violation ratio.

Violation is defined as the case where the realised loss is greater than the VaR forecast. The violation ratio is the ratio of the realised number of VaR violations to the expected number of violations. If the V-ratio > 1, the model is under-forecasting the risk; if the V-ratio < 1, it is over-forecasting. Put another way, over-forecasting means the model is thick tailed relative to the data; under-forecasting means the model is relatively thin tailed. Danielsson reports disappointing results using this test. Different estimation methods produce different violation ratios, varying between, for example, 0.38 and 2.18 (using variations of Riskmetrics).
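A back-of-the-envelope version of the violation-ratio test. The daily P&L below is simulated from a fat-tailed distribution while the VaR forecast assumes normality, so the model should tend to under-forecast risk (V-ratio above 1); all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
days, conf = 1000, 0.99
losses = rng.standard_t(df=4, size=days)   # fat-tailed daily losses (t, 4 d.f.)

# A deliberately thin-tailed VaR forecast: the normal 99% quantile scaled
# by the sample standard deviation -- it ignores the fat tails in the data.
var_forecast = 2.326 * losses.std(ddof=1)

violations = int((losses > var_forecast).sum())  # realised loss > VaR forecast
expected = (1 - conf) * days                     # 10 violations expected
v_ratio = violations / expected
print(f"violations: {violations}, violation ratio: {v_ratio:.2f}")
```

Counting exceedances against the expected 1% rate is the essence of the test; a ratio persistently above 1 signals that the model's tails are too thin for the data.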

For the above reasons, it is necessary to test the actual outcomes against the VaR predictions of losses. However, such tests also have a problem (Kupiec, 1995): if the period over which the performance of the VaR model is observed is relatively short, the tests lack statistical power. It is difficult to evaluate the accuracy of the model on the basis of a year of data. The choice of a 99% confidence interval allows for a loss to occur, on average, on 2.5 days in a year.

Danielsson (2000) argues such an allowance is irrelevant in a period of systemic crisis, or even for assessing the probability of a bank going bankrupt. If VaR violations occur more than 2.5 times per year under a 99% confidence interval, it does not usually indicate the bank is in any difficulty, which raises the question: when are VaR breaches relevant? In defence of VaR, it was never meant to indicate that a bank was in difficulty. It is a benchmark number for banks to use to track their market risk exposure.

Both the parametric and non-parametric frequency distributions produce measures which rely on historical data; the implicit assumption is that the past is a good predictor of future returns. But historical simulation is sensitive to the sampling period (Danielsson and de Vries, 1997). For example, in an equity portfolio, the VaR outcomes will be quite different if the October 1987 crash is included than if it is excluded. Or, looking at US share price data from mid-1983 to mid-2000 would suggest sizeable index price falls were the exception, and when they happened, were quickly reversed. Agents armed with this information would think the future was like the past (a popular assumption in many VaR models) and would have found the subsequent share price declines completely mystifying, and outside anything remotely predictable.

However, the non-parametric or historical simulation approaches are superior to the variance–covariance approach (advocated by Riskmetrics) for two reasons. First, financial market returns do not always follow a normal distribution – large movements in the market (fat tails) occur more often than indicated by the normal distribution. Second, historical simulation allows for non-linearities between the position and risk factor returns, which are important when the VaR being computed includes derivatives, especially options. There are other criticisms of the use of VaR which relate to the 1996 Basel Amendment and ‘‘Basel 2’’ agreement.

Stress Testing and Scenario Analysis to Complement VaR

Given the limitations of VaR, most banks apply scenario analysis and stress testing to complement estimates of market risk produced by VaR. Banks begin by identifying plausible unfavourable scenarios which cause extreme changes to the value of one or more of the four risk factors, i.e. interest rates, equity, currency or commodity prices. These might include an event which causes most financial agents to act in a similar manner, prompting severe illiquidity, as illustrated by the LTCM case, discussed earlier. Or the unexpected collapses of Enron and WorldCom (two large American corporations which failed in 2001–2002), creating widespread fears about the quality and accuracy of company financial statements which, in turn, contributed to unexpected, dramatic declines in one or more key stock market indices. Other scenarios might be ill-founded rumours which prompt unexpected cash margin calls or changes in collateral obligations. The stress test, based on a scenario, computes how much a bank’s portfolio could lose.

Kurtosis describes the relative thinness or fatness of the tails of a distribution compared to a normal distribution. For example, in a credit portfolio, a leptokurtic (fat-tailed) loss distribution means extreme events are more likely than under a normal distribution (e.g. one large credit default results in massive losses). Thin (platykurtic) tails suggest the opposite.
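A quick check of the idea, comparing sample excess kurtosis for simulated normal and fat-tailed returns; the distributions and sample sizes are chosen purely for illustration.

```python
import numpy as np

def excess_kurtosis(x):
    """Sample excess kurtosis: ~0 for a normal distribution, >0 for fat tails."""
    z = (x - x.mean()) / x.std()
    return (z ** 4).mean() - 3.0

rng = np.random.default_rng(3)
normal_ret = rng.normal(size=200_000)          # mesokurtic benchmark
fat_ret = rng.standard_t(df=10, size=200_000)  # leptokurtic (fat tails)

ek_normal = excess_kurtosis(normal_ret)
ek_fat = excess_kurtosis(fat_ret)
print(f"normal:     {ek_normal:+.2f}")
print(f"fat-tailed: {ek_fat:+.2f}")
```

The normal sample shows excess kurtosis near zero, while the Student-t sample is clearly positive: extreme observations occur more often than a normal distribution predicts.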

The size of the change in key risk factors will have to be computed. One group recommended changes in volatility such as currency changes of plus or minus 6%, or a 10% change in an equity index. The bank must also choose the frequency with which the stress tests should be conducted.

As a practical example, suppose there are two scenarios: (1) a 40% decline in UK and world equity prices, or (2) a 15% decline in residential and commercial property prices. If these are the scenarios, the next step is to decide what stress tests should be conducted.

Banks could be asked to identify the potential impact of (1) and (2) on market risk, credit risk and interest rate risk. In addition (or alternatively), building societies could be asked to compute the impact of (1) and (2) on the retail deposit rate, the mortgage rate and the income of building societies. The complexity of the stress tests that must be performed is immediately apparent: complicated models are required if the banks or building societies are to produce realistic answers to these questions.
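The mechanics of a heavily simplified stress test for the two scenarios might look like this. The linear sensitivities are invented, and a real stress model would revalue the whole book rather than apply linear shocks.

```python
# Minimal stress-test sketch: each scenario maps a risk factor shock (in %)
# to an estimated P&L impact via an assumed linear sensitivity ($m per 1% move).

positions = {                 # invented sensitivities of portfolio value
    "uk_world_equity": 12.0,  # $m gained/lost per 1% equity move
    "property":        8.0,   # $m gained/lost per 1% property move
}

scenarios = {
    "equity_crash":   {"uk_world_equity": -40.0},  # 40% equity decline
    "property_slump": {"property": -15.0},         # 15% property decline
}

results = {}
for name, shocks in scenarios.items():
    impact = sum(positions[factor] * move for factor, move in shocks.items())
    results[name] = impact
    print(f"{name}: estimated P&L impact ${impact:+.0f}m")
```

The remaining, harder tasks are exactly those the text raises: choosing plausible shock sizes, test frequency, and what to do with the results.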

A final task is to decide how to use the results. This presents a very difficult problem for the bank because, by definition, a bank cannot forecast ‘‘surprises’’ or unexpected events. Nor can it judge how frequently they will occur, and therefore, whether or not a special reserve should be created. Furthermore, if such a reserve is kept, how much should the bank be setting aside?
