
Quantitative Risk Measures

by

Alex Healey

Department of Mathematics, King's College London
The Strand, London WC2R 2LS, United Kingdom

Email: [email protected]

3 September 2012

Report submitted in partial fulfillment of the requirements for the degree of MSc in Financial Mathematics in the University of London


    Abstract

This piece is a review of quantitative risk measures with numeric examples and a discussion of methods for adjusting such models for liquidity risk. In chapter 2 I present the framework for risk measures with a focus on value at risk, expected shortfall and coherency properties. Chapter 3 details the techniques used to implement risk calculations. Chapter 4 looks at techniques for incorporating liquidity risk into these calculations and proposes a variation on the framework proposed by Acerbi and Scandolo [cAgS07] to take into account empirical models of trading costs. In the last chapter I discuss some numerical examples.


    Acknowledgements

Thanks to my wife Sofia for supporting me throughout my Masters, my managers and colleagues at Nomura for giving me the time to complete this degree, and my professors at King's for their tuition and guidance.


    Contents

1 Introduction

2 Quantitative Risk Measures
  2.1 Risk Measures
  2.2 Coherency

3 Risk Calculations
  3.1 Mapping
    3.1.1 Fixed Income
    3.1.2 Stocks
    3.1.3 Derivatives
  3.2 Modelling Risk Factors
    3.2.1 Distributions
    3.2.2 Dependence
    3.2.3 Extreme Values
    3.2.4 Conditional Models
  3.3 Methods
    3.3.1 Historic Simulation
    3.3.2 Analytic Methods
    3.3.3 Monte-Carlo Methods

4 Liquidation Risk Adjustments
  4.1 Liquidity
  4.2 Liquidity Policies

5 Numeric Results
  5.1 Portfolio One
    5.1.1 Mapping
    5.1.2 Theta
    5.1.3 Estimators
    5.1.4 Methods
  5.2 Portfolio Two
    5.2.1 Mapping
    5.2.2 Vega
    5.2.3 Methods

6 Conclusion


    Chapter 1

    Introduction

Risk is defined as the possibility of suffering harm, loss or other adverse occurrences. In a financial context these adverse occurrences are almost always financial in nature, either directly, such as the loss incurred by holding stock in a company whose share price falls, or indirectly, such as reputational damage causing loss of banking business. The risks faced by a financial organisation are commonly broken into a number of categories. Market risk (which is the main focus of this piece) is the risk of financial losses due to adverse movements of market variables (stock prices, interest rates, etc.). Credit risk is the risk of losses resulting from credit events of counterparties (e.g. defaults, rating changes). Operational risk arises from possible failures in the business processes of a company. Examples in financial organisations include losses incurred by rogue traders or fines imposed by regulators for failure to follow market rules (such as the recent Libor scandal).

One of the main purposes of quantitative risk management is the measurement of risk. This is required by regulators to determine suitable capital reserves for the solvency of banks. It is used by clearing houses to determine margin requirements for brokers. Banks use risk measures externally to inform shareholders of their exposures and internally to measure the risk-adjusted performance of different groups. Commonly used risk measures include Value at Risk (VaR), which is the benchmark measure used in the Basel II regulations, scenario measures used by clearing houses, and expected shortfall.

Clearing houses may present one of the strongest examples of the utility of quantitative risk management, as they have survived some of the most serious financial disasters, including the collapse of Lehman Brothers, while still successfully settling trades. Conversely it should be noted that many of the most serious risks faced by financial organisations are not easy to model, as they represent structural changes to markets which have no obvious historical analogue (a Euro breakup would be a topical example of such a change) or involve model risk, where the market norms for model use change suddenly (such as the appearance of the volatility smile following Black Monday in 1987). These are the unknown unknowns of risk management which we should be careful not to ignore. Further, it should be noted that the aggregate effect of financial institutions using these types of risk management policies to determine solvency requirements could be argued to cause positive feedback, where asset sales are required to boost capital reserves in response to severe market movements, causing further depression of asset values (see Cintioli [dC10]).

In this paper I review risk measures and their application to derivatives portfolios. The next two chapters cover a review of quantitative risk management theory. The fourth chapter examines extensions to the methodology suggested by Acerbi and Scandolo [cAgS07] for accounting for liquidity risk in the computation of risk measures. The last chapter looks at the implementation of some of the techniques discussed, using some example portfolios and presenting empirical results from these investigations.


    Chapter 2

    Quantitative Risk Measures

    2.1 Risk Measures

The mathematical context for considering risk is a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ on which we define our random variables. We add to this a filtration $\mathcal{F}_t$ which represents the information publicly available at time $t$. Defined on this probability space are $m^+_t$, a random vector of asset best ask prices, and $m^-_t$, a random vector of asset best bid prices. The mid price series $m_t$ is

$$m_t := \frac{m^+_t + m^-_t}{2}.$$

We then consider a portfolio of financial assets $p = (p_0, p_1, p_2, \ldots, p_n) \in \mathcal{P}$ which has value $PV_t$ at time $t$. Normally the function $PV_t$ is calculated by marking to market, i.e. valuing at mid price, as

$$PV_t = p_0 + \sum_{i=1}^{n} m_{i,t}\, p_i$$

although to account for liquidity (this will be considered in more detail in chapter 4) the spread can be considered by instead valuing as

$$PV_t = p_0 + \sum_{i=1}^{n} \left( m^+_{i,t}\, p_i\, \theta(-p_i) + m^-_{i,t}\, p_i\, \theta(p_i) \right)$$

where $\theta(\cdot)$ is the Heaviside function, so that long positions are valued at the bid and short positions at the ask. Note that our first asset $p_0$ is a special asset which represents the cash held in the portfolio.


Modelling the risk of portfolios is frequently a very high dimensional problem, as the number of assets held may number in the tens of thousands when examining the exposures of a bank. To reduce the number of dimensions and to build more tractable models we typically look to model a number of risk factors rather than building a model which covers all asset prices. We define $X_t$, a random vector process of risk factors. Examples of risk factors commonly used are log stock prices or yields on non-defaulting bonds. The filtration is typically assumed to be generated by the history of all risk factors up to time $t$, $\mathcal{F}_t = \sigma(X_s : s \le t)$.


We consider a set $\mathcal{M}$ of random variables, each representing the loss for a particular portfolio over the period. These random variables should be almost surely finite. We will typically assume this set is a convex cone: in other words, if $L_1 \in \mathcal{M}$ and $L_2 \in \mathcal{M}$ then $L_1 + L_2 \in \mathcal{M}$, and if $L_1 \in \mathcal{M}$ and $\lambda > 0$ then $\lambda L_1 \in \mathcal{M}$.

In this context we have a number of approaches to quantifying the risk of a portfolio. Simple approaches include considering the sensitivity of the portfolio to changes in the risk factors by measuring derivatives of the portfolio value with respect to the factors. This approach, called sensitivity analysis, is common in derivatives trading, but the information is not easily aggregated over multiple risk factors, so it is less useful for a portfolio which is exposed to many sources of risk. It also requires interpretation in the light of the variability of the particular factor to understand the financial size of the risks entailed.

Loss distributions provide a more suitable object for risk comparisons. They take into account diversification effects and can be compared across portfolios consisting of different assets. In order to present the information in the loss distribution more effectively we consider summary statistics of the loss distribution, which give us a handle on the types of loss we might expect.

Definition 3. A risk measure is a real valued function

$$\rho : \mathcal{M} \to \mathbb{R}.$$

In other words, a risk measure associates a number describing the risk with each portfolio (more strictly, each loss distribution) in a set.

Note that in some papers the term risk measure is used to apply only to coherent risk measures (see below).

Definition 4. Value at risk is defined as

$$\mathrm{VaR}_\alpha(L) := \inf\{\, l \in \mathbb{R} : F_L(l) \ge \alpha \,\}.$$

The popularity of value at risk can be partially explained by its simplicity. It defines the size of loss we should expect to see exceeded in $100(1-\alpha)\%$ of the sampled time periods under the model assumptions.

Definition 5. Expected shortfall is defined as

$$ES_\alpha(L) := \frac{1}{1-\alpha} \int_\alpha^1 \mathrm{VaR}_u(L)\, du.$$


This definition shows that the expected shortfall is the average of $\mathrm{VaR}_u$ for all levels $u \ge \alpha$. As such it takes into account all large losses rather than focusing on a particular quantile. We will see that this has some theoretical advantages over VaR. Expected shortfall is a special example of a spectral risk measure, which represents risk aversion using a risk spectrum function.

Definition 6. A function $\phi \in L^1([0,1])$ is a risk spectrum function if it has the following properties:

$$\int_a^b \phi(u)\, du \ge 0 \quad \forall\; 0 < a < b < 1 \tag{i}$$

$$\int_a^{a+\epsilon} \phi(u)\, du \ge \int_{a-\epsilon}^{a} \phi(u)\, du \quad \text{for all } a, \epsilon > 0 \text{ with } [a-\epsilon, a+\epsilon] \subseteq [0,1] \tag{ii}$$

$$\|\phi\|_1 = 1 \tag{iii}$$

Definition 7. The spectral risk measure associated with a risk spectrum $\phi \in L^1([0,1])$ is

$$SR_\phi(L) := \int_0^1 \mathrm{VaR}_u(L)\, \phi(u)\, du.$$

Spectral risk measures are shown in Acerbi [cA02] to be a class of coherent risk measures (see below), where the risk spectrum provides a specification of risk aversion. For expected shortfall we have equal aversion to all losses over the corresponding value at risk.

Definition 8. Scenario risk is based on evaluating the losses in a portfolio over a range of scenarios which describe changes to the risk factors. If we have a set $X = \{z_1, \ldots, z_n\}$ of risk factor changes, or scenarios, and a set of weights $w = (w_1, \ldots, w_n)$, then the scenario risk is

$$\Psi[X, w] := \max\{\, w_1\, l_{[t]}(z_1), \ldots, w_n\, l_{[t]}(z_n) \,\}$$

where $l_{[t]}$ is the loss operator giving the portfolio loss under a given risk factor change.

    2.2 Coherency

A risk measure should have certain properties to be a useful measure of risk. We think of our risk measures as representing a quantity of cash required to offset the risk associated with holding a portfolio.

Definition 9. (Monotonicity) If $L_1, L_2 \in \mathcal{M}$ and $L_1 \le L_2$ a.s., then $\rho(L_1) \le \rho(L_2)$.


This is a natural property, as we would not want a portfolio which almost surely has greater losses than another to have a lower risk value.

Definition 10. (Translation Invariance) If $L \in \mathcal{M}$ and $k \in \mathbb{R}$, then $\rho(L + k) = \rho(L) + k$.

This indicates that adding a deterministic quantity of cash to a portfolio should have the effect of altering the risk by this amount. If our units of risk are to be cash we will require this property.

Definition 11. (Positive Homogeneity) For all $L \in \mathcal{M}$ and $\lambda > 0$, $\rho(\lambda L) = \lambda \rho(L)$.

This indicates that if we scale our portfolio loss by a positive scalar our risk should scale in the same way.

Definition 12. (Subadditivity) For all $L_1, L_2 \in \mathcal{M}$ we have $\rho(L_1 + L_2) \le \rho(L_1) + \rho(L_2)$.

This indicates that a combination of two portfolios will always have risk lower than or equal to the sum of the risks of the individual portfolios. This reflects the diversification effect which may occur.

Definition 13. (Coherent Risk Measure) A coherent risk measure is a risk measure satisfying the four properties listed above: monotonicity, translation invariance, positive homogeneity and subadditivity.

    Theorem 1. Expected Shortfall is a coherent risk measure.

Proof. We start by denoting

$$\tilde{1}_{\{L \ge l\}} := \begin{cases} 1_{\{L \ge l\}} & \text{if } \mathbb{P}[L = l] = 0 \\[6pt] 1_{\{L \ge l\}} + \dfrac{(1-\alpha) - \mathbb{P}[L \ge l]}{\mathbb{P}[L = l]}\, 1_{\{L = l\}} & \text{if } \mathbb{P}[L = l] > 0 \end{cases}$$

Then we have

$$E\big[\tilde{1}_{\{L \ge \mathrm{VaR}_\alpha(L)\}}\big] = 1 - \alpha \tag{2.1}$$

$$ES_\alpha(L) = (1-\alpha)^{-1}\, E\big[L\, \tilde{1}_{\{L \ge \mathrm{VaR}_\alpha(L)\}}\big] \tag{2.2}$$

We define $L_3 := L_1 + L_2$ and we have


$$\begin{aligned}
(1-\alpha)\big(ES_\alpha(L_1) + ES_\alpha(L_2) - ES_\alpha(L_3)\big)
&= E\big[L_1 \tilde{1}_{\{L_1 \ge \mathrm{VaR}_\alpha(L_1)\}} + L_2 \tilde{1}_{\{L_2 \ge \mathrm{VaR}_\alpha(L_2)\}} - L_3 \tilde{1}_{\{L_3 \ge \mathrm{VaR}_\alpha(L_3)\}}\big] \\
&= E\big[L_1\big(\tilde{1}_{\{L_1 \ge \mathrm{VaR}_\alpha(L_1)\}} - \tilde{1}_{\{L_3 \ge \mathrm{VaR}_\alpha(L_3)\}}\big) + L_2\big(\tilde{1}_{\{L_2 \ge \mathrm{VaR}_\alpha(L_2)\}} - \tilde{1}_{\{L_3 \ge \mathrm{VaR}_\alpha(L_3)\}}\big)\big] \\
&\ge \mathrm{VaR}_\alpha(L_1)\, E\big[\tilde{1}_{\{L_1 \ge \mathrm{VaR}_\alpha(L_1)\}} - \tilde{1}_{\{L_3 \ge \mathrm{VaR}_\alpha(L_3)\}}\big] + \mathrm{VaR}_\alpha(L_2)\, E\big[\tilde{1}_{\{L_2 \ge \mathrm{VaR}_\alpha(L_2)\}} - \tilde{1}_{\{L_3 \ge \mathrm{VaR}_\alpha(L_3)\}}\big] \\
&= \mathrm{VaR}_\alpha(L_1)\big((1-\alpha) - (1-\alpha)\big) + \mathrm{VaR}_\alpha(L_2)\big((1-\alpha) - (1-\alpha)\big) = 0.
\end{aligned}$$

The inequality follows because each difference of indicators is non-negative where $L_i \ge \mathrm{VaR}_\alpha(L_i)$ and non-positive where $L_i < \mathrm{VaR}_\alpha(L_i)$. This shows the subadditivity of the measure. The other properties are easier to prove.

Theorem 2. VaR is not a subadditive measure and hence not a coherent risk measure.

Proof. Consider two portfolios, each holding a single defaultable bond (a different bond in each portfolio) with value $1000 and a 3% chance of default, independent of the other bond. The 95% VaR of each portfolio is $0, while the 95% VaR of the combined portfolio is $1000.
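This failure is easy to check numerically. A minimal sketch, assuming Python with numpy (the simulation size and seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Each bond loses $1000 with probability 3%, independently of the other.
defaults = rng.random((n, 2)) < 0.03
loss1 = 1000.0 * defaults[:, 0]
loss2 = 1000.0 * defaults[:, 1]

# The 95% VaR of each bond alone is 0, but the combined portfolio's is 1000.
print(np.quantile(loss1, 0.95), np.quantile(loss2, 0.95))  # 0.0 0.0
print(np.quantile(loss1 + loss2, 0.95))                    # 1000.0
```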

The counterexample shown above demonstrates a weakness of VaR as a measure. If we consider a trader constructing a portfolio of such bonds we have the nonsensical result that the lowest possible risk is a portfolio entirely invested in one of the bonds, due to the neglect that VaR has for the losses incurred above the threshold. If traders are set VaR limits this may end up incentivising them to game their risk limits by concentrating risks above the threshold level. This problem can apply when dealing with skewed loss distributions or when there are toxic dependency relationships between the risks (see section 6.2 in McNeil, Frey and Embrechts [aMrFpE05]).

It can be shown that for portfolios which are exposed only to elliptical risk factors VaR does respect subadditivity. As a result it provides reasonable results when aggregating portfolios at the highest level when assumptions of normality of risk (a sub-class of elliptical risks) are made.


The axioms for coherent risk measures are not universally accepted as the properties which risk measures should observe (for example see Follmer and Schied [hFaS02]). When attempting to account for liquidity risk we can see some problems with the axioms of positive homogeneity and subadditivity. If we multiply the size of all our holdings in a portfolio we might consider that our new portfolio is subject to additional liquidity risk due to its larger size, thereby violating these two axioms. A broader class of risk measures which does address these concerns is the class of convex risk measures.

Definition 14. (Convexity) For all $L_1, L_2 \in \mathcal{M}$ and $\lambda \in (0, 1)$ we have

$$\rho(\lambda L_1 + (1-\lambda) L_2) \le \lambda \rho(L_1) + (1-\lambda) \rho(L_2).$$

Definition 15. (Convex Risk Measure) A convex risk measure is a risk measure satisfying the three properties: monotonicity, translation invariance and convexity.

The central result for convex risk measures is a representation theorem which shows that any convex measure of risk takes the form

$$\rho(L) = \sup_{Q \in \mathcal{P}} \big( E_Q[L] - \beta(Q) \big),$$

where $\mathcal{P}$ is the set of all probability measures defined on $\Omega$ and $\beta : \mathcal{P} \to (-\infty, \infty]$ is a penalty function.

As discussed in Acerbi and Scandolo [cAgS07], we can make coherent risk measures account for liquidity as long as we drop the assumption that our portfolio value $PV_t$ acts as a linear function over the vector space of portfolios.


    Chapter 3

    Risk Calculations

    3.1 Mapping

In order to aggregate the risks associated with a portfolio of instruments and derivatives we typically map the sensitivity of our portfolio to a number of risk factors and value our derivatives by applying pricing formulae (or, in the case of exotics, using numerical methods).

    3.1.1 Fixed Income

For fixed income portfolios the primary subject of modelling will be the yield curve and its volatility (in the case of options). Jorion [pJ06] notes that common types of mappings for such portfolios include principal mapping, where the portfolio is mapped to a risk factor with the average portfolio maturity; duration mapping, where a risk factor is picked which matches the portfolio duration; and cash-flow mapping, where the cash flows are grouped into maturity buckets. Principal mapping is problematic as it ignores the structure of intermediate cash flows, and duration mapping has problems measuring risks associated with non-parallel shifts in the yield curve. PIMCO's very successful risk model [P00] uses risk factors including:

    1. 10 year duration yield curve level

    2. 2-10 year duration slope

    3. 10-30 year duration slope


    4. Mortgage spread over Treasury

    5. Corporate bond spread over Treasury

which are chosen to capture the shape of the yield curve in a parsimonious fashion.

    3.1.2 Stocks

If we consider a model with a large number of risk factors we rapidly run into problems with the estimation of the covariance matrix. The number of terms in this matrix is $n(n+1)/2$ and we typically do not have sufficient data to create good estimates, especially if we wish to consider only more recent data (to make our model more conditional). Furthermore the estimated covariance matrix may prove to be non-positive-definite, which prevents sensible interpretation of the model.

To reduce the number of risk factors under consideration for a stock portfolio the most common approach is to build a factor model for the stocks. There are two main classes of these: economic models and statistical models. In the first, the factors used are determined by experts on the markets in question and the factors which drive returns, for example sector factors or value factors. In the second, the factors are derived from the returns timeseries. This may be facilitated by techniques such as principal components analysis (PCA), which identifies linear combinations of the variables which have the largest possible variance. By taking the first $k$ components an approximation to the covariance matrix can be constructed, as sketched below.
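A minimal sketch of the PCA approximation, assuming Python with numpy (the function name and the diagonal correction are illustrative choices, not part of the text above):

```python
import numpy as np

def pca_covariance(returns: np.ndarray, k: int) -> np.ndarray:
    """Rank-k principal-components approximation to the covariance of a
    T x N matrix of returns, with residual variances restored on the
    diagonal so that each asset keeps its total variance."""
    cov = np.cov(returns, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    top = eigvecs[:, -k:] * eigvals[-k:]       # scale the k largest components
    approx = top @ eigvecs[:, -k:].T
    approx[np.diag_indices_from(approx)] += np.diag(cov) - np.diag(approx)
    return approx
```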

A different approach to this problem is to use shrinkage estimators, which consider a linear combination of the empirical estimator $\hat\Sigma$ and a shrinkage target $D$ (typically a diagonal matrix of the variances),

$$\hat\Sigma_{\mathrm{Shrink}} := \delta\, \hat\Sigma + (1 - \delta)\, D$$

where $\delta$ is selected to maximise the accuracy of the shrinkage estimator (by cross validation or other techniques).
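The shrinkage estimator is equally direct; a sketch under the same assumptions ($\delta$ would be selected by cross validation, as noted above):

```python
import numpy as np

def shrink_covariance(cov: np.ndarray, delta: float) -> np.ndarray:
    """Linear shrinkage of the empirical covariance toward the diagonal
    target D of sample variances."""
    D = np.diag(np.diag(cov))
    return delta * cov + (1.0 - delta) * D
```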

    3.1.3 Derivatives

    In the Black-Scholes setting with fixed interest rates we have the dynamics,


$$dS_t = r S_t\, dt + \sigma_S S_t\, dW_t. \tag{3.1}$$

This leads to the following equation for the valuation of a European call option,

$$C(S_t, t) = S_t \Phi(d_1) - K e^{-r(T-t)} \Phi(d_2) \tag{3.2}$$

$$d_1 = \frac{\log(S_t/K) + (r + \sigma^2/2)(T-t)}{\sigma\sqrt{T-t}} \tag{3.3}$$

$$d_2 = \frac{\log(S_t/K) + (r - \sigma^2/2)(T-t)}{\sigma\sqrt{T-t}}. \tag{3.4}$$

Where:

$\Phi(\cdot)$ is the cumulative distribution function of the standard normal.
$S_t$ is the price of the underlying (equity) asset at time $t$.
$K$ is the strike price.
$\sigma$ is the volatility of the underlying.
$r$ is the risk free rate.
$T$ is the expiry time.

Under these assumptions we see that the value of the contract at time $t + \Delta t$ will be determined by the price of the underlying security at this time, so we must include the stock return as a risk factor in our model. In practice we know that volatility is not constant, or at least we can observe that implied volatility is not constant, so, abusing the BS model but in line with standard market practice, we can also consider implied volatility as another risk factor.

For full Monte Carlo simulations we can use full valuation of the portfolio based on equation (3.2) above and its analogue for pricing puts. When considering a Delta Normal approach we wish to use instead the linear approximations to these risks. In this case we need to calculate the partial derivatives with respect to the underlying return $\log(S_t)$, time $t$ and the implied volatility $\sigma$. These derivatives can be calculated from equation (3.2) and we obtain the following (usually referred to as the Greeks).


$$\frac{\partial C}{\partial S_t} = \Phi(d_1) \tag{3.5}$$

$$\frac{\partial C}{\partial \sigma} = S_t\, \phi(d_1)\, \sqrt{T-t} \tag{3.6}$$

$$\frac{\partial C}{\partial t} = -\frac{S_t\, \phi(d_1)\, \sigma}{2\sqrt{T-t}} - r K e^{-r(T-t)} \Phi(d_2) \tag{3.7}$$

where $\phi(\cdot)$ is the standard normal density.

This makes our linear loss (ignoring implied volatility risk)

$$L_{t+1} = -\left( \frac{\partial C}{\partial t}\, \Delta t + \frac{\partial C}{\partial S_t}\, S_t \log\Big(\frac{S_{t+1}}{S_t}\Big) \right) \tag{3.8}$$

or, with volatility risk,

$$L_{t+1} = -\left( \frac{\partial C}{\partial t}\, \Delta t + \frac{\partial C}{\partial S_t}\, S_t \log\Big(\frac{S_{t+1}}{S_t}\Big) + \frac{\partial C}{\partial \sigma}\, (\sigma_{t+1} - \sigma_t) \right) \tag{3.9}$$

Note the first term gives a linear approximation of the time loss. However, since the passing of time is deterministic, this is easily replaced with an exact calculation of the loss.

This linear approximation is acceptable for options portfolios whose value is approximately linear with respect to the underlying and when the time period under consideration is short. Portfolio one is such a portfolio (as we will see below), but it is easy to construct portfolios where the linear approximation is extremely inaccurate outside a very small range of underlying changes. Higher order approximations may be used in this case. We consider using the second derivative, gamma, to improve the approximation,

$$\frac{\partial^2 C}{\partial S_t^2} = \frac{\phi(d_1)}{S_t\, \sigma \sqrt{T-t}}. \tag{3.10}$$
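As an illustration of equations (3.2)-(3.10), a sketch assuming Python with scipy (tau denotes the time to expiry T - t; the helper names are illustrative):

```python
import numpy as np
from scipy.stats import norm

def bs_call_greeks(S, K, tau, r, sigma):
    """Black-Scholes call price with the Greeks of (3.5)-(3.7) and (3.10)."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    price = S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)
    delta = norm.cdf(d1)                                    # (3.5)
    vega = S * norm.pdf(d1) * np.sqrt(tau)                  # (3.6)
    theta = (-S * norm.pdf(d1) * sigma / (2 * np.sqrt(tau))
             - r * K * np.exp(-r * tau) * norm.cdf(d2))     # (3.7)
    gamma = norm.pdf(d1) / (S * sigma * np.sqrt(tau))       # (3.10)
    return price, delta, vega, theta, gamma

def delta_gamma_loss(S, dt, log_ret, delta, gamma, theta):
    """Quadratic (delta-gamma) approximation to the loss over dt,
    extending the linear loss (3.8) with the gamma term."""
    dS = S * (np.exp(log_ret) - 1.0)
    return -(theta * dt + delta * dS + 0.5 * gamma * dS**2)
```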

We can make our model more realistic by considering a stochastic model for the interest rates. If we assume that the short rate follows a Hull and White process under the risk neutral measure,

$$dr_t = k_r(\theta - r_t)\, dt + \sigma_r\, d\tilde W_t \tag{3.11}$$

and the stock

$$dS_t = r_t S_t\, dt + \sigma_S S_t\, dW_t \tag{3.12}$$

(where $d\tilde W_t\, dW_t = \rho\, dt$).


In this case we use the following to value our call option (see Brigo and Mercurio p. 888 [dBfM06]), where $P(t,T)$ is the price of a zero coupon bond with maturity $T$ at time $t$,

$$C(S_t, t) = S_t \Phi(d_1) - K P(t,T) \Phi(d_2) \tag{3.13}$$

$$d_1 = \frac{\log\big(\frac{S_t}{K P(t,T)}\big) + \frac{1}{2} v^2(t,T)}{v(t,T)} \tag{3.14}$$

$$d_2 = \frac{\log\big(\frac{S_t}{K P(t,T)}\big) - \frac{1}{2} v^2(t,T)}{v(t,T)} \tag{3.15}$$

$$V(t,T) = \frac{\sigma_r^2}{k_r^2}\left[ T - t + \frac{2}{k_r} e^{-k_r(T-t)} - \frac{1}{2k_r} e^{-2k_r(T-t)} - \frac{3}{2k_r} \right] \tag{3.16}$$

$$v^2(t,T) = V(t,T) + \sigma_S^2 (T-t) + 2\rho\, \frac{\sigma_S \sigma_r}{k_r}\left[ T - t - \frac{1}{k_r}\big(1 - e^{-k_r(T-t)}\big) \right]. \tag{3.17}$$

We can again use these equations to derive linear and quadratic approximations for the option value.
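A sketch of (3.13)-(3.17) as reconstructed above, again assuming Python with scipy (P_tT is the observed zero coupon bond price P(t,T) and tau = T - t):

```python
import numpy as np
from scipy.stats import norm

def hw_call(S, K, P_tT, tau, sigma_s, sigma_r, k_r, rho):
    """European call under Hull-White rates, equations (3.13)-(3.17)."""
    V = (sigma_r**2 / k_r**2) * (tau + (2.0 / k_r) * np.exp(-k_r * tau)
                                 - np.exp(-2.0 * k_r * tau) / (2.0 * k_r)
                                 - 3.0 / (2.0 * k_r))                   # (3.16)
    v2 = (V + sigma_s**2 * tau
          + 2.0 * rho * sigma_s * sigma_r / k_r
          * (tau - (1.0 - np.exp(-k_r * tau)) / k_r))                   # (3.17)
    v = np.sqrt(v2)
    d1 = (np.log(S / (K * P_tT)) + 0.5 * v2) / v                        # (3.14)
    d2 = (np.log(S / (K * P_tT)) - 0.5 * v2) / v                        # (3.15)
    return S * norm.cdf(d1) - K * P_tT * norm.cdf(d2)                   # (3.13)
```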

    3.2 Modelling Risk Factors

    3.2.1 Distributions

The most commonly used distribution for modelling risk factors is the normal distribution, chosen for its tractability. However it is typically not a good fit for the fat-tailed distributions of most risk factors. The multivariate t and NIG (normal inverse Gaussian) distributions are shown in empirical results in McNeil, Frey and Embrechts [aMrFpE05] to provide good alternatives when modelling daily stock returns.

    3.2.2 Dependence

Multivariate distributions specify the dependence structure of the risk factors in their parameters. For the multivariate normal this is specified by the correlation matrix. An alternative way of specifying the dependence structure of random variables is via copulas, which allow us to separately model the marginal distributions using univariate distributions and then apply a copula to complete the joint distribution. The key result in copula theory is Sklar's theorem, which describes this relationship (presented here in its 2-dimensional version).

Theorem 3. Let $H$ be a joint distribution function with margins $F$ and $G$. Then there exists a copula $C$ such that for all $x, y$

$$H(x, y) = C(F(x), G(y)). \tag{3.18}$$

If $F$ and $G$ are continuous then $C$ is unique; otherwise it is uniquely determined on $\mathrm{Range}(F) \times \mathrm{Range}(G)$. Conversely, if $C$ is a copula and $F$ and $G$ are distribution functions, then $H(\cdot,\cdot)$ defined in (3.18) is a joint distribution function with marginals $F$ and $G$.

Nelsen [rN06] provides a good introduction to the theory of copulas. These are readily usable in Monte Carlo simulations, and Roy [iR11] gives an example of the use of copulas in conjunction with GARCH to perform VaR calculations.
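As a brief illustration of sampling from a copula model, a sketch assuming Python with scipy; the Student-t margins here are an illustrative choice:

```python
import numpy as np
from scipy.stats import norm, t

def gaussian_copula_t_margins(corr, n, df=4, seed=0):
    """Sample n vectors whose dependence comes from a Gaussian copula with
    correlation matrix corr and whose margins are Student-t(df)."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(corr)
    z = rng.standard_normal((n, corr.shape[0])) @ L.T  # correlated normals
    u = norm.cdf(z)                              # uniform margins (the copula)
    return t.ppf(u, df)                          # impose the t margins
```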

    3.2.3 Extreme Values

Extreme value theory takes a different approach to the distribution models we discussed above by fitting the tail of the distribution independently from the body of the distribution. This ties in well with VaR and ES risk measures, as these measures are only dependent on the tail of the distribution.

The most commonly used model in this area is the generalised Pareto distribution (GPD), which provides a model for the exceedances over a threshold. In fact a theorem of Pickands, Balkema and de Haan shows that for all common continuous distributions the excess distribution converges to a GPD asymptotically.

Definition 16. The distribution function of the generalised Pareto distribution (GPD) is

$$G_{\xi,\beta}(x) = \begin{cases} 1 - (1 + \xi x/\beta)^{-1/\xi}, & \xi \ne 0,\\ 1 - \exp(-x/\beta), & \xi = 0, \end{cases} \tag{3.19}$$

where $\beta > 0$, and $x \ge 0$ when $\xi \ge 0$ and $0 \le x \le -\beta/\xi$ when $\xi < 0$. The parameter $\xi$ is the shape parameter and $\beta$ is the scale parameter.


Definition 17. The excess distribution of a random variable $X$ over the threshold $u$ has distribution function

$$F_u(x) = \mathbb{P}(X - u \le x \mid X > u). \tag{3.20}$$

We use this later to estimate the risk measures given a sample of losses generated from a Monte Carlo simulation.

    3.2.4 Conditional Models

Timeseries methods provide an important tool for modelling conditional factors. We can use these by fitting a timeseries model to our data and then fitting a stationary model to the residuals. The GARCH family of models is the most popular class, as these allow us to model the volatility clustering observed in empirical studies of risk factors.

To generate our risk changes we simulate the innovation distribution and use our time series model to calculate the associated risk change. More recent studies (Koopman, Jungbacker and Hol 2005 [sjKbJeH05]) have compared historical volatility estimates (timeseries models) with at-the-money implied volatility estimates and have concluded that the implied volatility estimators have superior predictive abilities. Jiang and Tian [gJyT05] investigate the use of a model-free implied volatility estimator which incorporates information from options with different strikes,

$$E^{F}_0\left[ \int_0^T \left( \frac{dF_t}{F_t} \right)^2 \right] = 2 \int_0^\infty \frac{C^F(T, K) - \max(0, F_0 - K)}{K^2}\, dK$$

which is shown to be a more efficient predictor of future volatility than at-the-money implied volatility. This suggests an approach of using implied volatility estimates in our models. Analysis of options can also provide estimates of correlation between financial variables, known as implied correlation. This area has received much less attention than implied volatility (a study by Walter and Lopez on FX rates [cWjL00] gives mixed results), but as a forward looking measure it could be a useful input.


    3.3 Methods

    3.3.1 Historic Simulation

In this approach, rather than attempting to model the risk factor changes $Z_t$, we assume that they are drawn from a stationary distribution $F_Z$ and calculate a series of simulated losses by calculating the loss under the most recent $n$ historical observations,

$$\{\tilde L_s := l_{[t]}(Z_s) : s = t-n+1, \ldots, t\}.$$

We then use techniques such as empirical measurement of quantiles to calculate our risk measures. The possible choices in this area are discussed in more detail in the Monte-Carlo section below. The main advantage of this approach is that by side-stepping the issue of modelling the risk factors we remove a major source of model risk. Disadvantages include that we are limited to scenarios seen in the historical record and that we are using an unconditional model (although there are hybrid approaches which attempt to address this concern, they are not entirely satisfactory as they reintroduce the model risk this approach otherwise avoids).
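A minimal sketch for a single linear exposure, assuming Python with numpy (the window length and position are illustrative):

```python
import numpy as np

def historical_var(prices, exposure, alpha=0.99, window=250):
    """Historical-simulation VaR: revalue the position under each of the
    last `window` observed log-return scenarios and take the quantile."""
    z = np.diff(np.log(np.asarray(prices)))[-window:]  # observed factor changes
    losses = -exposure * z                             # loss under each scenario
    return np.quantile(losses, alpha)
```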

    3.3.2 Analytic Methods

Analytic approaches propose a model for the factors for which we can calculate the risk measures analytically. Typically the model used is a multivariate normal, which is chosen for its tractability and for its role as a limiting distribution of sums of random variables. When we have a multivariate normal distribution for the risk factors and use a linear model for our exposures to these risk factors we have a Delta Normal risk calculation.

In this simplification we assume our portfolio has a linear exposure to its risk factors, which leads us to the following formulation of the loss,

$$l_{[t]}(z) := -\left( f_t(t, X_t)\, \Delta t + \sum_{i=1}^n f_{X_i}(t, X_t)\, z_i \right) \tag{3.21}$$

where the subscripts indicate partial derivatives of the portfolio value function $f$. Now, assuming our risk factor changes $Z$ are multivariate normal with mean $\mu$ and covariance matrix $\Sigma$, we have

$$E(l_{[t]}(Z)) = -PV_t\, w^T \mu$$

$$\mathrm{Var}(l_{[t]}(Z)) = PV_t^2\, w^T \Sigma\, w$$

and hence

$$\mathrm{VaR}_\alpha = -PV_t\, w^T \mu + PV_t \sqrt{w^T \Sigma\, w}\; z_\alpha \tag{3.22}$$

where $w$ is the vector of risk factor weights (the linear exposures divided by the current portfolio value), $PV_t$ is the portfolio value at time $t$ and $z_\alpha$ is the corresponding quantile of the standard normal for the selected confidence level. This approach works well for assessing the risk of portfolios exposed to multiple sources of risk where the exposures are approximately linear in nature.
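A sketch of the Delta Normal calculation (3.22) as reconstructed above, assuming Python with scipy:

```python
import numpy as np
from scipy.stats import norm

def delta_normal_var(pv, w, mu, cov, alpha=0.99):
    """Delta Normal VaR for factor weights w and factor changes distributed
    N(mu, cov) over the holding period."""
    w, mu, cov = map(np.asarray, (w, mu, cov))
    mean_loss = -pv * (w @ mu)
    sd_loss = pv * np.sqrt(w @ cov @ w)
    return mean_loss + sd_loss * norm.ppf(alpha)
```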

    3.3.3 Monte-Carlo Methods

Monte-Carlo simulation is a powerful tool for evaluating risk measures as it gives us the flexibility to combine sophisticated models of the dynamics of our risk factors with full valuation techniques for calculating the resulting portfolio value. The method requires us to create a statistical model for our risk factors. We then sample risk factor changes from simulations of the model and evaluate the portfolio under these changes. We repeat this procedure many times and we get a series of samples we can use as input for estimators of our risk measures. This process allows us to examine models which are not tractable with analytic methods.

1: for i = 1 to n do
2:     generate z, the risk factor changes
3:     L_i <- l_[t](z), the loss
4: end for
5: estimate rho(L) from the L_i

The approach does have some drawbacks. Since we have specified the model used to generate the risk factor changes, we have introduced an element of model risk due to the possibility that this model is misspecified. Another disadvantage is the time required to run the simulations. In some cases full valuation of a derivative itself requires solving a PDE or running a separate Monte Carlo simulation, multiplying the number of computations required. There are a number of variance reduction techniques which can be used to mitigate this effect, including importance sampling and low-discrepancy sequences (see Glasserman Chapter 9 [pG04]).

Once we have run our Monte Carlo simulation we end up with a sample of losses $L_i$, $i = 1, \ldots, n$. To derive our VaR estimate from this we can use the


empirical estimators

$$\widehat{\mathrm{VaR}}_E := (1-\lambda)\, L_{(i)} + \lambda\, L_{(i+1)}, \qquad i = \lfloor n\alpha \rfloor,\ \lambda = n\alpha - i \tag{3.23}$$

$$\widehat{ES}_E := \frac{\sum_{i=\lceil n\alpha \rceil}^{n} L_{(i)}}{n - \lceil n\alpha \rceil + 1} \tag{3.24}$$

where $L_{(i)}$ is the $i$th order statistic from the loss sample.

Another option is to fit a parametric distribution to the losses and calculate the risk measures from this fitted distribution. If we do not have a good candidate distribution for the losses we can fit a GPD to the exceedances above a level $u$. This results in the following pair of estimators,

$$\widehat{\mathrm{VaR}}_{POT} := u + \frac{\hat\beta}{\hat\xi}\left( \left( \frac{1-\alpha}{1 - F_n(u)} \right)^{-\hat\xi} - 1 \right) \tag{3.25}$$

$$\widehat{ES}_{POT} := \frac{\widehat{\mathrm{VaR}}_{POT}}{1 - \hat\xi} + \frac{\hat\beta - \hat\xi u}{1 - \hat\xi} \tag{3.26}$$

where $F_n(u)$ is the empirical distribution function evaluated at the threshold $u$.
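A sketch of the estimators (3.25)-(3.26), assuming Python with scipy's genpareto; the 90% empirical threshold is an illustrative choice:

```python
import numpy as np
from scipy.stats import genpareto

def pot_var_es(losses, alpha=0.99, threshold_q=0.90):
    """Fit a GPD to exceedances over an empirical threshold u and return
    the VaR and ES estimators of (3.25)-(3.26)."""
    losses = np.asarray(losses)
    u = np.quantile(losses, threshold_q)
    exceed = losses[losses > u] - u
    xi, _, beta = genpareto.fit(exceed, floc=0)    # shape xi, scale beta
    tail_frac = exceed.size / losses.size          # 1 - F_n(u)
    var = u + beta / xi * (((1 - alpha) / tail_frac) ** (-xi) - 1.0)
    es = var / (1.0 - xi) + (beta - xi * u) / (1.0 - xi)
    return var, es
```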


    Chapter 4

    Liquidation Risk Adjustments

    4.1 Liquidity

The standard risk setting for considering portfolio risk makes a key assumption: that we are calculating the risk associated with holding a portfolio unchanged over the period under consideration. This assumption can be considered reasonable when the time horizon for the risk is relatively short, but is less applicable for longer time horizons, as we may be dynamically managing risks by actively delta hedging our portfolio, or we may be required to liquidate part of the portfolio in response to external requirements (e.g. asset withdrawals, risk limits). In this section we discuss methods for accounting for liquidity considerations when calculating risk measures. It is worth noting that these methods are about quantifying adjustments to risk calculations based on normal market conditions. The most dangerous liquidity problems typically relate to situations where the assets become almost completely illiquid. In these cases it may be more appropriate to consider the cash flow risk associated with the portfolio.

The simplest method for accounting for liquidity is to consider the portfolio valuation $PV_t$ based upon best bid and ask prices rather than based on mid prices. This simple adjustment captures the liquidation risk associated with relatively small positions well, provided the size to be liquidated is less than the prevailing bid and ask sizes at these levels. Costs associated with the bid and the ask are sometimes referred to as exogenous liquidity risk costs, as they are common to all market participants and are not dependent on the portfolio held. Bangia, Diebold, Schuermann and Stroughair [aBfDtSjS98] extend this approach to consider the variation of the spread, making the impact for each


asset

$$\mathrm{Impact}_i(p_i) = \tfrac{1}{2} A_i\, p_i \left( \overline{\mathrm{Spread}} + a\, \sigma_{\mathrm{Spread}} \right)$$

where $\overline{\mathrm{Spread}}$ is the spread fraction and $a$ is selected in order to cover $n\%$ of the variation in spread (where $n$ is the VaR confidence level).

Another simple adjustment, noted in Jorion [pJ06], is to set the risk calculation period so that it corresponds to a period over which an orderly liquidation could proceed (i.e. one which has negligible market impact). This is a somewhat subjective adjustment (how long is such a period?) and creates problems of aggregation, as different sub-portfolios may require different calculation periods. An approach suggested by Berkowitz [jB00] looks at trying to infer liquidity risk from the performance of funds by performing regression based on net flows and portfolio values of these funds.

    4.2 Liquidity Policies

If we wish to account for the liquidity risk of a portfolio we must first understand the context in which the portfolio is held. The liquidation risk associated with a portfolio held for clients, where we have rules preventing asset withdrawals in the short term and no external risk limits, will be very different to one which may be subject to immediate full liquidation in order to provide funds for alternative purposes. We formalise this aspect of the liquidation risk, following the discussion in Acerbi and Scandolo [cAgS07], as a liquidity policy which determines a subset of portfolios that are acceptable end states for our portfolio if a liquidation is required in the period over which we are measuring risk.

Definition 18. A liquidity policy $\mathcal{L}$ is a closed convex subset $\mathcal{L} \subseteq \mathcal{P}$ of the space of portfolios such that

$$p \in \mathcal{L} \implies p + (a, \mathbf{0}) \in \mathcal{L} \quad \forall a > 0 \tag{4.1}$$

$$p = (p_0, \tilde{p}) \in \mathcal{L} \implies (p_0, \mathbf{0}) \in \mathcal{L} \tag{4.2}$$

We do not require that our portfolio be within this subset at all times, but it may be required to be liquidated to one of these portfolios in the time period over which we are performing our risk calculation. A simple example of a liquidity policy is a cash liquidation policy, which requires that a certain amount of cash be held in the portfolio,


$$\mathcal{L}^{a}_{\mathrm{Cash}} := \{(p_0, \tilde{p}) \in \mathcal{P} : p_0 \ge a\}, \quad a \in \mathbb{R} \tag{4.3}$$

Another example is an unwind liquidity policy, which requires that a fraction of the portfolio be unwound,

$$\mathcal{L}^{n\%}_{\mathrm{Unwind}}(p) := \{q \in \mathcal{P} : |q_i| \le (1 - 0.01n)\, |p_i|,\ i \ge 1\}, \quad n \in \mathbb{R} \tag{4.4}$$

The most conservative approach when considering liquidation is to account for the immediate liquidation of the assets held in the portfolio. This method of valuation produces a lower bound for the value of the portfolio under consideration. To quantify this cost we introduce a map called the Marginal Supply-Demand Curve (MSDC).

Definition 19. The MSDC is a map $m : \mathbb{R} \to \mathbb{R}$ which satisfies

$$x_1 < x_2 \implies m(x_1) \ge m(x_2) \tag{4.5}$$

$$m \text{ is càdlàg for } x < 0 \text{ and làdcàg for } x > 0 \tag{4.6}$$

This map represents the impact of immediate liquidation and may be directly observed from the levels in the order book of an asset traded on an exchange (although additional liquidity available in dark trading venues (crossing venues which do not disclose information on posted liquidity) will not be reflected in such an assessment). Jarrow and Protter discuss the adjustment of VaR based on assuming the MSDC is a linear curve [rJpP05] and assuming an n% unwind liquidation policy.

With a suitable history of trades we can model the likely impact (measured by average price slippage against an arrival price benchmark) of trading an order of a certain size in an asset over a time period. To do this we consider external proxies for liquidity, such as the bid/ask spread and daily trading volume, and fit against observed impact over a large number of trades. This approach is discussed in Almgren, Thum, Hauptmann and Li [rAcTeHhL05], where the authors propose a model which breaks down market impact into two components: permanent impact, which describes the long term effect on the price process caused by the information in the trade, and temporary impact, which describes the short term impact during the trading period, dissipating shortly after trading stops. The suggested functional form in this paper for describing both of these factors is a power law, with permanent impact modelled as

$$g(v) = \gamma\, |v|^{\alpha}$$


and temporary impact

$$h(v) = \eta\, |v|^{\beta}$$

where $v$ is the trading rate. That paper determines the exponents $\alpha = 1$ and $\beta = 0.6$. The Nomura Liquid Market Analytics team [NLMA09] suggest a similar model with one additional term, instantaneous impact, which models the fraction of the bid/ask spread lost during trading. For the purpose of this paper I assume that we can fit such a model so that for each of our assets we have the price impact, where $a_i, b_i, c_i > 0$ and $\delta \in (0, 1)$,

$$\mathrm{Impact}_i(p_i) = a_i + b_i\, |p_i| + c_i\, |p_i|^{\delta}. \tag{4.7}$$

Definition 20. The liquidation cost of a portfolio is the loss due to price impact when liquidating the portfolio,

$$LC(p) = \sum_{i=1}^{n} |p_i|\, \big(a_i + b_i\, |p_i| + c_i\, |p_i|^{\delta}\big). \tag{4.8}$$

Theorem 4. The function $LC : \mathcal{P} \to \mathbb{R}$ is convex.

Proof. The liquidation cost for each asset is convex, as it is a sum of positive multiples of powers of absolute values (powers greater than or equal to one). This makes the total liquidation cost convex, as it is a sum of convex functions (see Boyd and Vandenberghe [sBlV04]).

Definition 21. The unadjusted value of a portfolio is the valuation of the portfolio at mid,

$$U(p) := p_0 + \sum_{i=1}^{n} m_{i,t}\, p_i. \tag{4.9}$$

Definition 22. The liquidation value of a portfolio is the value returned by the full liquidation of that portfolio, assuming impact cost as modelled above,

$$L(p) := U(p) - LC(p). \tag{4.10}$$

Definition 23. The reach of a portfolio $p$ is the set of portfolios which can be obtained from $p$ by liquidating a portfolio $r$,

$$\mathrm{Reach}(p) := \{q \in \mathcal{P} : q = p - r + (L(r), \mathbf{0}),\ r \in \mathcal{P}\}. \tag{4.11}$$


$$PV_{\mathcal{L}}(p) := \sup\{\, U(q) : q \in \mathrm{Reach}(p) \cap \mathcal{L} \,\} \tag{4.12}$$

This is a slightly modified version of the function proposed by Acerbi and Scandolo, in that it takes into account the fact that our liquidations will not take place immediately but instead over a time period specified in our liquidity policy, and hence that we should model the price obtained for such liquidations using our impact model rather than the instantaneous impact model corresponding to immediate market execution.

We can see that if we have an n% unwind liquidation policy this amounts to adjusting the value of the portfolio according to the expected impact cost of trading n% of the portfolio contents over the risk period,

$$PV_{\mathcal{L}^{n\%}_{\mathrm{Unwind}}(p)}(p) = U(p) - LC(0.01\, n\, p). \tag{4.13}$$
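A sketch of the n% unwind adjustment (4.13) under the impact model (4.7), assuming Python with numpy (all parameter values would come from the fitted impact model):

```python
import numpy as np

def liquidation_cost(p, a, b, c, delta):
    """LC(p) from (4.8): summed per-asset impact a_i + b_i|p_i| + c_i|p_i|^delta."""
    q = np.abs(np.asarray(p, dtype=float))
    return np.sum(q * (a + b * q + c * q**delta))

def pv_unwind(p0, p, mid, a, b, c, delta, n_pct):
    """Liquidity-adjusted portfolio value under an n% unwind policy (4.13):
    mark at mid (4.9), less the impact cost of trading n% of each position."""
    p, mid = np.asarray(p, dtype=float), np.asarray(mid, dtype=float)
    U = p0 + mid @ p
    return U - liquidation_cost(0.01 * n_pct * p, a, b, c, delta)
```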

For other liquidation policies we can reformulate (4.12) as an optimisation problem.

Proposition 1. The following optimisation problem in $r$ is equivalent to definition (4.12) above:

$$PV_{\mathcal{L}}(p) = \sup\{\, U(p) - LC(r) : r \in C_{\mathcal{L}}(p) \,\} \tag{4.14}$$

$$C_{\mathcal{L}}(p) := \{ r \in \mathcal{P} : p - r + (L(r), \mathbf{0}) \in \mathcal{L} \}$$

Proof. We note that $U(\cdot)$ is an affine function and hence

$$U(q) = U(p - r + (L(r), \mathbf{0})) = U(p) - U(r) + L(r) = U(p) - U(r) + U(r) - LC(r) = U(p) - LC(r).$$


    Chapter 5

    Numeric Results

    5.1 Portfolio One

    The first portfolio contains two options:

1. Long European call option: strike $K_c = 120$, maturity $T_c = 5$ years.

2. Short European put option: strike $K_p = 80$, maturity $T_p = 5$ years.

    5.1.1 Mapping

We calculate value at risk and expected shortfall for this portfolio under the following assumptions for the underlying process.

Physical Measure:

$$dS_t = \mu S_t\, dt + \sigma_S S_t\, dW_t, \quad S_0 = 100,\ \mu = 0.08,\ \sigma_S = 0.2 \tag{5.1}$$

Pricing Measure:

$$dS_t = r S_t\, dt + \sigma_S S_t\, dW_t, \quad r = 0.01 \tag{5.2}$$

We consider linear and quadratic approximations for the portfolio value in our calculations. We can see the effect of these different valuation methods in figure 5.1 below. Both the Delta (first order) and Gamma (second order) approximations overestimate the risk of the portfolio, although the effect is somewhat smaller for the Gamma approximation.
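For illustration, a sketch of the full-valuation Monte Carlo for this portfolio under (5.1)-(5.2), assuming Python with scipy (the one-year holding period and the sample size are illustrative):

```python
import numpy as np
from scipy.stats import norm

S0, mu, sigma, r, dt = 100.0, 0.08, 0.2, 0.01, 1.0  # parameters from (5.1)-(5.2)
Kc, Kp, T = 120.0, 80.0, 5.0                        # portfolio one

def bs_price(S, K, tau, call=True):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    if call:
        return S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)
    return K * np.exp(-r * tau) * norm.cdf(-d2) - S * norm.cdf(-d1)

def portfolio_value(S, tau):
    return bs_price(S, Kc, tau, True) - bs_price(S, Kp, tau, False)

# Simulate the underlying over the holding period under the physical measure,
# revalue the portfolio in full, and estimate VaR from the loss sample.
rng = np.random.default_rng(1)
S1 = S0 * np.exp((mu - 0.5 * sigma**2) * dt
                 + sigma * np.sqrt(dt) * rng.standard_normal(100_000))
losses = portfolio_value(S0, T) - portfolio_value(S1, T - dt)
print(np.quantile(losses, 0.99))  # 99% VaR estimate
```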


A small modification of our portfolio, to Portfolio One*, shows the possible drawbacks of the linear approximation.

1. Long European call option: strike $K_c = 120$, maturity $T_c = 5$ years.

2. Long European put option: strike $K_p = 80$, maturity $T_p = 5$ years.


Figure 5.1: Portfolio One Linear and Quadratic approximations (portfolio loss against stock price for the Full, Delta and DeltaGamma loss types).


Figure 5.2: Portfolio One* Linear and Quadratic approximations (portfolio loss against stock price for the Full, Delta and DeltaGamma loss types).


    5.1.2 Theta

As we are considering a relatively long time period (1 year) in most of our calculations, we replace the linear theta loss in (3.8) and (3.9) with the exact theta loss. This has negligible performance impact for calculations. Comparing the linear theta loss with the exact loss, we see that the quality of the approximation deteriorates as the risk calculation period increases.

HoldingPeriod (years)   ThetaLoss   LinearThetaLoss
0.10                    0.12        0.11
0.50                    0.60        0.57
1.00                    1.21        1.14
2.00                    2.48        2.28

Table 5.1: Theta methods

    5.1.3 Estimators

Three types of estimator were tested in conjunction with the Monte Carlo simulations. Below is an assessment for this portfolio when running a Monte Carlo simulation using Delta evaluation. We compare the results of running 100 Monte Carlo simulations, each with 250 evaluations, calculating value at risk and expected shortfall using the empirical estimators (3.23), by fitting a normal distribution, and by fitting a GPD to the tail (POT method, (3.25)). In this case the actual distribution is normal, so as expected fitting a normal distribution gives the best results of the three techniques. However we can see the POT method outperforms the empirical estimation in this case. This suggests that if we are running a Delta Monte Carlo simulation we should pick the normal parametric fit.

We then apply the same estimators to a Monte Carlo simulation with full evaluation. Here we do not have an analytic output, so we use a long running Monte Carlo simulation to estimate the true parameter values. As we do not have an underlying normal distribution (unlike in our first example) the normal estimator performs extremely badly. Our POT estimator once again outperforms the empirical estimator. We will use the POT estimators for our non-normal Monte Carlo simulations presented later.


Estimator   Statistic   Bias    Variance   Analytic
POT         VaR         -0.26   2.55       26.89
Normal      VaR          0.05   1.58       26.89
Empirical   VaR         -0.50   2.65       26.89
POT         ES          -1.02   3.54       31.18
Normal      ES           0.05   1.76       31.18
Empirical   ES          -1.36   3.31       31.18

Table 5.2: Delta Monte Carlo Estimators

Estimator   Statistic   Bias    Variance   Analytic
POT         VaR         -0.03   1.70       22.18
Normal      VaR          8.90   1.93       22.18
Empirical   VaR         -0.35   1.67       22.18
POT         ES          -0.52   2.21       25.19
Normal      ES          11.16   2.21       25.19
Empirical   ES          -0.76   2.11       25.19

Table 5.3: Monte Carlo Estimators

    5.1.4 Methods

We run five different standard methods for calculating value at risk and expected shortfall: the DeltaNormal and DeltaGammaNormal analytic methods, and three variants of Monte Carlo, with full valuation, linear approximation (DeltaMonteCarlo) and quadratic approximation (DeltaGammaMonteCarlo).

Method           Time    VaR     VaRBias   VaRVAR   ES      ESBias   ESVAR
Monte Carlo      49.34   22.19   -0.19     0.52     25.05   -0.36    0.60
D Monte Carlo     0.86   22.19    4.68     0.40     25.05    6.10    0.45
D Normal          0.42   22.19    4.70     0.00     25.05    6.13    0.00
DG Monte Carlo    0.84   22.19    2.23     0.47     25.05    2.63    0.61
DG Normal         0.35   22.19    4.71     0.00     25.05    6.15    0.00

Table 5.4: Portfolio One Methods

First we look at the analytic methods. These are the fastest (times are in milliseconds) but we see higher bias than for any of the other methods. We can see the cause of the bias in the Delta Normal method if we consider the valuation graph 5.1. For the Delta Gamma Normal method we have very similar figures to the Delta Normal case. The approximations made in this model to obtain normality seem to counterbalance any improvement which using a quadratic valuation approximation may have gained.

The Monte Carlo methods with approximate valuation are almost as fast as the analytic methods, and in particular we see an improvement in the estimate when using the Delta Gamma approximation to the portfolio value (as 5.1 would suggest). These methods have another key advantage: they are easily modified to alter the distribution underlying the simulation, thereby better modelling fat tail risks. The disadvantage is in knowing how appropriate the approximations are. We easily constructed an example where the linear approximation breaks down (5.2), and it would not be hard to build one where the quadratic approximation was inappropriate. Monte Carlo with full evaluation avoids this problem and provides a method which is appropriate for all derivative portfolios, at a cost of time.

    5.2 Portfolio Two

    The second portfolio contains an option and a bond:

1. Long European call option: strike $K_c = 100$, maturity $T_c = 2$ years.

2. Short zero coupon bond: notional $1000, maturity $T_b = 2$ years.

    5.2.1 Mapping

For this portfolio we consider a stochastic model for the short rate with the following dynamics.

Physical Measure:

$$dr_t = k_r(\theta - r_t)\, dt + \sigma_r\, d\tilde W_t, \quad r_0 = 0.01,\ k_r = 0.1,\ \theta = 0.1,\ \sigma_r = 0.004 \tag{5.3}$$

Pricing Measure:

$$dr_t = k_r(\theta - r_t)\, dt + \sigma_r\, d\tilde W_t, \quad \theta = 0.05 \tag{5.4}$$

We have the following dynamics for the equity process.

Physical Measure:

$$dS_t = \mu S_t\, dt + \sigma_S S_t\, dW_t, \quad S_0 = 100,\ \mu = 0.09,\ \sigma_S = 0.2 \tag{5.5}$$

Pricing Measure:

$$dS_t = r_t S_t\, dt + \sigma_S S_t\, dW_t, \quad r_t \text{ as above} \tag{5.6}$$

$$d\tilde W_t\, dW_t = \rho\, dt$$

We can see the effect of changing the correlation between the equity process and the risk free rate by considering figure 5.3.

If we have a correlation close to -1 we will find most values close to the diagonal crossing from bottom left to top right, which shows a steep rate of loss as stock prices decrease and the short rate increases. Conversely, a correlation close to 1 will find most values close to the diagonal from back to front, which shows a very stable portfolio value, as the bond forms an effective hedge for the options. Hence we expect to see much larger risk values with a correlation approaching -1 and much reduced risk with a correlation approaching +1. Looking at historic records of correlations we see that these are not stable quantities, which means forward looking measures of these should be preferred when available.

We now examine the quality of the linear approximation to this portfolio. This portfolio has a much higher Gamma value than the first, as the option is at the money and the portfolio is only long options (losing some of the Gamma hedge provided by the short put in the first portfolio). As a result the linear approximation is much worse for this portfolio, as is illustrated in figure 5.4, and significantly overstates the risk.

We can see from figure 5.5 that the linear approximation is reasonable for the short rate risk.


Figure 5.3: Portfolio Two Value (portfolio value as a function of stock price and short rate).


Figure 5.4: Portfolio Two Linear Approximation (Stock Price); portfolio loss against stock price for the Full and Delta loss types.


Figure 5.5: Portfolio Two Linear Approximation (Short Rate); portfolio loss against short rate for the Full and Delta loss types.


    5.2.2 Vega

Previously we have been ignoring risks associated with changes to implied volatility, but we can address this by modelling the BS implied volatility with a CIR process, as is typical in the Heston model.

Physical Measure:

$$d\sigma_t = k_{\sigma}(\theta_{\sigma} - \sigma_t)\, dt + \lambda_{\sigma} \sqrt{\sigma_t}\, dW^{\sigma}_t, \quad \sigma_0 = 0.01,\ k_{\sigma} = 1,\ \theta_{\sigma} = 0.3,\ \lambda_{\sigma} = 0.5 \tag{5.7}$$

$$dW^{\sigma}_t\, d\tilde W_t = 0, \qquad dW^{\sigma}_t\, dW_t = 0$$

In this model we have no correlation between implied volatility and the stock price. We could further improve the model by adding this effect and by using a pricing model consistent with our implied volatility model (in this case the Heston model).

    5.2.3 Methods

Looking at the output from the pricing calculations below, we can see the problems with the linear approximation noted in the previous section causing a massive overstatement of the risk measures. The correlation between stock price and short rate can be seen to have a massive effect on risk, with 0.99 correlation causing the portfolio to be so well hedged that the 99% VaR is negative. Introducing implied volatility risk makes a small difference to most of the risk figures, but notably increases the risk for the portfolio in the case where the stock price and short rate are highly correlated, making it clear that this portfolio is still exposed to some risk at the 99% level (which was absent from the calculations with fixed implied volatility).


Method             VolType    Rho     VaR     VaRBias   VaRVAR
Full Monte Carlo   Fixed       0.00    9.28   -0.03     0.26
Full Monte Carlo   Fixed       0.99   -0.45   -0.00     0.04
Full Monte Carlo   Fixed      -0.99   13.12    0.02     4.67
Full Monte Carlo   Variable    0.00    9.57   -0.20     0.27
Full Monte Carlo   Variable    0.99    2.17    0.06     0.12
Full Monte Carlo   Variable   -0.99   12.67   -0.01     4.27
Delta Normal       Fixed       0.00    9.28   10.57     0.00
Delta Normal       Fixed       0.99   -0.45   11.44     0.00
Delta Normal       Fixed      -0.99   13.12   13.28     0.00
Delta Normal       Variable    0.00    9.57   11.60     0.00
Delta Normal       Variable    0.99    2.17   11.64     0.00
Delta Normal       Variable   -0.99   12.67   14.37     0.00

    Table 5.5: Portfolio Two VaR Methods

Method             VolType    Rho     ES      ESBias   ESVAR
Full Monte Carlo   Fixed       0.00   10.87    0.04    0.34
Full Monte Carlo   Fixed       0.99   -0.23   -0.00    0.05
Full Monte Carlo   Fixed      -0.99   14.46   -0.05    4.72
Full Monte Carlo   Variable    0.00   11.16   -0.05    0.30
Full Monte Carlo   Variable    0.99    2.68    0.08    0.11
Full Monte Carlo   Variable   -0.99   14.03   -0.04    4.26
Delta Normal       Fixed       0.00   10.87   12.97    0.00
Delta Normal       Fixed       0.99   -0.23   13.90    0.00
Delta Normal       Fixed      -0.99   14.46   16.91    0.00
Delta Normal       Variable    0.00   11.16   14.56    0.00
Delta Normal       Variable    0.99    2.68   14.58    0.00
Delta Normal       Variable   -0.99   14.03   18.44    0.00

    Table 5.6: Portfolio Two ES Methods


    Chapter 6

    Conclusion

In this paper the mathematics of quantitative risk measures has been reviewed. The advantages of expected shortfall as a measure over value at risk have been detailed and the coherency properties examined. Reviewing the methods for calculating risk, Monte Carlo was shown to be the most flexible method when dealing with portfolios with non-linear risk exposures. We saw that using Taylor approximations in conjunction with Monte Carlo can be a powerful way to speed up calculations on portfolios with approximately linear exposures, while the Delta Normal method is useful when we have linear portfolios and speed of calculation is paramount. In the chapter on liquidity adjustments I examined some of the techniques currently proposed for adjusting risk figures to take into account liquidity concerns. I proposed a variation on the Acerbi and Scandolo framework which makes the somewhat more realistic assumption that liquidations can be modelled using data from real trades rather than priced according to order book liquidation. The work in this area could be extended by considering a stochastic cost of liquidation rather than simply the expected value. Another possible area of investigation would be in characterising the types of optimisation problem this framework gives rise to under different types of liquidity policy. In the final chapter some risk calculations were considered using some of the methods discussed previously. The application of extreme value theory to calculating value at risk and expected shortfall from the losses produced by a Monte Carlo simulation was discussed, and from these results a peaks-over-threshold estimator for the measures is a preferable alternative to the empirical estimator.


    Bibliography

[pJ06] P. Jorion (2006), Value at Risk, 3rd Ed.: The New Benchmark for Managing Financial Risk, McGraw-Hill Professional.

[aMrFpE05] A. McNeil, R. Frey and P. Embrechts (2005), Quantitative Risk Management: Concepts, Techniques, Tools, Princeton Series in Finance.

[vA89] V. Akgiray (1989), Conditional Heteroscedasticity in Time Series of Stock Returns: Evidence and Forecasts, The Journal of Business Vol. 62 No. 1, University of Chicago Press.

[hFaS02] H. Follmer and A. Schied (2002), Convex measures of risk and trading constraints, Finance and Stochastics 6, pp. 429-447, Springer-Verlag.

[cA02] C. Acerbi (2002), Spectral measures of risk: A coherent representation of subjective risk aversion, The Journal of Banking and Finance No. 26, pp. 1505-1518, Elsevier.

[cAgS07] C. Acerbi and G. Scandolo (2007), Liquidity Risk Theory and Coherent Measures of Risk, Working Paper, SSRN: http://ssrn.com/abstract=1048322.

[rJpP05] R. Jarrow and P. Protter (2005), Liquidity Risk Theory and Risk Measure Computation, Review of Futures Markets, Cornell.

[dC10] D. Cintioli (2010), The Seeds of Further Instability, The Hedge Fund Journal, Sep 2010.

[lC12] L. Carver (2012), VAR at risk, Risk Magazine, 4 Mar 2012.

[pG04] P. Glasserman (2004), Monte Carlo Methods in Financial Engineering, Springer-Verlag.


[sjKbJeH05] S.J. Koopman, B. Jungbacker and E. Hol (2005), Forecasting daily variability of the S&P 100 stock index using historical, realised and implied volatility measurements, Journal of Empirical Finance 12, pp. 445-475.

[gJyT05] G.J. Jiang and Y.S. Tian (2005), The Model-Free Implied Volatility and Its Information Content, The Review of Financial Studies, vol. 18, no. 4.

[dBfM06] D. Brigo and F. Mercurio (2006), Interest Rate Models - Theory and Practice: With Smile, Inflation and Credit, Springer Finance.

[rAcTeHhL05] R. Almgren, C. Thum, E. Hauptmann and H. Li (2005), Direct Estimation of Equity Market Impact, http://www.math.nyu.edu/~almgren/papers/costestim.pdf

[NLMA09] Nomura Liquid Market Analytics Team (2005), METRIC: Model-Estimated TRade Impact Cost (available from Nomura on request).

[aBfDtSjS98] A. Bangia, F. Diebold, T. Schuermann and J. Stroughair (1998), Modelling Liquidity Risk: with implications for Traditional Market Risk Measurement and Management, in Risk Management: The State of the Art, Kluwer Academic Publishers, 2002.

[jB00] J. Berkowitz (2000), Incorporating liquidity risk into value-at-risk models, Journal of Derivatives.

[sBlV04] S. Boyd and L. Vandenberghe (2004), Convex Optimization, Cambridge University Press.

[P00] PIMCO (2000), Risk Measurement at PIMCO, http://pages.nes.ru/agoriaev/Papers/PIMCO

[rN06] R. Nelsen (2006), An Introduction to Copulas, Springer.

[iR11] U. Roy (2011), Estimation of Portfolio Value at Risk using Copula, RBI Working Paper Series.


[cWjL00] C. Walter and J. Lopez (2000), Is Implied Correlation Worth Calculating? Evidence from Foreign Exchange Options and Historical Data, Journal of Derivatives, 7(3), pp. 65-82.

