CDS and Loss function: CDO Tranches

Advanced tools for Risk Management and Asset Pricing

Group assignment - a.a. 2011-2012

Giulio Laudani (1256809)

Alberto Manassero (1248376)

Marco Micieli (1255470)

June 8, 2012


Comments

Exercise 1.

The file Data.xls contains data on the CDS term structure as of 1 March 2012 for Danone, Carrefour and Tesco, as well as the Euro risk-free zero curve at the same date.

1.a.

Assumptions:

1. Intensities are piecewise flat and increasing over time;

2. Default can occur once a month (we choose a discretization of m = 12 to approximate the continuous case);

3. S(t0, tN) is assumed to be the contractual annualized fair CDS spread;

4. The currency in which both the CDSs and the risk-free zero curve are denominated is assumed to be the same;

5. The premium is paid at the end of each quarter.

For each company, we derive the term structure of hazard rates λ_{6,12}, λ_{12,24}, ..., λ_{84,120} (subscripts stand for maturities expressed in months), defined as the probability of a credit event occurring during the time interval dt, conditional on survival up to time t:

\[
\Pr(\theta < t + dt \mid \theta > t) = \lambda(t)\,dt
\]

where θ is the default time. The default intensity λ can be interpreted as the rate of default per unit of time.

To construct the hazard term structure, we use an iterative process commonly known as bootstrap.

It starts by taking the shortest maturity contract and using it to calculate the first survival probability.


In equilibrium, for a contract starting in t0 with quarterly premium payments, the following must hold:

\[
S(t_0, t_0 + k) \sum_{n=3,6,9,12} \Delta(t_{n-3}, t_n)\, Z(0, t_n)\, e^{-\lambda_{0,k} t_n}
= (1 - RR) \sum_{m=1}^{12} Z(t_0, t_m)\left(e^{-\lambda_{0,k} t_{m-1}} - e^{-\lambda_{0,k} t_m}\right)
\]

i.e. PV(Premium leg) = PV(Protection leg), where e^{-λ_{0,k} t_n} is the survival probability up to t_n and λ_{0,k} is the constant instantaneous default intensity between 0 and k.

λ for the shortest maturity is backed out from the above equation, and the process is iterated for all maturities: 12, 24, 36, ..., up to 120 months. We define the customized function termstructuredefint, based on the standard Matlab solve function.
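The report implements this bootstrap through the custom routine termstructuredefint built on Matlab's solve; the snippet below is only a minimal numerical sketch of the same equilibrium condition for a single one-year quote, using fzero instead of solve, and with illustrative inputs (spread, recovery rate, discount curve) that are not taken from Data.xls.

% Minimal sketch (illustrative inputs, not from Data.xls): back out a flat
% hazard rate lambda from one CDS quote by equating premium and protection legs.
S  = 0.0100;                         % annualized CDS spread (assumed)
RR = 0.40;                           % recovery rate (assumed)
T  = 1;                              % maturity in years
Z  = @(t) exp(-0.01*t);              % flat 1% risk-free discount curve (assumed)
tq = 0.25:0.25:T;                    % quarterly premium dates
tm = (1:12*T)/12;                    % monthly default dates (m = 12)

premLeg = @(lam) S * sum(0.25 * Z(tq) .* exp(-lam*tq));
protLeg = @(lam) (1-RR) * sum(Z(tm) .* (exp(-lam*[0 tm(1:end-1)]) - exp(-lam*tm)));

lambda = fzero(@(lam) premLeg(lam) - protLeg(lam), 0.02);   % initial guess: 2%
fprintf('Bootstrapped hazard rate: %.4f\n', lambda);

Repeating the same root search maturity by maturity, while keeping the already-bootstrapped intervals fixed, yields the piecewise-flat term structure described above.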


It should be noted that, for time horizons shorter than a year, default intensities for the three reference entities are flat and close to zero, while their patterns differ once longer horizons are considered. This upward trend must be kept in mind when analysing the differences between piecewise and constant lambdas, as it is clear evidence in favour of the former.

From the CDS term structure we know that Carrefour is the riskiest reference entity, followed by Tesco, while Danone is the least risky one. This finding is confirmed by the default intensity term structure: markets assign the highest default intensities to Carrefour and the lowest to Danone.

1.b.

We performed a sensitivity analysis to understand how the hazard rate curve reacts to changes in default-frequency discretization, recovery rate and maturity.


Default intensities are a positive, increasing function of the frequency at which default can occur. The default-frequency value ranges from 0.1 (default can occur once every 10 months) to 4 (4 times per month).

The idea is pretty straightforward: if we let the company be at risk of default each week instead of each month, we improve the quality of our approximation of the continuous PV(Protection leg). O'Kane and Turnbull found that this discretization error, in the presence of a flat hazard structure (i.e. keeping λ constant), leads to a difference between the corresponding computed spreads proportional to 1/m, where m is the number of intervals into which the year is divided. Specifically, the difference between the continuous and the discrete case is given by r/(2m). In this case, having fixed the spread and the recovery rate by hypothesis, the consequent upward bias of PV(Protection leg) can be justified only by an increase in default intensities.

The sensitivity of λ with respect to RR was assessed by varying the recovery rate from 10% to 80%, in 1% increments.


As can be observed in the above graphs, for a given maturity, an increase in the recovery rate results in an increase in the default intensities. The recovery rate is defined as the expected price of the cheapest-to-deliver (CTD) obligation at the time of the credit event, expressed as a percentage of its face value. The rationale is the following: suppose you enter a CDS contract whose price is 100 bps and whose recovery rate is 40%. Suppose further that the price is (unrealistically) kept constant and imagine the recovery rate could change: if it rises to 60%, you are buying protection "for a lower loss", so you should pay a lower premium to insure against default. But if the spread is somehow kept constant, you will still be willing to pay 100 bps only if the probability of default has increased.

The relationship we find can be described by the formula:

\[
\lambda \approx \frac{S}{1 - RR}
\]
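As a back-of-the-envelope check of the example above (a 100 bps spread with the recovery rate moving from 40% to 60%), this approximation gives

\[
\lambda \approx \frac{0.0100}{1 - 0.40} \approx 1.67\%
\qquad\text{versus}\qquad
\lambda \approx \frac{0.0100}{1 - 0.60} = 2.50\%,
\]

so, with the spread held fixed, the implied default intensity indeed rises with the recovery rate.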

Moreover, the sensitivity of λ to the recovery rate grows, in turn, with maturity: as maturity increases, RR becomes more relevant in setting the CDS price, since a credit event becomes more likely. Following the same line of reasoning as before, with the price kept constant, the reduction in expected loss must be offset by a sharper increase in λ the higher RR is.


1.c.

The cumulative default probabilities for the three companies were derived through the Matlab function Cumprob, which replicates the formula below:

\[
f(0, t) = 1 - p(0, t) = 1 - e^{-\sum_{s=0}^{t} \lambda(s)\,\Delta s}
\]

Cumulative default probabilities, in the case of piecewise-constant λ:

Horizon (years)    Danone     Tesco      Carrefour
1                  0.00320    0.00602    0.00764
5                  0.06890    0.09532    0.16159
10                 0.21414    0.26655    0.39214
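The report's Cumprob routine is not reproduced here; the following is a minimal sketch of the formula above for a piecewise-constant hazard curve, where the knots and rates are purely illustrative placeholders rather than the bootstrapped values.

% Sketch of f(0,t) = 1 - exp(-sum lambda(s)*Delta s) on an assumed yearly grid.
knots  = 0:1:10;                              % knot times in years (assumed)
lambda = 0.01 + 0.002*(0:9);                  % piecewise-flat, increasing rates (assumed)
cumHaz = [0 cumsum(lambda .* diff(knots))];   % integrated hazard at each knot
F      = 1 - exp(-cumHaz);                    % cumulative default probability
disp([knots' F']);                            % f(0,t) for t = 0, 1, ..., 10 years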

1.d.

To carry out this point, the above hypotheses were maintained, except that the intensity process was assumed to be constant over time. Default intensities derived under this assumption are contained in the Matlab matrix def_int. It is worth stressing that the term structure of default intensities derived under a constant intensity process is flat through time, as opposed to the one derived under the piecewise-constant intensity process, which is a piecewise-flat function of time.

Cumulative default probabilities, in the case of constant λ:

Horizon (years)    Danone     Tesco      Carrefour
1                  0.01216    0.01704    0.02765
5                  0.05849    0.08117    0.12900
10                 0.11316    0.15523    0.24060

By using a constant default intensity, the default event is modelled as an exponential distribution with parameter λ or, equivalently, as the first arrival of a Poisson process with intensity λ. That is why the increments in cumulative default probability are not linear in time. Compared to the previous assumption (piecewise-constant intensities), a constant λ leads to an overestimation of cumulative default probabilities at short maturities and to an underestimation at longer maturities.

The justification of this behaviour lies in the nature of the constant-structure hypothesis. In this model, the hazard rate is a weighted average of the short- and long-term rates; hence all the information provided by the CDS term structure about the time dependence of credit events is embedded in a single value, which is greater than the short-term piecewise value and smaller than the long-term one. The graph gives a suitable representation of this general idea: the piecewise structure should be preferred, as it makes better use of market information (as provided by the CDS term structure) and offers a handier compromise with respect to the continuous case.


Exercise 2.

2.a-b.

We derive the loss function of the portfolio at 1, 5 and 10 years, under the assumption of the Li model with Gaussian copula. According to the Li model we:

• Generate (through the function mvnrnd) random variates (X1, X2, X3) following a multivariate normal distribution in which the dependence structure is captured by different correlation parameters r = [0.3 0.5 0.8 0.95]

• Derive the survival times (T1, T2, T3) by applying the inverse of the survival probability function to the cumulative Gaussian distribution evaluated at each Xi (as computed above). The matrix tao contains, in each column, the survival times of the three assets in the portfolio, with one simulation per row

• We compare the default times to the time frames established: time = [0.5 1 3 10]. Said another way, we compute how many assets have a survival time less than or equal to 6 months, ..., 10 years. We then sum all the assets whose survival time is lower than the variable time to obtain the number of assets defaulting in our portfolio (n_def)

• The Loss function in our Matlab code is Loss(j,:,o) = n_def.*(notional*(1-RR)), and the portfolio is priced accordingly

• The above steps are repeated 1,000,000 times (a minimal sketch of the procedure is given after this list)
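The sketch below follows these steps for a single correlation value and constant hazard rates; the numerical inputs (lambda, rho, horizon, notional, Nsim) are illustrative assumptions rather than the report's calibrated values, and the actual code loops over all values of r, the whole time grid and 1,000,000 runs.

% Minimal sketch of the Li / Gaussian-copula loss simulation (assumed inputs).
lambda   = [0.012 0.017 0.028];       % constant hazard rates (assumed)
rho      = 0.5;                       % pairwise correlation
RR       = 0.40;                      % recovery rate (assumed)
notional = 1/3;                       % equally weighted three-name portfolio
horizon  = 10;                        % time to default, in years
Nsim     = 1e5;                       % number of Monte Carlo runs

Sigma = rho*ones(3) + (1-rho)*eye(3); % equicorrelation matrix
X   = mvnrnd(zeros(1,3), Sigma, Nsim);% correlated Gaussian variates
U   = normcdf(X);                     % uniforms via the Gaussian copula
tau = -log(1-U) ./ lambda;            % invert the exponential default-time CDF

n_def = sum(tau <= horizon, 2);       % defaults within the horizon, per run
Loss  = n_def .* (notional*(1-RR));   % portfolio loss per run
histogram(Loss, 'Normalization', 'probability');   % empirical loss distribution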


The graph above shows different percentage losses (on the x-axis) and their experiment-driven frequencies[1], according to different values of rho and different times to default: loss distributions tend to become U-shaped as rho increases. Higher values of correlation imply an increase in the probability of joint events (i.e. the probability that either all underlying assets default or all survive becomes larger relative to the likelihood of "central" events). Note that this "shape effect" becomes meaningful only for long times to default: intuitively, within 1 month the probability of joint default is very low, regardless of the correlation[2].

[1] Recall that the y-axis simply displays the relative frequency of observations over 1 million runs.
[2] Hazard rates were kept constant. However, we acknowledge that increasing the hazard rates would make the U-shaped pattern above more pronounced.

An important limitation of this approach is that the dependence among defaults is entirely driven by the correlation matrix, making joint extreme events very rare. This limitation is partially overcome by assuming a t-copula for the dependence structure: in this way tail dependence is introduced and becomes sizeable.
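A sketch of the same loss simulation under a Student-t copula follows; the degrees-of-freedom parameter nu is our own illustrative assumption, as the report does not state the value it used.

% Sketch of the loss simulation with a Student-t copula (adds tail dependence).
lambda   = [0.012 0.017 0.028];  RR = 0.40;  notional = 1/3;   % assumed
rho = 0.5;  nu = 4;  horizon = 10;  Nsim = 1e5;                % assumed
Sigma = rho*ones(3) + (1-rho)*eye(3);          % equicorrelation matrix
X    = mvtrnd(Sigma, nu, Nsim);                % correlated Student-t variates
U    = tcdf(X, nu);                            % uniforms via the t copula
tau  = -log(1-U) ./ lambda;                    % same exponential marginals as before
Loss = sum(tau <= horizon, 2) .* (notional*(1-RR));   % portfolio loss per run
histogram(Loss, 'Normalization', 'probability');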

In the graph below we plot the loss distribution derived under the t-copula assumption.

The following three-dimensional, continuous versions of the above figures illustrate the loss occurrence probability (relative frequency) as a function of rho, under both the Gaussian and the Student-t model, with time to default equal to 10 years (to make results more appreciable). The Gaussian distribution shows that, for correlation values higher than 0.3, a change in the sensitivity trend is noticeable (in the Student-t case this effect is less pronounced). Rising correlation levels are associated with an increase in the probability of a zero loss, as the chances of a cross-default are higher and those of a single default diminish. Two other effects are noteworthy: as rho increases, central loss values become less frequent[3], while extreme percentages display growing frequency, confirming the tendency outlined before.

[3] Bear in mind that a 20% loss implies that only one reference entity in the portfolio defaulted.


Moreover, the following diagram shows the difference (for correlation levels of 0.3 and 0.8 only) between the losses computed under the Gaussian and the Student-t dependence structures[4]. Whenever the difference is positive the Gaussian prevails over the Student-t, and vice versa. As we know, the Student-t distribution displays fatter tails: for a loss level equal to 0 the Gaussian prevails (due to its larger relative frequency around the mean), while for losses higher than 0 (or joint extreme losses) the Student-t prevails.

[4] Please refer to figures 11 and 12 of the Matlab code for the losses computed under the Gaussian and the Student-t.


As a further step, we finally perform a time-sensitivity analysis by examining how the loss frequency changes with the time to default, for a given level of rho and constant default intensities λ. The graph was built by allowing the loss occurrence probability (relative frequency) to vary as a function of time (ranging from 2 to 10 years). Under both the Gaussian and the Student-t dependence assumptions, the frequency of zero losses decreases with time, while non-zero losses become more likely. The differences between the Gaussian and the Student-t, in favour of the latter, are relevant only in the case of small losses. Around a 20% loss, the Student-t displays a higher probability only for short times: the two distributions tend to converge as time increases.
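A sketch of this time-sensitivity loop is given below; it reuses the simulated default times tau from either copula sketch above and only tracks the frequency of a zero loss, which is the quantity most visibly affected.

% Sketch: zero-loss frequency as a function of the time to default (2 to 10 years),
% reusing tau (simulated default times) from one of the sketches above.
horizons = 2:10;
freqZero = zeros(size(horizons));
for k = 1:numel(horizons)
    n_def = sum(tau <= horizons(k), 2);        % defaults within this horizon
    freqZero(k) = mean(n_def == 0);            % share of runs with no default
end
plot(horizons, freqZero);
xlabel('Time to default (years)');  ylabel('Frequency of zero loss');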


The following graphs refer to a piecewise λ structure.


As we have already said, the constant-λ hypothesis entails a weighted average of the instantaneous default probability, thus producing a smoother structure for λ and, by the same token, a less spread-out loss distribution. On the contrary, the piecewise-constant structure can better approximate credit-event behaviour: the time-sensitivity trends feature sharper hikes in occurrence probability at longer times and lower values at shorter ones. The specific differences due to the two dependence models are driven by the same factors as before[5].

[5] The graphs for the differences are generated by the code; we omit them here for brevity.


2.c.

The loss distribution of a second-to-default swap under the assumption of the Li model is derived. The code introduces a variable controllo, which triggers the repayment of the recovery value[6] once the inequality n_def >= num2def is verified. In our case num2def was set to 2, to account for the second-to-default feature of the swap. The following graph plots the loss function for two values of rho, under both model assumptions.

[6] The recovery value accounts for the loss on the single entity already in default, which in the code is expressed as Loss(j,:,o) = n_def.*(notional*(1-RR)), where n_def = (n_def.*controllo + (1-num2def)*controllo).
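The payoff logic described in the footnote can be sketched as follows; it reuses n_def, notional and RR from the simulation sketches above and simply reproduces the controllo adjustment with num2def = 2.

% Sketch of the second-to-default adjustment (num2def = 2), reusing n_def,
% notional and RR from the portfolio simulation sketch above.
num2def   = 2;                                  % trigger on the second default
controllo = (n_def >= num2def);                 % 1 when the swap is triggered
n_eff     = (n_def + (1-num2def)) .* controllo; % i.e. (n_def - 1) on triggered runs
Loss2D    = n_eff .* (notional*(1-RR));         % second-to-default loss per run
histogram(Loss2D, 'Normalization', 'probability');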


As we would expect, the frequency of a second-to-default event is even lower than that of a first-to-default and is almost zero up to 5 years. In this case the maximum loss threshold is 0.4. The Student-t copula allows a higher probability for joint events (due to fatter tails) and a lower one for central values: the tendency towards a U-shaped distribution is preserved, as a positive increasing function of rho. The following figure shows the difference in default frequency between the two dependence structures:


The peak in frequency of around-zero losses at 5 years may seem counterintuitive; a possible explanation for this pattern lies in liquidity issues (five-year contracts are the most traded).
