
arXiv:0912.2816v1 [math.PR] 15 Dec 2009

The Bivariate Normal Copula

Christian Meyer∗†

December 15, 2009

∗ DZ BANK AG, Platz der Republik, D-60265 Frankfurt. The opinions or recommendations expressed in this article are those of the author and are not representative of DZ BANK AG.
† E-Mail: [email protected]

Abstract

We collect well known and less known facts about the bivariate normal distribution and translate them into copula language. In addition, we prove a very general formula for the bivariate normal copula, we compute Gini's gamma, and we provide improved bounds and approximations on the diagonal.

1 Introduction

When it comes to modelling dependent random variables, in practice one often resorts to the multivariate normal distribution, mainly because it is easy to parameterize and to deal with. In recent years, in particular in the quantitative finance community, we have also witnessed a trend to separate marginal distributions and dependence structure, using the copula concept. The normal (or Gauss, or Gaussian) copula has even come to the attention of the general public due to its use in the valuation of structured products and the decline of these products during the financial crisis of 2007 and 2008.

The multivariate normal distribution has been studied since the 19th century. Many important results have been published in the 1950s and 1960s. Nowadays, quite some effort is spent on rediscovering some of them.

In this note we will proceed as follows. We will concentrate on the bivariate normal distribution because it is the most important case in practice (applications in quantitative finance include pricing of options and estimation of asset correlations; the impressive list by Balakrishnan & Lai (2009) mentions applications in agriculture, biology, engineering, economics and finance, the environment, genetics, medicine, psychology, quality control, reliability and survival analysis, sociology, physical sciences and technology). We will give an extensive view on its properties. Everything will be reformulated in terms of the associated copula.

We will also provide new results (at least if the author has not been rediscovering himself...), including a very general formula implying other well known formulas, a derivation of Gini's gamma, and improved and very simple bounds for the diagonal.

For collections of facts on the bivariate (or multivariate) normal distribution we refer to the books of Balakrishnan & Lai (2009), of Kotz, Balakrishnan & Johnson (2000), and of Patel & Read (1996), and to the survey article of Gupta (1963a) with its extensive bibliography (Gupta 1963b). For theory on copulas we refer to the book of Nelsen (2006).

We will use the symbols P, E and V for probabilities, expectation values and variances.

2 Definition and basic properties

Denote by

ϕ(x) := (1/√(2π)) · exp(−x²/2),   Φ(h) := ∫_{−∞}^{h} ϕ(x) dx

the density and distribution function of the standard normal distribution, and by

ϕ₂(x, y; ρ) := (1/(2π√(1 − ρ²))) · exp( −(x² − 2ρxy + y²)/(2(1 − ρ²)) ),

Φ₂(h, k; ρ) := ∫_{−∞}^{h} ∫_{−∞}^{k} ϕ₂(x, y; ρ) dy dx,

the density and distribution function of the bivariate standard normal distribution with correlation parameter ρ ∈ (−1, 1). The bivariate normal (or Gauss, or Gaussian) copula with parameter ρ is then defined by application of Sklar's theorem, cf. Section 2.3 of (Nelsen 2006):

C(u, v; ρ) := Φ₂( Φ⁻¹(u), Φ⁻¹(v); ρ )   (2.1)

For ρ ∈ {−1, 1}, the correlation matrix of the bivariate standard normal distribution becomes singular. Nevertheless, the distribution, and hence the normal copula, can be extended continuously. We may define

C(u, v; −1) := lim_{ρ→−1} C(u, v; ρ) = max(u + v − 1, 0),   (2.2)

C(u, v; +1) := lim_{ρ→+1} C(u, v; ρ) = min(u, v).   (2.3)

Hence C(·, ·; ρ), for ρ → −1, approaches the lower Fréchet bound,

W(u, v) := max(u + v − 1, 0),

and, for ρ → 1, approaches the upper Fréchet bound,

M(u, v) := min(u, v).
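For experimentation, the definition (2.1) can be evaluated directly with standard scientific software. The following minimal Python sketch (an illustration added here, not part of the original paper; it assumes SciPy is available) implements C(u, v; ρ) via Φ₂ and Φ⁻¹ and checks the limiting behaviour (2.2), (2.3) numerically:

from scipy.stats import norm, multivariate_normal

def normal_copula(u, v, rho):
    # C(u, v; rho) via the definition (2.1)
    mvn = multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]])
    return mvn.cdf([norm.ppf(u), norm.ppf(v)])

u, v = 0.3, 0.7
print(normal_copula(u, v, 0.0), u * v)                   # product copula, cf. (2.4) below
print(normal_copula(u, v, 0.999), min(u, v))             # near the upper Frechet bound (2.3)
print(normal_copula(u, v, -0.999), max(u + v - 1, 0.0))  # near the lower Frechet bound (2.2)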

Furthermore, we have

C(u, v; 0) = u · v =: Π(u, v),   (2.4)

the independence copula. What happens in between can be described using the following differential equation derived by Plackett (1954):

(d/dρ) Φ₂(x, y; ρ) = ϕ₂(x, y; ρ) = (d²/(dx dy)) Φ₂(x, y; ρ)   (2.5)

We find

C(u, v; ρ) − C(u, v; σ) = I(u, v; σ, ρ) := ∫_σ^ρ ϕ₂( Φ⁻¹(u), Φ⁻¹(v); r ) dr,   (2.6)

and in particular,

C(u, v; ρ) = W(u, v) + I(u, v; −1, ρ)   (2.7)
           = Π(u, v) + I(u, v; 0, ρ)   (2.8)
           = M(u, v) − I(u, v; ρ, 1).   (2.9)

In other words, the bivariate normal copula allows comprehensive total concordance ordering with respect to ρ, cf. Section 2.8 of (Nelsen 2006):

W(·, ·) = C(·, ·; −1) ≺ C(·, ·; ρ) ≺ C(·, ·; σ) ≺ C(·, ·; 1) = M(·, ·)

for −1 ≤ ρ ≤ σ ≤ 1, i.e., for all u, v,

W(u, v) = C(u, v; −1) ≤ C(u, v; ρ) ≤ C(u, v; σ) ≤ C(u, v; 1) = M(u, v).

By plugging u = v = 1/2 into (2.8) we obtain

C(1/2, 1/2; ρ) = 1/4 + ∫_0^ρ 1/(2π√(1 − r²)) dr = 1/4 + (1/(2π)) arcsin(ρ),   (2.10)
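(A small illustrative Python check of (2.10), added here: integrating ϕ₂(0, 0; r) over r as in (2.8) reproduces the closed form.)

import numpy as np
from scipy.integrate import quad

rho = 0.6
# phi_2(0, 0; r) = 1 / (2*pi*sqrt(1 - r^2)), cf. (2.8) at u = v = 1/2
integral, _ = quad(lambda r: 1.0 / (2 * np.pi * np.sqrt(1 - r * r)), 0.0, rho)
print(0.25 + integral)                      # via (2.8)
print(0.25 + np.arcsin(rho) / (2 * np.pi))  # closed form (2.10)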

a result already known to Stieltjes (1889). Mehler (1866) and Pearson (1901a), among other authors, obtained the so-called tetrachoric expansion in ρ:

C(u, v; ρ) = uv + ϕ(Φ⁻¹(u)) ϕ(Φ⁻¹(v)) ∑_{k=0}^{∞} He_k(Φ⁻¹(u)) He_k(Φ⁻¹(v)) ρ^{k+1}/(k + 1)!   (2.11)

where

He_k(x) = ∑_{i=0}^{[k/2]} (k!/(i!(k − 2i)!)) (−1/2)^i x^{k−2i}

are the Hermite polynomials.
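The truncated expansion (2.11) is straightforward to evaluate with the probabilists' Hermite polynomials (available in SciPy as eval_hermitenorm); the following sketch is illustrative only, with an arbitrary truncation level, and converges slowly for |ρ| close to one:

import math
from scipy.special import eval_hermitenorm
from scipy.stats import norm

def tetrachoric(u, v, rho, terms=40):
    # truncated tetrachoric expansion (2.11); He_k = eval_hermitenorm(k, .)
    x, y = norm.ppf(u), norm.ppf(v)
    s = sum(eval_hermitenorm(k, x) * eval_hermitenorm(k, y)
            * rho ** (k + 1) / math.factorial(k + 1) for k in range(terms))
    return u * v + norm.pdf(x) * norm.pdf(y) * s

print(tetrachoric(0.3, 0.7, 0.5))  # compare with a direct evaluation of (2.1)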

The bivariate normal copula inherits the symmetries of the bivariate normal distribution:

C(u, v; ρ) = C(v, u; ρ)   (2.12)
           = u − C(u, 1 − v; −ρ)   (2.13)
           = v − C(1 − u, v; −ρ)   (2.14)
           = u + v − 1 + C(1 − u, 1 − v; ρ)   (2.15)

Here, (2.12) is a consequence of exchangeability, and (2.15) is a consequence of radial symmetry, cf. Section 2.7 of (Nelsen 2006).

In the following sections we will discuss formulas for the bivariate normal copula, its numerical evaluation, bounds and approximations, measures of concordance, and univariate distributions related to the bivariate normal copula. It will be convenient to assume ρ ∉ {−1, 0, 1} unless explicitly stated otherwise (cf. (2.2), (2.4), (2.3) for the simple formulation in the missing cases).

3 Formulas

If (X, Y) are bivariate standard normally distributed with correlation ρ then Y conditional on X = x is normally distributed with expectation ρx and variance 1 − ρ². This translates into the following formulas for the conditional distributions of the bivariate normal copula:

∂_u C(u, v; ρ) = Φ( (Φ⁻¹(v) − ρ · Φ⁻¹(u)) / √(1 − ρ²) ),   (3.1)

∂_v C(u, v; ρ) = Φ( (Φ⁻¹(u) − ρ · Φ⁻¹(v)) / √(1 − ρ²) )   (3.2)
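Formula (3.1) also yields a simple sampling scheme for the copula: conditionally on U = u, one may set V = Φ(ρ · Φ⁻¹(u) + √(1 − ρ²) · Z) with Z standard normal. A minimal Monte Carlo sketch (illustrative, added here):

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
rho, n = 0.6, 200_000

u = rng.uniform(size=n)
z = rng.standard_normal(n)
# conditional on U = u, V has the distribution function (3.1)
v = norm.cdf(rho * norm.ppf(u) + np.sqrt(1 - rho ** 2) * z)

print(np.mean((u <= 0.5) & (v <= 0.5)))     # empirical C(1/2, 1/2; rho)
print(0.25 + np.arcsin(rho) / (2 * np.pi))  # exact value from (2.10)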

The copula density is given by:

c(u, v; ρ) := (∂²/(∂u ∂v)) C(u, v; ρ) = ϕ₂( Φ⁻¹(u), Φ⁻¹(v); ρ ) / ( ϕ(Φ⁻¹(u)) ϕ(Φ⁻¹(v)) )   (3.3)

= (1/√(1 − ρ²)) exp( (2ρ Φ⁻¹(u)Φ⁻¹(v) − ρ²(Φ⁻¹(u)² + Φ⁻¹(v)²)) / (2(1 − ρ²)) )

In terms of its copula density, the bivariate normal copula can be written as

C(u, v; ρ) = ∫_0^u ∫_0^v c(s, t; ρ) dt ds.   (3.4)

Now let α, β, γ ∈ (−1, 1) with αβγ = ρ. Then the following holds:

C(u, v; ρ)
= ∫_{−∞}^{∞} ∫_{−∞}^{∞} Φ( (Φ⁻¹(u) − α·x)/√(1 − α²) ) Φ( (Φ⁻¹(v) − β·y)/√(1 − β²) ) ϕ₂(x, y; γ) dy dx   (3.5)
= ∫_0^1 ∫_0^1 Φ( (Φ⁻¹(u) − α·Φ⁻¹(s))/√(1 − α²) ) Φ( (Φ⁻¹(v) − β·Φ⁻¹(t))/√(1 − β²) ) c(s, t; γ) dt ds
= ∫_0^1 ∫_0^1 ( ∂_s C(u, s; α) ) ( ∂_t C(t, v; β) ) c(s, t; γ) dt ds

The right hand side of (3.5) has occurred in credit risk modelling but without the link to the bivariate normal copula, cf. (Bluhm & Overbeck 2003). A proof in that context will be provided in Section A.1.

Other formulas for C(u, v; ρ) are obtained by carefully studying the limit as some of the variables α, β, γ are approaching the value one. The interesting cases, not regarding symmetry, are listed in Table 1.

    α      β      γ      reference
    = α    = β    = γ    (3.5)
    = 1    = β    = γ    (3.6)
    = α    = β    = 1    (3.7)
    = 1    = ρ    = 1    (3.8)
    = 1    = 1    = ρ    (3.4)

Table 1: Limiting cases of (3.5)

By approaching α = 1 in (3.5) we obtain:

C(u, v; ρ) = ∫_{−∞}^{Φ⁻¹(u)} ∫_{−∞}^{∞} Φ( (Φ⁻¹(v) − β·y)/√(1 − β²) ) ϕ₂(x, y; γ) dy dx   (3.6)
= ∫_0^u ∫_0^1 Φ( (Φ⁻¹(v) − β·Φ⁻¹(t))/√(1 − β²) ) c(s, t; γ) dt ds
= ∫_0^u ∫_0^1 ( ∂_t C(t, v; β) ) c(s, t; γ) dt ds

However, Equation (3.6) may be considered rather unattractive. By approaching γ = 1 in (3.5) instead we obtain a more interesting formula:

C(u, v; ρ) = ∫_{−∞}^{∞} Φ( (Φ⁻¹(u) − α·z)/√(1 − α²) ) Φ( (Φ⁻¹(v) − β·z)/√(1 − β²) ) ϕ(z) dz   (3.7)
= ∫_0^1 Φ( (Φ⁻¹(u) − α·Φ⁻¹(t))/√(1 − α²) ) Φ( (Φ⁻¹(v) − β·Φ⁻¹(t))/√(1 − β²) ) dt
= ∫_0^1 ( ∂_t C(u, t; α) ) ( ∂_t C(t, v; β) ) dt

Equation (3.7) seems to have been discovered repeatedly, sometimes in more general context (e.g., multivariate, cf. (Steck & Owen 1962)), sometimes for special cases. Gupta (1963a) gives credit to Dunnett & Sobel (1955).
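As noted in Section 4 below, Sowden & Ashford (1969) applied Gauss-Hermite quadrature to (3.7); the following Python sketch is an illustration in that spirit, with the arbitrary choice α = β = √ρ (so that αβ = ρ with γ = 1):

import numpy as np
from scipy.stats import norm

def copula_via_37(u, v, rho, n=64):
    # evaluate the single integral (3.7) with Gauss-Hermite nodes
    a = np.sqrt(rho)             # alpha = beta = sqrt(rho), illustrative choice
    s = np.sqrt(1.0 - a * a)
    x, w = np.polynomial.hermite.hermgauss(n)
    z = np.sqrt(2.0) * x         # rescale nodes: integral against phi(z) dz
    f = (norm.cdf((norm.ppf(u) - a * z) / s)
         * norm.cdf((norm.ppf(v) - a * z) / s))
    return np.dot(w, f) / np.sqrt(np.pi)

print(copula_via_37(0.3, 0.7, 0.5))  # compare with a direct evaluation of (2.1)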

By approaching α = γ = 1 in (3.5) we obtain:

C(u, v; ρ) = ∫_0^u Φ( (Φ⁻¹(v) − ρ·Φ⁻¹(t))/√(1 − ρ²) ) dt = ∫_0^u ∂_t C(t, v; ρ) dt   (3.8)

= ∫_0^v Φ( (Φ⁻¹(u) − ρ·Φ⁻¹(t))/√(1 − ρ²) ) dt = ∫_0^v ∂_t C(u, t; ρ) dt   (3.9)

These formulas can also be derived from (3.1) and (3.2). Finally, by approaching α = β = 1 in (3.5) we rediscover (3.4).

In order to evaluate the bivariate normal distribution function numerically, Owen (1956) defined the following very useful function, to which we will refer as Owen's T-function:

T(h, a) = (1/(2π)) ∫_0^a exp( −(1/2)h²(1 + x²) )/(1 + x²) dx = ϕ(h) ∫_0^a ϕ(hx)/(1 + x²) dx   (3.10)

He proved that

C(u, v; ρ) = (u + v)/2 − T( Φ⁻¹(u), α_u ) − T( Φ⁻¹(v), α_v ) − δ(u, v)   (3.11)

where

δ(u, v) := 1/2, if u < 1/2, v ≥ 1/2 or u ≥ 1/2, v < 1/2; 0, else   (3.12)

and

α_u = (1/√(1 − ρ²)) ( Φ⁻¹(v)/Φ⁻¹(u) − ρ ),   α_v = (1/√(1 − ρ²)) ( Φ⁻¹(u)/Φ⁻¹(v) − ρ ).   (3.13)
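Owen's T-function is available in standard libraries (e.g., scipy.special.owens_t), so (3.11) gives a direct evaluation route. A minimal sketch (illustrative; it assumes u, v ≠ 1/2 so that (3.13) is well defined):

import numpy as np
from scipy.special import owens_t
from scipy.stats import norm

def copula_owen(u, v, rho):
    # C(u, v; rho) via Owen's T-function, cf. (3.11)-(3.13)
    h, k = norm.ppf(u), norm.ppf(v)
    s = np.sqrt(1.0 - rho * rho)
    a_u = (k / h - rho) / s
    a_v = (h / k - rho) / s
    delta = 0.5 if (u < 0.5) != (v < 0.5) else 0.0
    return 0.5 * (u + v) - owens_t(h, a_u) - owens_t(k, a_v) - delta

print(copula_owen(0.3, 0.7, 0.5))  # compare with a direct evaluation of (2.1)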

In particular, on the lines defined by v = 1/2 and by u = v, the following formulas hold:

C(u, 1/2; ρ) = u/2 − T( Φ⁻¹(u), −ρ/√(1 − ρ²) ),   (3.14)

C(u, u; ρ) = u − 2 · T( Φ⁻¹(u), √((1 − ρ)/(1 + ρ)) )   (3.15)

From (3.11) and (3.14) we can derive the useful formula

C(u, v; ρ) = C(u, 1/2; ρ_u) + C(v, 1/2; ρ_v) − δ(u, v),   (3.16)

where

ρ_u = −α_u/√(1 + α_u²) = sin(arctan(−α_u)),

ρ_v = −α_v/√(1 + α_v²) = sin(arctan(−α_v)).

On the diagonal u = v, (3.16) reads:

C(u, u; ρ) = 2 · C( u, 1/2; −√((1 − ρ)/2) )   (3.17)

Inversion of (3.17) using (2.13) gives:

C(u, 1/2; ρ) = (1/2) C(u, u; 1 − 2ρ²) for ρ < 0,
C(u, 1/2; ρ) = u − (1/2) C(u, u; 1 − 2ρ²) for ρ > 0.   (3.18)

Applying (3.8) to (3.17) we obtain, cf. also (Steck & Owen 1962),

C(u, u; ρ) = 2 · ∫_0^u g(t; ρ) dt   (3.19)

with

g(u; ρ) := Φ( √((1 − ρ)/(1 + ρ)) · Φ⁻¹(u) ).   (3.20)

We find

(d/du) C(u, u; ρ) = 2 · g(u; ρ).   (3.21)
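The function g is elementary to implement, and (3.19) together with property (3.26) below provides convenient numerical self-checks; an illustrative Python sketch:

import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def g(u, rho):
    # g(u; rho) from (3.20)
    return norm.cdf(np.sqrt((1 - rho) / (1 + rho)) * norm.ppf(u))

rho, u = 0.5, 0.3
diag, _ = quad(lambda t: g(t, rho), 0.0, u)
print(2 * diag)            # C(u, u; rho) via (3.19)
print(g(g(u, rho), -rho))  # property (3.26): recovers u = 0.3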

The function g will become important in Section 5. Note that if U, V are uniformly distributed on [0, 1] with copula C(·, ·; ρ) then

g(u; ρ) = P(V ≤ u | U = u).

Below we list some properties of g:

lim_{u→0+} g(u; ρ) = 0,   (3.22)
lim_{u→1−} g(u; ρ) = 1,   (3.23)
g(1/2; ρ) = 1/2,   (3.24)
g(1 − u; ρ) = 1 − g(u; ρ),   (3.25)
g(g(u; ρ); −ρ) = u   (3.26)

In particular, (3.22) and (3.23) show that the bivariate normal copula does not exhibit (lower or upper) tail dependence (cf. Section 5.2.3 of Embrechts, Frey & McNeil (2005)).

Substitution of t = g(s; ρ) in (3.19) and application of (3.26) lead to the identity, cf. also (Steck & Owen 1962):

C(u, u; ρ) = 2u · g(u; ρ) − C( g(u; ρ), g(u; ρ); −ρ )   (3.27)

4 Numerical evaluation

The bivariate normal copula has to be evaluated numerically. To my knowledge, in the literature there is no direct approach. Hence for the time being we have to rely on the definition of the bivariate normal copula (2.1) and numerical evaluation of Φ₂ and Φ⁻¹. There are excellent algorithms available for evaluation of Φ⁻¹, cf. (Acklam 2004); the main problem is evaluation of the bivariate normal distribution function Φ₂.

In the literature on evaluation of Φ₂ there are basically two approaches: application of a multivariate method to the bivariate case, and explicit consideration of the bivariate case. For background on multivariate methods we refer to the recent book by Bretz & Genz (2009). In most cases, bivariate methods will be able to obtain the desired accuracy in less time. In the following we will provide an overview on the literature. We will concentrate on methods and omit references dealing with implementation only. Comparisons of different approaches in terms of accuracy and running time have been provided by numerous authors, e.g., Agca & Chance (2003), Terza & Welland (1991), and Wang & Kennedy (1990).

Before the advent of widely available computer power, extensive tables of the bivariate normal distribution function had to be created. Using (3.16) or similar approaches, the three-dimensional problem (two variables and the correlation parameter) was reduced to a two-dimensional one.

Pearson (1901a) used the tetrachoric expansion (2.11) for small |ρ|, and quadrature for large |ρ|. Nicholson (1943), building on ideas of Sheppard (1900), worked with a two-parameter function, denoted V-function. Owen (1956) introduced the T-function (3.10) which is closely related to Nicholson's V-function. For many years, quadrature of the T-function was the method of choice for evaluation of the bivariate normal distribution. Numerous authors, e.g. Borth (1973), Daley (1974), Young & Minder (1974), and Patefield & Tandy (2000), have been working on improvements, e.g. by dividing the plane into many regions and choosing specific quadrature methods in each region.

Sowden & Ashford (1969) applied Gauss-Hermite quadrature to (3.7) and Simpson's rule to (3.8). Drezner (1978) used (3.16) and Gauss quadrature. Divgi (1979) relied on polar coordinates and an approximation to the univariate Mills' ratio. Vasicek (1998) proposed an expansion which is more suitable for large |ρ| than the tetrachoric expansion (2.11).

Drezner & Wesolowsky (1990) applied Gauss-Legendre quadrature to (2.8) for |ρ| ≤ 0.8, and to (2.9) for |ρ| > 0.8. Improvements of their method in terms of accuracy and robustness have been provided by Genz (2004) and West (2005).

Most implementations today will rely on variants of the approaches of Divgi (1979) or of Drezner & Wesolowsky (1990). The method of Drezner (1978), although less reliable, is also very common, mainly because it is proposed in (Hull 2008) and other prevalent books.

5 Bounds and approximations

Nowadays, high-precision numerical evaluation of the bivariate normal copula is usually available and there is not much need for low-precision approximations anymore, in particular if the mathematics is hidden behind strange constants derived from some optimization procedure.

On the other hand, if the mathematics is general and transparent then the resulting approximations are often rather weak. As an example, take approximations to the multivariate Mills' ratios applied to the bivariate case, cf. (Lu & Li 2009) and the references therein.

Bounds, if they are not too weak, are more interesting than approximations because they can be used, e.g., for checking numerical algorithms. Moreover, derivation of bounds often provides valuable insight into the mathematics behind the function to be bounded.

In the following we concentrate on bounds and approximations explicitly derived for the bivariate case. Throughout this section we will only consider the case ρ > 0, 0 < u = v < 1/2. Note that by successively applying, if required, (3.16), (3.18), (3.27) and (2.15), we can always reduce C(u, v; ρ) to a sum of two terms of that form. Any approximation or bound given for the special case can be translated to an approximation or bound for the general case, with at most twice the absolute error. Note also that for many existing approximations and bounds the diagonal u = v may be considered a worst case, cf. (Willink 2004).

Mee & Owen (1983) elaborated on the so-called conditional approach proposed by Pearson (1901b). If (X, Y) are bivariate standard normally distributed with correlation ρ then we can write

Φ₂(h, k; ρ) = Φ(h) · P(Y ≤ k | X ≤ h).   (5.1)

The distribution of Y conditional on X = h is normal but the distribution of Y conditional on X ≤ h is not. Nevertheless, it can be approximated by a normal distribution with the same mean and variance. In terms of the bivariate normal copula, the resulting approximation is

C(u, u; ρ) ≈ u · Φ( (u · Φ⁻¹(u) + ρ · ϕ(Φ⁻¹(u))) / √( u² − ρ² · ϕ(Φ⁻¹(u)) · (u · Φ⁻¹(u) + ϕ(Φ⁻¹(u))) ) ).   (5.2)
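(A quick illustrative Python check of (5.2), added here; the exact reference value comes from (2.10).)

import numpy as np
from scipy.stats import norm

def mee_owen_diag(u, rho):
    # conditional approximation (5.2) to C(u, u; rho)
    x = norm.ppf(u)
    p = norm.pdf(x)
    num = u * x + rho * p
    var = u * u - rho * rho * p * (u * x + p)
    return u * norm.cdf(num / np.sqrt(var))

# exact value: C(1/2, 1/2; 1/2) = 1/4 + arcsin(1/2)/(2*pi) = 1/3 by (2.10)
print(mee_owen_diag(0.5, 0.5), 1.0 / 3.0)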

The approximation works well for |ρ| not too large. For |ρ| large there are alternative approximations, e.g. (Albers & Kallenberg 1994).

The simpler of the two approximations proposed by Cox & Wermuth (1991) replaces the second factor in (5.1) by the mean of the conditional distribution (the more complicated approximation adds a term of second order). In terms of the bivariate normal copula, the resulting approximation is

C(u, u; ρ) ≈ u · Φ( √((1 − ρ)/(1 + ρ)) · (u · Φ⁻¹(u) + ρ · ϕ(Φ⁻¹(u))) / ((1 − ρ) · u) ).


Mallows (1959) gave two approximations to Owen's T-function (3.10). In terms of the bivariate normal copula, the simpler one reads

C(u, u; ρ) ≈ 2u · Φ( √((1 − ρ)/(1 + ρ)) · ( Φ⁻¹(u/2 + 1/4) − Φ⁻¹(3/4) ) ).

Further approximations were derived by Cadwell (1951) and Polya (1949).

There are not too many bounds available in the literature. For ρ > 0 there are, of course, the trivial bounds (2.4) and (2.3):

u² ≤ C(u, u; ρ) ≤ u

The upper bound given by Polya (1949) is just the one above. His lower bound is too weak (even negative) on the diagonal. A recent overview on known bounds, and derivation of some new ones, is provided by Willink (2004). We will present some of his bounds below in more general context.

Theorem 5.1 Let ρ ≥ 0 and 0 ≤ u ≤ 1/2. Then C(u, u; ρ) is bounded as follows, where g(u; ρ) is defined as in (3.20):

C(u, u; ρ) ≥ u · g(u; ρ)   (5.3)

C(u, u; ρ) ≤ u · g(u; ρ) · 2   (5.4)

The lower bound (5.3) is tight for ρ = 0 or u = 0. The maximum error of (5.3) equals 1/4 and is obtained for u = 1/2, ρ = 1. The upper bound (5.4) is tight for ρ = 1 or u = 0. The maximum error of (5.4) equals 1/4 and is obtained for u = 1/2, ρ = 0.

A proof of Theorem 5.1 is provided in Section A.2, together with a proof of the following refinement:

Theorem 5.2 Let ρ > 0 and 0 ≤ u ≤ 1/2. Then C(u, u; ρ) is bounded as follows, where g(u; ρ) is defined as in (3.20):

C(u, u; ρ) ≥ u · g(u; ρ) · ( 1 + (2/π) arcsin(ρ) )   (5.5)

C(u, u; ρ) ≤ u · g(u; ρ) · (1 + ρ)   (5.6)

These bounds are the optimal ones of the form u · g(u) · a(ρ). They are tight for ρ = 0, ρ = 1, or u = 0. The maximum error of (5.6) is obtained for u = 1/2, ρ = √(1 − 4/π²) ≈ 0.7712, the value being

(1/4) ( √(1 − 4/π²) − (2/π) arcsin(√(1 − 4/π²)) ) ≈ 0.05263.

The lower bound (5.5) is tight for u = 1/2.
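The bounds of Theorem 5.2 are easy to check numerically; an illustrative Python sketch (added here):

import numpy as np
from scipy.stats import norm, multivariate_normal

def g(u, rho):
    return norm.cdf(np.sqrt((1 - rho) / (1 + rho)) * norm.ppf(u))

def diag(u, rho):
    x = norm.ppf(u)
    return multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]]).cdf([x, x])

for rho in (0.2, 0.5, 0.8):
    for u in (0.1, 0.3, 0.5):
        lo = u * g(u, rho) * (1 + (2 / np.pi) * np.arcsin(rho))  # lower bound (5.5)
        hi = u * g(u, rho) * (1 + rho)                           # upper bound (5.6)
        print(f"rho={rho}, u={u}: {lo:.6f} <= {diag(u, rho):.6f} <= {hi:.6f}")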


The bounds (5.3) and (5.6) have been discussed, without explicit computation of the maximum error, by Willink (2004). The maximum error of (5.5) is difficult to grasp analytically. Numerically, the error always stays below 0.006.

An alternative upper bound is given by the following theorem, the proof of which is provided in Section A.3:

Theorem 5.3 Let ρ > 0 and 0 ≤ u ≤ 1/2. Then

C(u, u; ρ) ≤ 2u · g(u/2; ρ).

The bound is tight for ρ = 0, ρ = 1, or u = 0. The maximum error is obtained for u = 1/2, with ρ solving

−(1 + ρ)/(2π · Φ⁻¹(1/4)) = ϕ( √((1 − ρ)/(1 + ρ)) · Φ⁻¹(1/4) ),

i.e., ρ ≈ 0.5961, the value being approx. 0.015.

It is also possible to derive good approximations to C(u, u; ρ) by considering the family C(u, u; ρ) ≈ u · g(u; ρ) · (a(ρ) + b(ρ)u). In particular, the choice

a(ρ) := 1 + ρ,   b(ρ) := 2 · ( 1 + (2/π) arcsin(ρ) − (1 + ρ) ) = (4/π) arcsin(ρ) − 2ρ

is attractive because the resulting approximation

C(u, u; ρ) ≈ u · g(u; ρ) · ( 1 + ρ + ( (4/π) arcsin(ρ) − 2ρ ) u )   (5.7)

is tight for ρ = 0, ρ = 1, u = 0, or u = 1/2, and for u → 0+ it has the same asymptotic behaviour as (5.6). By visual inspection we may conjecture that it is even an upper bound, with an error almost cancelling the error of the lower bound (5.5) most of the time. Consequently, an even better approximation (but not a lower bound, for ρ large) is given by

C(u, u; ρ) ≈ u · g(u; ρ) · ( 1 + ρ/2 + (1/π) arcsin(ρ) + ( (2/π) arcsin(ρ) − ρ ) u ).   (5.8)

Again, (5.8) is tight for ρ = 0, ρ = 1, u = 0, or u = 1/2. Numerically, the absolute error always stays below 0.0006. Hence (5.8) is comparable in performance with (5.2), and much better for ρ large.

6 Measures of concordance

In the study of dependence between (two) random variables, properties and measures that are scale-invariant, i.e., invariant under strictly increasing transformations of the random variables, can be expressed in terms of the (bivariate) copula of the random variables. Among these are the so-called measures of concordance, in particular Kendall's tau, Spearman's rho, Blomqvist's beta and Gini's gamma. For background and general definitions and properties we refer to Section 5 of (Nelsen 2006). In this section we will provide formulas for measures of concordance for the bivariate normal copula, depending on the correlation parameter ρ.

Blomqvist's beta follows immediately from (2.10):

β(ρ) := 4 · C(1/2, 1/2; ρ) − 1 = (2/π) arcsin(ρ)   (6.1)

For the bivariate normal copula, Kendall's tau equals Blomqvist's beta:

τ(ρ) := 4 ∫_0^1 ∫_0^1 C(u, v; ρ) dC(u, v; ρ) − 1
      = 1 − 4 ∫_0^1 ∫_0^1 ( ∂_v C(u, v; ρ) ) ( ∂_u C(u, v; ρ) ) du dv
      = (2/π) arcsin(ρ)   (6.2)

For a proof of (6.1) and (6.2) cf. Section 5.3.2 of (Embrechts et al. 2005). Both Blomqvist's beta and Kendall's tau can be generalized to (copulas of) elliptical distributions, cf. (Lindskog, McNeil & Schmock 2003). This is not the case for Spearman's rho, cf. (Hult & Lindskog 2002), which is given by:

ρ_S(ρ) := 12 ∫_0^1 ∫_0^1 ( C(u, v; ρ) − uv ) du dv
        = 12 ∫_0^1 ∫_0^1 C(u, v; ρ) du dv − 3
        = (6/π) arcsin(ρ/2)   (6.3)

For proofs of (6.3) cf. (Kruskal 1958) or Section 5.3.2 of (Embrechts et al. 2005). Gini's gamma for the bivariate normal copula is given as follows:

γ(ρ) := 4 ( ∫_0^1 C(u, u; ρ) du + ∫_0^1 C(u, 1 − u; ρ) du − 1/2 )
      = 4 ( ∫_0^1 C(u, u; ρ) du + ∫_0^1 ( u − C(u, u; −ρ) ) du − 1/2 )
      = (2/π) ( arcsin((1 + ρ)/2) − arcsin((1 − ρ)/2) )   (6.4)
      = (4/π) ( arcsin(√(1 + ρ)/2) − arcsin(√(1 − ρ)/2) )   (6.5)
      = (4/π) arcsin( (1/4) ( √((1 + ρ)(3 + ρ)) − √((1 − ρ)(3 − ρ)) ) )   (6.6)
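All of these closed forms are one-liners in code; an illustrative Python sketch (added here):

import numpy as np

def blomqvist_beta(rho):                 # (6.1)
    return (2 / np.pi) * np.arcsin(rho)

def kendall_tau(rho):                    # (6.2), equals Blomqvist's beta
    return (2 / np.pi) * np.arcsin(rho)

def spearman_rho(rho):                   # (6.3)
    return (6 / np.pi) * np.arcsin(rho / 2)

def gini_gamma(rho):                     # (6.6)
    s = 0.25 * (np.sqrt((1 + rho) * (3 + rho)) - np.sqrt((1 - rho) * (3 - rho)))
    return (4 / np.pi) * np.arcsin(s)

for rho in (0.0, 0.5, 1.0):
    print(rho, blomqvist_beta(rho), spearman_rho(rho), gini_gamma(rho))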

I have not been able to find proofs in the literature, hence they will be provided in Section A.4.

Equation (6.6) can be inverted, which may be useful for estimation of ρ from an estimate for γ(ρ):

ρ = sin( γ(ρ) · π/4 ) · √( 3 − tan²( γ(ρ) · π/4 ) )   (6.7)
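(A numerical round trip through (6.6) and (6.7), illustrative only:)

import numpy as np

def gini_gamma(rho):                     # (6.6)
    s = 0.25 * (np.sqrt((1 + rho) * (3 + rho)) - np.sqrt((1 - rho) * (3 - rho)))
    return (4 / np.pi) * np.arcsin(s)

def rho_from_gamma(gamma):               # (6.7)
    t = gamma * np.pi / 4
    return np.sin(t) * np.sqrt(3 - np.tan(t) ** 2)

print(rho_from_gamma(gini_gamma(0.6)))   # recovers 0.6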

Finally, we propose a new measure of concordance, similar to Gini's gamma but based on the lines u = 1/2, v = 1/2 instead of the diagonals. For a bivariate copula C it is defined by:

γ̃(C(·, ·)) := 4 ( ∫_0^1 ( C(u, 1/2) − u/2 ) du + ∫_0^1 ( C(1/2, v) − v/2 ) dv )   (6.8)

           = 4 ( ∫_0^1 C(u, 1/2) du + ∫_0^1 C(1/2, v) dv − 1/2 )

For the bivariate normal copula we obtain

γ̃(ρ) := γ̃(C(·, ·; ρ)) = (4/π) arcsin( ρ/√2 ).   (6.9)

A proof is given implicitly in Section A.4.

7 Univariate distributions

In this section we will discuss two univariate distributions that are closely related to the bivariate normal copula (or distribution).

7.1 The skew-normal distribution

A random variable X on R is skew-normally distributed with skewness parameter λ ∈ R if it has a density function of the form

f_λ(x) = 2ϕ(x)Φ(λx).   (7.1)

The skew-normal distribution was introduced by O'Hagan & Leonard (1976) and studied and made popular by Azzalini (1985, 1986). Its cumulative distribution function is given by

P(X ≤ x) = ∫_{−∞}^{x} 2ϕ(t)Φ(λt) dt = 2 ∫_0^{Φ(x)} Φ( λΦ⁻¹(t) ) dt.   (7.2)

In the light of (3.19) and (3.17), cf. also (Azzalini & Capitanio 2003), we find

P(X ≤ x) = 2 Φ₂( x, 0; −λ/√(1 + λ²) )   (7.3)

P(X ≤ x) = Φ₂( x, x; (1 − λ²)/(1 + λ²) ) for λ ≥ 0,
P(X ≤ x) = 1 − Φ₂( −x, −x; (1 − λ²)/(1 + λ²) ) for λ ≤ 0.   (7.4)

In particular, the bounds given in Theorem 5.2 can be applied.
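Identity (7.4) can be checked against a library implementation of the skew-normal distribution; an illustrative Python sketch for λ ≥ 0, using scipy.stats.skewnorm:

from scipy.stats import skewnorm, multivariate_normal

lam, x = 1.5, 0.7
rho = (1 - lam ** 2) / (1 + lam ** 2)
lhs = skewnorm.cdf(x, lam)  # skew-normal distribution function
rhs = multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]]).cdf([x, x])
print(lhs, rhs)             # should agree, cf. (7.4) for lambda >= 0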

7.2 The Vasicek distribution

A random variable P on the interval [0, 1] is Vasicek distributed with parameters p ∈ (0, 1) and ρ ∈ (0, 1) if Φ⁻¹(P) is normally distributed with mean

E(Φ⁻¹(P)) = Φ⁻¹(p)/√(1 − ρ)   (7.5)

and variance

V(Φ⁻¹(P)) = ρ/(1 − ρ).   (7.6)

In Section A.1 it is implicitly proved that

E(P) = p,   E(P²) = C(p, p; ρ),

so that

V(P) = E(P²) − E(P)² = C(p, p; ρ) − p².

Furthermore, we have

P(P ≤ q) = P( Φ⁻¹(P) ≤ Φ⁻¹(q) ) = Φ( (√(1 − ρ) · Φ⁻¹(q) − Φ⁻¹(p)) / √ρ ).

The (one-sided) α-quantile q_α of P, with α ∈ (0, 1), is therefore given by

q_α = Φ( (√ρ · Φ⁻¹(α) + Φ⁻¹(p)) / √(1 − ρ) ).   (7.7)
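The distribution and quantile functions are elementary to implement; an illustrative Python sketch (added here):

import numpy as np
from scipy.stats import norm

def vasicek_cdf(q, p, rho):
    # P(P <= q) for the Vasicek distribution with parameters p and rho
    return norm.cdf((np.sqrt(1 - rho) * norm.ppf(q) - norm.ppf(p)) / np.sqrt(rho))

def vasicek_quantile(alpha, p, rho):
    # one-sided alpha-quantile, equation (7.7)
    return norm.cdf((np.sqrt(rho) * norm.ppf(alpha) + norm.ppf(p)) / np.sqrt(1 - rho))

p, rho = 0.1, 0.2
q95 = vasicek_quantile(0.95, p, rho)
print(vasicek_cdf(q95, p, rho))  # recovers 0.95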

In particular, the median of P is simply

q_{0.5} = Φ( Φ⁻¹(p)/√(1 − ρ) ) = Φ( E(Φ⁻¹(P)) ).   (7.8)

The density of P is

(d/dq) P(P ≤ q) = √((1 − ρ)/ρ) · ϕ( (√(1 − ρ) · Φ⁻¹(q) − Φ⁻¹(p)) / √ρ ) · 1/ϕ(Φ⁻¹(q)).

The distribution is unimodal with the mode at

Φ( (√(1 − ρ)/(1 − 2ρ)) · Φ⁻¹(p) )

for ρ < 0.5, monotone for ρ = 0.5, and U-shaped for ρ > 0.5.

Let P̃ be Vasicek distributed with parameters p̃, ρ̃, and let

corr( Φ⁻¹(P), Φ⁻¹(P̃) ) = γ.

Then

cov( Φ⁻¹(P), Φ⁻¹(P̃) ) = γ · √( (ρ · ρ̃)/((1 − ρ)(1 − ρ̃)) ),

E( P · P̃ ) = C( p, p̃; γ · √(ρ · ρ̃) ),

and

cov( P, P̃ ) = C( p, p̃; γ · √(ρ · ρ̃) ) − p · p̃.

The Vasicek distribution does not offer immediate advantages over other two-parametric continuous distributions on (0, 1), such as the beta distribution. Its importance stems from its occurrence as mixing distribution in linear factor models set up as in Section A.1. It is a special case of a probit-normal distribution; it is named after Vasicek, who introduced it into credit risk modeling.

For (different) details on the material in this section we refer to Vasicek (1987, 2002) and Tasche (2008). Estimation of the parameters p and ρ is also discussed by Meyer (2009).

References

Acklam, P. (2004), An algorithm for computing the inverse normal cumulative distribution function. http://home.online.no/~pjacklam/notes/invnorm/index.html

Albers, W., Kallenberg, W.C.M. (1994), A simple approximation to the bivariate normal distribution with large correlation coefficient, Journal of Multivariate Analysis 49(1), pp. 87–96.

Arnold, B.C., Lin, G.D. (2004), Characterizations of the skew-normal and generalized chi distributions, Sankhya: The Indian Journal of Statistics 66(4), pp. 593–606.

Agca, S., Chance, D.M. (2003), Speed and accuracy comparison of bivariate normal distribution approximations for option pricing, Journal of Computational Finance 6(4), pp. 61–96.

Azzalini, A. (1985), A class of distributions which includes the normal ones, Scandinavian Journal of Statistics 12(2), pp. 171–178.

Azzalini, A. (1986), Further results on a class of distributions which includes the normal ones, Statistica 46(2), pp. 199–208.

Azzalini, A., Capitanio, A. (2003), Distributions generated by perturbation of symmetry with emphasis on a multivariate skew t distribution, Journal of the Royal Statistical Society Series B 65, pp. 367–389.

Balakrishnan, N., Lai, C.D. (2009), Continuous Bivariate Distributions, 2nd ed., Springer.

Baricz, A. (2008), Mills' ratio: Monotonicity patterns and functional inequalities, Journal of Mathematical Analysis and Applications 340(2), pp. 1362–1370.

Bluhm, C., Overbeck, L. (2003), Estimating systematic risk in uniform credit portfolios, in: Credit Risk; Measurement, Evaluation and Management, ed. G. Bol et al., Contributions to Economics, Physica-Verlag Heidelberg, pp. 35–48.

Borth, D.M. (1973), A modification of Owen's method for computing the bivariate normal integral, Journal of the Royal Statistical Society, Series C (Applied Statistics) 22(1), pp. 82–85.

Bretz, F., Genz, A. (2009), Computation of Multivariate Normal and t Probabilities, Lecture Notes in Statistics 195, Springer.

Cadwell, J.H. (1951), The bivariate normal integral, Biometrika 38(3-4), pp. 475–479.

Cox, D.R., Wermuth, N. (1991), A simple approximation for bivariate and trivariate normal integrals, International Statistical Review 59(2), pp. 263–269.

Daley, D.J. (1974), Computation of bi- and tri-variate normal integrals, Journal of the Royal Statistical Society, Series C (Applied Statistics) 23(3), pp. 435–438.

Divgi, D.R. (1979), Calculation of univariate and bivariate normal probability functions, Annals of Statistics 7(4), pp. 903–910.

Drezner, Z. (1978), Computation of the bivariate normal integral, Mathematics of Computation 32(141), pp. 277–279.

Drezner, Z., Wesolowsky, G.O. (1990), On the computation of the bivariate normal integral, Journal of Statistical Computation and Simulation 35, pp. 101–107.

Dunnett, C.W., Sobel, M. (1955), Approximations to the probability integral and certain percentage points of a multivariate analogue of Student's t-distribution, Biometrika 42(1-2), pp. 258–260.

Embrechts, P., Frey, R., McNeil, A.J. (2005), Quantitative Risk Management: Concepts, Techniques, Tools, Princeton University Press.

Genz, A. (2004), Numerical computation of rectangular bivariate and trivariate normal and t probabilities, Statistics and Computing 14(3), pp. 151–160.

Gupta, S. (1963a), Probability integrals of multivariate normal and multivariate t, Annals of Mathematical Statistics 34(3), pp. 792–828.

Gupta, S. (1963b), Bibliography on the multivariate normal integrals and related topics, Annals of Mathematical Statistics 34(3), pp. 829–838.

Hull, J. (2008), Futures, Options, and Other Derivatives, 7th ed., Prentice Hall.

Hult, H., Lindskog, F. (2002), Multivariate extremes, aggregation and dependence in elliptical distributions, Advances in Applied Probability 34(3), pp. 587–608.

Kotz, S., Balakrishnan, N., Johnson, N.L. (2000), Continuous Multivariate Distributions, Volume 1: Models and Applications, 2nd ed., Wiley Series in Probability and Statistics.

Kruskal, W.H. (1958), Ordinal measures of association, Journal of the American Statistical Association 53(284), pp. 814–861.

Lindskog, F., McNeil, A.J., Schmock, U. (2003), Kendall's tau for elliptical distributions, in: Credit Risk; Measurement, Evaluation and Management, ed. G. Bol et al., Contributions to Economics, Physica-Verlag Heidelberg, pp. 149–156.

Lu, D., Li, W.V. (2009), A note on multivariate Gaussian estimates, Journal of Mathematical Analysis and Applications 354, pp. 704–707.

Mallows, C.L. (1959), An approximate formula for bivariate normal probabilities, Technical Report No. 30, Statistical Techniques Research Group, Princeton University.

Mee, R.W., Owen, D.B. (1983), A simple approximation for bivariate normal probability, Journal of Quality Technology 15, pp. 72–75.

Mehler, G. (1866), Reihenentwicklungen nach Laplaceschen Functionen höherer Ordnung, Journal für die reine und angewandte Mathematik 66, pp. 161–176.

Meyer, C. (2009), Estimation of intra-sector asset correlations, Journal of Risk Model Validation 3(4), pp. 47–79.

Nelsen, R.B. (2006), An Introduction to Copulas, 2nd ed., Springer.

Nicholson, C. (1943), The probability integral for two variables, Biometrika 33(1), pp. 59–72.

O'Hagan, A., Leonard, T. (1976), Bayes estimation subject to uncertainty about parameter constraints, Biometrika 63(1), pp. 201–203.

Owen, D.B. (1956), Tables for computing bivariate normal probability, Annals of Mathematical Statistics 27, pp. 1075–1090.

Patefield, M., Tandy, D. (2000), Fast and accurate computation of Owen's T-function, Journal of Statistical Software 5(5).

Patel, J.K., Read, C.B. (1996), Handbook of the Normal Distribution, Dekker.

Pearson, K. (1901a), Mathematical contributions to the theory of evolution. VII. On the correlation of characters not quantitatively measurable, Philosophical Transactions of the Royal Society of London Series A 195, pp. 1–47.

Pearson, K. (1901b), Mathematical contributions to the theory of evolution. XI. On the influence of natural selection on the variability and correlation of organs, Philosophical Transactions of the Royal Society of London Series A 200, pp. 1–66.

Pinelis, I. (2002), Monotonicity properties of the relative error of a Padé approximation for Mills' ratio, Journal of Inequalities in Pure and Applied Mathematics 3(2).

Plackett, R.L. (1954), A reduction formula for normal multivariate integrals, Biometrika 41(3), pp. 351–360.

Polya, G. (1949), Remarks on computing the probability integral in one and two dimensions, Proceedings of the First Berkeley Symposium on Mathematical Statistics and Probability, Univ. of California Press, pp. 63–78.

Sheppard, W.F. (1898), On the application of the theory of error to cases of normal distributions and normal correlation, Philosophical Transactions of the Royal Society of London Series A 192, pp. 101–167.

Sheppard, W.F. (1900), On the calculation of the double integral expressing normal correlation, Transactions of the Cambridge Philosophical Society 19, pp. 23–69.

Sowden, R.R., Ashford, J.R. (1969), Computation of the bivariate normal integral, Journal of the Royal Statistical Society, Series C (Applied Statistics) 18(2), pp. 169–180.

Steck, G.P., Owen, D.B. (1962), A note on the equicorrelated multivariate normal distribution, Biometrika 49(1-2), pp. 269–271.

Stieltjes, T.S. (1889), Extrait d'une lettre adressée à M. Hermite, Bulletin des Sciences Mathématiques Series 2 13, p. 170.

Tasche, D. (2008), The Vasicek distribution. http://www-m4.ma.tum.de/pers/tasche/

Terza, J.V., Welland, U. (1991), A comparison of bivariate normal algorithms, Journal of Statistical Computation and Simulation 39(1-2), pp. 115–127.

Vasicek, O. (1987), Probability of loss on loan portfolio. http://www.moodyskmv.com/research/portfolioCreditRisk wp.html

Vasicek, O. (1998), A series expansion for the bivariate normal integral, The Journal of Computational Finance 1(4), pp. 5–10.

Vasicek, O. (2002), The distribution of loan portfolio value, RISK 15(12), pp. 160–162.

Wang, M., Kennedy, W.J. (1990), Comparison of algorithms for bivariate normal probability over a rectangle based on self-validated results from interval analysis, Journal of Statistical Computation and Simulation 37(1-2), pp. 13–25.

West, G. (2005), Better approximations to cumulative normal functions, Wilmott Magazine, May, pp. 70–76.

Willink, R. (2004), Bounds on the bivariate normal distribution function, Communications in Statistics; Theory and Methods 33(10), pp. 2281–2297.

Young, J.C., Minder, Ch.E. (1974), An integral useful in calculating non-central t and bivariate normal probabilities, Journal of the Royal Statistical Society, Series C (Applied Statistics) 23(3), pp. 455–457.

A Proofs

A.1 Proof of (3.5)

Let

X = α · Y + √(1 − α²) · ε,
X̃ = β · Ỹ + √(1 − β²) · ε̃,

where α, β ∈ (−1, 1) \ {0} are parameters and where Y, Ỹ, ε, ε̃ are all standard normal and pairwise independent, except

γ := corr( Y, Ỹ ) = cov( Y, Ỹ ).

By construction, X and X̃ are standard normal again with

corr( X, X̃ ) = cov( X, X̃ ) = αβγ.

We define indicator variables Z = Z(X) ∈ {0, 1}, Z̃ = Z̃(X̃) ∈ {0, 1} calibrated to expectation values u, v:

Z = 1 :⟺ X ≤ Φ⁻¹(u),
Z̃ = 1 :⟺ X̃ ≤ Φ⁻¹(v)

Conditional on (Y = y, Ỹ = ỹ), Z and Z̃ are independent. We find

P(Z = 1 | Y = y) = P( X ≤ Φ⁻¹(u) | Y = y )
                 = P( α · y + √(1 − α²) · ε ≤ Φ⁻¹(u) )
                 = P( ε ≤ (Φ⁻¹(u) − α · y)/√(1 − α²) )
                 = Φ( (Φ⁻¹(u) − α · y)/√(1 − α²) ).

Now we define the random variables

P := P(Y) := Φ( (Φ⁻¹(u) − α · Y)/√(1 − α²) ),

P̃ := P̃(Ỹ) := Φ( (Φ⁻¹(v) − β · Ỹ)/√(1 − β²) ).

We find

u = E(Z) = E(E(Z | Y)) = E(P(Z = 1 | Y)) = E(P),   v = E(P̃)

and

P( Z = 1, Z̃ = 1 ) = P( X ≤ Φ⁻¹(u), X̃ ≤ Φ⁻¹(v) )
                   = Φ₂( Φ⁻¹(u), Φ⁻¹(v); cov(X, X̃) )
                   = Φ₂( Φ⁻¹(u), Φ⁻¹(v); αβγ ).

On the other hand,

P( Z = 1, Z̃ = 1 ) = P( ZZ̃ = 1 ) = E( ZZ̃ )
                   = E( E(ZZ̃ | Y, Ỹ) )
                   = E( E(Z | Y, Ỹ) · E(Z̃ | Y, Ỹ) )
                   = E( E(Z | Y) · E(Z̃ | Ỹ) )
                   = E( P · P̃ )
                   = ∫_{−∞}^{∞} ∫_{−∞}^{∞} P(x) · P̃(y) · ϕ₂(x, y; γ) dx dy
                   = ∫_{−∞}^{∞} ∫_{−∞}^{∞} Φ( (Φ⁻¹(u) − α·x)/√(1 − α²) ) Φ( (Φ⁻¹(v) − β·y)/√(1 − β²) ) ϕ₂(x, y; γ) dx dy.
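The construction above also suggests a Monte Carlo check of (3.5); an illustrative Python sketch with arbitrary parameter choices:

import numpy as np
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(1)
u, v, alpha, beta, gamma = 0.3, 0.7, 0.8, 0.9, 0.5
rho = alpha * beta * gamma

# simulate (Y, Y~) with correlation gamma, then average P * P~ as above
y = rng.multivariate_normal([0, 0], [[1, gamma], [gamma, 1]], size=200_000)
P = norm.cdf((norm.ppf(u) - alpha * y[:, 0]) / np.sqrt(1 - alpha ** 2))
Pt = norm.cdf((norm.ppf(v) - beta * y[:, 1]) / np.sqrt(1 - beta ** 2))
print(np.mean(P * Pt))  # Monte Carlo estimate of the right-hand side of (3.5)

exact = multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]]).cdf(
    [norm.ppf(u), norm.ppf(v)])
print(exact)            # C(u, v; rho) = Phi_2(Phi^{-1}(u), Phi^{-1}(v); rho)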

A.2 Proof of Theorems 5.1 and 5.2

We will assume ρ as fixed and write

C(u) := C(u, u; ρ),   g(u) := g(u; ρ).

The upper bound (5.4) follows from (3.27). Regarding the lower bound (5.3) we note that

g′(u) = √((1 − ρ)/(1 + ρ)) · exp( (ρ/(1 + ρ)) · Φ⁻¹(u)² ) > 0,

g″(u) = g′(u) · (2ρ/(1 + ρ)) · Φ⁻¹(u)/ϕ(Φ⁻¹(u)) < 0.

Hence g is increasing and concave on (0, 1/2) and we conclude

C(u) = 2 ∫_0^u g(t) dt ≥ 2 · (1/2) · u · g(u) = u · g(u).

Now we define

D_a(u) := u · g(u) · a

with a ∈ [1, 2], u ∈ [0, 1/2]. We start by noting that

C′(u) = 2g(u) > 0,   C″(u) = 2g′(u) > 0,

hence C is increasing and convex on (0, 1/2). Furthermore,

D′_a(u) = a · ( g(u) + u · g′(u) ) > 0,

D″_a(u) = a · ( 2 · g′(u) + u · g″(u) )
        = 2a · g′(u) · ( 1 + (ρ/(1 + ρ)) · u · Φ⁻¹(u)/ϕ(Φ⁻¹(u)) )
        = 2a · g′(u) · ( 1 − (ρ/(1 + ρ)) · H(−Φ⁻¹(u)) )

with H(x) = x · R(x), where

R(x) = (1 − Φ(x))/ϕ(x)

is Mills' ratio. Pinelis (2002) has shown that H′(x) > 0 for x > 0, H(0) = 0, and lim_{x→∞} H(x) = 1. Hence D_a(u) is increasing and convex on (0, 1/2) as well.

For u ∈ (0, 1/2), C′(u) = D′_a(u) is equivalent with

f(u) := u · g′(u)/g(u) = (2 − a)/a,   or   a = 2/(1 + f(u)).

We will show that f is strictly increasing on (0, 1/2). We have

f(u) = λ · Φ(Φ⁻¹(u)) · ϕ(λ · Φ⁻¹(u)) / ( ϕ(Φ⁻¹(u)) · Φ(λ · Φ⁻¹(u)) ) = λ · F_λ(−Φ⁻¹(u))

with λ = √((1 − ρ)/(1 + ρ)) ∈ [0, 1] and

F_λ(x) := R(x)/R(λ · x),   x ≥ 0.


We find

F′_λ(x) = ( R′(x) · R(λ · x) − λ · R(x) · R′(λ · x) ) / R(λ · x)²
        = F_λ(x) · ( R′(x)/R(x) − λ · R′(λ · x)/R(λ · x) ) < 0

for λ < 1. Here we have used that F_λ(x) > 0 and that the function

y ↦ y · R′(y)/R(y)

is strictly decreasing on (0, ∞), cf. (Baricz 2008). We conclude

f′(u) = −λ · F′_λ(−Φ⁻¹(u)) / ϕ(−Φ⁻¹(u)) > 0.

Furthermore, we find

f(1/2) = λ · R(0)/R(0) = √((1 − ρ)/(1 + ρ))

and

lim_{u→0+} f(u) = lim_{u→0+} ( g′(u) + u · g″(u) )/g′(u)
                = lim_{u→0+} ( 1 + (2ρ/(1 + ρ)) · u · Φ⁻¹(u)/ϕ(Φ⁻¹(u)) )
                = lim_{u→0+} ( 1 − (2ρ/(1 + ρ)) · H(−Φ⁻¹(u)) )
                = 1 − 2ρ/(1 + ρ) = (1 − ρ)/(1 + ρ).

We have C(0) = D_a(0) = 0, C′(0) = D′_a(0) = 0, and

lim_{u→0+} D_a(u)/C(u) = lim_{u→0+} D′_a(u)/C′(u) = lim_{u→0+} (a/2)(1 + f(u)) = a/(1 + ρ).

By standard calculus we conclude that

• For a ≥ 1 + ρ we have D′_a(u) ≥ C′(u), and hence D_a(u) ≥ C(u), for all u ∈ [0, 1/2];

• For a ≤ 2 · ( 1 + √((1 − ρ)/(1 + ρ)) )⁻¹ we have D′_a(u) ≤ C′(u), and hence D_a(u) ≤ C(u), for all u ∈ [0, 1/2];

• For

a ∈ ( 2 · ( 1 + √((1 − ρ)/(1 + ρ)) )⁻¹, 1 + ρ )

there exists u₀ ∈ (0, 1/2) with D′_a(u) < C′(u) for u ∈ (0, u₀), and D′_a(u) > C′(u) for u ∈ (u₀, 1/2). Consequently, the best lower bound for C of the form D_a is obtained if C(1/2) = D_a(1/2), i.e., a = 1 + (2/π) arcsin(ρ). Moreover, the upper bound D_a with a = 1 + ρ can not be improved.

The maximum error of D_a with a = 1 + ρ is attained if

(d/dρ) [ D_a(1/2) − C(1/2) ] = 1/4 − 1/(2π√(1 − ρ²)) = 0,

which is equivalent with ρ = √(1 − 4/π²).

A.3 Proof of Theorem 5.3

We will assume ρ as fixed and write

C(u) := C(u, u; ρ),   g(u) := g(u; ρ).

We have

C(u) = 2 · ∫_0^u g(t) dt = 2u · g(v(u))

with

v(u) := v(u; ρ) ≤ u.

Since g is concave and increasing, we even know that

v(u) ≤ u/2

and hence

C(u) = 2u · g(v(u)) ≤ 2u · g(u/2).

Moreover, for the same reason we have

(d/du) ( 2u · g(u/2) − C(u) ) = 2 ( (u/2) · g′(u/2) − ( g(u) − g(u/2) ) ) ≥ 0,

and hence, for ρ fixed, the maximum error is obtained for u = 1/2, the value being

Φ( √((1 − ρ)/(1 + ρ)) · Φ⁻¹(1/4) ) − 1/4 − (1/(2π)) arcsin(ρ).

Differentiating the above expression with respect to ρ gives the result.

Note that by (3.26) we can write

v(u; ρ) = g( C(u)/(2u); −ρ ).

Unfortunately, for ρ large, the function v is not convex, and the approximation

2u · g(u/2) · C(1/2)/g(1/4)

is not an upper bound for C(u).

A.4 Proof of (6.4), (6.5), (6.6)

In a first step, using (2.8) we find:

∫_0^1 C(u, u; ρ) du
= ∫_0^1 ( u² + (1/(2π)) ∫_0^ρ (1/√(1 − r²)) exp( −Φ⁻¹(u)²/(1 + r) ) dr ) du
= 1/3 + (1/(2π)) ∫_0^ρ (1/√(1 − r²)) ∫_{−∞}^{∞} ϕ(v) exp( −v²/(1 + r) ) dv dr
= 1/3 + (1/(2π)) ∫_0^ρ (1/√(1 − r²)) ∫_{−∞}^{∞} ϕ(s) √((1 + r)/(3 + r)) ds dr
= 1/3 + (1/(2π)) ∫_0^ρ 1/√((1 − r)(3 + r)) dr
= 1/3 + (1/(2π)) ∫_{1/2}^{(1+ρ)/2} (1/√(1 − r²)) dr
= 1/3 + (1/(2π)) ( arcsin((1 + ρ)/2) − arcsin(1/2) )
= 1/4 + (1/(2π)) arcsin((1 + ρ)/2).

We conclude that

γ(ρ) = 4 ( ∫_0^1 C(u, u; ρ) du + ∫_0^1 ( u − C(u, u; −ρ) ) du − 1/2 )
     = 4 ( 1/4 + (1/(2π)) arcsin((1 + ρ)/2) + 1/2 − 1/4 − (1/(2π)) arcsin((1 − ρ)/2) − 1/2 )
     = (2/π) ( arcsin((1 + ρ)/2) − arcsin((1 − ρ)/2) ).

In a similar way, using again (2.8), we can compute

∫_0^1 C(u, 1/2; ρ) du
= ∫_0^1 ( u/2 + (1/(2π)) ∫_0^ρ (1/√(1 − r²)) exp( −Φ⁻¹(u)²/(2(1 − r²)) ) dr ) du
= 1/4 + (1/(2π)) arcsin(ρ/√2),

which, using (3.17), leads to

∫_0^1 C(u, u; ρ) du = 2 · ∫_0^1 C( u, 1/2; −√((1 − ρ)/2) ) du = 1/2 − (1/π) arcsin( √(1 − ρ)/2 ).

We obtain alternative formulas for Gini's gamma, the second one using the addition theorem for the arcsin function:

γ(ρ) = (4/π) ( arcsin(√(1 + ρ)/2) − arcsin(√(1 − ρ)/2) )
     = (4/π) arcsin( (1/4) ( √((1 + ρ)(3 + ρ)) − √((1 − ρ)(3 − ρ)) ) )

