Monte Carlo Simulation of Long-Term Dependent Processes: A Primer

48 Wilmott magazine

Keywords: Monte Carlo simulation, Fractional Brownian Motion, Hurst exponent, Long-term Dependence, Biased Random Walk

JEL Classification: C15, C53, C63, G17, G14.

Introduction

This article briefly describes the Cholesky method for simulating Geometric Brownian Motion processes with long-term dependence, also referred to as Fractional Geometric Brownian Motion. This choice results from its parsimony, simplicity and documented theoretical advantages (Jennane et al., 2001).

Price or return paths simulated with this method exhibit approximately the same first four distributional moments for differing cases of serial dependence (i.e. independence, persistence and antipersistence). Graphical inspection (i.e. probability plots and simulated paths) and the estimation of the simulated time-series’ Hurst exponent corroborate the impact of long-term memory in financial markets, where significant and sustained price changes are more likely than standard Brownian Motion or random walk models assume.

Results show that the chosen method generates random numbers capable of replicating independent, persistent or antipersistent time-series depending on the value of the chosen Hurst exponent. Simulating FBM via the Cholesky method is (i) convenient, since it grants the ability to replicate intense and enduring returns, which allows for reproducing financial returns’ well-documented slow convergence in distribution to a Gaussian law, and (ii) straightforward, since it takes advantage of the Gaussian distribution’s ability to express a broad class of stochastic processes by changing how volatility behaves with respect to the time horizon. A drawback is that the methodology is computationally demanding, principally because of the use of the Cholesky decomposition.

Monte Carlo Simulation of Long-Term Dependent Processes: A Primer†

Potential applications of Fractional Geometric Brownian Motion simulation include market, credit and liquidity risk models, option valuation techniques, portfolio optimization models and payment systems dynamics. All can benefit from the availability of a stochastic process that provides the ability to explicitly model how volatility behaves with respect to the time horizon in order to simulate severe and sustained price and quantity changes. These applications are more pertinent than ever because of the consensus regarding the limitations of customary models for valuation, risk and asset allocation after the most recent episode of global financial crisis.

Fractional Brownian Motion (FBM)

Bachelier (1900) introduced Brownian motion for describing the behavior of financial prices in what is known as Arithmetic Brownian motion, which was afterwards revised by Samuelson (1965) because of several inconveniences arising from applying such a process to financial prices directly (e.g. prices may turn negative in the long term); Samuelson’s revision resulted in applying Arithmetic Brownian motion to prices’ log-returns, commonly referred to as Geometric Brownian Motion.

The Geometric Brownian Motion (henceforth Brownian Motion or BM) underlies modern economic and financial theory. Following Mandelbrot (1963), if Z(t) is the log-return of the price of a stock at the end of time period t, successive differences of the form Z(t + s) − Z(t) are (i) independent, (ii) Gaussian or normally distributed, (iii) random variables (iv) with zero mean and (v) variance proportional to the differencing interval s. Therefore, let σ² be the variance and Z(•) an independent and continuous process; BM may be expressed as in [F1], where the operator ∼ means independently and identically distributed:

Z(t + s) − Z(t) ∼ N(0, σ²) [F1]

Consequently, the resulting process of Z(t) exhibits the following statistical properties:

E [Z (t)] = 0, ∀t [F2]

COV [Z (t, s)] = 0; ∀t, ∀s [F3]

Carlos León, Banco de la República (Colombia), e-mail: [email protected]
Alejandro Reveiz, The World Bank

† The opinions and statements are the sole responsibility of the authors; they do not necessarily reflect the official position of the Central Bank of Colombia, the World Bank Group, or their Board of Directors.

48-57_Wilm_Leon_TP_July_2012.ind48 48 8/16/12 2:34:07 PM


TECHNICAL PAPER

These two properties of the BM result in the widespread weak form of the Efficient Market Hypothesis (EMH): [F2] states that the expected return is zero for any time horizon, thus the current price is the best forecast for the future price, whereas [F3] affirms that the past behavior of returns is irrelevant.

Independence is the foremost pillar of BM, even more vital than the Gaussian distribution assumption. Two main reasons support this statement. First, even if returns are not normally distributed, the Central Limit Theorem demonstrates that independent time-series of returns will converge in distribution to a Gaussian law rather quickly. Moreover, even if time-series exhibit some sort of short-term dependence that fades out after some realizations of the random variable (e.g. the process exhibits weak dependence, such as AR or GARCH effects), the Central Limit Theorem also guarantees such convergence.

Second, if time-series are independent or weakly dependent, it is also true that the variance of the process is proportional to the differencing interval; that is, the square of the fluctuations of prices increases in proportion to the time scale. According to Sornette (2003) this is equivalent to saying that the typical amplitude of returns is proportional to the square root of the time scale, which he describes as the most important prediction of the BM model. This is the widespread square-root-of-time rule (hereafter referred to as SRTR), which simply consists of multiplying the standard deviation calculated from high-frequency (hf) time-series (e.g. daily) by the square root of n, where n is the number of units that compose the low-frequency (lf) time-series (e.g. yearly), as in [F4].

σ_lf = σ_hf n^0.5; ∀n [F4]

As acknowledged by Malevergne and Sornette (2006), the slow convergence in distribution of financial time-series to a Gaussian law (even for low-frequency returns) may be caused by significant time-dependencies between asset returns. Therefore, testing for independence of financial time-series is as important as the more traditional focus on the non-normality of returns.

Following Hurst’s (1951) work on long-term dependence in Geophysics, seminal work by Mandelbrot (1972) demonstrated the presence of long-term memory in financial assets’ time-series. Peters (1992) acknowledged Mandelbrot’s contribution and discarded the ability of prevalent short-term memory econometric models (e.g. AR, ARMA, GARCH) to capture or replicate the type of enduring dependence found in financial time-series.

Due to the evidence of long-term dependence in financial returns, which has been confirmed by Nawrocki (1995), Peters (1996), Willinger et al. (1999), Weron and Przybylowicz (2000), Sun et al. (2007), Menkens (2007), Cajueiro and Tabak (2008), León and Vivas (2010) and León and Reveiz (2011), the SRTR is inappropriate to describe the way financial returns scale with time. Hence, according to Jennane et al. (2001) and McLeod and Hipel (1978), [F3] and [F4] may be generalized as in [F5] and [F6], respectively, where H is known as the Hurst exponent (Box 1) and m corresponds to the resolution or frequency of the time-series (e.g., if daily autocovariance is to be estimated, then m = 1):

COV_m[Z(t, s)] = (σ²_m / 2) [ |t − s + m|^(2H) − 2|t − s|^(2H) + |t − s − m|^(2H) ]; ∀t, ∀s [F5]

σ_lf = σ_hf n^H; ∀n [F6]
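To make [F4] and [F6] concrete, here is a short sketch in Python; the daily volatility and horizon are hypothetical inputs, not figures from this article:

```python
def scale_volatility(sigma_hf: float, n: int, h: float = 0.5) -> float:
    """Scale a high-frequency volatility to an n-period horizon.

    h = 0.5 reproduces the square-root-of-time rule [F4]; any other
    Hurst exponent gives the generalized rule [F6].
    """
    return sigma_hf * n ** h

daily_sigma = 0.01  # hypothetical 1% daily standard deviation

# Annualizing over 250 trading days under the three dependence cases:
for h in (0.4, 0.5, 0.6):
    print(f"H = {h}: annual sigma = {scale_volatility(daily_sigma, 250, h):.4f}")
```

Persistence (H > 0.5) scales volatility up faster than the SRTR, while antipersistence (H < 0.5) scales it slower.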

As stated by Sun et al. (2007), in the H = 0.5 and H ≈ 0.5 cases the process has no memory (it is independent), hence next period’s expected result has the same probability of being lower or higher than the current result, and the autocovariance or autocorrelation resulting from [F5] is zero1 (Jennane et al., 2001). Applied to financial time-series this is analogous to assuming that the process followed by assets’ returns is similar to coin tossing, where the probability of heads (e.g. a rise in the price) or tails (e.g. a fall in the price) is the same (1/2), and is independent of every other toss; this is precisely the theoretical base of the Capital Asset Pricing Model (CAPM), the Arbitrage Pricing Theory (APT), the Black & Scholes model and the Modern Portfolio Theory (MPT).

When H takes values between 0.5 and 1 (0.5 < H ≤ 1) evidence suggests a persistent behavior; therefore one must expect the result in the next period to be similar to the current one (Sun et al., 2007), and the autocovariance or autocorrelation resulting from [F5] is positive (Jennane et al., 2001). According to Menkens (2007) this means that increments are positively correlated: if an increment is positive, succeeding increments are more likely to be positive than negative. In other words, each event has influence on future events; therefore there is dependence or memory in the process.

As H becomes closer to one (1), the range of possible future values of the variable will be wider than the range of purely random and independent variables. Peters (1996) argues that the presence of persistence is a signal that today’s behavior does not influence the near future only, but the distant future as well.2

On the other hand, when H takes values below 0.5 (0 ≤ H < 0.5) there is a signal that suggests an antipersistent behavior of the variable. This means, as said by Sun et al. (2007), that a positive (negative) return is more likely followed by negative (positive) ones, and the autocovariance or autocorrelation resulting from [F5] is negative (Jennane et al., 2001); hence, as stated by Mandelbrot and Wallis (1969), this behavior causes the values of the variable to tend to compensate each other, avoiding the time-series’ overshooting. Applied to financial market series, Menkens (2007) affirms that this kind of continuously compensating behavior would suggest a constant over-correction of the market, one that would drive it to a permanent adjustment process. Similarly, Peters (1996) links this behavior to the well-known “mean-reversion” process.

BOX 1: THE HURST EXPONENT: ORIGIN, ESTIMATION AND FIT

The Hurst exponent (H) is named after British physicist H.E. Hurst (1880–1978), whose analysis demonstrated that numerous natural phenomena significantly diverged from being long-term independent. Thus, Hurst suggested estimating the empirical exponent (H) that fits how the random variable behaves with respect to time, instead of (blindly) relying on the SRTR. Based on Rescaled Range Analysis (R/S) by Mandelbrot and Wallis (1969), León and Reveiz (2011) developed an adjusted (unbiased) version of the Hurst exponent. The step-by-step estimation of the adjusted Hurst exponent is as follows:

A. For a time series of N log-returns, with k non-overlapping3 windows of size n, divide the original series in such a way that n × k = N.

B. Estimate the arithmetic mean (μ_k) of each k-segment of size n, and obtain the difference between each i-return and the mean of its k-segment:

Y_i,k = x_i,k − μ_k

48-57_Wilm_Leon_TP_July_2012.ind49 4948-57_Wilm_Leon_TP_July_2012.ind49 49 8/16/12 2:34:16 PM8/16/12 2:34:16 PM

Page 3: Monte Carlo Simulation of Long-Term Dependent Processes: A Primer

50 Wilmott magazine

C. Estimate the standard deviation (S_n,k) of each k-segment.

D. Calculate the cumulative differences for each k-segment:

D_i,k = Σ_{u=1…i} Y_u,k

E. Calculate the range (R_n,k) of the D_i,k series:

R_n,k = max(D_1,k, …, D_n,k) − min(D_1,k, …, D_n,k)

F. Calculate the rescaled range for each k-segment:

(R/S)_n,k = R_n,k / S_n,k

G. Calculate (R/S)_n as the average rescaled range over the k segments of size n:

(R/S)_n = (1/k) Σ_{i=1…k} (R/S)_n,i

H. Repeat for different values of k, where n_j = n_min, …, n_max, and where n_min and n_max correspond to the minimum and maximum size of the chosen window used to calculate the rescaled range. The result is j values of (R/S)_n, where n_j = N/k_j.

I. Using the n and (R/S)_n values, estimate the ordinary least squares regression proposed by Mandelbrot and Wallis (1969a and 1969b), where c is a constant and H corresponds to the estimated Hurst exponent:

Log(R/S)_n = Log(c) + H Log(n)

J. Because there is a well-documented positive bias in the estimation of H resulting from using finite time-series, León and Reveiz (2011) suggest estimating the expected Hurst exponent corresponding to an independent random time-series of size N (H_iid), and calculating the adjusted Hurst exponent (H_adj):

H_adj = H − (H_iid − 0.5)

K. As suggested by León and Reveiz (2011)4, H_iid is estimated using the functional approximation developed by Anis and Lloyd (1976) and Peters (1994), which is based on the estimation of the expected value of (R/S)_n for independent random time-series (E_iid(R/S)_n), and which replaces steps B to G:

E_iid(R/S)_n = ((n − 1/2)/n) · (1/√(nπ/2)) · Σ_{i=1…n−1} √((n − i)/i)
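The steps above translate almost line-for-line into code. The following Python sketch is a minimal illustration; the window-size range and the simplified OLS fit are our choices, not the authors’ exact configuration, and the significance test is omitted:

```python
import numpy as np

def expected_rs_iid(n: int) -> float:
    """Step K: Anis-Lloyd/Peters approximation of E_iid(R/S)_n."""
    i = np.arange(1, n)
    return (n - 0.5) / n / np.sqrt(n * np.pi / 2) * np.sqrt((n - i) / i).sum()

def avg_rescaled_range(x: np.ndarray, n: int) -> float:
    """Steps A-G: average rescaled range over k non-overlapping windows of size n."""
    k = len(x) // n
    rs = []
    for seg in x[: n * k].reshape(k, n):
        y = seg - seg.mean()              # step B: deviations from the segment mean
        d = np.cumsum(y)                  # step D: cumulative differences D_i,k
        r = d.max() - d.min()             # step E: range R_n,k
        rs.append(r / seg.std(ddof=1))    # steps C and F: rescaled range
    return float(np.mean(rs))             # step G: average over the k segments

def hurst_adjusted(x: np.ndarray, n_min: int = 10, n_max: int = 200) -> float:
    """Steps H-K: OLS slope of log(R/S)_n on log(n), bias-adjusted."""
    ns = np.arange(n_min, min(n_max, len(x) // 2) + 1)
    log_n = np.log(ns)
    h = np.polyfit(log_n, np.log([avg_rescaled_range(x, n) for n in ns]), 1)[0]
    h_iid = np.polyfit(log_n, np.log([expected_rs_iid(n) for n in ns]), 1)[0]
    return h - (h_iid - 0.5)              # step J: H_adj = H - (H_iid - 0.5)
```

For an i.i.d. Gaussian series, `hurst_adjusted` should hover near 0.5; persistent series push it above, antipersistent series below.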

By construction, since the Hurst exponent (H) is based on the estimation of the true range covered by the random variable per unit of time ((R/S)_n), the adjusted Hurst exponent’s fit is superior to the customary use of the SRTR, where H = 0.5 regardless of the time-series’ dynamics. The following figures serve the purpose of graphically comparing the out-of-sample fit obtained by the SRTR and the adjusted Hurst exponent. Two time-series are employed: an emerging markets stock index (MSCI EM, Bloomberg ticker MXEF) and the off-peak day-ahead electricity spot price at Palo Verde (Bloomberg ticker ELGFPVOF Index); according to their H_adj, they correspond to 99% statistically significant persistent and antipersistent time-series, respectively.

Each figure displays the observed time-series (black line), and two areas that correspond to the expected range that each time-series should be bounded to according to each assumption: the light-gray area corresponds to the SRTR (H = 0.5), whereas the dark-gray area corresponds to the estimated H_adj. Both areas are estimated as a BM and a FBM, respectively, preserving estimated means and standard deviations.

The MSCI EM series exhibits a 99% statistically significant persistence (H_adj = 0.58), which results in a broader range of possible price paths that better fits the 2007 increase (38.28% in 10 months) and the 2008 severe drop (63.53% in 10 months). The MSCI EM Index displaying significant persistence corroborates findings by León and Reveiz (2011) and Cajueiro and Tabak (2008), who document that less liquid and less developed markets tend to reveal persistent dynamics.

Figure B1: SRTR and adjusted Hurst fit. Panels: MSCI EM; Energy (ELGFPVOF Index).

Source: authors’ calculations.


Mandelbrot’s research not only applied a revised version of Hurst’s developments, but introduced a novel process to describe the behavior of financial prices, where the idealistic smoothness and continuity of BM is conveniently harmonized with the factual roughness and discontinuity of financial markets. This generalization of BM grants the process the ability to scale with different powers of time while preserving the shape of the distribution of returns (i.e. returns are scale-invariant), which indicates that returns on different time-scales can be re-scaled from or to the original process without modifying its statistical properties. Such a generalization was baptized Fractional Brownian Motion (FBM) because it implies that the process is self-similar or self-affine, which means that fractions or parts of the process preserve the characteristics of, and are related to, the whole or other fractions of the process; hence the term “fractal” for Mandelbrot’s theoretical developments.

Consequently, as defined by Greene and Fielitz (1979), FBM is a self-similar process that follows an n^H law, where its increments (i.e. fractional Gaussian noise) exhibit long-term dependence, with the Hurst exponent (H) determining the direction and intensity of such dependence. One of the key characteristics of FBM is that the span or window of interdependence can be said to be infinite, in contrast to traditional Markov processes (Mandelbrot and Van Ness, 1968), making any application of the Central Limit Theorem (e.g. to assume normality) flawed at best.

Monte Carlo Simulation of Fractional Gaussian Noise and FBM

Customary Monte Carlo simulation of a Gaussian noise (i.e. white noise) consists of the simulation of a considerably large number of independent and normally distributed random numbers with zero mean and unit variance, and the application of Euler’s discrete-time approximation for a BM process. Let the process’ mean and standard deviation be denoted by the scalars μ and σ, respectively; a column vector of p random numbers be denoted by ε, where ε ~ N(0, 1); Y_t the asset’s log-return during time t; and X_t the asset’s price at the end of time t. Then [F7] and [F8] exhibit next period’s (t + 1) vectors of p simulated log-returns (Y_t+1) and of p prices (X_t+1), respectively, which correspond to the mentioned Gaussian noise and BM:

Y_t+1 = μ + σε; ε ∼ N(0, 1) [F7]

X_t+1 = X_t e^(Y_t+1) [F8]

When simulating prices more than one period in the future, the p-sized ε column vector of independent and normally distributed random numbers becomes a p × q matrix, where p still corresponds to the number of simulations, whereas q corresponds to the number of periods in the future to be simulated; please note that since the random numbers are generated independently there is no time-dependence. Because log-returns are used to estimate the moments of the process, returns may be accumulated through n periods as in [F9], which results in Y_t→t+n. After calculating Y_t→t+n it is straightforward to obtain a p × q matrix X_t→t+n, which contains the successive prices of the variable as in [F10].

Y_t→t+n = Σ_{u=1…n} Y_t+u [F9]

X_t→t+n = X_t e^(Y_t→t+n) [F10]

It is usual to employ the simulation procedure just described to simulate Gaussian noise for more than one asset without disregarding the dependence existing across assets’ returns. This is customarily carried out by generating a p × a × q hyper-matrix of independent and normally distributed random numbers with zero mean and unit variance, where p and q stand for the number of simulations and the number of periods in the future to be simulated, whereas a corresponds to the number of assets to be considered. Each q-layer of the hyper-matrix is afterwards multiplied by the triangular matrix resulting from applying the Cholesky decomposition to the assets’ returns correlation matrix5, which at the end yields q matrices of normally distributed random numbers with zero mean and variance equal to the covariance matrix of assets’ returns; such a hyper-matrix of cross-dependent random numbers is denoted herein as ε*.

Let Σ and Θ stand for the estimated correlation matrix and for the Cholesky decomposition procedure, respectively. The first-period-in-the-future (t + 1) case of cross-dependency for each a-asset is found in the first layer of the hyper-matrix (ε*_1), as in [F12]. Subsequently, in order to simulate each a-asset’s first-period-in-the-future (t + 1) returns and prices, the a-asset’s mean (μ_a) and standard deviation (σ_a), along with the a-column from ε*_1, are used as in [F13] and [F14].

ε*_1 = ε_1 × Θ(Σ) [F12]

Y_a,t+1 = μ_a + σ_a ε*_a,1; ε*_a,1 ∼ N(0, Σ) [F13]

X_a,t+1 = X_a,t e^(Y_a,t+1) [F14]
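In code, the layer-by-layer multiplication reduces to one matrix product. A single-period, two-asset sketch follows; the correlation, moments and prices are hypothetical inputs:

```python
import numpy as np

rng = np.random.default_rng(7)

corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])        # hypothetical 2-asset correlation matrix (Sigma)
mu = np.array([0.0, 0.0])            # per-asset mean log-returns
sigma = np.array([0.01, 0.02])       # per-asset standard deviations
x_t = np.array([100.0, 50.0])        # current prices

theta = np.linalg.cholesky(corr)     # Theta(Sigma): lower-triangular factor

p = 10_000                           # number of simulations
eps = rng.standard_normal((p, 2))    # independent N(0, 1) draws
eps_1 = eps @ theta.T                # [F12]: cross-dependent draws (first layer)

y_next = mu + sigma * eps_1          # [F13]: per-asset log-returns
x_next = x_t * np.exp(y_next)        # [F14]: per-asset prices
```

The sample correlation of the transformed draws converges to the target 0.6 as p grows, since the covariance of `eps_1` is Θ Θᵀ = Σ by construction.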

Similar to the procedure previously presented, which transforms independent and normally distributed random numbers into cross-dependent and normally distributed numbers, it is also possible to simulate a fractional Gaussian noise. The major change consists of switching from asset-dependence to serial or time-dependence, which entails estimating autocovariance instead of covariance.

Although it is tempting to use standard autocovariance estimation, as acknowledged by Hurst (1951), Mandelbrot (1972) and Peters (1994), its consistency and robustness for detecting and measuring dependence, especially long-term dependence, is critically restricted to the Gaussian world; that is, customary autocovariance or autocorrelation is non-robust to changes in distribution, and should be used cautiously if time-series are not normal. However, as exhibited in [F5], an alternative method for assessing time-dependence is available, where the degree and sign of the dependence (i.e. persistence or antipersistence) is determined jointly by the variance, the Hurst exponent (H) and the n-period-in-the-future to be simulated.

The second time-series exhibits a 99% statistically significant antipersistence (H_adj = 0.36), typical of energy price time-series (León and Reveiz, 2011; Weron and Przybylowicz, 2000), which face several particularities such as market regulation and storage problems. The antipersistence results from energy prices constantly overcompensating past returns, which makes the series unwavering in the long term despite being extremely volatile (daily standard deviation around 14%). Consequently, unlike the MSCI EM time-series, in this case the SRTR range of possible price paths is significantly wider than the one obtained with the adjusted Hurst exponent, where the latter fits the series in an appropriate manner.


Let Ψ and Θ be the autocorrelation matrix resulting from the autocovariance estimated as in [F5] and the Cholesky decomposition procedure, correspondingly. Transforming a 1 × q vector of independent and normally distributed random numbers (ε) into time-dependent and normally distributed numbers (ε*) is done as in [F15], and X_t→t+n in [F17] exhibits a single simulated path n periods in the future. Simulating p price paths becomes straightforward if ε is a p × n matrix.

ε* = ε × Θ(Ψ) [F15]

Y_t→t+n = μ + σε*; ε* ∼ N(0, Ψ) [F16]

X_t→t+n = X_t e^(Y_t→t+n) [F17]
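The whole procedure (the autocovariance in [F5], its Cholesky factor, and the path construction in [F15] to [F17]) can be sketched as follows. Note that with unit variance and m = 1, the matrix built from [F5] has a unit diagonal for any H and therefore doubles as the autocorrelation matrix Ψ; parameter values are illustrative:

```python
import numpy as np

def fgn_autocorrelation(q: int, h: float, m: int = 1) -> np.ndarray:
    """Psi: q x q autocorrelation matrix of fractional Gaussian noise,
    from [F5] with unit variance (the diagonal equals 1 for any H)."""
    lag = np.abs(np.subtract.outer(np.arange(q), np.arange(q)))
    return 0.5 * ((lag + m) ** (2.0 * h) - 2 * lag ** (2.0 * h)
                  + np.abs(lag - m) ** (2.0 * h))

def simulate_fbm(x0: float, mu: float, sigma: float, h: float,
                 q: int, p: int, seed: int = 0) -> np.ndarray:
    """p price paths, q periods ahead, via the Cholesky method [F15]-[F17]."""
    rng = np.random.default_rng(seed)
    theta = np.linalg.cholesky(fgn_autocorrelation(q, h))   # Theta(Psi)
    eps_dep = rng.standard_normal((p, q)) @ theta.T         # [F15]: time-dependent draws
    y = np.cumsum(mu + sigma * eps_dep, axis=1)             # [F16] accumulated log-returns
    return x0 * np.exp(y)                                   # [F17]: simulated price paths

paths = simulate_fbm(x0=100.0, mu=0.0, sigma=0.01, h=0.6, q=500, p=5)
```

For H = 0.5 the matrix Ψ collapses to the identity and the procedure reduces to ordinary BM; the O(q³) Cholesky factorization is the computational bottleneck noted in the introduction.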

Monte Carlo Simulated FBM

This section displays some illustrative results based on the procedure previously exhibited. The first two moments of the distribution (i.e. mean and standard deviation) will be 0.00 and 0.01, and will remain constant across the different cases of Hurst6 exponents herein considered. Dependence will be estimated as in [F5], simulations as in [F17], and three cases of Hurst exponents (H) will be considered (Table 1).

Simulating five price paths for 500 periods in the future with the three cases of Hurst exponents results in the following plots (Figure 1). Despite being a small number of simulations, it is clear that each case has unique characteristics related to the way that the time-series diffuse across time: antipersistent (persistent) simulated time-series exhibit the lowest (highest) dispersion around the mean. Note that the starting price (100) and the random numbers used to simulate the processes (ε) are the same for the three cases of Hurst exponents; therefore, differences between cases are only due to the transformation of the random variables when multiplied by the corresponding Cholesky decomposition of the covariance matrix as in [F15]. This allows for objective comparisons between the three cases.

It is remarkable to find that although the graphical difference between the simulated time-series is rather clear, their distributions’ first, second, third and fourth moments do not differ greatly from each other; that is, as exhibited in Table 2, they asymptotically preserve the mean (0.00), standard deviation (0.01), skewness (0.00) and kurtosis (3.00). These results converge with Sornette (2003) regarding the inability of frequency distributions to capture other types of structures (i.e. serial dependence) typical of complex time-series such as financial returns.

In order to further assess the difference between the three dependence cases, 5,000 price paths for each Hurst exponent case were simulated (Figure 2). The difference in the way each case simulates prices is unmistakable: the wider paths (black) correspond to the persistence case, the narrower (red) to the antipersistence case, and the intermediate (yellow) to the independence case. As expected, the higher the Hurst exponent, the broader the range of simulated prices.

As exhibited in Table 3, the means of the first four distributional moments throughout the 5,000 simulated paths show that they do not differ significantly across the three cases, whilst the mean adjusted Hurst exponent effectively replicates each dependence case. Once again, this confirms that frequency distributions and their moments fail to capture other types of structures in time-series, such as serial dependence. This is rather relevant since it is well-documented that focusing on the frequency of returns ignores the importance of their sequence, which could be the main reason why Value at Risk, Expected Shortfall or Extreme Value Theory tend to underestimate risk (Malevergne and Sornette, 2006; Los, 2005; Sornette, 2003), whilst the customary use of GARCH models is insufficient to account for observed volatility.8

Instead of focusing on the distribution of each simulation, it is advisable to focus on the end-of-the-simulations’ prices and the total returns. Figure 3 exhibits the probability plots for the end-of-the-simulations’ prices (left panel) and for the total returns (right panel), for each Hurst exponent case.

The independence case corresponds to the normality-of-returns assumption (i.e. log-normality of prices) with the original (assumed) mean and standard deviation. With the same volatility input, the persistence case exhibits a significant departure from the customary BM, where the most evident characteristic is the higher frequency and the larger magnitude of extreme returns when compared to the independence case. The converse is also true:

Table 1: Three cases of Hurst exponents.7

Case | Description
H_ant = 0.4 | Corresponds to long-term antipersistence
H_ind = 0.5 | Corresponds to long-term non-dependence
H_per = 0.6 | Corresponds to long-term persistence

Source: authors’ design.

Figure 1: Three cases of Hurst exponents: 500 daily returns (5 simulations). Panels: H_ant = 0.4, H_ind = 0.5, H_per = 0.6.

Source: authors’ calculations.


antipersistence results in a lower frequency and a smaller magnitude of extreme returns.

It is remarkable that Figure 3 also displays that the persistence and antipersistence cases approximately preserve the independence case’s shape. For instance, the right panel of Figure 3 depicts that the three cases’ simulated returns may be approximated through a linear fit, which corresponds to the Gaussian or normality assumption. Each case displays a particular slope, where the higher the dependence the lower the slope; that is, the more persistent (antipersistent) the time-series, the larger and more likely (the smaller and less likely) the simulated returns when compared with customary BM under the independence assumption.

These results regarding Figure 3 correspond to an attribute of the Gaussian distribution highlighted by Haug and Taleb (2009): it is possible to express any probability distribution in terms of the Gaussian, even one with fat tails, by varying the standard deviation at the level of the density of the random variable. In the case at hand, the Gaussian can express a broad class of stochastic processes by changing how volatility behaves with respect to the time horizon.
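This volatility-scaling attribute can be made concrete with a short sketch. This is an illustration, not the authors' code: the daily volatility matches the article's assumed standard deviation, the 500-day horizon matches the simulated paths, and the T**H rule is the standard FBM generalization of the square-root-of-time rule.

```python
# Illustrative sketch: under BM, volatility scales with the square root of the
# horizon (H = 0.5); under FBM it scales as T**H, so persistence (H > 0.5)
# inflates long-horizon volatility and antipersistence (H < 0.5) deflates it.

daily_sigma = 0.01   # daily standard deviation assumed in the article
horizon = 500        # days, matching the simulated paths

for label, H in [("antipersistence", 0.4), ("independence", 0.5), ("persistence", 0.6)]:
    scaled_sigma = daily_sigma * horizon ** H
    print(f"{label} (H = {H}): {horizon}-day volatility = {scaled_sigma:.4f}")
```

The three scaled figures diverge sharply at long horizons even though the daily input is identical, which is exactly the effect visible in the three cases' probability-plot slopes.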

Comparing FBM and the financial industry's standard GARCH(1,1)

As documented before, Peters (1992) discarded the ability of short-term memory or weak-dependence econometric models (e.g. AR, ARMA, GARCH) to capture or replicate the type of enduring dependence found in financial time-series. This section provides evidence of the convenience of using FBM over the well-known GARCH(1,1) model, even when the latter includes a non-Gaussian (heavy-tailed) distribution of errors.

Based on a common set of first two moments of the normal distribution (m = 0.00 and s = 0.10), this exercise compares the long-term dependence simulation based on FBM against short-term dependence simulations based on two benchmark volatility models: GARCH(1,1) with Gaussian distributed errors (e ~ N(0,1)) and GARCH(1,1) with t-distributed errors (e ~ t(0,1,ν)), where ν corresponds to the degrees of freedom of the t-distribution.

The two GARCH(1,1) short-term dependence volatility models are customarily defined as:

σ²ₜ = ω + α x²ₜ₋₁ + β σ²ₜ₋₁;  ω ≥ 0, α, β ≥ 0, α + β < 1 [F18]

Parameters α and β correspond to Dowd's (2005) choice of a standard configuration of the financial industry's GARCH(1,1) model9: α = 0.1 and β = 0.85. The ω parameter is calculated as follows, where the long-run (unconditional) variance (Ω) is set equal to the standard deviation of the process (s = 0.10):

ω = Ω × (1 − α − β) [F19]
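The recursion [F18] and the back-out of ω in [F19] can be sketched as follows. This is not the authors' code: it reads the text literally in setting Ω = 0.10, and the seed and path length are illustrative.

```python
import numpy as np

# Sketch of the GARCH(1,1) recursion [F18] with Dowd's standard parameters,
# where omega is backed out from the long-run variance Omega as in [F19].
alpha, beta = 0.1, 0.85
Omega = 0.10                                # long-run variance, as set in the text
omega = Omega * (1 - alpha - beta)          # [F19]

rng = np.random.default_rng(0)              # illustrative seed
n = 500
x, var = np.empty(n), Omega                 # start at the unconditional variance
for t in range(n):
    x[t] = np.sqrt(var) * rng.standard_normal()
    var = omega + alpha * x[t] ** 2 + beta * var   # [F18]

print(round(omega, 6))   # 0.005
```

Because α + β = 0.95 < 1, the conditional variance mean-reverts toward Ω, which is the short-term dependence behavior contrasted with FBM below.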

Table 2: Three cases of Hurst exponents: 500 daily returns' statistical properties (5 simulations).

Case          Simulation   Mean      Std. Dev.   Skewness   Kurtosis
Hant = 0.4    1             0.0000   0.0097       0.1317    3.1336
              2            –0.0000   0.0104       0.0855    2.6112
              3             0.0004   0.0101      –0.1388    2.8518
              4             0.0002   0.0094      –0.0271    3.1076
              5             0.0005   0.0101       0.0083    3.2630
Hind = 0.5    1            –0.0000   0.0097       0.0885    3.1151
              2            –0.0002   0.0103       0.0768    2.6063
              3             0.0006   0.0101      –0.1431    2.8284
              4             0.0002   0.0093      –0.0575    3.1332
              5             0.0009   0.0101       0.0277    3.1620
Hper = 0.6    1            –0.0002   0.0096       0.0449    3.0433
              2            –0.0005   0.0101       0.0588    2.6415
              3             0.0010   0.0101      –0.1422    2.7798
              4             0.0002   0.0092      –0.0803    3.1625
              5             0.0016   0.0100       0.0548    3.0622
Assumed                     0.0000   0.0100       0.0000    3.0000

Source: authors' calculations.

Table 3: Three cases of Hurst exponents: 2,500 daily returns' statistical propertiesᵃ (mean of 5,000 simulations).

Case          Mean     Std. Dev.   Skewness   Kurtosis   Adj. H Exp.
Hant = 0.4    0.0002   0.0099      0.0012     2.9992     0.4199
Hind = 0.5    0.0002   0.0099      0.0012     2.9998     0.5020
Hper = 0.6    0.0002   0.0098      0.0011     2.9992     0.5816
Assumed       0.0000   0.0100      0.0000     3.0000

a. Estimations are based on a 2,500 daily returns time-series; this is due to documented issues regarding the inconvenience of using short series for estimating the adjusted Hurst exponent (León and Vivas, 2010; León and Reveiz, 2011). Source: authors' calculations.

Figure 2: 5,000 price paths for each case of Hurst exponent.

Source: author's calculations.

The FBM long-term dependence model is defined as in F5 and F15–F17, where Hadj = 0.58, corresponding to a significantly persistent variable at the 99% confidence level. This choice matches the adjusted Hurst exponent estimated for the MSCI EM index on a 1995–2010 daily database, and it is similar to the Hadj estimated for the Emerging Market Bond Index (EMBI); nevertheless, it is significantly lower than the estimations provided by León and Vivas (2010) and León and Reveiz (2011) for some emerging markets' EMBI Global and individual stocks, which may reach Hadj = 0.65.

Each model was used to simulate 1,000 paths, each comprising 500 days. Figure 4 contains the comparison between simulated FBM and GARCH(1,1), where the left panel exhibits the GARCH(1,1) with standard Gaussian distributed errors (e ~ N(0,1)), and the right panel the GARCH(1,1) with t-distributed errors with 4 degrees of freedom10 (e ~ t(0,1,4)). FBM paths are presented in red, and GARCH(1,1) paths in black.

The difference in the way each model simulates prices is unmistakable: the wider paths (red) correspond to the FBM model, whilst the narrower (black) correspond to the GARCH(1,1) models. As expected, the GARCH(1,1) model with t-distributed errors simulated more volatile returns and fatter tails, but still below the level attained by the FBM model. Figure 5 displays the probability plots of the 500-day total returns, where the ability of FBM to generate fatter tails in the long run is again noticeable.

These results concur with Sornette's (2003) statement regarding the inability of the customary use of GARCH models to account for the dependencies observed in real data. Results also stress the suitability of modeling long-term dependence via FBM, which may attain higher levels of leptokurtosis and serial dependence in a more parsimonious way.11 Nevertheless, even though in the long term (i.e. beyond 50 days) the FBM provides more volatile and persistent returns, the GARCH(1,1) model with t-distributed errors attained better

Figure 3: Probability plots (left panel: end-of-the-simulations' prices; right panel: total returns). Source: author's calculations.

Figure 4: Simulated paths for FBM and GARCH(1,1) (left panel: e ~ N(0,1); right panel: e ~ t(0,1,4)). Source: author's calculations.

results in the short term; this is intuitive since GARCH models are short-term dependence processes, with volatility displaying mean-reverting features.

Finally, the ability to model financial returns' well-known slow convergence in distribution to a Gaussian law in a parsimonious manner is precious for the market practitioner. By better fitting the behavior of financial returns through time, FBM may generate short-term and long-term fat tails, with evident applications in modeling volatility smiles for option valuation, or in enhanced risk assessment.

Final Remarks

BM is insufficient to describe some well-known characteristics of financial time-series, of which short-term volatility clustering is the most documented and modeled. Nevertheless, as documented by León and Vivas (2010) and León and Reveiz (2011), long-term dependence has been largely ignored even though its presence in financial time-series has been widely verified by several authors (e.g. Mandelbrot, 1972; Los, 2003; Peters, 2004; Menkens, 2007).

An appealing alternative for tackling some of the flaws of BM and of the financial industry's standard GARCH models is to generalize BM in order to recognize the presence of long-term memory. Such an alternative consists of using factual data to estimate the way volatility scales through time. Fractal theory, developed by Benoit Mandelbrot, comprises several techniques for this purpose, including what is commonly referred to as FBM.

As demonstrated in this article, combining FBM with Hurst exponents typical of financial time-series allows for simulating processes which display characteristics analogous to those found in financial markets, where serial dependence may result in acute and sustained price changes throughout time. With the same expected (mean) return and dispersion (standard deviation), FBM is capable of getting closer to the true behavior of financial time-series, where large and enduring returns are more likely than standard BM assumes or is able to simulate. Likewise, when compared to the financial industry's benchmark GARCH(1,1) model, the FBM is able to capture the well-documented slow convergence in distribution to a Gaussian law; this holds even if the GARCH model considers errors distributed as a heavy-tailed t-distribution. These abilities of the FBM model result from the Gaussian distribution's ability to express a broad class of stochastic processes by changing how volatility behaves with respect to the time horizon.

Simulating prices or returns that comply with an estimated persistence metric such as the Hurst exponent is useful. Besides recognizing the impact of the investment horizon for portfolio optimization (León and Reveiz, 2011), market, credit and liquidity risk assessment may profit from the ability to model (mostly overlooked) large and continuous changes in prices and quantities. The most recent episode of financial crisis provides two relevant insights: (i) according to the IMF (2010), although liquidity risk management tools existed prior to the most recent episode of financial crisis, they were unprepared for a large and long-lasting shock; (ii) as a consequence of the failure of volatility scaling methods, the Basel Committee on Banking Supervision (BIS, 2009) changed the quantitative standards for calculating Value at Risk, where the customary use of the SRTR has been severely restricted to those cases in which technical support exists.

Other applications of FBM simulation are also straightforward. Processes exhibiting significant dependence may profit from its convenience. One topic worth approaching with this method is the simulation of payments for identifying and assessing sources of systemic risk within a payments system. Instead of using historical simulation methods for simulating payments within the payments system, as in León et al. (2011), being able to simulate payments without ignoring their documented persistence (Bouchaud et al., 2008; Lillo and Farmer, 2004) allows for more flexible modeling.

Nevertheless, some challenges remain. First, as documented by Di Matteo (2007), the Hurst exponent may vary over time, which would result in what is referred to as multi-scaling or multi-fractal processes, in contrast to the uni-scaling or uni-fractal process used in this article. This is not a trivial challenge, since estimating the Hurst exponent requires long time-series to obtain a feasible estimate.
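The long-series requirement can be illustrated with a bare-bones rescaled-range (R/S) estimator. This is a generic textbook sketch, not the adjusted estimator of León and Vivas (2010) used in this article; the window sizes, series length and seed are arbitrary illustrative choices.

```python
import numpy as np

def hurst_rs(x, window_sizes):
    """Plain (non-adjusted) R/S estimate of the Hurst exponent."""
    logs_n, logs_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):   # non-overlapping segments
            seg = x[start:start + n]
            dev = np.cumsum(seg - seg.mean())
            r = dev.max() - dev.min()               # range of cumulative deviations
            s = seg.std(ddof=0)
            if s > 0:
                rs_vals.append(r / s)
        logs_n.append(np.log(n))
        logs_rs.append(np.log(np.mean(rs_vals)))
    # slope of log(R/S) on log(n) estimates H
    return np.polyfit(logs_n, logs_rs, 1)[0]

rng = np.random.default_rng(1)
h = hurst_rs(rng.standard_normal(2500), [10, 20, 50, 100, 250, 500])
print(round(h, 2))   # near 0.5 for i.i.d. noise, biased upward in finite samples
```

The finite-sample upward bias visible even here is precisely what the adjusted Hurst exponent of León and Vivas (2010) corrects for.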

Second, it is not clear how to simulate a process which captures serial dependence and cross-dependence simultaneously. When using the Cholesky decomposition to transform independent random numbers into serially dependent random numbers, and subsequently applying the same method to transform the latter into cross-dependent random numbers, the simulated series' Hurst exponent diverges from the assumed one. An intuitive approach to circumvent this inconvenience is to estimate the Hurst exponent from portfolios' time-series rather than from individual assets, and to simulate portfolios' processes directly; however, this may be impractical for some purposes.

Lastly, the Cholesky method for simulating price or return paths may become computationally cumbersome. In order to simulate paths of q periods it is unavoidable to work with several matrices of size q × q (i.e. the covariance matrix and its Cholesky decomposition), which may turn computationally demanding when simulating a non-small number of periods in the future (e.g. more than 1,000). Other approaches may serve the purpose of circumventing this shortcoming.
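The q × q construction just described can be sketched as follows, under stated assumptions: the autocovariance is the standard one for fractional Gaussian noise (the increments of FBM), and the function name, length and seed are illustrative rather than the authors' implementation.

```python
import numpy as np

def fgn_cholesky(n, H, sigma=1.0, seed=0):
    """Serially dependent Gaussian draws via the Cholesky method."""
    k = np.arange(n)
    # fGn autocovariance: gamma(k) = sigma^2/2 * (|k+1|^2H - 2|k|^2H + |k-1|^2H)
    gamma = 0.5 * sigma**2 * (np.abs(k + 1) ** (2 * H)
                              - 2 * np.abs(k) ** (2 * H)
                              + np.abs(k - 1) ** (2 * H))
    cov = gamma[np.abs(k[:, None] - k[None, :])]   # Toeplitz covariance, n x n
    L = np.linalg.cholesky(cov)                    # the q x q factor discussed above
    z = np.random.default_rng(seed).standard_normal(n)
    return L @ z                                   # one serially dependent path

noise = fgn_cholesky(500, H=0.6)   # one persistent 500-step increment path
```

For H = 0.5 the covariance collapses to the identity and the method returns plain white noise, which is the BM special case; the cost of building and factoring the n × n covariance is what makes long horizons demanding.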

As the Research and Development Manager at the Central Bank of Colombia’s Financial Infrastructure Oversight Department, Carlos León is responsible for investigating and

Figure 5: Total Returns’ probability plots.

Source: author's calculations.

ENDNOTES
1. Please note that if H = 0.5 in [F5], FBM reduces to BM; that is, the right side of the equation equals zero.
2. Some explanations for financial assets' return persistence are found in human behavior, since the latter contradicts the rationality assumption in several ways (e.g. investors' choices are not independent, and they are characterized by non-linear and imitative behavior; investors resist changing their perception until a new credible trend is established; and investors don't react to new information in a continuous manner, but rather in a discrete and cumulative way). Other explanations for financial assets' return persistence have to do with the importance of economic fundamentals, and the use of privileged information. Alternatively, some authors conclude that markets' liquidity makes instantaneous trading impossible, leading to transaction splitting and decision clustering, resulting in market prices that don't fully reflect information immediately, but incrementally. León and Reveiz (2011) contains references for each explanation.
3. For a discussion regarding the use of overlapping and non-overlapping segments, please refer to Nawrocki (1995) and Ellis (2007).
4. León and Vivas (2010) explore an alternative: reshuffling the original time-series significantly (e.g. 1,000 times) and estimating the Hiid following steps A to I. Results are not significantly different.
5. The Cholesky decomposition of the correlation matrix (Σ) consists of estimating matrix L in [F11], where L is a positive-definite lower triangular matrix. If Σ is a g-size square correlation matrix, multiplying the resulting g-size square matrix L with an n × g matrix of uncorrelated samples produces an n × g matrix of correlated samples, where their correlation approximates Σ.

Σ = LLᵀ [F11]

The reader is referred to Cuthbertson and Nitzsche (2004) for a more comprehensive explanation.
6. Please note that the adjusted Hurst exponent developed by León and Vivas (2010) and León and Reveiz (2011) should be used. This adjustment is convenient since the non-adjusted Hurst exponent is estimated under the assumption of infinite or near-infinite-length time-series, which yields biased (overestimated) Hurst exponents when using finite time-series; thus, not adjusting the Hurst exponent for such bias yields unreliable (persistent) results.

7. Please note that these cases correspond to factual figures. Based on results by León and Reveiz (2011) and León and Vivas (2010), independent or near-independent processes (H = 0.5 and H ≈ 0.5) were found for the S&P500, MSCI World and S&P Agriculture & Livestock indexes; antipersistence was found for S&P Precious Metals (0.48) and US energy indexes (between 0.34 and 0.42), where only the latter were significant at the 95% confidence level; persistence was found for Colombian stock and fixed income indexes (i.e. IGBC and IDXTES, 0.61 and 0.60, respectively), the MSCI Emerging Markets index (0.58) and the EMBI price index (0.59), all significant at the 95% confidence level.
8. Sornette (2003) demonstrates that the financial industry's standard GARCH(1,1), even when using a t-distribution with 4 degrees of freedom, is unable to account for the dependencies observed in real DJIA data. In order to account for such dependence, Sornette (2003) and Reveiz and León (2010) suggest using a measure such as drawdowns.
9. It is common to obtain estimates of β over 0.7, while α is usually less than 0.25 (Dowd, 2005).
10. Using 4 degrees of freedom makes the GARCH(1,1) with t-distributed errors able to model severe leptokurtosis (fat tails).
11. It is important to highlight that using the FBM only requires the estimation of the first two moments of the distribution (mean and variance) and an additional parameter: the adjusted Hurst exponent. Either of the two GARCH(1,1) models used herein requires estimating more parameters (α, β, ω), and an additional one when using the t-distributed errors variant. Moreover, despite being the financial industry's standard, fitting real data with a GARCH(1,1) model is not trouble-free, and it usually violates the α + β < 1 condition.
12. Authors' preliminary versions of published documents (*) are available online (http://www.banrep.gov.co/publicaciones/pub_borra.htm).

REFERENCES12

Bachelier, L. 1900. "Théorie de la Spéculation", Annales de l'Ecole Normale Supérieure, Third Series, 17.
BIS. 2009. "Revisions to the Basel II Market Risk Framework", Basel Committee on Banking Supervision, Bank for International Settlements.
Bouchaud, J-P., Farmer, J.D. and Lillo, F. 2008. "How Markets Slowly Digest Changes in Supply and Demand".
Cajueiro, D.O. and Tabak, B.M. 2008. "Testing for Long-Range Dependence in World Stock Markets", Chaos, Solitons and Fractals, No. 37.
Cuthbertson, K. and Nitzsche, D. 2004. Quantitative Financial Economics, John Wiley & Sons.
Di Matteo, T. 2007. "Multi-scaling in Finance", Quantitative Finance, 7(1): February.
Dowd, K. 2005. Measuring Market Risk, Wiley Finance.
Greene, M.T. and Fielitz, B.D. 1979. "The Effect of Long-Term Dependence on Risk-Return Models of Common Stocks", Operations Research, 27(5).
Haug, E.G. and Taleb, N.N. 2009. "Why We Have Never Used the Black-Scholes-Merton Option Pricing Formula".
Hurst, H. 1951. "Long-Term Storage Capacity of Reservoirs", Transactions of the American Society of Civil Engineers, No. 116.
International Monetary Fund (IMF). 2010. "Systemic Liquidity Risk: Improving the Resilience of Financial Institutions and Markets", Global Financial Stability Report, October.
Jennane, R., Harba, R. and Jacquet, G. 2001. "Méthodes d'analyse du mouvement brownien fractionnaire: théorie et résultats comparatifs", Traitement du Signal, 18(5-6).
León, C. and Reveiz, A. 2011. "Portfolio Optimization and Long-Term Dependence", BIS Papers, No. 58, Bank for International Settlements, October.*
León, C. and Vivas, F. 2010. "Dependencia de Largo Plazo y la Regla de la Raíz del Tiempo para Escalar la Volatilidad en el Mercado Colombiano", Borradores de Economía, No. 603, Banco de la República.*
León, C., Machado, C.L., Cepeda, F. and Sarmiento, M. 2010. "Too-connected-to-fail Institutions and Payments System's Stability: Assessing Challenges for Financial Authorities", Borradores de Economía, No. 644, Banco de la República.*
Lillo, F. and Farmer, J.D. 2004. "The Long Memory of the Efficient Market", Studies in Nonlinear Dynamics & Econometrics, 8(3).

developing methodologies for comprehensively overseeing the local market's financial institutions and infrastructures, currently focusing on the assessment of systemic importance and intraday liquidity risk. Present interests include network theory, fuzzy logic, fractal theory, portfolio optimization, risk management, and simulation methods. Prior working experience includes

positions as Researcher for the Central Bank of Colombia’s Foreign Reserves Department and for the Operations and Market Development Department, and Head of the Risk Management

Group at Colombia’s Ministry of Finance-Public Credit Directorate. He holds a M.Sc. in Banking and Finance from HEC-Université de Lausanne (Switzerland), a M.A. in International Economics and a B.A. in Finance and International Relations form Externado de Colombia University (Colombia).

As a Lead Investment Strategist in charge of the Quantitative Strategies Group at the World Bank Treasury, Alejandro Reveiz is responsible for constructing and analyzing fixed income portfolios, providing investment policy advice to clients and developing research and investment tools. Prior to joining the World Bank Group he was in charge of the implementation of monetary and foreign exchange policy, as well as the development of the foreign exchange and capital markets, at Colombia's Central Bank. He also has portfolio management experience both as Head of Asset Management at the Latin American Reserves Fund (FLAR) and at Colombia's Central Bank, where he headed Colombia's foreign reserves management and managed the Oil Stabilization Fund. He holds a Ph.D. in Financial Economics from the University of Reading (UK). He is an industrial engineer from Universidad de los Andes in Colombia, where he also studied economics.

Los, C.A. 2005. "Why VAR Fails: Long Memory and Extreme Events in Financial Markets", The ICFAI Journal of Financial Economics, 3(3).
Los, C.A. 2003. Financial Market Risk, Routledge.
Malevergne, Y. and Sornette, D. 2006. Extreme Financial Risks: From Dependence to Risk Management, Springer-Verlag.
Mandelbrot, B. and Van Ness, J.W. 1968. "Fractional Brownian Motions, Fractional Noises and Applications", SIAM Review, 10(4).
Mandelbrot, B. and Wallis, J. 1969. "Robustness of the Rescaled Range R/S in the Measurement of Noncyclic Long-Run Statistical Dependence", Water Resources Research, No. 5.
Mandelbrot, B. 1972. "Statistical Methodology for Nonperiodic Cycles: from the Covariance to the R/S Analysis", Annals of Economic and Social Measurement, NBER, 1(3).
Mandelbrot, B. 1963. "The Variation of Certain Speculative Prices", The Journal of Business, 36(4).
McLeod, A.I. and Hipel, K.W. 1978. "Preservation of the Rescaled Adjusted Range", Water Resources Research, 14(3): June.
Menkens, O. 2007. "Value at Risk and Self-Similarity", Numerical Methods for Finance (Eds. Miller, J., Edelman, D., Appleby, J.), Chapman & Hall/CRC Financial Mathematics Series.
Nawrocki, D. 1995. "R/S Analysis and Long-Term Dependence in Stock Market Indices", Managerial Finance, 7(21).
Peters, E.E. 1989. "Fractal Structure in the Capital Markets", Financial Analysts Journal, 4(45).
Peters, E.E. 1992. "R/S Analysis using Logarithmic Returns", Financial Analysts Journal, 6(48).
Peters, E.E. 1994. Fractal Market Analysis, John Wiley & Sons.
Peters, E.E. 1996. Chaos and Order in the Capital Markets, John Wiley & Sons.
Reveiz, A. and León, C. 2010. "Efficient Portfolio Optimization in the Wealth Creation and Maximum Drawdown Space", Interest Rate Models, Asset Allocation and Quantitative Techniques for Central Banks and Sovereign Wealth Funds (Eds. Berkelaar, A., Coche, J., Nyholm, K.), Palgrave Macmillan.*
Samuelson, P. 1965. "Rational Theory of Warrant Pricing", Industrial Management Review, 6(2): 13-39.
Sornette, D. 2003. Why Stock Markets Crash, Princeton University Press, New Jersey.
Sun, W., Rachev, S. and Fabozzi, F.J. 2007. "Fractals or IID: Evidence of Long-Range Dependence and Heavy Tailedness from Modeling German Equity Market Returns", Journal of Economics & Business, 59(6).
Weron, R. and Przybylowicz, B. 2000. "Hurst Analysis of Electricity Price Dynamics", Physica A, 283(3).
Willinger, W., Taqqu, M. and Teverovsky, V. 1999. "Stock Market Prices and Long-Range Dependence", Finance and Stochastics, No. 3.


