
Key Concepts + Figures


    each other. The goal is to gain when the securities converge to their appropriate values over

    time.

    23. Arbitrage is the process of the two securities converging to their appropriate market value.

    24. The convertible arbitrage style involves taking a long position in a convertible bond and selling

    short the associated stock.

    25. The main source of return of the convertible arbitrage style comes from a decline in the price

    of the stock. The main risk is from the possibility of the stock increasing in value, but this is

    hedged with the bond's embedded option, which is generally believed to be undervalued.

    26. There are many fixed income arbitrage strategies. To earn profits, they can use tax loopholes,

    different laws and tax policies around the world, projected changes in yield curves around

    the world, and other factors. Often, the manager will create the position and take offsetting

    positions to neutralize interest-rate risk.

    27. The equity market neutral style uses a long/short strategy and has a stated goal of eliminating

    market risk (i.e., reducing beta to zero), with offsetting long and short positions.

    28. Usually, an equity market neutral manager uses pair trading, which involves buying one stock
    that is undervalued and selling short one that is overvalued, in proportions that result in a beta
    equal to zero. The main source of return comes from corrections of the asset prices.

    29. In contrast to the equity market neutral style, the equity long/short style takes short positions

    but does not explicitly try to set market risk equal to zero.

    30. The index arbitrage style generally attempts to exploit mispricings between an index and its

    associated derivatives.

    31. The mortgage-backed securities arbitrage style seeks to earn a profit from mispricing of these

    assets, relative to Treasury securities, based upon their uncertain credit quality and

    prepayment risk.

    32. A fund of funds will allocate capital to different managers of hedge funds who may or may not

    use the same style. The goal is to diversify across managers and/or styles.

    33. In contrast to a fund of funds, a multi-strategy fund tends to have the capital under the

    management of one person or office. It may use many styles, which can provide some

    diversification, but it may focus more on trying to time the styles.

    34. High net worth individuals are currently the main source of the assets under management for

    hedge funds.

    35. Institutions are expected to be a major source of capital for hedge funds in the future, but

    individuals will continue to supply capital.


    2. Measuring Return

    1. The lack of transparency in the hedge fund market is caused by several factors including the

    fact that they are privately organized, regulations restricting hedge fund marketing, and managers deliberately keeping their strategies a secret.

    2. The trend towards hedge fund self-institutionalization is the result of growth in the industry,

    increased interest from institutions like pensions, and an attempt to reduce the likelihood of

    further government regulation.

    3. Position transparency means revealing the positions in a hedge fund, and it often includes a

    great deal of information. Risk transparency is achieved by revealing a summary of exposures

    to various risks and is more concise than position transparency.

    4. The goal of equalization is to make sure the investors in the fund pay an appropriate portion of

    the manager's fee for the measurement period.

    5. The main reason behind equalization is the possible distortions and inequitable fee payments

    that can be caused by investors contributing capital during the measurement period.

    6. In the absence of adjustments, each investor would pay a portion of the incentive fee simply

    based upon the number of shares owned.

    7. The free ride syndrome occurs when an investor buys shares in the middle of a measurement

    period at a time when the shares have declined. The shares eventually recover and exceed

    their beginning period value, and there are no fee adjustments. In such a case, the middle-of-period
    investors pay no incentive fee on the portion of their gain earned from the recovery of the share
    price back to its beginning-of-period value; they get a free ride on that gain.

    8. Onshore funds are usually closed-ended and offshore funds tend to be open-ended. Being

    open-ended introduces the need for adjustments in the payment of fees.

    9. The multiple share approach is the practice of issuing a new class of shares for each new

    inflow of capital to a hedge fund.

    10. The main advantage of the multiple share approach is being able to keep track of how much

    of the incentive fee each investor must pay. The main disadvantage is the fact that the number

    of classes of shares can become quite large and difficult to track individually.

    11. The equalization factor approach involves setting up a special account to be associated with

    each inflow of cash that occurs in the middle of a measurement period. The account

    represents a fee adjustment to the middle-of-period investors' portion of the incentive fee.

    12. Crystallization occurs at the end of the accounting period when there is a calculation of any

    remaining equalization factors attributable to shareholders when the manager is paid the

    incentive fee.

    13. The equalization approach allows the NAV per share to be calculated as the gross asset value

    minus the incentive fee earned at any given point.

    14. The simple equalization approach records the different net asset values for each investor

    based upon the time of the investment. At the end of the period, it assigns the lowest NAV of

    all the shares to each share and makes up the differences in NAVs by issuing additional,

    fractional shares to each investor who purchased shares at the higher NAVs during the period.

    15. The simple equalization approach differs from the multiple-share approach in that the former

    consolidates all its shares to one NAV at the end of each measurement period. The multiple-

    share approach continues to record each series separately from period to period.

    16. In formula terms, the net return (holding period return) for period t is:

    r_t = (NAV_t - NAV_{t-1}) / NAV_{t-1}

    The gross return is:

    R_t = NAV_t / NAV_{t-1} = 1 + r_t


    17. A compounded return for n periods is:

    R_{1,n} = (1 + r_1)(1 + r_2)...(1 + r_n) - 1

    18. Gross returns are computed before deducting management fees and other expenses. Net

    returns have had management fees and other expenses deducted.
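    A minimal sketch of the return calculations in concepts 16-18, using made-up NAV figures (the NAV values and variable names are illustrative assumptions, not from the text):

```python
# Minimal sketch of the return definitions above (hypothetical NAV series).
navs = [100.0, 104.0, 101.0, 108.0]  # end-of-period NAVs, assumed example data

# Net (holding period) return and gross return for each period.
net_returns = [(navs[t] / navs[t - 1]) - 1 for t in range(1, len(navs))]
gross_returns = [1 + r for r in net_returns]

# Compounded return over all n periods: product of gross returns minus one.
compounded = 1.0
for g in gross_returns:
    compounded *= g
compounded -= 1

print(net_returns)   # e.g., [0.04, -0.0288..., 0.0693...]
print(compounded)    # equals navs[-1] / navs[0] - 1 = 0.08
```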

    19. The list of approaches to assign prices to determine net asset value includes the average

    quote, the average quote with top and bottom removed, a median quote, and a subjective

    judgment quote.

    20. Valuation of illiquid assets usually includes a liquidity adjustment.

    21. Cash accounting only records income when received and expenses when paid. Accrual

    accounting matches income with expenses.

    22. Trade date is the day the manager executes the order. Settlement date is when the cash

    transfers to settle the executed order.

    23. Let N equal the number of sub-periods in a year; then the annualized return for a return R_i earned over i sub-periods is:

    R_annual = (1 + R_i)^(N/i) - 1

    24. When calculating the performance of several funds, an equal weighting approach is equivalent

    to a simple average of the returns of the funds.

    25. The equal weighting approach is essentially a specific case of a weighted average that

    assigns each fund return a weight of 1/N, where N is the number of funds.

    26. An asset weighting approach for a portfolio of funds involves weighting each return by the ratio

    of the assets in the respective fund to the sum of all the assets in the funds being analyzed.

    27. Multiplying the individual fund returns by their respective weights and summing the resulting

    weighted returns gives the aggregate return.

    28. An arbitrary weight approach generally involves choosing weights in an arbitrary way that may

    vary over time, with one caveat that the weights must sum to one.

    29. The median fund return is the middle return of the sorted returns of a group of funds being

    analyzed.
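    A short sketch of the aggregation approaches in concepts 24-29. The fund returns and asset figures below are made up for illustration only:

```python
# Sketch of equal, asset, and arbitrary weighting, plus the median fund return.
import statistics

fund_returns = [0.05, -0.02, 0.11, 0.03]       # hypothetical period returns, one per fund
fund_assets  = [200.0, 50.0, 100.0, 150.0]     # hypothetical assets under management

# Equal weighting: simple average, i.e., each fund gets weight 1/N.
equal_weighted = sum(fund_returns) / len(fund_returns)

# Asset weighting: weight each return by its share of total assets.
total_assets = sum(fund_assets)
asset_weighted = sum(r * a / total_assets for r, a in zip(fund_returns, fund_assets))

# Arbitrary weights: any weights are allowed as long as they sum to one.
weights = [0.4, 0.1, 0.2, 0.3]
assert abs(sum(weights) - 1.0) < 1e-12
arbitrary_weighted = sum(r * w for r, w in zip(fund_returns, weights))

# Median fund return: the middle value of the sorted returns.
median_return = statistics.median(fund_returns)

print(equal_weighted, asset_weighted, arbitrary_weighted, median_return)
```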

    30. The continuously compounded return is r_t = ln(R_t), or simply the natural logarithm of the gross return.

    31. If a fund increases in value from $100 to $130, the continuously compounded return is
    ln(130/100) = 0.2624, or 26.24%.

    32. To annualize a continuously compounded return, simply multiply the return by N/i, which is the
    number of periods in a year divided by the number of sub-periods over which the return was
    earned:

    r_annual = (N / i) ln(R)

    33. Continuous compounding over several periods is simply the difference of the logarithms of the
    ending NAV and the beginning NAV:

    r = ln(NAV_end) - ln(NAV_begin)
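    A small sketch of concepts 30-33. The NAV values and sub-period counts are hypothetical:

```python
# Continuously compounded returns and their annualization (hypothetical NAVs).
import math

nav_begin, nav_end = 100.0, 130.0

# Continuously compounded return = ln(gross return) = ln(NAV_end) - ln(NAV_begin).
r_cc = math.log(nav_end / nav_begin)              # 0.2624 (26.24%)
same = math.log(nav_end) - math.log(nav_begin)    # identical by the rules of logarithms

# Annualizing: multiply by N / i, e.g., a return earned over i = 3 months
# with N = 12 sub-periods per year.
N, i = 12, 3
annualized = (N / i) * r_cc

print(round(r_cc, 4), round(annualized, 4))
```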


    3. Return and Risk Statistics

    1. A relative frequency histogram is a graph of the relative frequencies (percent of observations)

    falling in mutually exclusive data intervals. Steps involved in creating relative frequency histograms are:

    Set up mutually exclusive intervals.

    Assign each observation to one interval.

    Calculate the relative frequency (percentage of observations) within each interval.

    Plot the histogram charting the relative frequencies for each interval.
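    A minimal sketch of those steps using numpy; the return sample and bin count below are assumptions for illustration:

```python
# Relative frequency histogram steps from concept 1 (made-up return data).
import numpy as np

returns = np.array([0.02, -0.01, 0.04, 0.03, -0.02, 0.01, 0.05, -0.03, 0.00, 0.02])

# Steps 1-2: set up mutually exclusive intervals and count observations in each.
counts, bin_edges = np.histogram(returns, bins=5)

# Step 3: convert counts to relative frequencies (percent of observations).
relative_freq = counts / counts.sum()

# Step 4: plotting would chart relative_freq against the bin edges
# (e.g., with a bar chart); here we just print the table.
for left, right, freq in zip(bin_edges[:-1], bin_edges[1:], relative_freq):
    print(f"[{left:+.3f}, {right:+.3f}): {freq:.0%}")
```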

    2. The arithmetic average return is the simple average return computed over a prespecified time

    period. The formula for the arithmetic average return equals the sum of the periodic returns

    divided by the number of periods over which the sum is computed.

    3. The geometric mean return is the compound average return computed over a prespecified time

    period.

    4. The geometric return accounts for compounding. The arithmetic return does not account for compounding. The geometric mean return will always lie below the arithmetic average return

    and the difference between the two statistics widens as the data series becomes more

    volatile.
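    A minimal sketch of the arithmetic versus geometric averages in concepts 2-4, using a made-up return series:

```python
# Arithmetic vs. geometric average returns (hypothetical data).
import numpy as np

returns = np.array([0.10, -0.05, 0.07, 0.02])

arithmetic_mean = returns.mean()

# Geometric mean (compound average) return over the same periods.
geometric_mean = np.prod(1 + returns) ** (1 / len(returns)) - 1

# The geometric mean does not exceed the arithmetic mean, and the gap widens
# as the return series becomes more volatile.
print(arithmetic_mean, geometric_mean)
```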

    5. In the case of continuously compounded returns, the geometric mean return simply equals
    e^{\bar{r}} - 1, where \bar{r} is the average continuously compounded return.

    6. The median is the middle number within a ranked data series. The median provides a better
    measure of the central tendency of a data distribution in the presence of outlier (or extreme)
    values.

    7. The average gain is the arithmetic average return computed over all periods in which the fund

    manager earned a positive return. Similarly, the average loss is the arithmetic average return

    computed over all periods in which the fund manager earned a negative return. The gain-to-loss
    ratio equals the average gain return divided by the average loss return. A ratio greater than one in
    absolute value indicates that, on average, the fund manager's average gains outpaced the average losses.

    8. Performance rankings based solely on average fund returns are misleading because they

    ignore the risks undertaken by each fund. Risk and return are positively related; therefore, low-
    risk funds will rank near the bottom based on long-term average rate of return.

    9. Many definitions of risk exist. The common factors of all risk statistics are that they focus on

    uncertainty and the chance of disappointing outcomes.

    10. The range is the difference between the minimum and maximum values in a data distribution.

    The range is not usually a good indicator of risk because it relies solely on two data points (the

    extreme observations) and ignores all other intermediate observations.

    11. A percentile is the percentage of observations in a data series that lies below a particular
    value. For example, the 5th percentile return equals the return below which the portfolio
    performed 5% of the time, and above which the portfolio performed 95% of the time.

    12. The variance measures the extent to which a fund's performance deviates from its mean

    return.

    13. Volatility is measured by the fund's standard deviation, which equals the square root of the

    variance of returns.

    14. The formula for the variance for sampled data equals the sum of squared deviation of the

    portfolio return from its sample mean divided by the number of sampled returns less one. The

    formula for the variance derived from the entire population equals the sum of squared

    deviations of the portfolio return from its population mean divided by the total number of

    returns.

    15. The variance formula using simple returns does not account for compounding effects. Instead,

    the variance should be calculated using continuously compounded returns. The volatility that


    accounts for compounding equals e raised to the power of the standard deviation of

    continuously compounded returns less one.

    16. The annualization procedure often employed for volatility ignores the effects of compounding.

    Therefore, the calculation of the annualized volatility is inconsistent with the calculation of the

    annualized return.

    17. The researcher must balance the need for a sufficient number of observations to derive reliable

    volatility estimates with the reality that volatility is not constant over time.

    18. Skewness refers to the extent to which the distribution of data is not symmetrical about its

    mean. The tail of the distribution is elongated to the right for positively skewed data and is

    elongated to the left for negatively skewed data. Positively skewed distributions have more

    positive outliers than negative outliers. Negatively skewed distributions have more negative

    outliers than positive outliers.

    19. Kurtosis refers to the degree of peakedness in the data distribution. A distribution is said to be

    leptokurtic if it has a peak that extends above that of a normal distribution, and has tails that

    are fatter than those of a normal distribution. In contrast, a distribution is said to be platykurtic

    if it has a peak that lies beneath that of a normal distribution, and has tails that are thinner

    than those of a normal distribution.

    20. The Bera-Jarque statistic is used to test data for departures from the normal distribution. The
    formula for the statistic is:

    BJ = n [ S^2 / 6 + K^2 / 24 ]

    where n is the number of observations, S is the skewness, and K is the excess kurtosis for the
    sampled data. If the data follow a normal distribution, S equals 0 and K equals 0, and the
    Bera-Jarque statistic also equals zero.

    21. The Bera-Jarque statistic follows a chi-square distribution with two degrees of freedom. The
    null hypothesis that the data follow a normal distribution is rejected if the Bera-Jarque statistic
    exceeds the critical value (usually the 95th percentile of the chi-square distribution with two
    degrees of freedom).
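    A minimal sketch of concepts 18-21. The return sample is simulated, and the sample moments are computed with the population standard deviation for simplicity; conventions vary across texts:

```python
# Sample skewness, excess kurtosis, and the Bera-Jarque normality test.
# The chi-square critical value for 2 degrees of freedom at 5% is about 5.99.
import numpy as np

returns = np.random.default_rng(0).normal(0.01, 0.02, size=250)  # hypothetical returns

n = len(returns)
z = (returns - returns.mean()) / returns.std()   # standardized returns

skew = np.mean(z ** 3)
excess_kurt = np.mean(z ** 4) - 3.0

bera_jarque = n * (skew ** 2 / 6 + excess_kurt ** 2 / 24)

# Reject normality at the 5% level if the statistic exceeds roughly 5.99.
print(skew, excess_kurt, bera_jarque, bera_jarque > 5.99)
```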

    22. Drawbacks of using volatility (standard deviation) as the measure of risk include:

    Asymmetric returns: the standard deviation is high for funds with many more abnormally high returns versus abnormally low returns. Yet, few investors would

    equate the abnormally good performance with risk.

    Benchmark: the standard deviation examines the volatility around the sample mean

    return. Yet many investors equate risk with volatility around a target return such as a

    minimum acceptable return or a benchmark return, rather than the sample mean

    return.

    Risk aversion: the standard deviation weights positive and negative surprises equally.

    Yet, most investors dislike negative surprises more than they like positive surprises.

    23. Downside risk differs from volatility risk by focusing solely on returns that fall below a pre-

    specified target return. Therefore, downside risk differs from traditional volatility risk in two

    ways. First, downside risk focuses solely on negative surprise outcomes. In contrast, volatility

    risk uses all returns, positive and negative surprises. Second, downside risk uses a

    customized reference point or target return, T. In contrast, volatility risk uses the historical mean return for the fund.

    24. Semi-deviation equals the volatility of returns that fall below the historical mean return and is

    used as a measure of downside risk.

    25. Below-target semi-deviation equals the volatility of returns that fall below the return of a pre-
    specified benchmark, and is also used as a measure of downside risk. The below-target semi-

    variance equals the square of the below-target semi-deviation.

    26. Other measures that shed light on the riskiness of the fund are downside frequency, gain

    standard deviation, and loss standard deviation. The downside
    frequency equals the number of negative surprise occurrences, indicating how often the fund

    underperformed the target. The gain standard deviation equals the volatility of all positive

    returns around the average positive rate of return, indicating upside volatility. The loss


    standard deviation equals the volatility of all negative returns around the average negative rate

    of return, indicating downside volatility.

    27. Shortfall probability equals the chance that the fund's return will be less than a pre-specified

    target return.

    28. Value at risk (VaR) is the maximum percent loss, equal to a pre-specified worst-case
    quantile return (typically the 5th percentile return). In contrast, expected shortfall is the mean
    percent loss among the q-quantile worst returns.
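    A minimal sketch of concepts 23-28. The return sample and the target return are made up, and the semi-deviation here averages only over the below-target observations (conventions differ on whether to divide by all observations):

```python
# Downside risk measures relative to a target, shortfall probability, VaR, and
# expected shortfall (hypothetical data).
import numpy as np

returns = np.array([0.03, -0.02, 0.05, -0.07, 0.01, 0.04, -0.01, 0.02, -0.04, 0.06])
target = 0.0                     # hypothetical minimum acceptable return

below = returns[returns < target]

# Below-target semi-deviation: volatility of the returns that fall below the target.
below_target_semidev = np.sqrt(np.mean((below - target) ** 2)) if below.size else 0.0

# Downside frequency and shortfall probability.
downside_frequency = below.size
shortfall_probability = below.size / returns.size

# VaR as the 5th percentile return; expected shortfall as the mean of returns at or below it.
var_5 = np.percentile(returns, 5)
expected_shortfall = returns[returns <= var_5].mean()

print(below_target_semidev, downside_frequency, shortfall_probability,
      var_5, expected_shortfall)
```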

    29. Drawdown equals the percentage decline in asset value from its previous high.

    30. Several drawdown statistics are used in practice. The maximum drawdown is the worst

    percent loss experienced from peak to trough over a specified period of time. The

    uninterrupted drawdown measures the duration of the consecutive loss periods as well as the

    cumulative loss over the uninterrupted loss period. The drawdown duration is the amount of

    time needed to totally recover the drawdown.

    31. Drawdowns are more easily understood than volatility risk measures, but must be used

    cautiously. In particular, maximum drawdowns are larger for assets that are valued more often

    and increase with the time period examined, which places long-time managers of daily marked-
    to-market assets at a disadvantage.
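    A minimal sketch of the drawdown calculations in concepts 29-31, using a hypothetical NAV path:

```python
# Drawdown series and maximum drawdown from a NAV path (made-up values).
import numpy as np

navs = np.array([100.0, 105.0, 102.0, 98.0, 101.0, 110.0, 104.0, 108.0])

running_peak = np.maximum.accumulate(navs)

# Drawdown: percentage decline from the previous high (0 when at a new high).
drawdowns = navs / running_peak - 1.0

max_drawdown = drawdowns.min()        # worst peak-to-trough loss over the sample

print(np.round(drawdowns, 4))
print(max_drawdown)                   # here (98 - 105) / 105 = -0.0667
```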

    32. Funds are often evaluated relative to pre-specified benchmarks, such as those defined below:

    Capture indicator: ratio of the fund average return to the benchmark average return.

    Up capture indicator: fund average up percent gains divided by benchmark average

    up percent gains.

    Down capture indicator: fund average down percent gains divided by benchmark

    average down percent gains.

    Up number ratio: number of periods in which a positive return for the fund and

    benchmark coincided divided by the total number of positive returns for the

    benchmark.

    Down number ratio: number of periods in which a negative return for the fund and

    benchmark coincided divided by the total number of negative returns for the

    benchmark.

    Up percentage ratio: number of periods in which the fund outperformed the

    benchmark during up periods for the benchmark divided by the total number of

    positive returns for the benchmark.

    Down percentage ratio: number of periods in which the fund outperformed the

    benchmark during down periods for the benchmark divided by the total number of

    negative returns for the benchmark.

    Percent gain ratio: number of positive returns for the fund divided by the number of

    positive returns for the benchmark.

    Ratio of negative months over total months: number of negative monthly returns for

    the fund divided by the total number of sampled months.

    33. The beta of a hedge fund measures the sensitivity of its returns to changes in the benchmark

    return. Beta does not measure the total risk of the fund: it measures only that part of the fund's

    risk that is related to the benchmark, often called market or systematic risk.

    34. Tracking error measures the extent to which the portfolio's returns deviate from the

    benchmark returns over time. Therefore, tracking error quantifies the uncertainty (risk)

    regarding deviations of the portfolio return from the benchmark return. A low tracking error

    indicates that the fund performance closely resembles that of the benchmark. Several tracking

    error statistics are used in practice, including the root mean difference from the benchmark,

    the standard deviation of differences from the benchmark, and the mean absolute deviation

    from the benchmark.
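    A minimal sketch of a few of the benchmark-relative statistics in concepts 32-34. The fund and benchmark return series are made up, and only one of the listed tracking error variants (standard deviation of differences) is shown:

```python
# Capture indicators and tracking error versus a benchmark (hypothetical data).
import numpy as np

fund = np.array([0.02, -0.01, 0.03, -0.02, 0.04, 0.01])
bench = np.array([0.01, -0.02, 0.02, -0.03, 0.05, 0.00])

# Capture indicator: average fund return over average benchmark return.
capture = fund.mean() / bench.mean()

# Up/down capture: the same ratio restricted to periods when the benchmark was up/down.
up, down = bench > 0, bench < 0
up_capture = fund[up].mean() / bench[up].mean()
down_capture = fund[down].mean() / bench[down].mean()

# Tracking error as the standard deviation of return differences from the benchmark.
tracking_error = np.std(fund - bench, ddof=1)

print(capture, up_capture, down_capture, tracking_error)
```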


    4. Risk-Adjusted Performance Measures

    1. The Sharpe ratio measures the portfolio's excess return per unit of (total) risk, determined by
    the standard deviation of portfolio returns: S_P = (R_P - R_F) / \sigma_P.

    2. The information ratio (IR) measures the performance of a portfolio relative to a prespecified
    benchmark: IR = (R_P - R_B) / \sigma_{P-B}, where \sigma_{P-B} is the tracking error of the portfolio
    versus the benchmark. The numerator of the IR can also be viewed as the zero-investment
    hedge fund return on a long/short strategy that takes a long position in Portfolio P and an
    offsetting short position in the benchmark.

    3. Various tests of significance for differences in Sharpe ratios are often used. The Jobson and

    Korkie test examines differences in Sharpe ratios between two portfolios, assuming normality

    of portfolio returns. In contrast, the Gibbons, Ross, and Shanken test examines differences in

    Sharpe ratios between a managed portfolio and the mean-variance efficient market portfolio.

    Finally, the Lo test relaxes the distributional assumptions on the portfolio returns, incorporating

    characteristics such as serial correlation and mean reversion.

    4. The CAPM provides an equilibrium relationship between required returns for assets and their
    betas: required return = R_F + \beta [E(R_M) - R_F]. Therefore, according to the CAPM, the risk
    premium for any asset equals \beta [E(R_M) - R_F]. The graph of the CAPM is called the Security
    Market Line, with intercept equal to the risk-free rate, R_F, and slope equal to the market risk
    premium, E(R_M) - R_F.

    5. The Jensen alpha equals the difference between the return earned on the portfolio and the

    portfolio's CAPM required return.

    6. The Treynor ratio equals portfolio excess return divided by beta: T_P = (R_P - R_F) / \beta_P.

    7. The Treynor ratio differs from the Sharpe ratio by using the portfolio beta as the appropriate

    measure of risk (in contrast to the Sharpe ratio that uses the standard deviation). The ratio of

    the portfolio's alpha to its beta equals the difference between the Treynor ratios for the

    managed portfolio and the market index.

    8. The M² portfolio performance statistic compares the rate of return performance of a managed
    portfolio versus the market portfolio, after controlling for differences in standard deviations.
    The M² measure applies leverage or deleverage to the managed portfolio (in order to match
    the risk of the market portfolio).
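    A minimal sketch of the measures in concepts 1-8. The return series and per-period risk-free rate are made up, and M² is computed in one common formulation (the portfolio levered or delevered to the market's volatility, then compared with the market return):

```python
# Sharpe, information ratio, Treynor, Jensen alpha, and M-squared (hypothetical data).
import numpy as np

port = np.array([0.03, -0.01, 0.04, 0.02, -0.02, 0.05])   # portfolio returns
mkt  = np.array([0.02, -0.02, 0.03, 0.01, -0.03, 0.04])   # market / benchmark returns
rf   = 0.004                                               # per-period risk-free rate

excess_p = port - rf
excess_m = mkt - rf

sharpe = excess_p.mean() / port.std(ddof=1)

# Information ratio with the market used as the benchmark.
info_ratio = (port - mkt).mean() / np.std(port - mkt, ddof=1)

# Beta from the covariance with the market, then Treynor ratio and Jensen alpha.
beta = np.cov(port, mkt, ddof=1)[0, 1] / np.var(mkt, ddof=1)
treynor = excess_p.mean() / beta
jensen_alpha = excess_p.mean() - beta * excess_m.mean()

# M-squared: portfolio scaled to the market's volatility, compared with the market.
adj_port_return = rf + excess_p.mean() * (mkt.std(ddof=1) / port.std(ddof=1))
m_squared = adj_port_return - mkt.mean()

print(sharpe, info_ratio, treynor, jensen_alpha, m_squared)
```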

    9. Both GH1 and GH2 performance measures relax the assumption that the maturity of the risk-

    free asset matches the portfolio evaluation period. Therefore, the maturity of the treasury

    security may exceed the length of the evaluation period, exposing the treasury security to

    interest rate risk. The end result is that the GH measures are derived using the curved

    opportunity sets connecting the Treasury bill with the managed portfolio or with the market

    portfolio. To control for differences in risk, the GH1 measure applies leverage/deleverage to

    the market portfolio, whereas the GH2 applies leverage/deleverage to the managed portfolio.

    10. The Sortino ratio measures rate of return performance for a portfolio, relative to its downside

    risk. In contrast, the upside-potential ratio examines the expected upside return for a portfolio

    (above the minimum acceptable return benchmark), relative to its downside risk.

    11. Both the Sterling ratio and the Burke ratio evaluate rate of return performance relative to

    drawdowns (percent losses) experienced by the portfolio. The drawdown measure differs

    between the two ratios. The Sterling ratio uses the average extreme drawdown (or most

    extreme drawdown), whereas the Burke ratio uses the square root of the sum of all squared drawdowns.

    12. The return on value-at-risk, or RoVaR, equals the mean return earned on the portfolio divided

    by the portfolio's value-at-risk (in absolute value).


    5. Databases, Indices, and Benchmarks

    1. Self-selection bias results from the fact that managers voluntarily submit their performance

    data to database vendors. Since funds with good track records and smaller funds with excess capacity and good results have an incentive to report (they cannot otherwise market or

    advertise their results), an upward bias in the performance data results.

    2. Sample selection bias results when database vendors impose inconsistent criteria across

    vendors with respect to database inclusion (such as asset size and track record restrictions).

    This results in incomplete samples with widely varying fund membership among different

    database vendors.

    3. Survivorship bias results when the history of dead, merged, or otherwise nonreporting funds is

    excluded from the historical set of returns in the database, which results in an upward bias in

    the database's reported results.

    4. Backfill bias occurs when a fund is allowed to include its historical returns upon its inclusion in

    a database. The upward bias is estimated at 1.2% to 1.4% for databases that allow this practice.

    5. Illiquidity bias results when funds hold infrequently priced assets. Subjectivity in the valuation

    process (i.e., the manager, rather than the market, assigns an asset its value) tends to smooth

    returns and understate volatility (thus overstating risk-adjusted return measures).

    6. The benefits of hedge fund indices are that they can provide a representation of the underlying

    universe of hedge funds, provide correlation estimates as an input into an optimization

    process, allow the investor to compare the risk and return trade-offs of various trading

    strategies, provide the basis for a passive investment vehicle, and give the investor the ability

    to compare individual hedge fund managers against an index of all managers.

    7. The difficulties in constructing an effective index are that the indices will also contain the

    databases' biases, and classification may be inaccurate.

    8. Characteristics of a well-constructed index include the following: the index is transparent,
    representative, capitalization weighted, investable, timely, and stable over time.

    9. The key characteristics of the various hedge fund databases are as follows:

    ABN AMRO: Asian hedge funds, equal weighted, reporting restrictions (sample

    selection issues).

    Altvest: CalPERS uses this database, which tracks 2,000-plus hedge funds and 14

    indices. Funds may show up in several sub-indices.

    MAR/CISDM has tracked data on managed futures since 1979 and on the hedge

    fund industry since 1994. Now owned by the University of Massachusetts-Amherst.

    Academic database of choice.

    CSFB/Tremont indices start in 1994 and use 3,000 hedge funds from the TASS

    database (fund of funds and managed accounts are excluded). Funds must have $10

    million in assets under management and audited financial statements, and they must

    meet requirements on disclosure and transparency. Indices are rebalanced monthly

    with quarterly manager revisions. CSFB/Tremont created a series of investable

    indices in 2003. They are the only capitalization-based index provider.

    Evaluation Associates Capital Markets: nontransparent, based on a small sample

    size. EACM100 is the master index. They also have five broad strategy indices and 13

    sub-strategy indices with data starting in 1990. Equally weighted, non-audited

    performance data are provided by 100 participating hedge funds.

    Hedge Fund Research: pure indices with minimum bias problems. HFR has 37

    equally weighted monthly performance indices (onshore and offshore). Returns are

    net of fees and free of survivorship bias after 1994. No minimum asset size or track record is necessary. They maintain dead fund history. In 2003, Hedge Fund Research

    published one composite and eight investable indices (rebalanced quarterly).


    HedgeFund.net/Channel Capital Group (Tuna indices) produces 32 equally

    weighted indices from a database of 4,000 onshore and offshore hedge funds. Self-

    selection and survivorship biases are present.

    Hennessee produces 23 equally weighted indices and four composites based on a

    sample of 500 hedge funds from a database of 3,000 funds. Restrictions of index

    inclusion include: (1) a minimum of $100 million in assets under management or greater than $10 million and a 1-year track record, and (2) satisfaction of reporting

    requirements. Indices retain the historical performance of dead funds and include

    several funds that are closed to new investors.

    Invest Hedge/Asia Hedge/Euro Hedge: HFI does not manage money or provide
    investment advice. Data goes back to 2000 for HFI's European hedge fund indices

    and to 2001 for its Asian hedge fund indices.

    LJH Global Investments: small sample size, investable indices. LJH publishes 16

    equally weighted indices of 25 to 50 hedge funds.

    Morgan Stanley Capital International created a database of over 1,500 funds in

    2002 and created a premier hedge fund classification system based on the manager's
    investment process, the asset classes used, geographic location of investments, and
    secondary classifications. Three composites based on asset size including broad

    (greater than $15 million), core (greater than $100 million), and small ($15 million to

    $100 million).

    Standard & Poor's: equally weighted and primarily investable indices cover nine

    styles and contain only 40 funds.

    ZCM (Zurich Capital Markets): has failed at several attempts to create hedge fund

    indices based on style while simultaneously offering investors fund of funds that would

    track the indices' performance.

    Van Hedge Fund Advisors International: comprehensive database of over 5,000

    funds. Tracks 14 strategies and one global index based on a sample of 750 offshore

    and onshore hedge funds.

    10. The intuition of the EDHEC index is that each of the existing indices represents a sample of

    the underlying universe with some noise created by database biases. By combining the

    available indices and using PCA, the resulting index is a pure form index for the underlying

    universe without bias problems.

    11. Three key reasons why performance benchmarks are important are that they:

    Help measure the investment performance of hedge fund managers.

    Provide clients and trustees with a reference point for monitoring performance.

    Modify the behavior of portfolio managers who drift away from their investment

    mandate.

    12. The three reasons why the S&P 500 would be a poor hedge fund benchmark include the

    following:

    Inconsistent holding periods and market exposures of managers versus the S&P 500.

    Hedge funds typically employ leverage and short sales.

    Hedge funds typically invest in shares of firms and asset classes not included in the

    S&P 500.

    13. The benefits of the relative peer group benchmarks are that they look at the performance of

    other practitioners, reflect the differences or similarities between managers, and take into

    account the transaction and trading costs. The drawbacks are that they suffer from selection

    and survivorship bias, lack overall representativeness, and are not a viable passive

    investment strategy. Peer groups are not useful to assess the performance of a manager in

    general.

    14. The essential elements of a manager benchmark are that it be unambiguous, investable,

    appropriate, reflective, measurable, and specified in advance.

    15. The following are the four properties that an ideal benchmark should possess: (1) simple to

    understand, (2) replicable, (3) comparable, and (4) representative of the underlying market.


    6. Covariance and Correlation

    1. A scatter plot is a graph of paired observations for two variables, illustrating the relationship
    between the variables.

    2. The covariance is a statistic used to measure the relationship between two variables. The

    covariance ranges from negative infinity to positive infinity. A positive covariance indicates a

    positive relationship, a negative covariance indicates a negative relationship, and a zero

    covariance indicates no relationship.

    3. The formula for the sample covariance is:

    s_{x,y} = \sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y}) / (n - 1)

    4. The variance-covariance matrix is a square table arranged in a fixed number of rows and

    columns that report variances and covariances. Variances are reported down the diagonal and

    covariances are reported in the off-diagonal cells. The covariance between variables i and j is reported in Row i and Column j.

    5. The covariance is unbounded, ranging from negative to positive infinity, with the actual

    magnitude of the covariance providing little insight into the strength of the relationship. To

    correct this problem, it is common to scale the covariance by the product of the standard

    deviations of the two variables. By scaling the covariance, we derive the correlation.

    6. The correlation between two variables (also known as the Pearson product-moment
    correlation coefficient) equals:

    \rho_{x,y} = s_{x,y} / (s_x s_y)

    7. A correlation matrix is a square table arranged in a fixed number of rows and columns,
    conveniently providing correlations for different pairs of variables. The correlation between
    variables or ranks i and j is reported in Row i and Column j.
    All correlations down the diagonal will equal 1.

    8. The Spearman rank correlation equals the Pearson correlation of the variable rankings. The

    Spearman rank correlation is particularly appropriate for data series that are not normally

    distributed.

    9. To calculate the Spearman rank correlation, convert each observation to a rank with the
    lowest observation equaling one and then use the following formula:

    r_s = 1 - 6 \sum d_i^2 / [n (n^2 - 1)]

    where d_i is the difference between the two ranks for observation i.
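    A minimal sketch of concepts 3-9: sample covariance, Pearson correlation, and the Spearman rank correlation via the difference-of-ranks formula. The two return series are made up and contain no tied values:

```python
# Covariance, Pearson correlation, and Spearman rank correlation (hypothetical data).
import numpy as np

x = np.array([0.02, -0.01, 0.03, 0.00, 0.04, -0.02])
y = np.array([0.01, -0.02, 0.02, 0.03, 0.05, -0.03])

n = len(x)
cov_xy = np.sum((x - x.mean()) * (y - y.mean())) / (n - 1)

# Pearson correlation: covariance scaled by the product of the standard deviations.
pearson = cov_xy / (x.std(ddof=1) * y.std(ddof=1))

# Spearman rank correlation via the difference-of-ranks formula (no ties assumed).
rank_x = x.argsort().argsort() + 1          # rank 1 = lowest observation
rank_y = y.argsort().argsort() + 1
d = rank_x - rank_y
spearman = 1 - 6 * np.sum(d ** 2) / (n * (n ** 2 - 1))

print(cov_xy, pearson, spearman)
```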

    10. Geometry can be used to illustrate the effects of correlation on portfolio risk. Portfolio risk can

    be measured by the length of a vector. A leverage overlay is perfectly positively correlated with the original portfolio, causing the new and old portfolio to lie at a zero degree angle to

    each other. In contrast, a hedge overlay is perfectly negatively correlated with the original

    portfolio, causing the new and old portfolio to lie at a 180 degree angle to each other. And, the

    risk of the portfolio comprising uncorrelated assets equals the length of the hypotenuse of a

    right triangle.

    11. The correlation estimate measures the strength of the linear relationship between two

    variables, but it does not necessarily imply a causal relationship between the two variables.

    12. Correlation measures the direction and strength of the linear relationship between two

    variables. It does not measure the strength of non-linear relationships.

    13. A spurious relationship refers to an incorrect inference drawn from an observed correlation,

    and is most often made when correlations are high between variables that have no logical connection.


    14. An outlier is defined as an extreme or abnormal observation that is not representative of the

    majority of observations in the sample. Outliers cause the correlation to move toward zero,

    which incorrectly implies a lack of relationship between the variables. The researcher should

    attempt to use robust estimation methods that identify and diminish the weight or importance

    of outliers in the correlation calculation.

    15. The partial correlation is the correlation between two variables after controlling or removing

    the effects of other related variables.

    16. Sampling error refers to the difference between a sample statistic and the corresponding

    population parameter that it is trying to estimate. Sampling error declines as the sample size

    grows.

    17. A confidence interval is an estimated range within which the population parameter is likely to

    be contained. Confidence limits refer to the lower and upper bounds of the confidence interval.

    18. Statistical significance refers to the probability that a relationship observed in a sample did not

    happen by chance. If the estimate is statistically significant, then the estimate is reliable.

    19. The correlation confidence interval is the range within which the population correlation is likely

    to be found.

    20. A correlation estimate is statistically significant if its estimated confidence interval does not

    include zero.

    21. Heteroscedasticity refers to a data series characterized by a nonconstant dispersion (or

    variance), which may cause the correlation estimate to be biased downward.


    7. Regression Analysis

    1. The general equation underlying any regression equals the sum of predictable and

    unpredictable components.

    2. The true regression model is unknown. Instead, we must rely on regressions derived from

    estimated models.

    3. Primary problems encountered by regression include:

    The selection of appropriate independent variables

    The use of appropriate estimation methods

    The evaluation of the quality of the estimated model

    4. Linear regression is a statistical technique used to explain the linear relationship of the

    dependent variable with one or more predictor or independent variables. The regression

    equation is the mathematical representation of the linear relationship.

    5. The use of sampled data introduces many errors into the regression estimation process. The

    most common sampling errors include:

    Recording errors: errors recording the data in the database

    Non-synchronous pricing: last trades of the day not transacting at exactly the same

    time

    Liquidity premiums: not all assets can be bought and sold with the same ease

    Discreteness: increments used to price assets may not represent true values

    6. Linear regression should be viewed as a first order approximation of reality because, in reality,

    most relationships are not perfectly linear.

    7. The regression error term equals the difference between the observed and estimated values

    of the dependent variable. Large errors indicate an inferior or useless regression; small errors

    indicate a superior or useful regression.

    8. Ordinary least squares (OLS) is a statistical technique that derives a regression line that minimizes the sum of squared regression errors.

    9. Desirable properties of OLS include:

    The regression errors remain as small as possible

    The regression will pass through the sample means of the dependent and

    independent variable

    The errors are uncorrelated with the independent variable and with the estimated

    dependent variable

    The estimates of the intercept and slopes are the best linear unbiased estimates

    10. The slope coefficient equals the average change in the dependent variable for every 1-unit
    change in the independent variable. The formula for the slope coefficient estimate is
    b_1 = s_{x,y} / s_x^2, where s_{x,y} is the sample covariance between the dependent and
    independent variables (Y and X, respectively), and s_x^2 is the sample variance for the
    independent variable.

    11. The intercept is the point of intersection of the regression line with the Y-axis. The formula for
    the intercept estimate is b_0 = \bar{Y} - b_1 \bar{X}.

    12. The multiple R is the correlation between the realized values and the predicted values of the
    dependent variable.

    13. The R-square is the fraction of the dependent variable's variance that is explained by the

    regression. The R-square equals the square of the multiple R, and is also equal to the

    regression sum of squares divided by the total sum of squares.

    14. The standard error of the estimate is the standard deviation of the regression residuals. A

    large standard error indicates that the scatter around the regression line is large, suggesting

    the regression is inferior.

    15. ANOVA, or analysis of variance, refers to the decomposition of the variance of the dependent

    variable into explained and unexplained components.


    16. The F-statistic is the test statistic used to test the overall significance of the regression:

    H0: all slopes equal zero

    HA: not all slopes equal zero

    The formula for the F-statistic equals the ratio of the mean explained variation to the mean

    unexplained variation.

    17. A p-value equals the probability of obtaining a test statistic at least as extreme as the one
    observed if the null hypothesis is true. The null hypothesis, H0: slopes = zero, is rejected if the
    p-value is less than the significance level used for the test (typically 5%).

    18. The t-statistic for the slope estimate equals the slope estimate divided by its standard error.

    The null hypothesis, H0: slope = 0, is rejected if the absolute value of the t-statistic exceeds its
    critical value (which equals 1.96 for a large sample based on a 5% level of significance).
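    A minimal sketch of the simple-regression quantities in concepts 10-18, computed directly from the formulas above. The x and y data are made up:

```python
# Simple OLS: slope, intercept, R-square, standard error, and slope t-statistic.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.9, 8.2, 8.8])

n = len(x)
cov_xy = np.sum((x - x.mean()) * (y - y.mean())) / (n - 1)

slope = cov_xy / np.var(x, ddof=1)            # b1 = s_xy / s_x^2
intercept = y.mean() - slope * x.mean()       # b0 = ybar - b1 * xbar

fitted = intercept + slope * x
residuals = y - fitted

# R-square: explained variation over total variation.
ss_total = np.sum((y - y.mean()) ** 2)
ss_resid = np.sum(residuals ** 2)
r_square = 1 - ss_resid / ss_total

# Standard error of the estimate and the t-statistic for the slope.
see = np.sqrt(ss_resid / (n - 2))
se_slope = see / np.sqrt(np.sum((x - x.mean()) ** 2))
t_slope = slope / se_slope

print(slope, intercept, r_square, t_slope)
```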

    19. A confidence interval is the reasonable range within which the true unknown parameter (e.g.

    slope) is likely to be found. The null hypothesis, H0: parameter = 0, is rejected if the confidence interval

    does not contain the hypothesized value (0).

    20. Several difficulties limit the usefulness of regression to derive accurate forecasts for the

    dependent variable. These difficulties include unpredictability of the independent variables,

    instability of regression parameters, out-of-sample predictions, and spurious relationships.

    21. Regression is often used to predict the dependent variable. Using a simple linear regression,
    the predicted value for Y equals \hat{Y} = b_0 + b_1 X, in which b_0 and b_1 are the estimated
    intercept and slope (from past data), and X is the predicted value for the independent variable.

    22. Multiple linear regression refers to a linear regression of a dependent variable against multiple

    independent variables.

    23. In contrast to the R-square, the adjusted R-square considers the number of independent variables
    used in the regression. The formula for the adjusted R-square equals:

    adjusted R^2 = 1 - [(1 - R^2)(n - 1) / (n - k - 1)]

    where n is the number of observations and k is the number of independent variables.

    24. To test the overall significance of a multiple regression, the F-statistic is used:

    F = [RSS / k] / [SSE / (n - k - 1)]

    where RSS is the regression (explained) sum of squares, SSE is the sum of squared errors, k is
    the number of independent variables, and n is the number of observations.

    25. Omitted variable bias is present when independent variables that should be in the model are

    omitted. The consequences of this bias depend on the correlation between the included and

    omitted variables. If the correlation equals zero, then the intercept estimate is biased and the

    regression residuals might deviate from the normal distribution. If the correlation equals 1,

    then all the parameters are adversely affected (intercept and slope estimates are biased, and

    regression residuals might be non-normal).

    26. Extraneous regression variables refer to regressions that include irrelevant independent

    variables. If the extraneous variable is uncorrelated with the relevant independent variables,

    then there is no bias. On the other hand, if the extraneous X variable is correlated with the

    relevant X variables, then the standard errors of the parameter estimates will be inflated, causing the t-statistics to fall.

    27. Multicollinearity refers to the violation of a regression assumption that all the independent

    variables are uncorrelated with each other. Multicollinearity causes the intercept and slope

    standard errors to be biased upward, and t-statistics to be biased downward.

    28. Heteroscedasticity refers to the violation of a regression assumption that the variance of the

    regression errors is constant across all observations. There are two common corrections for

    heteroscedasticity:

    Use a different specification for the model (different X variables, or perhaps non-linear

    transformations of the X variables)

    Apply a weighted least squares estimation method, in which OLS is applied to

    transformed or weighted values of X and Y. The weights vary over observations, depending on the changing estimated error variances.

    29. Serial correlation refers to the violation of the regression assumption that the regression errors

    are uncorrelated across observations.


    30. The Durbin-Watson test refers to the test of the hypothesis that the regression errors are not

    serially correlated. For a large sample, the Durbin-Watson test statistic equals approximately

    2(1-corr), where corr is the correlation between successive regression residuals. The null

    hypothesis of no serial correlation will usually be accepted if the Durbin-Watson statistic lies

    between 1.5 and 2.5.
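    A minimal sketch of the Durbin-Watson statistic from a residual series (the residuals below are made up):

```python
# Durbin-Watson statistic for serial correlation in regression residuals.
import numpy as np

residuals = np.array([0.5, -0.3, 0.2, 0.1, -0.4, 0.3, -0.2, 0.1])

dw = np.sum(np.diff(residuals) ** 2) / np.sum(residuals ** 2)

# For large samples DW is roughly 2(1 - corr); values between about 1.5 and 2.5
# are usually taken as consistent with no serial correlation.
print(dw)
```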

    31. Non-linear regression refers to regressions in which the relationship between the dependent
    and one or more of the independent variables is non-linear. A model in which the effects of the
    independent variable on the dependent variable change over time will produce a curved
    relationship between the dependent and independent variable. A typical example is a
    regression that includes a squared term of the independent variable:

    Y_t = b_0 + b_1 X_t + b_2 X_t^2 + \epsilon_t

    32. Transformations refer to non-linear modifications (e.g., ratios, trigonometric functions, and
    logarithms) to the original data.

    33. Stepwise regression refers to a method in which independent variables are selected

    sequentially based on incremental explanatory power.

    34. The backward elimination stepwise regression approach begins by including all independent

    variables chosen to be analyzed by the researcher. Each independent variable is then evaluated based on its ability to explain the dependent variable. The variable that explains the

    least (smallest slope coefficient t-statistic) is eliminated, and the process is repeated until all

    variables significantly contribute to the explanatory power of the regression.

    35. The forward selection stepwise regression approach begins with no independent variable. The

    first variable added to the model is the one with the highest slope t-statistic. Then, other

    variables are added sequentially, depending on the magnitude of their t-statistics.

    36. Hierarchical multiple regression uses similar statistical tests as those applied in stepwise

    regression, except that the researcher, not the computer, selects the order in which the

    independent variables are sequentially tested and added to the model.

    37. In contrast to simple linear regression, non-parametric regression makes very few

    assumptions about the behavior of the data (normality of the error term, homoscedasticity of

    the errors, no serial correlation among the errors, etc.) or about the exact functional form of

    the relationship between Y and X. Non-parametric regression is a data smoothing method.


    8. Asset Pricing Models

    1. Dimension reduction is taking a large amount of data and condensing it into a much smaller

    number of variables or factors, without losing the information content of the data set.

    2. Factor models provide insight into a fund's risk/return profile (including the risks taken by the
    fund's managers), allow for accurate risk attribution, help with accurately forecasting future
    risks, and identify the contribution the manager is making to the fund's overall return.

    3. A factor model should be intuitive, should be estimated in a reasonable amount of time, should

    be parsimonious, should reflect commonalities across funds, and should help managers with

    decision making.

    4. A general linear factor model expresses returns as a function of a single factor.

    5. The Capital Asset Pricing Model (CAPM), the most famous single-factor model, expresses

    return as a function of the market risk of the asset.

    6. Single-factor models are too limited to capture the complexity of hedge fund returns. Consider

    the various hedge fund styles, investment opportunities, markets, long and short positions and degrees of leverage as examples of this complexity.

    7. In a general linear multi-factor model, return is related to more than one underlying risk factor.
    In general form the model is:

    R_i = \alpha_i + \beta_{i,1} F_1 + \beta_{i,2} F_2 + ... + \beta_{i,k} F_k + \epsilon_i
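    A minimal sketch of estimating such a model by ordinary least squares. The factor returns, fund returns, and true coefficients are all simulated placeholders:

```python
# Estimating a general linear multi-factor model with least squares (simulated data).
import numpy as np

rng = np.random.default_rng(1)
T, k = 120, 3
factors = rng.normal(0.0, 0.02, size=(T, k))          # hypothetical factor returns
true_betas = np.array([0.8, -0.3, 0.5])
fund = 0.002 + factors @ true_betas + rng.normal(0.0, 0.01, size=T)

# Add a column of ones so the first coefficient is the intercept (alpha).
X = np.column_stack([np.ones(T), factors])
coef, *_ = np.linalg.lstsq(X, fund, rcond=None)

alpha, betas = coef[0], coef[1:]
print(alpha, betas)      # estimates should be close to 0.002 and true_betas
```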

    8. Principal component analysis is a statistical technique that allows the researcher to distill a

    vast amount of data into a few common factors without losing much of the information in the

    original data (thus accomplishing dimension reduction). The factors are implied by the data

    and do not have a direct economic interpretation.

    9. Common factor analysis is another statistical tool used to accomplish dimension reduction. A

    key difference is that the factors in factor analysis are observable and explicitly identified by

    the researcher.

    10. Fama and French developed a three-factor model that includes two additional risk factors

    relative to the CAPM. Their model indicates that both market capitalization (SMB) and book-

    to-market ratios (HML) help explain returns.

    11. Jegadeesh and Titman find evidence of a momentum effect in stock market returns. That is,

    stocks that have performed well continue to perform well and stocks that have performed

    poorly continue to perform poorly (WML).

    12. The Fama and French model redefined to include a momentum factor is:

    R_i - R_F = \alpha_i + \beta_i (R_M - R_F) + s_i SMB + h_i HML + w_i WML + \epsilon_i

    13. Other factors that may influence hedge fund returns, as suggested by various researchers,

    include trading styles, interest rates, equity related factors, growth, the age and/or size of the

    fund, and so on.

    14. An analyst or investor can use multiple-factor models to help explain hedge fund returns and

    risks, help create performance benchmarks and index trackers, help develop trading

    strategies to increase sensitivity to some risk factors and decrease it to others, and to help

    assess whether a manager generated positive alpha.

    15. Traditional asset models assume that volatility is the appropriate risk measure for an asset (or

    portfolio of assets) and that risk exposures are constant over time.

    16. Skewness is a measure of the asymmetry of the distribution. It is the third moment of the

    return distribution. Investors prefer positive skew and want to avoid negative skew. Co-

    skewness considers an asset's contribution to the skewness of the portfolio or fund, and is

    more relevant than the skewness of the asset alone.

    17. Kurtosis is the fourth moment of the distribution and measures the peakedness of the distribution. Co-kurtosis is concerned with the kurtosis contribution of an asset to the portfolio

    or fund.


    18. Rubinstein developed a model relating return to the higher moments of the distribution such as

    skewness and kurtosis.

    19. Kraus and Litzenberger developed a three-moment CAPM (i.e., a quadratic CAPM) that allows
    an analyst to test for skewness preferences. The model is:

    R_{i,t} - R_{F,t} = \alpha_i + \beta_{i,1} (R_{M,t} - R_{F,t}) + \beta_{i,2} (R_{M,t} - R_{F,t})^2 + \epsilon_{i,t}

    20. The cubic form of the model includes a factor that is related to the fourth moment (kurtosis).
    The model is:

    R_{i,t} - R_{F,t} = \alpha_i + \beta_{i,1} (R_{M,t} - R_{F,t}) + \beta_{i,2} (R_{M,t} - R_{F,t})^2 + \beta_{i,3} (R_{M,t} - R_{F,t})^3 + \epsilon_{i,t}

    21. In 1966 Treynor and Mazuy proposed a quadratic asset pricing model that considers market

    timing as a systematic risk factor. The model is identical to the quadratic model proposed by

    Kraus and Litzenberger.
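    A minimal sketch of fitting the quadratic (market timing) specification in concepts 19-21. The market and fund excess returns are simulated placeholders:

```python
# Quadratic regression of fund excess returns on market excess returns and
# their square, in the spirit of Treynor-Mazuy / Kraus-Litzenberger (simulated data).
import numpy as np

rng = np.random.default_rng(2)
mkt_excess = rng.normal(0.005, 0.04, size=120)       # hypothetical market excess returns
fund_excess = (0.001 + 0.9 * mkt_excess + 1.5 * mkt_excess ** 2
               + rng.normal(0.0, 0.01, size=120))

X = np.column_stack([np.ones_like(mkt_excess), mkt_excess, mkt_excess ** 2])
coef, *_ = np.linalg.lstsq(X, fund_excess, rcond=None)

alpha, beta1, beta2 = coef
# A positive coefficient on the squared term suggests positive market timing ability
# (or a skewness-preference effect in the Kraus-Litzenberger reading).
print(alpha, beta1, beta2)
```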

    22. Unconditional asset pricing models such as the CAPM assume risk is constant through time.
    Conditional models do not make this assumption and thus may be more appropriate for
    modeling hedge fund returns.

    23. Hedge funds are like options in terms of their fee structure, in terms of trading strategies that

    target classes of investors, and in terms of the asymmetric, non-linear payoff profiles.

    24. Henriksson and Merton (1981) developed a model that views a hedge fund's return as the
    sum of the return on the market and on a put option on the market. The model is:

    R_{P,t} - R_{F,t} = \alpha + \beta (R_{M,t} - R_{F,t}) + \gamma max(R_{F,t} - R_{M,t}, 0) + \epsilon_t

    25. Fung and Hsieh identify three components to hedge fund returns: location factors, trading

    strategy factors, and a leverage factor.

    26. Fung and Hsieh provide a framework for looking at hedge fund strategies in terms of option

    strategies.

    27. Agarwal and Naik propose several buy-and-hold risk factors that influence hedge fund returns.

    They include equity, bond, style and default spreads.

    28. Agarwal and Naik find that their model has better explanatory power (higher R²) than the Fung
    and Hsieh model.

    29. Amin and Kat show that investors can generate the same payoff distribution as a hedge fund

    using dynamic trading strategies.

    30. An investor can achieve superior returns by either increasing beta (taking on more risk) or

    generating alpha (being a superior market timer, asset selector, etc.).

    31. If the asset pricing model is specified incorrectly, it is possible to make the erroneous

    conclusion that hedge fund managers are generating positive alpha when in reality they are

    merely increasing beta exposure.


    9. Styles, Clusters, and Classifications

    1. Hedge funds can use self-classification, but because of the lack of stringent regulation, it is not

    unusual for a hedge fund manager to deviate from the goal of the fund as implied by its name and even the goal as stated in the prospectus. Hence, quantitative methods are needed to

    classify hedge funds.

    2. Fundamental style analysis, also called holding-based style analysis, classifies hedge funds

    by the type of assets held. Characteristics like the assets' markets and geographical location

    classify the fund.

    3. The characteristic-based approach is similar to the fundamental approach except it uses

    measures like the assets' book-to-market, momentum, etc.

    4. Return-based style analysis only requires the historical returns of the funds. Hence, it is an

    easier and cheaper substitute for holdings-based (fundamental style) analysis. However, it can

    be much less accurate.

    5. Multifactor analysis involves the estimation of the following model:

    R_i = \alpha_i + \beta_{i,1} F_1 + \beta_{i,2} F_2 + ... + \beta_{i,k} F_k + \epsilon_i

    where the factors are usually style indices and the betas are the style sensitivities.

    6. In estimating the multifactor model, the style indices should have the following three

    properties: mutually exclusive, exhaustive, and sufficiently different in their behavior from each

    other.

    7. Analysts usually constrain multifactor analysis with either or both of the following: (1) the sum

    of the betas equals 100%, (2) some or all of the betas are greater than or equal to zero.

    8. Strong style analysis is the name used when both constraints are imposed: the betas cannot

    be negative, and the betas sum to 100%.
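    A minimal sketch of the "strong" style analysis described in concepts 5-8, with the betas constrained to be non-negative and to sum to one. The style index returns, fund returns, and true weights are simulated placeholders, and the code also computes the HHI concentration measure described later in this section:

```python
# Constrained return-based style analysis using scipy's SLSQP optimizer (simulated data).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
T, k = 120, 4
styles = rng.normal(0.0, 0.03, size=(T, k))     # hypothetical style index returns
true_w = np.array([0.5, 0.2, 0.3, 0.0])
fund = styles @ true_w + rng.normal(0.0, 0.005, size=T)

def tracking_variance(w):
    # Objective: variance of the difference between fund and style-mimicking returns.
    return np.var(fund - styles @ w)

result = minimize(
    tracking_variance,
    x0=np.full(k, 1.0 / k),
    bounds=[(0.0, 1.0)] * k,                                          # betas >= 0
    constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}],   # betas sum to 1
    method="SLSQP",
)
weights = result.x

# Herfindahl-Hirschman style concentration of the estimated exposures.
hhi = np.sum(weights ** 2) / np.sum(weights) ** 2

print(weights, hhi)
```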

    9. The familiar R-square, or percent of variation explained, measure applies to constrained

    optimization estimation techniques used in multifactor analysis. It tells how well the model fits the data.

    10. Style analysis radar charts can give a succinct, visual representation of the style exposures of

    a fund. Such a chart has rays extending from a point of origin, and each ray indicates a

    different style index. A set of lines crossing the rays indicate the level of exposure to each

    style.

    11. The Herfindahl-Hirschman concentration index, or HHI, gives a quantitative measure of the

    diversification of the fund. It is the sum of the squared betas from a multifactor analysis divided

    by the square of the sum of the betas:

    HHI = ( Σ_k β_k² ) / ( Σ_k β_k )²
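A minimal Python sketch of this calculation; the beta values are hypothetical:

    import numpy as np

    def hhi(betas):
        """Sum of squared betas divided by the square of the sum of the betas."""
        betas = np.asarray(betas, dtype=float)
        return (betas ** 2).sum() / betas.sum() ** 2

    print(hhi([0.5, 0.3, 0.2]))            # 0.38: exposure concentrated in a few styles
    print(hhi([0.25, 0.25, 0.25, 0.25]))   # 0.25: exposure spread evenly over four styles

With betas that sum to one, the HHI ranges from 1/K (evenly spread over K styles) up to 1 (all exposure in a single style).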

12. Rolling window analysis can indicate how a fund's investment style has changed over time. It performs the multifactor analysis on an initial sub-period and then repeats it on a new sub-period moved one period forward from the previous one. The pattern of the many sets of betas reveals whether the strategies of the fund might have drifted over time.

13. Rolling benchmarks allow an analyst to track the manager's style drift, while static analysis

    does not.

14. The Lobosco and DiBartolomeo style weight error is essentially the standard error of a style exposure, SE(β_k). It can be used to compose standard confidence intervals such as [β_k − 1.96·SE(β_k), β_k + 1.96·SE(β_k)] for a 95% confidence interval.


    15. Style analysis can be misused due to the use of subjective inputs and because an analyst

    may find it easy to rely too much on the quantitative results and neglect important qualitative

    analyses.

    16. Making adjustments to measure current exposure is important because a factor analysis of

    past data may not accurately reflect current exposure.

    17. The Kalman filter uses multifactor analysis to model the changes of sensitivities over time and

    can give an indication of current exposure and even predict future exposure.

    18. The Swinkels and van der Sluis application of the Kalman filter models the changes of

    sensitivities (e.g., the betas) as a random walk.
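In the spirit of items 17 and 18, the sketch below runs a basic Kalman filter in which the style betas follow a random walk; the observation equation is R_t = F_t'·β_t + noise. The noise variances q and r are illustrative assumptions rather than values from the text.

    import numpy as np

    def kalman_betas(fund_returns, index_returns, q=1e-4, r=1e-3):
        """Filtered time-varying style betas under a random-walk state equation."""
        T, k = index_returns.shape
        beta = np.zeros(k)        # initial state estimate
        P = np.eye(k)             # initial state covariance
        Q = q * np.eye(k)         # random-walk (state) noise covariance
        history = np.zeros((T, k))
        for t in range(T):
            P = P + Q                                # predict: random walk leaves beta unchanged
            f = index_returns[t]
            innovation = fund_returns[t] - f @ beta  # observation error
            S = f @ P @ f + r
            K = P @ f / S                            # Kalman gain
            beta = beta + K * innovation
            P = P - np.outer(K, f) @ P
            history[t] = beta
        return history

The last row of the output is an estimate of the fund's current style exposures, which is the point of item 17.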

    19. The Kalman smoother is a descriptive technique that uses the entire data set to describe the

    style sensitivities at any point. It is superior to rolling windows, which only use past data to

    show changes in style sensitivities at any point in time.

    20. In hedge fund analysis, cluster analysis attempts to group funds by certain characteristics.

    21. The four common steps of cluster analysis are choosing (1) the characteristics, (2) the

    measure of distance, (3) the particular algorithm, and (4) the interpretation of the results.

    22. In cluster analysis, an analyst may wish to standardize the data in some way (e.g., removing

    the effect of leverage from the funds under study).

    23. Cluster analysis uses distance functions to determine the degree of similarity of objects.

    24. Distance functions must have the following four properties:

    Identity: D(m,m) = 0

Non-negativity: D(m,n) ≥ 0

    Symmetry: D(m,n) = D(n,m)

Triangle inequality: D(m,n) ≤ D(m,q) + D(q,n)

25. The Minkowski class of distances refers to all functions that take the following shape:

    D_s(m,n) = ( Σ_i |x_{m,i} − x_{n,i}|^s )^(1/s)

where x_{m,i} is the value of characteristic i for fund m.

    26. There are two often-used values for s in the Minkowski class: s = 1 and s = 2. An analyst

would rarely assign s > 2, because this would give too much weight to outliers.

    27. When s = 1 in the Minkowski class, it is simply the sum of absolute deviations called the

    Manhattan distance or city-block distance.

28. When s = 2 in the Minkowski class, the result is the standard Euclidean distance.
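A small Python sketch of the Minkowski family for two hypothetical funds described by standardized characteristic vectors; s = 1 gives the Manhattan distance and s = 2 the Euclidean distance.

    import numpy as np

    def minkowski(x, y, s=2):
        """Minkowski distance between two characteristic vectors."""
        return float((np.abs(np.asarray(x) - np.asarray(y)) ** s).sum() ** (1.0 / s))

    fund_a = [0.8, 0.1, -0.3]   # hypothetical standardized characteristics
    fund_b = [0.5, 0.4, 0.0]

    print(minkowski(fund_a, fund_b, s=1))   # Manhattan (city-block) distance: 0.9
    print(minkowski(fund_a, fund_b, s=2))   # Euclidean distance: about 0.52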

    29. A higher correlation between two funds means they are more similar and should have a

    smaller distance. One transformation to create a distance function using correlation is:

    D(m,n) = √( 2 · (1 − ρ_{m,n}) )

(one common choice; any decreasing transformation of the correlation ρ_{m,n} that satisfies the distance properties in item 24 can be used).

30. Binary distance is a simplistic measure in which each pair of cases is scored simply as matching or not: consistent with the distance properties above, an exact match gives a distance of zero and a mismatch gives a distance of one.

    31. Euclidean distance can be biased if the characteristics have widely different variances.

    32. The Mahalanobis distance corrects for the biases in Euclidean distance associated with

    correlated characteristics and different variances.

    33. The hierarchical clustering approach is a step-wise procedure that can be either top down or

    bottom up.

34. Johnson's hierarchical clustering algorithm begins by having each fund in its own group. It

    then looks for the distance between two clusters that is the shortest of all distances between

    any two clusters. The two clusters that are the closest become one cluster. The process then

    repeats with the new set of clusters.

35. Johnson's hierarchical clustering algorithm can use one of many distance measures. Once the

clustering has started, the distance between two clusters may be taken as, for example, the distance between their closest two members (single linkage) or between their most distant two members (complete linkage).

36. Ward's method uses an analysis of variance approach to evaluate each step in the clustering

    process. It forms the next cluster at each step that minimizes the increase in the total of the

    sums of squares within groups.
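A brief Python sketch of bottom-up hierarchical clustering on hypothetical fund return histories using SciPy; the linkage methods correspond to the single-linkage and complete-linkage choices in item 35 and the Ward criterion in item 36.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    # Hypothetical data: monthly returns for 8 funds over 36 months.
    rng = np.random.default_rng(1)
    returns = rng.normal(0.01, 0.03, size=(8, 36))

    single_link = linkage(returns, method="single")       # distance between closest members
    complete_link = linkage(returns, method="complete")   # distance between most distant members
    ward_link = linkage(returns, method="ward")           # minimize within-group sum of squares

    # Cut the Ward tree into three clusters and print each fund's assignment.
    print(fcluster(ward_link, t=3, criterion="maxclust"))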


37. Fuzzy clustering gives each object a set of probability values, one probability for each

    cluster. Each of the probability values essentially tells the likelihood of that object belonging to

    a given cluster.

    38. Hard clustering is the term used for traditional approaches that set up partitions and give only

    one cluster assignment to each object.

    39. The Rand index is a measure of the difference of the clustering results that different

    algorithms produce.

40. When comparing two methods, the Rand index is essentially the percentage of pairs of objects on which the two clusterings agree (the pair is placed in the same cluster by both methods or in different clusters by both). A Rand index of 1 means identical groupings.

    Rand index > 0.7 means the groupings are stable across methods.
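A minimal Python sketch of the Rand index for two cluster labelings of the same funds; the label arrays are hypothetical.

    from itertools import combinations

    def rand_index(labels_a, labels_b):
        """Fraction of fund pairs on which two clusterings agree."""
        pairs = list(combinations(range(len(labels_a)), 2))
        agree = 0
        for i, j in pairs:
            same_a = labels_a[i] == labels_a[j]
            same_b = labels_b[i] == labels_b[j]
            if same_a == same_b:      # grouped together in both, or separated in both
                agree += 1
        return agree / len(pairs)

    # Hypothetical cluster assignments for six funds from two algorithms.
    print(rand_index([1, 1, 2, 2, 3, 3], [1, 1, 2, 3, 3, 3]))   # 0.8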

    41. Martin, Brown and Goetzmann, and Bares et al. are three studies of clustering analysis as

    applied to hedge funds:

    Martin found that some classifications were more stable over time than others, and

    that individual funds in a given cluster can react quite differently to changes in

    economic conditions.

    Brown and Goetzmann found a great deal of heterogeneity in hedge fund behavior.

    Style classifications explained 20% of the differences, and the style of fund

    management determined the persistence of fund returns from year to year.

Bares et al. performed cluster analysis based on managers. The managers' records

    had been gathered from the FRM database where they had already been classified.

    Bares et al. found that the clustering analysis gave results similar to the classifications

    the funds had in the FRM database. Other analysis supported the usefulness of

    cluster analysis in classifying hedge funds.


    10. Benefits and Risks Revisited

    1. Three reasons for superior returns of the hedge fund industry are:

Fewer investment restrictions
Less constrained work environment
Superior compensation

    2. Hedge fund index returns have been higher than most major stock and bond market indices.

    Standard deviations and maximum drawdowns have also been less than those of major equity

    indices.

    3. Correlations among traditional global market indices increased recently. Correlations between

    stock and bond markets, at times, also have been high. Therefore, hedge funds have become

    an increasingly attractive diversification alternative.

    4. Rates of return and standard deviations are much better for hedge fund indices versus the

    S&P 500 during bear markets. Therefore, hedge funds (as a group) have delivered on their

intended objective to provide important downside risk protection.

5. Risk, return, and correlations have varied widely across the different hedge fund strategies. As

    expected, market neutral and arbitrage strategies exhibited the lowest risk and strategies with

    directional bias exhibited the highest risk. Short bias strategies exhibited a large negative

    correlation with the S&P 500, while net long, distressed risk, and emerging market strategies

    exhibited the largest positive correlations with the S&P 500.

    6. Hedge fund strategies are generally positioned into four different styles or categories:

Low risk, low return: relative value strategies, such as market-neutral and arbitrage strategies
Low risk, high return: distressed risk strategies
High risk, low return: risk-diversifying strategies, such as managed futures and emerging market strategies
High risk, high return: global macro and long/short strategies

    7. A long-term analysis of hedge fund performance is difficult because very few funds have

    existed for an extended time period. Since current analysis has been confined to a period in

    which the broad market has risen, strategies with a long bias have enjoyed a performance

    advantage.

    8. Hedge fund index performances are difficult to track because they are computed gross of

    transaction costs and do not consider the minimum investment. Moreover, hedge funds are

    under no mandatory reporting requirement, so underperforming managers may elect to not

    report their performance. This introduces an upward bias. Therefore, hedge fund index

    performance may not be representative of the entire hedge fund industry.

    9. Some hedge fund strategies rely on the exploitation of a limited set of opportunities.

    Therefore, constrained opportunity strategies have encountered a particularly difficult time as

    more capital has poured into these funds, and as a result have not sustained their earlier

    successes.


    11. Strategic Asset Allocation

    1. Strategic asset allocation is the process of choosing portfolio weights for asset classes as

opposed to choosing the individual assets within each class. The choice of weights should be for a long-term horizon and should be congruous with the investor's return objective

    and risk tolerance.

    2. The St. Petersburg paradox is a game with an infinite expected value payoff. However,

    people will only pay a finite amount to play because of risk aversion. It serves as an

example to illustrate that people are risk averse.

    3. Utility functions can describe an investors preference for return and aversion to risk. The

    quadratic utility function is a popular function that includes an easily interpreted

expression for risk aversion. A common way to write it is:

    U = E(R) − (λ/2) · σ²

where λ is the investor's coefficient of risk aversion.

4. The steps in portfolio optimization: select the appropriate asset classes, make forecasts,

recognize constraints such as non-negativity of certain asset class weights, optimize based on a criterion such as minimizing risk for each given level of return, review the

    results, and perform sensitivity analysis.
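As a rough illustration of the optimization step in item 4, the Python sketch below finds minimum-variance weights for a target return subject to non-negativity and full investment. The expected returns, covariance matrix, and target are hypothetical inputs, and this is only one possible formulation.

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical forecasts for three asset classes: stocks, bonds, hedge funds.
    exp_ret = np.array([0.08, 0.04, 0.07])
    cov = np.array([[0.0400, 0.0020, 0.0080],
                    [0.0020, 0.0025, 0.0010],
                    [0.0080, 0.0010, 0.0100]])
    target = 0.06

    result = minimize(
        lambda w: w @ cov @ w,                  # portfolio variance to be minimized
        x0=np.full(3, 1.0 / 3),
        bounds=[(0.0, 1.0)] * 3,                # non-negativity constraint on the weights
        constraints=[
            {"type": "eq", "fun": lambda w: w.sum() - 1.0},          # fully invested
            {"type": "eq", "fun": lambda w: w @ exp_ret - target},   # hit the return target
        ],
        method="SLSQP",
    )
    print(np.round(result.x, 3))

Repeating the optimization for a range of target returns traces out an efficient frontier, after which the review and sensitivity analysis in item 4 apply.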

5. There are problems with treating hedge funds as a single asset class: there are many different types of hedge funds, and they often carry risk exposures found in traditional assets, such as stocks and bonds.

    6. When including hedge funds in a portfolio, a manager should recognize how they might

    serve as substitutes for traditional asset classes. This is because they often have risk

    exposures and return characteristics similar to traditional assets (e.g., a long/short equity

    fund that is net long will have returns that are correlated with the returns of an equity

    index).

7. Since traditional methods of analysis do not seem to apply to hedge funds, managers often choose informal approaches such as arbitrarily picking a small allocation (e.g., 5%

    for hedge funds). Another informal approach is to simply allocate zero capital to hedge

    funds without performing any analysis.

    8. Mean-variance optimizer solutions have problems when applied to hedge funds because

    they do not consider skewness and kurtosis, which hedge fund returns generally exhibit.

    9. The Taylor series expansion illustrates the role higher moments about the mean (such as

skewness and kurtosis) play in an investor's utility function. The formula, in its standard form, is:

    E[U(W)] = U(E[W]) + Σ_{n≥2} (1/n!) · U^(n)(E[W]) · E[(W − E[W])^n]

where n indicates the moment about the mean.

    10. Mean-variance optimization works fairly well when a portfolio only includes traditional

    asset classes because the returns for these classes are usually close to being normal,

    which means that moments higher than the second moment are (close to) zero. In

    contrast, the higher moments for hedge funds can be quite large, and mean-variance

    optimization may not give the most efficient asset class weights when hedge funds are

    included in the portfolio.

11. The difference between static buy-and-hold portfolios and dynamic portfolios is another

    reason that mean-variance optimization may not be appropriate for choosing hedge fund

    weights. This is because hedge funds are usually actively managed and their

    characteristics can change quite dramatically over the investment horizon. Thus, the initial

    strategic asset allocation may become inappropriate soon after its implementation.

    12. Because of the way hedge fund returns are reported (i.e., based on estimates rather than

    actual transactions), the time series of the returns may be artificially smooth. This can lead

    to downward bias in the calculated values of return variance.


13. Geltner proposed the following transformation, which uses the first-order autocorrelation coefficient ρ of the observed returns to unsmooth the data:

    R_t(unsmoothed) = [ R_t(observed) − ρ · R_{t−1}(observed) ] / (1 − ρ)
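A minimal Python sketch of the unsmoothing, estimating ρ as the lag-one autocorrelation of the observed series; the artificially smoothed series below is a hypothetical placeholder.

    import numpy as np

    def geltner_unsmooth(observed):
        """Unsmooth an appraisal-style return series using its lag-1 autocorrelation."""
        observed = np.asarray(observed, dtype=float)
        rho = np.corrcoef(observed[1:], observed[:-1])[0, 1]   # lag-1 autocorrelation
        return (observed[1:] - rho * observed[:-1]) / (1.0 - rho)

    # Build a hypothetical smoothed series: each reported return mixes in 60% of last period's.
    rng = np.random.default_rng(2)
    true_returns = rng.normal(0.01, 0.04, size=120)
    observed = [true_returns[0]]
    for r in true_returns[1:]:
        observed.append(0.6 * observed[-1] + 0.4 * r)
    observed = np.array(observed)

    unsmoothed = geltner_unsmooth(observed)
    print(observed.std(), unsmoothed.std())   # unsmoothing restores most of the lost volatility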

    information arrives, (2) the statistical shrinkage approach, (3) combining marketequilibrium values with investor expected returns, and (4) bootstrapping. There is also the

    Michaud resampled efficiency algorithm to limit the impact of estimation risk.

    15. Non-standard efficient frontiers use risk measures other than variance (e.g., the semi-

    variance, VaR, and shortfall risk). Evidence of the benefits of this approach is not strong.

    There are potential problems with the approach including estimation risk and difficulties in

    relating the risk measures of individual positions to the overall portfolio.

    16. Portable alpha refers to an investment strategy designed to transfer the excess return

    (alpha) of one portfolio to another portfolio.

    17. The following is an expression for total return:

    Total return = alpha return + beta (market) return

This helps indicate how to separate alpha from a position that is not market neutral. The manager can neutralize the second term (the beta, or market, component) with, for example, a position in derivatives.

    The new portfolio has only pure alpha and can serve as a means for adding alpha to

    another portfolio.

    18. The portable alpha mindset is one where an investor can benefit from the skills of

    managers of funds that have risk exposures the investor may not want. The investor can

    eliminate the risk and transport the alpha to his or her portfolio.

    19. Hedge funds usually have both alternative risk exposures and exposures to the risks of

    traditional assets. Pinpointing the specific types and levels of risk of hedge funds and

    other assets under consideration will lead to the construction of more optimal portfolios.

    20. Risk budgeting is the selection of asset classes on the basis of their expected

contributions to overall return and risk, with the goal of achieving a level of return with only the desired risk exposures and aggregate level of risk.


    12. Risk Measurement and Management

    1. The crucial risk management activities are to understand the risk exposures, measure the

exposure to each risk, measure the aggregate exposure of each fund, measure the risk of the hedge fund portfolio, and choose the risk exposures.

    2. Value at risk (VaR) is generally interpreted as the worst possible loss under normal conditions

    over a specified period.

    3. The main assumptions of parametric VaR are that the risk factors are normally distributed and

that the fund's prices and returns have a linear relationship with the risk factors.

    4. Historical VaR assumes that the historical patterns indicate the distribution of current and

    future outcomes such that losses in the future will occur with the same frequency and

    magnitude as they did in the past.

    5. Monte Carlo VaR simulates values for risk factors and uses them to simulate returns for a

    hedge fund or portfolio of funds.

6. Parametric VaR can greatly underestimate VaR if the normality of returns and other parametric VaR assumptions are not true. Modified VaR attempts to incorporate adjustments

    when the distribution is near but not exactly normal.

7. Modified VaR incorporates skewness and kurtosis into parametric VaR.

8. Modified VaR is not reliable if the measures of skewness and kurtosis are too large.
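A rough Python sketch contrasting parametric, historical, and modified (Cornish-Fisher) VaR for a single return series at 95% confidence. The return data are hypothetical, and the Cornish-Fisher expansion shown is the common textbook form rather than necessarily the exact expression used in the text.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    returns = rng.standard_t(df=4, size=1000) * 0.02 + 0.005   # hypothetical fat-tailed returns

    confidence = 0.95
    z = stats.norm.ppf(1 - confidence)        # about -1.645
    mu, sigma = returns.mean(), returns.std()

    parametric_var = -(mu + z * sigma)                                # loss under a normal assumption
    historical_var = -np.percentile(returns, 100 * (1 - confidence))  # loss at the empirical quantile

    # Cornish-Fisher adjustment of the quantile for skewness S and excess kurtosis K.
    S = stats.skew(returns)
    K = stats.kurtosis(returns)               # excess kurtosis (zero for a normal distribution)
    z_cf = (z + (z**2 - 1) * S / 6
              + (z**3 - 3 * z) * K / 24
              - (2 * z**3 - 5 * z) * S**2 / 36)
    modified_var = -(mu + z_cf * sigma)

    print(parametric_var, historical_var, modified_var)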

    9. Extreme events are those that occur in the tail of the assumed distribution. Two approaches to

    model them are (1) gather a sample, divide the sample into blocks, measure the minimum

    return in each of these blocks; (2) model the behavior of observations in the tail of a

    distribution separately from those near the center.

10. A multi-factor style analysis allows for the calculation of value at market risk (VaMR), which is the maximum loss from adverse changes in market factors that can occur under normal

    conditions. Estimating value at market risk uses the betas from a multi-factor style analysis

    and assumed values for the factors that have been pushed to a disadvantageous level.

    11. Value at specific risk is essentially the square root of the unexplained variance of the return

    from a multi-factor analysis times the corresponding z-value for the level of confidence.

12. The expression for total VaR obtained using style analysis combines the market and specific components; assuming the two components are independent, it takes the form:

    Total VaR = √( VaMR² + VaSR² )

13. The liquidity spread is the loss incurred from withdrawing funds today as opposed to waiting N periods, and it is the difference in the corresponding net asset values.

14. Laporte's model, or L-VaR, breaks down VaR into the following parts: systematic market VaR,

    specific market VaR, systematic liquidity VaR, specific liquidity VaR, and correlation effects

    between liquidity and market risk. This decomposition allows for a more detailed analysis of

    the effect of liquidity constraints.


    15. The main limitation of VaR is that it gives a value of potential loss only under normal

conditions, and it does not indicate the losses that can result from extreme events. Another limitation is that VaR is a single summary measure, so its use should not preclude qualitative analysis.

    16. Stress tests compute the losses to a fund when extreme market moves occur. The test may

use extreme historical values, estimates of extreme values using statistics, or worst-case

    losses from a Monte Carlo simulation.

    17. Monte Carlo simulations can provide thousands of hypothetical returns for both a single fund

    and a portfolio of funds. One general approach is to fit a statistical distribution to the historical

    returns of a fund and generate values using that distribution. When applied to generating

portfolio returns, the simulations will need to include the correlations of the funds' returns.
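A short Python sketch of the portfolio case in item 17, simulating correlated fund returns from a multivariate normal distribution; the means, volatilities, correlation matrix, and weights are hypothetical, and the normal is only one simple distributional choice (the text merely requires fitting some distribution to the historical returns).

    import numpy as np

    rng = np.random.default_rng(4)
    means = np.array([0.008, 0.006, 0.010])     # hypothetical monthly means for three funds
    vols = np.array([0.03, 0.02, 0.05])
    corr = np.array([[1.0, 0.3, 0.1],
                     [0.3, 1.0, 0.2],
                     [0.1, 0.2, 1.0]])
    cov = np.outer(vols, vols) * corr           # combine volatilities and correlations

    weights = np.array([0.4, 0.4, 0.2])
    sims = rng.multivariate_normal(means, cov, size=10_000)   # simulated fund returns
    portfolio = sims @ weights
    print(np.percentile(portfolio, 5))          # 5th percentile, i.e., a simulated 95% VaR cutoff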

    18. Marginal VaR (MVaR) is a measure of how a portfolio VaR changes from a small change in

    the position of a fund in the portfolio.

    19. Incremental VaR (IVaR) is an estimate of the amount of risk a proposed position will add to

    the total VaR of an existing portfolio. It is the proposed weight for that fund times the marginal

VaR evaluated when the fund's weight w_i is zero.

20. Component VaR (CVaR) is the contribution of a particular fund to the total VaR of a portfolio of funds. It will generally be less than the stand-alone VaR of the fund because some of the fund's risk is diversified away at the portfolio level.

    21. Managing a portfolio of funds might begin with decomposing the risk into the CVaR of each

    position. This can indicate if any one position is a hot spot. In determining the correct course

    of action, the manager can use marginal VaR to estimate the effect of changes in existing

    positions, or the manager can use incremental VaR to estimate the effect of adding a new

    fund to the portfolio.

    22. A manager should remember that VaR values are estimates. Furthermore, each fund will have

    special features that will affect its liquidity.

23. Studies have shown significant benefits even from naïve diversification, which is taking equal positions in different funds. Smart diversification, which uses the characteristics of the funds to adjust the weights, can further increase the benefits. Most of the benefits can be achieved with five to ten funds.

    24. Di-worse-ification is the result of having too many positions. Adding a position without

reducing other positions will generally increase the portfolio's VaR in nominal terms. Also,

    there is a decrease in the marginal benefits of diversification as the manager adds new

    positions. Transaction costs can exceed potential benefits if there are too many positions.


    Standards of Professional Conduct

    I. Professionalism

    A. Knowledge of the Law

    Members and candidates must understand and comply with all applicable laws, rules,

    and regulations of any government, regulatory organization, licensing agency, or

    professional association governing their professional activities. In the event of conflict,

    members and candidates must comply with the more strict law, rule or regulation.

    Members and candidates must not knowingly participate or assist in any violations of

    laws, rules, or regulations and must disassociate themselves from any such violation.

    B. Independence and Objectivity

    Members and candidates must use reasonable care and judgment to achieve and

    maintain independence and objectivity in their professional activities. Members and

    candidates must not offer, solicit, or accept any gift, benefit, compensation, or

consideration that reasonably could be expected to compromise their own or another's

    independence and objectivity.

    C. Misrepresentation

    Members and candidates must not knowingly make any misrepresentations relating to

    investment analysis, recommendations, actions, or other professional activities.

    D. Misconduct

    Members and candidates must not engage in any professional conduct involving

    dishonesty, fraud, or deceit or commit any act that reflects adversely on their

    professional reputation, integrity, or competence.

    II. Integrity of Capital Markets

    A. Material Nonpublic Information

Members and candidates who possess material nonpublic information that could affect the value of an investment must not act or cause others to act on the

    information.

    B. Market Manipulation

    Members and candidates must not engage in practices that distort prices or artificially

    inflate trading volume with the intent to mislead market participants.


    III. Duties to clients

    A. Loyalty, Prudence, and Care

    Members and candidates have a duty of loyalty to their clients and must act with

    reasonable care and exercise prudent judgment. Members and candidates must act

for the benefit of their clients and place their clients' interests before their employer's

    or their own interests. In relationship with clients, members and candidates must

    determine applicable fiduciary duty and must comply with such duty to persons and

    interests to whom it is owed.

    B. Fair Dealing

    Members and candidates must deal fairly and objectively with all clients when

    providing investment analysis, making investment recommendations, taking

    investment action, or engaging in other professional activities.

    C. Suitability

    1. When members and candidates are in an advisory relationship with a client, they

    must:

a. Make a reasonable inquiry into the client's or prospective client's investment

    experience, risk and return objectives, and financial constraints prior to

    making any investment recommendation or taking investment action and must

    reassess and update this information regularly.

b. Determine that an investment is suitable to the client's financial situation and
consistent with the client's written objectives, mandates, and constraints

    before making an investment recommendation or taking investment action.

c. Judge the suitability of investments in the context of the client's total portfolio.

    2. When members and candidates are responsible for managing a portfolio to a

    specific mandate, strategy, or style, they must make only investment

    recommendations or take investment actions that are consistent with the stated

    objectives and constraints of the portfolio.

    D. Performance Presentation

When communicating investment performance information, members or candidates must make reasonable efforts to ensure it is fair, accurate, and complete.

    E. Preservation of Confidentiality

    Members and candidates must keep information about current, former, and

    prospective clients confidential unless any of the following occur.

    1. The information concerns illegal activities on the part of the client or prospective

    client.

    2. Disclosure is required by law.

    3. The client or prospective client permits disclosure of the information.


    IV. Duties to Employers

    A. Loyalty

    In matters related to their employment, members and candidates must act for the

benefit of their employer and not deprive their employer of the advantage of their skills and abilities, divulge confidential information, or otherwise cause harm to their

    employer.

    B. Additional Compensation Arrangements

    Members and candidates must not accept gifts, benefits, compensation, or

    consideration that competes with, or might reasonably be expected to create a conflict

of interest with, their employer's interest unless they obtain written consent from all

    parties involved.

    C. Responsibilities of Supervisors

    Members and candidates must make reasonable efforts to detect and prevent

    violations of applicable laws, rules, regulations, and the Standards by anyone subject

    to their supervision or authority.

    V. Investment Analysis, Recommendations, and Action

    A. Diligence and Reasonable Basis

    Members and candidates must:

    1. Exercise diligence, independence, and thoroughness in analyzing investments,

    making investment recommendations, and taking investment actions.

    2. Have a reasonable and adequate basis, supported by appropriate research and

    investigation, for any investment analysis, recommendation, or action.

    B. Communication With Clients and Prospective Clients

Members and candidates must:

    1. Disclose to clients and prospective clients the basic format and general principles

    of the investment processes used to analyze investments, select securities, and

    construct portfolios, and must promptly disclose any changes that might materially

    affect those processes.

    2. Use reasonable judgment in identifying which factors are important to their

    investment analyses, recommendations, or actions, and include those factors in

    communications with clients and prospective clients.

    3. Distinguish between fact and opinion in the presentation of investment analysis

    and recommendations.


    C. Record Retention

    Members and candidates must develop and maintain appropriate records to support

    their investment analysis, recommendations, actions, and other investment-related

    communications with clients and prospective clients.

    VI. Conflicts of Interest

    A. Disclosure of Conflicts

    Members and candidates must make full and fair disclosure of all matters that could

    reasonably be expected to impair their independence and objectivity or interfere with

    respective duties to their clients, prospective clients, and employer. Members and

    candidates must ensure that such disclosures are prominent, delivered in plain

    language, and communicate the relevant information effectively.

    B. Priority of Transactions

    Investment transactions for clients and employers must have priority over investment

    transactions in which a member or candidate is the beneficial owner.

    C. Referral Fees

    Members and candidates must disclose to their employer, clients, and prospective

    clients, as appropriate, any compensation, consideration, or benefit received by, or

    paid to, others for the recommendation of products or services.


    16. Introduction to Real Estate Valuation

    1. The future value of a lump sum is equal to the present value compounded by an interest rate

for one or more periods.

Future value can be calculated for any number of periods and interest rates using the following formula:

    FV = PV × (1 + r₁)(1 + r₂) … (1 + r_N)

Future value can be calculated for any number of periods using the same interest rate with the following formula:

    FV = PV × (1 + r)^N

    2. The present value of a lump sum is equal to the future value discounted by an interest rate for

    one or more periods.

    Present value can be calculated for any number of periods and interest rates using

the following formula:

    PV = FV / [ (1 + r₁)(1 + r₂) … (1 + r_N) ]

    Present value can be calculated for any number of periods using the same interest

rate with the following formula:

    PV = FV / (1 + r)^N

    3. The discounted cash flow approach to valuing real estate property requires estimating all

    future cash flows for the property, which are then discounted to their corresponding present

    values and summed.

    If it is appropriate to use a different discount rate for each period, the value of a

property can be calculated using the following formula:

    V = Σ_{t=1}^{N} CF_t / [ (1 + r₁)(1 + r₂) … (1 + r_t) ]

    If it is appropriate to use the same discount rate for each period, the value of a

property can be calculated using the following formula:

    V = Σ_{t=1}^{N} CF_t / (1 + r)^t

    4. Generally it is not feasible to forecast all future cash flows. Thus, the use of a reversion value

    at the end of the forecast period is necessary to represent cash flows beyond the forecast

period. The reversion value is added to the final forecast year's cash flow and requires that net operating income (NOI) is stabilized and will grow at a constant rate g. Reversion value can be calculated as follows:

    Reversion value at the end of year N = NOI_{N+1} / (r − g)
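A compact Python sketch of the discounted cash flow approach with a reversion value (items 3 and 4); the NOI forecasts, discount rate, and growth rate are hypothetical inputs.

    def property_value(noi_forecasts, discount_rate, growth_rate):
        """DCF value: discount each year's NOI and add a reversion value in the final year."""
        n = len(noi_forecasts)
        # Reversion value capitalizes the stabilized year-(N+1) NOI at (r - g).
        reversion = noi_forecasts[-1] * (1 + growth_rate) / (discount_rate - growth_rate)
        value = 0.0
        for t, cf in enumerate(noi_forecasts, start=1):
            if t == n:
                cf += reversion                      # add reversion to the final year's cash flow
            value += cf / (1 + discount_rate) ** t   # discount back to today
        return value

    # Hypothetical five-year NOI forecast, 10% discount rate, 3% long-run growth.
    print(round(property_value([100_000, 103_000, 106_000, 109_000, 112_000], 0.10, 0.03)))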

    5. Net present value (NPV) is the present value of an asset minus its initial cost and represents

    the increase in firm value by investing in the asset. NPV can be used to make investment

    decisions using the following rules:

If NPV > 0, the investment adds value and should be accepted.
If NPV < 0, the investment destroys value and should be rejected.

