
From the UC Davis Proteomics 2014 Summer Workshop (www.proteomics.ucdavis.edu), by Blythe Durbin-Johnson, Ph.D.


Some Statistical Concepts Relevant to Proteomics Data Analysis

Blythe Durbin-Johnson, Ph.D.
August 7, 2014

Topics

• Hypothesis Testing, p-values, power
• Comparing two groups
• More complicated experimental designs
• Models for count data
• The log transformation
• Graphics

Topics

• General concepts, not formulas
• No equations
• Assuming you will be doing any analysis with a computer

HYPOTHESIS TESTING, P-VALUES, AND POWER

Hypothesis Testing

• Test “null hypothesis” of no effect against “alternative hypothesis”

• Calculate test statistic, reject null if test statistic large relative to what one would expect under null distribution

P-Values

• P-value = probability of seeing a test statistic as large or larger than your test statistic when the null hypothesis is true

• Typically reject null if P < 0.05
– This is purely a historical convention
– Nothing magic happens at the P = 0.05 threshold

A P-Value is NOT

• …the probability that the null hypothesis is true

• …the probability that an experiment will not be replicated

• …a direct measure of the size or importance of an effect

• …a measure of biological/clinical significance

Power

• Power = probability of rejecting null hypothesis for a given effect size

• Depends on:
– Effect size (difference between groups)
– Sample size
– Amount of variability in data
– Hypothesis test being used
– How “significance” is defined

Power and P-Values

• Under the null hypothesis, p-values uniformly distributed between 0 and 1
– Expect 5% to be less than 0.05, on average

• Under alternatives, higher probability of smaller p-values (higher power), but still can theoretically get any p-value between 0 and 1

Power Example

• Simulate two groups of normally distributed data with means 0, 0.5, 1, and 2 standard deviations apart

• Conduct two-sample t-test

• Repeat 5000 times, look at distribution of p-values

• Repeat for various sample sizes
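A minimal sketch of this kind of simulation in Python with NumPy/SciPy (the slides do not prescribe a language or tool; the per-group sample size of 5 is illustrative):

```python
# Power simulation: two normal groups, Welch's t-test, repeated many times
# to see how the p-value distribution shifts as the true difference grows.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_per_group = 5          # illustrative sample size
n_simulations = 5000

for effect_size in (0, 0.5, 1, 2):      # difference in means, in SD units
    pvals = []
    for _ in range(n_simulations):
        x = rng.normal(0, 1, n_per_group)
        y = rng.normal(effect_size, 1, n_per_group)
        pvals.append(stats.ttest_ind(x, y, equal_var=False).pvalue)
    power = np.mean(np.array(pvals) < 0.05)
    print(f"effect = {effect_size} SD: proportion of p < 0.05 = {power:.3f}")
```

Under the null (effect = 0) the proportion below 0.05 stays near 5%; as the effect size or sample size grows, that proportion (the power) increases.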

COMPARING TWO GROUPS

Two-Sample T-Test

• Compares means of two groups

• Does NOT explicitly require normally distributed (Gaussian) data, unless sample sizes are small

• Surprisingly robust under a wide range of conditions, but data skewness is a problem

• Generally use version NOT assuming equal variances
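A minimal sketch of the unequal-variance (Welch) version in Python with SciPy; the intensity values below are made up. Note that SciPy's default assumes equal variances, so it must be turned off explicitly:

```python
# Welch's two-sample t-test (does not assume equal variances)
from scipy import stats

group_a = [12.1, 13.4, 11.8, 14.0, 12.7]   # made-up intensities
group_b = [15.2, 16.1, 14.8, 15.9, 16.4]

t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
print(t_stat, p_value)
```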

Wilcoxon Rank-Sum Test

• Non-parametric test

• General test of location shift

• (Not actually comparing medians unless you make add’l assumptions)

• Less powerful than t-test when t-test assumptions are satisfied
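The same comparison with the rank-sum test, again as a sketch in SciPy on the made-up values from above (SciPy's ranksums uses a normal approximation; mannwhitneyu is an equivalent formulation with an exact option for small samples):

```python
# Wilcoxon rank-sum test on the same made-up groups
from scipy import stats

group_a = [12.1, 13.4, 11.8, 14.0, 12.7]
group_b = [15.2, 16.1, 14.8, 15.9, 16.4]

stat, p_value = stats.ranksums(group_a, group_b)
print(stat, p_value)
```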

Permutation Tests

• Can get permutation-based p-values for any test statistic

• Perform e.g. t-test on original data

• Randomly declare samples to be “control” or “treatment”, do t-test on permuted data

• Repeat many times

• Compare original test statistic to permutation “null” distribution
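A minimal sketch of this recipe in Python, using the difference in group means as the test statistic (the data are made up; in practice the t-statistic or any other statistic can be permuted the same way):

```python
# Permutation test: shuffle the group labels many times and compare the
# observed statistic to the permutation "null" distribution.
import numpy as np

rng = np.random.default_rng(1)
group_a = np.array([12.1, 13.4, 11.8, 14.0, 12.7, 13.0])
group_b = np.array([15.2, 16.1, 14.8, 15.9, 16.4, 15.5])

pooled = np.concatenate([group_a, group_b])
n_a = len(group_a)
observed = group_a.mean() - group_b.mean()

n_perm = 10000
perm_stats = np.empty(n_perm)
for i in range(n_perm):
    shuffled = rng.permutation(pooled)                 # randomly reassign labels
    perm_stats[i] = shuffled[:n_a].mean() - shuffled[n_a:].mean()

# Two-sided p-value: fraction of permutations at least as extreme as observed
p_value = np.mean(np.abs(perm_stats) >= np.abs(observed))
print(p_value)
```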

Permutation Tests

• Permutation tests don’t work for very small sample sizes

• With n = 3 in each of two groups, there are only 20 possible permutations

• Smallest possible p-value is 0.07

• Recommend at least 6 samples per group, more if adjusting for multiple testing

BEYOND TWO GROUPS

Examples of More Complicated Designs

• Compare protein expression in three bacterial strains
– One-way ANOVA

• Compare expression in two tomato genotypes under two conditions
– Two-way ANOVA

• Compare protein expression in matched hair samples from three different body regions
– Mixed effects model

One-Way ANOVA

• Compare group means

• Extension of two-sample t-test to more than 2 groups

• Except: Generally assume equal variances

• P-value from an ANOVA F-test is from “global” test of any differences among groups

• Need to do post-hoc testing (e.g. Tukey) to get pairwise differences
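A minimal sketch in Python with SciPy and statsmodels; the three "strain" groups and their values are made up:

```python
# One-way ANOVA across three groups, followed by Tukey's HSD post-hoc test
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

strain1 = [10.2, 11.1, 10.8, 11.5]
strain2 = [12.0, 12.4, 11.9, 12.8]
strain3 = [10.9, 11.3, 11.0, 11.6]

# "Global" F-test of any difference among the group means
f_stat, p_global = stats.f_oneway(strain1, strain2, strain3)
print(f_stat, p_global)

# Post-hoc pairwise comparisons with Tukey's HSD
values = np.concatenate([strain1, strain2, strain3])
groups = ["strain1"] * 4 + ["strain2"] * 4 + ["strain3"] * 4
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```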

Two-Way ANOVA

• Analyze two experimental factors at the same time
– E.g. genotype and treatment

• More power for main effects than in separate analyses

• Can look at interaction of experimental factors

• Can also analyze three, four or more factors
– But: Define your questions well!
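A minimal sketch of a two-way ANOVA with an interaction term, using a statsmodels formula; the column names (genotype, condition, intensity) and values are made up for illustration:

```python
# Two-way ANOVA with interaction via an ordinary least squares formula
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "genotype":  ["A", "A", "A", "A", "B", "B", "B", "B"] * 2,
    "condition": ["ctrl", "ctrl", "trt", "trt"] * 4,
    "intensity": [10.1, 10.4, 12.2, 12.5, 11.0, 11.3, 15.1, 15.4,
                  10.0, 10.2, 12.4, 12.1, 11.2, 11.1, 15.0, 15.3],
})

model = smf.ols("intensity ~ genotype * condition", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))   # main effects and the interaction
```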

Mixed Effects Models

• Advanced topic!

• Be aware of when these are required, then get help

Mixed Effects Models

• Used for longitudinal or repeated-measures studies
– Same subject observed over time
– Matched samples from same subject
– Subjects from same family
– Anytime there may be correlation among samples

Mixed Effects Models

• Modify ANOVA model to include “random effect” for subject, family, etc.

• This accounts for within-subject or within-family correlation

• If you don’t do this, you will greatly underestimate the variability in the data, and your p-values will be too small
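A minimal sketch of a random-intercept model in statsmodels, matching the matched-samples example above; the column names (subject, region, intensity) and values are made up, and a real study would need more subjects than shown here:

```python
# Mixed effects model with a random intercept per subject
import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "subject":   ["s1", "s1", "s1", "s2", "s2", "s2", "s3", "s3", "s3"],
    "region":    ["scalp", "arm", "leg"] * 3,
    "intensity": [10.2, 11.5, 12.1, 9.8, 11.0, 11.9, 10.5, 11.8, 12.4],
})

# groups=subject adds a random effect that absorbs within-subject correlation
model = smf.mixedlm("intensity ~ region", data=data,
                    groups=data["subject"]).fit()
print(model.summary())
```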

Models for Count Data

• Counts (esp. small counts) often require special models

• “Count” means 0, 1, 2, 3, …

Models for Count Data

• Poisson model often used for count data

• Assumes data come from Poisson distribution

• Poisson model assumes mean = variance

• Too restrictive!

Models for Count Data

• “Quasipoisson” model allows variance to be proportional to mean

• Allows for overdispersion

• Why “quasi”?
– There’s no “quasipoisson” distribution

Models for Count Data

• Negative binomial model also allows overdispersion

• Variance is quadratic function of mean

• Can be derived as mixture of Poisson distributions

• May be more conservative than quasipoisson model (Leitch, 2012)

Models for Count Data

• Complex experimental designs can be analyzed with quasipoisson or negative binomial models
– Generalized linear models

• Model parameter estimates are log fold changes
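A minimal sketch of a negative binomial GLM in statsmodels for spectral-count-style data; the counts, group labels, and the fixed dispersion value are made up for illustration (in practice the dispersion is estimated from the data):

```python
# Negative binomial GLM for count data; the group coefficient is a log fold change
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "group": ["control"] * 6 + ["treatment"] * 6,
    "count": [3, 5, 4, 2, 6, 4, 9, 12, 8, 11, 10, 13],
})

model = smf.glm("count ~ group",
                data=data,
                family=sm.families.NegativeBinomial(alpha=0.5)).fit()
# A quasipoisson-style alternative is a Poisson family fit with a Pearson-based
# scale, e.g. smf.glm(..., family=sm.families.Poisson()).fit(scale="X2")

print(model.params)          # coefficient for group is a log fold change
print(np.exp(model.params))  # back-transformed fold change
```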

The Log Transformation

[Figure: histogram of raw intensity data (x-axis: Intensity, 0 to 2.0e+08; y-axis: Frequency), illustrating the skewness]

• Intensity data are often skewed

• Skewness causes problems for t-tests and ANOVA

The Log Transformation

• The log transformation can fix skewness

• Doesn’t matter what base you use

• Parameter estimates from ANOVA on logged data are log FC’s

[Figure: histogram of ln(Intensity) (x-axis: ln(Intensity), roughly 15 to 25; y-axis: Frequency), showing a much more symmetric distribution after log transformation]
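A small sketch in Python of the transformation itself; the simulated intensities below are made up to mimic right-skewed raw data:

```python
# Log transformation of skewed intensity data; the base only rescales the
# values by a constant factor, so it does not matter which one you use.
import numpy as np

rng = np.random.default_rng(1)
intensity = rng.lognormal(mean=18, sigma=1.5, size=1000)   # right-skewed, like raw intensities

log_intensity  = np.log(intensity)    # natural log
log2_intensity = np.log2(intensity)   # base 2: group differences are log2 fold changes

# On the log2 scale, a difference in group means of 1 corresponds to a 2-fold change.
```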

MULTIPLE TESTING

Multiple Testing Example

• Patient samples treated with different radiation doses and observed over time

• Illumina microarray experiment, 16,801 genes used in analysis

• Four replicates per patient/time/dose

• All samples used in this example were replicates from same patient, untreated

• T-tests gene by gene comparing replicates 1 and 3 to replicates 2 and 4

• 196 genes with P < 0.05

Multiple Testing Example

• Entered list of genes with P < 0.05 into DAVID’s functional annotation tool
– http://david.abcc.ncifcrf.gov

• Overrepresented terms (P < 0.05) included disease mutation, mutagenesis site, and 79 others

• If you were doing radiation research, would you be excited about this?

Multiple Testing Example

• We know there is no difference between the “groups”

• What is going on?

Multiple Testing Example

• Expect P < 0.05 about 5% of the time under null hypothesis

• (We see 196/16801 ≈ 1.2% of genes with P < 0.05, but our data aren’t perfectly normal and our p-values are correlated)

• When conducting multiple tests, need to make adjustments to avoid spurious results

Familywise Error Rate

                         Declared           Declared
                         non-significant    significant    Total
True null hypotheses     U                  V              m0
False null hypotheses    T                  S              m - m0
Total                    m - R              R              m

FWER = P(V ≥ 1)

FWER = Probability of ANY false positives

Multiple Testing

One way of controlling FWER: set α’ = α/m (Bonferroni correction), where m is the number of tests

Problems:
1. Very conservative, even for FWER control.
2. Is the FWER really what we want to control?
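As a worked example of the correction itself: with the m = 16,801 genes from the earlier microarray example, the Bonferroni threshold would be α’ = 0.05/16,801 ≈ 3.0e-06, so only extremely small p-values would be called significant.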

Multiple Testing

False Discovery Rate (FDR)

FDR = E[V/R]

                         Declared           Declared
                         non-significant    significant    Total
True null hypotheses     U                  V              m0
False null hypotheses    T                  S              m - m0
Total                    m - R              R              m

(Benjamini and Hochberg, 1995)

Multiple Testing

False Discovery Rate (FDR)

FDR = E[V/R] (control this)

FWER = P(V ≥ 1) (not this)

(Benjamini and Hochberg, 1995)


Multiple Testing

• False Discovery Rate-controlling procedure: (Benjamini and Hochberg, 1995)

1. Sort p-values from smallest to largest (1 to m), let k be the rank

2. Select a desired FDR α

3. Find the largest rank k’ where P(k) ≤ (k/m)*α

4. Null hypotheses 1 through k’ are rejected
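A minimal sketch of this procedure in Python, alongside the equivalent call in statsmodels for comparison; the p-values below are made up:

```python
# Benjamini-Hochberg procedure: sort p-values, find the largest rank k with
# P(k) <= (k/m)*alpha, and reject the hypotheses with the k smallest p-values.
import numpy as np
from statsmodels.stats.multitest import multipletests

pvals = np.array([0.0001, 0.003, 0.012, 0.04, 0.21, 0.38, 0.55, 0.76])
alpha = 0.05
m = len(pvals)

order = np.argsort(pvals)
sorted_p = pvals[order]
ranks = np.arange(1, m + 1)
below = sorted_p <= ranks / m * alpha
k = ranks[below].max() if below.any() else 0

reject = np.zeros(m, dtype=bool)
reject[order[:k]] = True
print(reject)

# Same decision via statsmodels (method="bonferroni" is also available)
reject_sm, p_adjusted, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
print(reject_sm)
```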

Multiple Testing

• Note that the protein with the smallest p-value is still tested using α/m (like Bonferroni)

• Significance cutoff gets less stringent as the rank increases

• The number of proteins included in the analysis still matters

• Filtering can help (but don’t filter based on treatment/group membership)

Multiple Testing Example (Revisited)

• Recall example of testing differential expression between 2 pairs of replicates in a microarray experiment

• No genes are differentially expressed at FDR-level 0.1

GRAPHICS

• Displaying data from individual proteins
– Barplots
– Boxplots
– Dotplots

• Displaying data from multiple proteins
– Multidimensional scaling plots
– Hierarchical clustering
– Heatmaps

Barplots

[Figure: example barplot, with bar height at the group mean and an error bar extending to the mean + 1 standard error]

Barplots

• Shows mean and standard error of mean (or 95% CI)

• Poor information-to-ink ratio

• Can be misleading for skewed data

• Commonly used, easily interpreted

Boxplots

[Figure: annotated boxplot showing the median, the 25th and 75th percentiles (edges of the box), and whiskers extending to the largest and smallest data points within 1.5 IQR of the edges of the box]

Boxplots

• Non-parametric data display

• Lots of information given

• Less commonly used than barplots, may require explanation

Dotplots

[Figure: example dotplot showing all individual data points for each group, with a line marking each group mean]

Dotplots

• Great way to display small data sets (n < 10)

• Shows mean, all data points

• Unwieldy for larger sample sizes

• Beware of overlapping points

Multidimensional Scaling Plots

• Distance matrix = all pairwise distances between samples

• MDS takes distance matrix, recreates data in two dimensions while preserving distances

• Useful diagnostic plot

• PCA is special case of MDS

• Many ways to define distance
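A minimal sketch of an MDS plot of samples in Python with scikit-learn and matplotlib; the intensity matrix and sample labels are made up:

```python
# MDS plot of samples from a (samples x proteins) log-intensity matrix
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
X = rng.normal(20, 1, size=(6, 200))   # 6 samples x 200 proteins of log-intensities
X[:3] += 0.5                            # shift the first 3 samples slightly
labels = ["ctrl1", "ctrl2", "ctrl3", "trt1", "trt2", "trt3"]

# scikit-learn's MDS (SMACOF) on Euclidean distances; classical scaling/PCoA
# is another common choice
coords = MDS(n_components=2, random_state=1).fit_transform(X)

plt.scatter(coords[:, 0], coords[:, 1])
for (x, y), lab in zip(coords, labels):
    plt.annotate(lab, (x, y))
plt.xlabel("MDS dimension 1")
plt.ylabel("MDS dimension 2")
plt.show()
```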

A “Good” MDS Plot

http://statlab.bio5.org/foswiki/pub/Main/RBioconductorWorkshop2012/Day6_demo.pdf

A “Bad” MDS Plot

Hierarchical Clustering

• Hierarchical clustering starts by treating each sample as its own cluster

• The “closest” clusters are merged successively until only one cluster remains

• Produces tree with series of nested clusterings rather than one set of clusters

• Plots of these trees are called “dendrograms”
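A minimal sketch of clustering samples and drawing a dendrogram with SciPy; the intensity matrix, linkage method, and labels are made up/illustrative:

```python
# Hierarchical clustering of samples with a dendrogram
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

rng = np.random.default_rng(1)
X = rng.normal(20, 1, size=(6, 200))   # 6 samples x 200 proteins
X[:3] += 0.5
labels = ["ctrl1", "ctrl2", "ctrl3", "trt1", "trt2", "trt3"]

# Merge the closest clusters successively (average linkage, Euclidean distance)
Z = linkage(X, method="average", metric="euclidean")
dendrogram(Z, labels=labels)
plt.ylabel("Distance")
plt.show()
```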


Heat Maps

• Data are plotted with color corresponding to numeric value

• Dendrograms of rows (genes) and columns (samples) displayed on sides

• Rows/columns are reordered by their means, which tends to create blocks of color
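A minimal sketch of a clustered heat map in Python with seaborn; the data frame (30 made-up proteins by 6 made-up samples) is for illustration only, and seaborn orders rows/columns by its dendrograms:

```python
# Clustered heat map with row and column dendrograms
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
data = pd.DataFrame(
    rng.normal(20, 1, size=(30, 6)),                        # 30 proteins x 6 samples
    columns=["ctrl1", "ctrl2", "ctrl3", "trt1", "trt2", "trt3"],
)
data.iloc[:10, 3:] += 1.5   # make some proteins higher in the treatment samples

# z_score=0 standardizes each row so color reflects relative expression
sns.clustermap(data, z_score=0, cmap="vlag")
plt.show()
```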

Conclusions

• Take p-values with a grain of salt
– Not significant ≠ no difference

• Be aware of multiple testing issues
– Use FDR adjustment when doing 1000’s of tests

• Good experimental design is just as important in ‘omics as anywhere else