Confidence intervals and hypothesis tests
Patrick Breheny
STA 580: Biostatistics I
January 19
Recap
In our last lecture, we discussed at some length the Public Health Service study of the polio vaccine
We discussed the careful design of the study to ensure that human perception and confounding factors could not bias the results in favor of or against the vaccine
However, there was one factor we could not yet rule out: the role of random chance in our findings
Are our results generalizable?
Recall that in the study, the incidence of polio was cut by 71/28 ≈ 2.5 times
This is what we saw in our sample, but remember – this is not what we really want to know
What we want to know is whether or not we can generalize these results to the rest of the world’s population
The two most common ways of addressing that question are:
Confidence intervals
Hypothesis testing
Both methods address the question of generalization, but do so in different ways and provide different, and complementary, information
Why we would like an interval
Not to sound like a broken record, but
What we know: people in our sample were 2.5 times less likely to contract polio if vaccinated
What we want to know: how much less likely would the rest of the population be to contract polio if they were vaccinated
This second number is almost certainly different from 2.5 – maybe by a little, maybe by a lot
Since it is highly unlikely that our sample gave us exactly the correct answer, it would be nice to instead have an interval that we could be reasonably confident contained the true number (the parameter)
What is a confidence interval?
It turns out that the interval (1.9,3.5) does this job, with a confidence level of 95%
We will discuss the nuts and bolts of constructing confidence intervals often during the rest of the course
First, we need to understand what a confidence interval is
Why (1.9,3.5)? Why not (1.6,3.3)?
And what the heck does “a confidence level of 95%” mean?
What a 95% confidence level means
There’s nothing special about the interval (1.9,3.5), but there is something special about the procedure that was used to create it
The interval (1.9,3.5) was created by a procedure that, when used repeatedly, contains the true population parameter 95% of the time
Does (1.9,3.5) contain the true population parameter? Who knows?
However, in the long run, our method for creating confidence intervals will successfully do its job 95% of the time (it has to, otherwise it wouldn’t be a 95% confidence interval)
Simulated 80% confidence intervals
Imagine replicating the polio study 40 times (red line = truth):
[Figure: 40 replications of the polio study, each with its simulated 80% confidence interval for the drop in polio risk; red line = truth]
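The coverage idea behind figures like this can be checked directly by simulation. Below is a minimal sketch: the true value, spread, and sample size are made-up illustration values, not the polio study's.

```python
import math
import random

def coverage(mu=2.5, sigma=1.0, n=100, z=1.96, reps=2000, seed=1):
    """Fraction of intervals x-bar ± z·sigma/sqrt(n) that contain the true mean mu."""
    rng = random.Random(seed)
    half = z * sigma / math.sqrt(n)          # half-width of each interval
    hits = 0
    for _ in range(reps):
        xbar = sum(rng.gauss(mu, sigma) for _ in range(n)) / n
        hits += (xbar - half <= mu <= xbar + half)
    return hits / reps

print(coverage())          # close to 0.95 for z = 1.96
print(coverage(z=1.28))    # close to 0.80 for z = 1.28
```

Running this confirms the definition: the procedure, not any one interval, has the stated success rate in the long run.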
Simulated 95% confidence intervals
Same studies, same data, different confidence level:
[Figure: the same 40 replications, now with 95% confidence intervals for the drop in polio risk; red line = truth]
What’s special about 95%?
The vast majority of confidence intervals in the world are constructed at a confidence level of 95%
What’s so special about 95%?
Nothing
However, it does make things easier to interpret when everyone sticks to the same confidence level, and the convention that has stuck in the scientific literature is 95%
So, we will largely stick to 95% intervals in this class as well
The width of a confidence interval
The width of a confidence interval reflects the degree of our uncertainty about the truth
Three basic factors determine the extent of this uncertainty, and the width of any confidence interval:
The confidence level
The amount of information we collect
The precision with which the outcome is measured
Confidence levels
As we saw, the width of a confidence interval is affected by whether it was, say, an 80% confidence interval or a 95% confidence interval
This percentage is called the confidence level
Confidence levels closer to 100% always produce wider confidence intervals than confidence levels closer to 0%
If I need to contain the right answer 95% of the time, I need to give myself a lot of room for error
On the other hand, if I only need my interval to contain the truth 10% of the time, I can afford to make it quite small
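This effect can be made concrete for the common normal-approximation interval of the form estimate ± z · SE: the multiplier z grows with the confidence level. A quick sketch (not a formula from the slides, just the standard normal quantile):

```python
from statistics import NormalDist

def z_multiplier(level):
    """Half-width multiplier z for a two-sided interval at a given confidence level."""
    return NormalDist().inv_cdf(0.5 + level / 2)

for level in (0.10, 0.80, 0.95, 0.99):
    print(f"{level:.0%}: estimate ± {z_multiplier(level):.2f} · SE")
```

A 99% interval is thus roughly twice as wide as an 80% interval built from the same data.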
Amount of information
It is hopefully obvious that the more information you collect, the less uncertainty you should have about the truth
Doing this experiment on thousands of children should allow you to pin down the answer to a tighter interval than if only hundreds of children were involved
It may be surprising that the interval is as wide as it is for the polio study: after all, hundreds of thousands of children were involved
However, keep in mind that a very small percentage of those children actually contracted polio – the 99.9% of children in both groups who never got polio tell us very little about whether the vaccine worked or not
Only about 200 children in the study actually contracted polio, and these are the children who tell us how effective the vaccine is (note that 200 is a lot smaller than 400,000!)
Precision of measurement
The final factor that determines the width of a confidence interval is the precision with which things are measured
I mentioned that the diagnosis of polio is not black and white – misdiagnoses are possible
Every misdiagnosis increases our uncertainty about the effect of the vaccine
As another example, consider a study of whether an intervention reduces blood pressure
Blood pressure is quite variable, so researchers in such studies will often measure subjects’ blood pressure several times at different points in the day, then take the average
The average will be more precise than any individual measurement, thereby reducing their uncertainty about the effect of the treatment
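The gain from averaging can be quantified: the standard deviation of the mean of k independent readings is σ/√k. A sketch with a made-up σ for a single blood-pressure reading (the 10 mmHg figure is illustrative only, not from the slides):

```python
import math

SIGMA = 10.0   # assumed SD of one blood-pressure reading (mmHg); illustrative only

def sd_of_average(k, sigma=SIGMA):
    """SD of the average of k independent readings: sigma / sqrt(k)."""
    return sigma / math.sqrt(k)

for k in (1, 2, 4):
    print(k, sd_of_average(k))
```

Averaging four readings halves the measurement noise, which in turn narrows the resulting confidence interval.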
The subtle task of inference
Inference is a complicated business, as it requires us to think in a manner opposite to the one we are used to:
Usually, we think about what will happen, taking for granted that the laws of the universe work in a certain way
When we infer, we see what happens, then try to conclude something about the way that the laws of the universe must work
This is difficult to do: as Sherlock Holmes puts it in A Study in Scarlet, “In solving a problem of this sort, the grand thing is to be able to reason backward.”
Confidence interval subtleties
This subtlety leads to some confusion with regard to confidence intervals – for example, is it okay to say, “There is a 95% probability that the true reduction in polio risk is between 1.9 and 3.5”?
Well, not exactly – the true reduction is some fixed value, and once we have calculated the interval (1.9,3.5), it’s fixed too
Thus, there’s really nothing random anymore – the interval either contains the true value or it doesn’t
Is this an important distinction, or are we splitting hairs here? Depends on who you ask
What do confidence intervals tell us?
So, in the polio study, what does the confidence interval of (1.9,3.5) tell us?
It gives us a range of likely values by which the polio vaccine cuts the risk of contracting polio: it could cut the risk by as much as a factor of 3.5, or by as little as a factor of 1.9
But – and this is an important but – it is unlikely that the vaccine increases the risk, or has no effect, and that what we saw was due to chance
Our conclusions would be very different if our confidence interval looked like (0.5,7), in which case our study would be inconclusive
Not all values in an interval are equally likely
It is important to note, however, that not all values in a confidence interval are equally likely
The ones in the middle of the interval are more likely than the values toward the edges
One way to visualize this is with a multilevel confidence bar:
[Figure: multilevel confidence bar for the drop in polio risk, showing nested 0.60, 0.80, 0.90, 0.95, and 0.99 confidence intervals over the range 1.0–4.0]
Specific values of interest
Although confidence intervals are excellent and invaluable ways to express a range of likely values for the parameter an investigator is studying, we are often interested in a particular value of a parameter
In the polio study, it is of particular interest to know whether or not the vaccine makes any difference at all
In other words, is the ratio between the risk of contracting polio for a person taking the vaccine and the risk of contracting polio for a person who got the placebo equal to 1?
Because we are particularly interested in that one value, we often want to know how likely/plausible it is
Hypotheses
The specific value corresponds to a certain hypothesis about the world
For example, in our polio example, a ratio of 1 corresponded to the hypothesis that the vaccine provides no benefit or harm compared to placebo
This specific value of interest is called the null hypothesis (“null” referring to the notion that nothing is different between the two groups – the observed differences are entirely due to random chance)
The goal of hypothesis testing is to weigh the evidence and deliver a number that quantifies whether or not the null hypothesis is plausible in light of the data
p-values
All hypothesis tests are based on calculating the probability of obtaining results as extreme or more extreme than the one observed in the sample, given that the null hypothesis is true
This probability is denoted p and called the p-value of the test
The smaller the p-value is, the stronger the evidence against the null:
A p-value of 0.5 says that if the null hypothesis was true, then we would obtain a sample that looks like the observed sample 50% of the time; the null hypothesis looks quite reasonable
A p-value of 0.001 says that if the null hypothesis was true, then only 1 out of every 1,000 samples would resemble the observed sample; the null hypothesis looks doubtful
The scientific method
Hypothesis tests are a formal way of carrying out the scientific method, which is usually summarized as:
Form a hypothesis
Predict something observable about the world on the basis of your hypothesis
Test that prediction by performing an experiment and gathering data
The idea behind hypothesis testing and p-values is that a theory should be rejected if the data are too far away from what the theory predicts
The scientific method: Proof and disproof
There is a subtle but very fundamental truth to the scientific method, which is that one can never really prove a hypothesis with it – only disprove hypotheses
In the words of Albert Einstein, “No amount of experimentation can ever prove me right; a single experiment can prove me wrong”
Hence all the fuss with the null hypothesis
The scientific method: Summing up
The healthy application of the scientific method rests on the ability to rebut the arguments of skeptics, who propose other explanations for the results you observed in your experiment
One important skeptical argument is that your results may simply be due to chance
The p-value – which directly measures the plausibility of the skeptic’s claim – is the evidence that will settle the argument
Polio study: what does hypothesis testing tell us?
In the polio study, for the null hypothesis that contracting polio is just as probable in the vaccine group as it is in the placebo group, p = .0000000008, or about 1 in a billion
So, if the vaccine really had no effect, the results of the polio vaccine study would be a one-in-a-billion finding
Is it possible that the vaccine has no effect? Yes, but very,very unlikely
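A p-value of this order can be approximated with a pooled two-proportion z-test. The slide's 71 and 28 are rates per 100,000; the counts below (arms of 200,000 with 142 and 56 cases) are assumptions made for illustration and are not the study's exact numbers:

```python
from math import sqrt
from statistics import NormalDist

def two_prop_p(x1, n1, x2, n2):
    """Two-sided p-value for H0: the two group proportions are equal (pooled z-test)."""
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = abs(x1 / n1 - x2 / n2) / se
    return 2 * (1 - NormalDist().cdf(z))

# Assumed counts: 142 placebo cases vs 56 vaccine cases, 200,000 children per arm
p = two_prop_p(142, 200_000, 56, 200_000)
print(p)   # on the order of 1e-9
```

Even with hundreds of thousands of children, the test is effectively driven by the roughly 200 polio cases, which is why the p-value is not even smaller.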
p-values do not assess the design of the study
As another example from class last week, let’s calculate a p-value for the clofibrate study, where 15% of adherers died, compared with 25% of nonadherers
The p-value turns out to be 0.0001
So the drop in mortality is unlikely to be due to chance, but it isn’t due to clofibrate either: recall, the drop was due to confounding
It is important to consider the entire study and how well it was designed and run, not just look at p-values (FYI: the p-value comparing clofibrate to placebo was 0.51)
Confidence intervals tell us about p-values
It may not be obvious, but there is a close connection between confidence intervals and hypothesis tests
For example, suppose we construct a 95% confidence interval
Whether or not this interval contains the null hypothesis value tells us something about the p-value we would get if we were to perform a hypothesis test
If it does, then p > .05
If it doesn’t, then p < .05
p-values tell us about confidence intervals
We can reason the other way around, also:
If we get a p-value above .05, then the 95% confidence interval will contain the null hypothesis value (and vice versa)
In general, a 100(1 − α)% confidence interval tells us whether a p-value is above α or not
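This duality can be sketched for the common case of a normal-approximation interval, estimate ± z · SE; the numbers below are generic illustration values, not the polio study's:

```python
from statistics import NormalDist

def ci_and_p(est, se, null=0.0, level=0.95):
    """Return the (lo, hi) interval and the two-sided p-value against `null`."""
    nd = NormalDist()
    z = nd.inv_cdf(0.5 + level / 2)
    p = 2 * (1 - nd.cdf(abs(est - null) / se))
    return (est - z * se, est + z * se), p

# The interval excludes the null value exactly when p < 0.05
(lo, hi), p = ci_and_p(est=1.2, se=0.5)
print((lo, hi), p)
```

With est = 1.2 and se = 0.5 the interval excludes 0 and p < .05; shrink the estimate (say, est = 0.6) and the interval covers 0 while p rises above .05, exactly the correspondence described above.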
Conclusion
In general, then, confidence intervals and hypothesis tests lead to similar conclusions
For example, in our polio example, both methods indicated that the study provided strong evidence that the vaccine reduced the probability of contracting polio well beyond what you would expect by chance alone
This is a good thing – it would be confusing otherwise
However, the information provided by each technique is different: the confidence interval is an attempt to provide likely values for a parameter of interest, while the hypothesis test is an attempt to measure the evidence against the hypothesis that the parameter is equal to a certain, specific number
p-value cutoffs
To be sure, there are shades of gray when it comes to interpreting p-values; how low does a p-value have to be before one would say that we’ve collected sufficient evidence to refute the null hypothesis?
Suppose we used a cutoff of .05
If p < .05 and the null hypothesis is indeed false, then we arrive at the correct conclusion
If p > .05 and the null hypothesis is indeed true, then we once again fail to make a mistake
Types of error
However, there are two types of errors we can commit; statisticians have given these the incredibly unimaginative names type I error and type II error
A type I error consists of rejecting the null hypothesis in a situation where it was true
A type II error consists of failing to reject the null hypothesis in a situation where it was false
Possible outcomes of comparing p to a cutoff
Thus, there are four possible outcomes of a hypothesis test:
                Null hypothesis
                True          False
p > α (accept)  Correct       Type II error
p < α (reject)  Type I error  Correct
Consequences of type I and II errors
Type I and type II errors are different sorts of mistakes and have different consequences
A type I error introduces a false conclusion into the scientific community and can lead to a tremendous waste of resources before further research invalidates the original finding
Type II errors can be costly as well, but generally go unnoticed
A type II error – failing to recognize a scientific breakthrough – represents a missed opportunity for scientific progress
“Significance”
The proper balance of these two sorts of errors certainly depends on the situation and the type of research being conducted
That being said, the scientific community generally starts to be convinced at around the p = .01 to p = .10 level
The term “statistically significant” is often used to describe p-values below .05; the modifiers “borderline significant” (p < .1) and “highly significant” (p < .01) are also used
However, don’t let these clearly arbitrary cutoffs distract you from the main idea that p-values measure how far off the data are from what the theory predicts – a p-value of .04 and a p-value of .000001 are not at all the same thing, even though both are “significant”
p-value misconceptions
Certainly, p-values are widely used, and when used and interpreted correctly, very informative
However, p-values are also widely misunderstood and misused – by everyone from students in introductory stat courses to leading scientific researchers
For this reason, we will now take some time to cover several of the most common p-value misconceptions
Reporting p-values
One common mistake is taking the 5% cutoff too seriously
Indeed, some researchers fail to report their p-values, and only tell you whether it was “significant” or not
This is like reporting the temperature as “cold” or “warm”
Much better to tell someone the temperature and let them decide for themselves whether they think it’s cold enough to wear a coat
Example: HIV Vaccine Trial
For example, a recent study involving a vaccine that may protect against HIV infection found that, if they analyzed the data one way, they obtained a p-value of .08
If they analyzed the data a different way, they obtained a p-value of .04
Much debate and controversy ensued, partially because the two ways of analyzing the data produce p-values on either side of .05
Much of this debate and controversy is fairly pointless; both p-values tell you essentially the same thing – that the vaccine holds promise, but that the results are not yet conclusive
Interpretation
Another big mistake is misinterpreting the p-value
A p-value is the probability of getting data that looks a certain way, given that the null hypothesis is true
Many people misinterpret a p-value to mean the probability that the null hypothesis is true, given the data
These are completely different things
Conditional probability
The probability of A given B is not the same as the probability of B given A
For example, in the polio study, the probability that a child got the vaccine, given that they contracted polio, was 28%
The probability that a child contracted polio, given that they got the vaccine, was 0.03%
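Both conditional probabilities come straight from the 2×2 counts; we are just dividing by different totals. The counts below are assumptions (arms of 200,000, consistent with the slide's rates), not the study's exact numbers:

```python
# Assumed counts consistent with the slide's percentages (illustrative only)
n_per_arm = 200_000
vaccine_cases, placebo_cases = 56, 142

# P(vaccinated | contracted polio): condition on the polio cases
p_vacc_given_polio = vaccine_cases / (vaccine_cases + placebo_cases)

# P(contracted polio | vaccinated): condition on the vaccinated children
p_polio_given_vacc = vaccine_cases / n_per_arm

print(f"{p_vacc_given_polio:.0%}")    # about 28%
print(f"{p_polio_given_vacc:.3%}")    # about 0.028%
```

Same numerator, wildly different denominators: that is the entire gap between the two probabilities, and the same gap separates p(data | null) from p(null | data).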
Absence of evidence is not evidence of absence
Another mistake (which is, in some sense, a combination of the first two mistakes) is to conclude from a high p-value that the null hypothesis is probably true
We have said that if our p-value is low, then this is evidence that the null hypothesis is incorrect
If our p-value is high, what can we conclude?
Absolutely nothing
Failing to disprove the null hypothesis is not the same as proving the null hypothesis
Hypothetical example
As a hypothetical example, suppose you and Michael Jordan shoot some free throws
You make 2 and miss 3, while he makes all five
If two people equally good at shooting free throws were to have this competition, the probability of seeing a difference this big is 17% (i.e., p = .17)
Does this experiment constitute proof that you and Michael Jordan are equally good at shooting free throws?
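The 17% can be reproduced with a two-sided Fisher exact test on the 2×2 table of makes and misses. This is a sketch using the hypergeometric distribution; exact-test conventions vary, and this one uses the common "sum all tables at most as likely as the observed one" rule:

```python
from math import comb

def hypergeom_pmf(k, N, K, n):
    """P(k marked items in a sample of n, drawn from N items of which K are marked)."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# 10 shots total, 7 makes; Jordan takes 5 of the shots and makes k of them
N, K, n = 10, 7, 5
support = range(max(0, n - (N - K)), min(n, K) + 1)
observed = hypergeom_pmf(5, N, K, n)              # Jordan makes all 5
p = sum(hypergeom_pmf(k, N, K, n) for k in support
        if hypergeom_pmf(k, N, K, n) <= observed + 1e-12)
print(round(p, 2))   # 0.17
```

With only 10 shots, even a 2-of-5 versus 5-of-5 split is unsurprising under the null: far too little data to conclude anything either way.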
Real example
You may be thinking, “that’s clearly ridiculous; no one would reach such a conclusion in real life”
Unfortunately, you would be mistaken: this happens all the time
As an example, the Women’s Health Initiative found that low-fat diets reduce the risk of breast cancer with a p-value of .07
The New York Times headline: “Study finds low-fat diets won’t stop cancer”
The lead editorial claimed that the trial represented “strong evidence that the war against fats was mostly in vain”, and sounded “the death knell for the belief that reducing the percentage of total fat in the diet is important for health”
Women’s Health Initiative: Confidence interval
What should people do when confronted with a high p-value?
Turn to the confidence interval
In this case, the confidence interval for the drop in risk was (0.83, 1.01)
The study suggests that a woman could likely reduce her risk of breast cancer by about 10% by switching to a low-fat diet
Maybe a low-fat diet won’t affect your risk of breast cancer
On the other hand, it could reduce it to 83% of what it would otherwise be
A closer look at “significance”
A final mistake is reading too much into the term “statistically significant”:
Saying that results are statistically significant informs the reader that the findings are unlikely to be due to chance alone
However, it says nothing about the clinical or scientific significance of the study
A study can be important without being statistically significant, and can be statistically significant but of no medical/clinical relevance
Nexium
As an example of statistical vs. clinical significance, consider the story of Nexium, a heartburn medication developed by AstraZeneca
AstraZeneca originally developed the phenomenally successful drug Prilosec
However, with the patent on the drug set to expire, the company modified Prilosec slightly and showed that for a condition called erosive esophagitis, the new drug’s healing rate was 90%, compared to Prilosec’s 87%
Because the sample size was so large (over 5,000), this finding was statistically significant, and AstraZeneca called the new drug Nexium
Nexium (cont’d)
The FDA approved Nexium (which, some would argue, was basically the same thing as the now-generic Prilosec, only for 20 times the price)
AstraZeneca went on to spend half a billion dollars in marketing to convince patients and doctors that Nexium was a state-of-the-art improvement over Prilosec
It worked – Nexium became one of the top-selling drugs in the world and AstraZeneca made billions of dollars
The ad slogan for Nexium: “Better is better.”
Benefits and drawbacks of hypothesis tests
The attractive feature of hypothesis tests is that p always has the same interpretation
No matter how complicated or mathematically intricate a hypothesis test is, you can understand its result if you understand p-values
Unfortunately, the popularity of p-values has led to overuse and abuse: p-values are used in cases where they are meaningless or unnecessary, and p < .05 cutoffs are used when they make no sense
This overuse has also led people to confuse low p-values with clinical and practical significance
Confidence intervals, which make a statement about both uncertainty and effect size, are very important, less vulnerable to abuse, and should be included alongside p-values whenever possible