
Introduction to Meta-analysis

Jim Derzon, PhD

Battelle Memorial Institute

Presented at the AEA/CDC Summer Institute, 2008

Jim Derzon, PhD, Senior Evaluation Specialist, Centers for Public Health Research and Evaluation, Battelle, 2101 Wilson Blvd. #800, Arlington, VA 22201-3008. v. 703.248.1640, f. 703.527.5640, c. [email protected]


Introduction

Meta-analysis, or quantitative synthesis, is the technique of statistically combining the results of different studies, done on different samples, that have each examined and presented findings on a similar relationship. While the actual methods of synthesis are complex, they are based on the fundamental assumptions underlying all quantitative, aggregate research. Because these basic assumptions are so integral to the justification for meta-analysis, this essay begins with an explication of those assumptions and then moves into the justification of why meta-analysis is the single best method currently available for synthesizing empirical knowledge.

While people tend to look and act very differently, science and experience tell us that people with similar backgrounds or experiences tend to behave more alike than those who do not share that history. We also know from experience that shared experience does not always lead to identical outcomes; that is, all aggregate research is probabilistic.

Thus, when studying human behavior and the effects of interventions on that behavior, we look for commonalities and tendencies, not absolute or deterministic relationships. We expect people to differ; what we look for, and test for, are commonalities. We estimate these commonalities using statistics that test the distribution of individual findings against mathematical models of those distributions.

Now, just as people and their experiences differ, the findings obtained from different groups of individuals may differ due to differences inherent in the sample (e.g., the age, race, gender, or socio-economic status of the sample), differences in the way data were collected (e.g., whether respondents were surveyed or interviewed), or differences in how the study was conducted (e.g., quality of randomization, attrition, or properties of the measure or summary index). These influences are referred to as potential "confounds" because they may inflate or diminish the strength of the relationship we are trying to estimate.


The consequence is that the result of any individual study (its observed estimate) may not well represent the true population value, μ.

Until fairly recently, researchers had no tools to systematically separate the potential influence of these confounds or to assess the relative stability of findings across different samples. Without tools for disentangling these potential confounds, social scientists were often in a quandary when asked to explain differences in findings across different samples and across studies that used different methodologies.

Method

In 1976 Gene Glass introduced a method for combining estimates across samples and studies, which he called "meta-analysis." In the intervening years the method has been considerably refined, and it currently provides the single best method for systematically reviewing and summarizing the evidence across multiple studies. Because meta-analysis summarizes evidence across multiple studies and samples, it produces a better (more accurate, more statistically robust) estimate of the strength and stability of a relationship or intervention impact than could be obtained from any single study.
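The precision gained by pooling can be illustrated with a short sketch. This is not from the original presentation: it shows a minimal fixed-effect, inverse-variance combination, one common weighting model among several, and the study effect sizes and variances are hypothetical.

```python
# Fixed-effect meta-analysis sketch: pool study estimates by weighting
# each with the inverse of its sampling variance. All values hypothetical.

def pool_fixed_effect(effects, variances):
    """Return the inverse-variance-weighted mean and its variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_var = 1.0 / sum(weights)
    return pooled, pooled_var

effects = [0.30, 0.45, 0.18, 0.52]    # per-study effect sizes (hypothetical)
variances = [0.04, 0.09, 0.05, 0.12]  # per-study sampling variances

pooled, pooled_var = pool_fixed_effect(effects, variances)
print(round(pooled, 3), round(pooled_var, 3))  # 0.317 0.016
```

Because the pooled variance is the reciprocal of the summed weights, it is necessarily smaller than the variance of any contributing study, which is the formal sense in which the combined estimate is more precise than any single study's.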

Meta-analysis is characterized by a systematic, detailed, and organized approach to review that makes explicit the domain of research covered, the nature and quality of the information extracted from that research, and the analytic techniques and results upon which interpretation is based. At its core is the concept of "effect size," any standardized estimate of study findings. Effect sizes can take a variety of forms (e.g., percentages, logged odds ratios, correlations, d-scores) depending on the literature being summarized. In method and procedure, meta-analysis is most akin to survey research, except that research studies are "interviewed" instead of persons. A population of research studies is defined, a sample is drawn (or, more often, an attempt is made to retrieve the entire population), a questionnaire of data elements is completed for each study, and the resulting database is analyzed using statistical methods appropriate for groupwise data.
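The "effect size" concept can be made concrete with a small example. This sketch, added for illustration, computes Cohen's d, one common standardized estimate, from hypothetical group summaries.

```python
import math

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# Hypothetical study: the treatment group scored 5 points higher on average.
d = cohens_d(mean_t=75.0, sd_t=10.0, n_t=50, mean_c=70.0, sd_c=10.0, n_c=50)
print(round(d, 2))  # 0.5
```

Because d expresses the group difference in standard-deviation units, studies that measured the outcome on different raw scales can be placed on this common metric and combined.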

Thus, meta-analysis is at once the logical extension of the theories and practice undergirding traditional quantitative methodologies and an improvement in that methodology for estimating quantitative results. No one study, no matter how good or how thorough, provides a fully adequate knowledge base for understanding and action. Each study inevitably has its idiosyncrasies of operationalization and constructs, method and procedure, samples and context, and sampling and non-sampling error that compromise the replicability and generalizability of its findings. Just as probabilistic estimates based on a group of individuals will be more reliable than estimates based on single individuals, the most robust, reliable knowledge comes only from some form of synthesis or integration of the results from multiple studies.

When the issues involve quantitative research findings, meta-analysis has distinct advantages as a synthesis method. It provides a systematic, explicit, detailed, and statistically cogent approach to the task of identifying convergent findings, accounting for differences among studies, revealing gaps in research, and integrating the results, to the extent possible, into a coherent depiction of the current state of evidence.

Response to critics

This is not to suggest that meta-analysis has no detractors. There are those who decry meta-analysis for lumping together both good and bad studies. This is true, but using meta-analytic techniques we can both test for, and adjust for, the systematic influence of questionable research methods on the strength of a relationship. The meta-analyst does not have to resort to statistical or methodological theory to make claims about the merit of any particular study. At the meta-analytic level these issues become empirical questions, ones that are readily managed using well-established statistical techniques.

A second complaint is that meta-analysis lumps together "apples and oranges," for example, combining studies that use different measures of the outcome or the predictor, differences that are often meaningful in a clinical context. However, in meta-analysis we are not interested in the outcome or the predictor per se; we are interested in the magnitude and the stability of the relationship between the two. If estimates do not differ more than would be expected due to sampling error, the meta-analyst would respond that, while the measures or outcomes contributing to a finding may differ, the overall magnitude of the relationship between the two is similar. That is not a claim that the outcome and/or the predictor are similar, only that the strength of the relationship between the measured items or practices is similar.
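Whether estimates "differ more than would be expected due to sampling error" is conventionally checked with a homogeneity statistic such as Cochran's Q. The following sketch, not part of the original presentation, uses hypothetical effect sizes and variances.

```python
# Cochran's Q: weighted squared deviations of study effects from the
# pooled (inverse-variance-weighted) mean. Under homogeneity, Q follows
# a chi-square distribution with k - 1 degrees of freedom.
# All study values below are hypothetical.

effects = [0.30, 0.45, 0.18, 0.52]
variances = [0.04, 0.09, 0.05, 0.12]

weights = [1.0 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
df = len(effects) - 1

print(round(q, 2), df)  # 0.92 3
```

If q exceeds the chi-square critical value for df degrees of freedom (about 7.81 at the .05 level for df = 3), the studies are more heterogeneous than sampling error alone would explain, and the meta-analyst would look for moderators rather than report a single pooled value.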

A third complaint about meta-analysis is that it is simplistic: it assesses only the simple bivariate relationship between two constructs or the main effects of social interventions. This is true, although methods are available for performing more complex multivariate, "model-driven" syntheses (Becker, 1994, 1995). Yet we, as program planners and administrators, are often interested in these simple bivariate relationships: for identifying mediators, for selecting cases for intervention, for identifying best practices or interventions, for allocating resources, for delivering services to those most likely to benefit from those services, and for estimating the impact of social interventions. Thus, while this is a valid observation, it no more obviates the need for such research than it eliminates the need for such findings in the primary literature.

Yet a fourth complaint about meta-analysis is that it is primarily descriptive in nature. The goal of meta-analysis is often not a highly developed conceptual theory. Rather, it focuses on what descriptive theory can be derived or supported from existing empirical research on the relationships or findings examined. In a developmental context, knowing which psychological, behavioral, or interpersonal systems are best targeted for intervention, the times in the developmental sequence when interventions might be most productive, the characteristics of the individuals to whom intervention should be directed, the relative size of the groups at risk, and the potential change that might reasonably be expected from social interventions each provide sufficient warrant for using meta-analysis to summarize a literature. Deeper theory is nice, but it should be developed in the context of reliable evidence. Meta-analysis can provide such evidence.

As a final note, the limitations and complexities of available research should not inhibit attempts to integrate what is known and to configure that knowledge in ways that may aid social action. Meta-analysis offers a systematic accounting of existing knowledge and an organized framework within which to separate robust, convergent information from the vagaries of sampling error, methodological and substantive differences among studies, and the flukes and outliers that inevitably occur in a complex, diverse research domain. Just as the variability and complexities among people provide stimulus to traditional research, the intricacies of cross-study synthesis dictate the kind of systematic, careful, and unbiased handling of evidence that meta-analysis provides.

Resources:

Cook, T. D., Cooper, H., Cordray, D. S., Hartmann, H., Hedges, L. V., Light, R. J., Louis, T. A., & Mosteller, F. (Eds.). (1992). Meta-analysis for explanation: A casebook. New York: Russell Sage Foundation.

Cooper, H., & Hedges, L. V. (Eds.). (1994). The handbook of research synthesis. New York: Russell Sage Foundation.

Durlak, J. A., & Lipsey, M. W. (1991). A practitioner's guide to meta-analysis. American Journal of Community Psychology, 19(3), 291-332.

Glass, G. V., McGaw, B., & Smith, M. L. (1981). Meta-analysis in social research. Beverly Hills, CA: Sage.

Hedges, L. V., & Olkin, I. (1985). Statistical methods for meta-analysis. New York: Academic Press.

Hunter, J. E., & Schmidt, F. L. (1990). Methods of meta-analysis: Correcting error and bias in research findings. Newbury Park, CA: Sage.

Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. Applied Social Research Methods Series, Vol. 49. Thousand Oaks, CA: Sage.

Rosenthal, R. (1991). Meta-analytic procedures for social research (Rev. ed.). Newbury Park, CA: Sage.


Randomness of observation: the results of any particular experiment are random, but the distribution of individual results can be described mathematically. That is the province of statistics.
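That note can be illustrated with a short simulation, added here as a sketch using only Python's standard library (the population values are arbitrary): each simulated experiment yields a different result, yet the distribution of results is described well by a simple mathematical model.

```python
import random
import statistics

random.seed(1)  # reproducible draws

# Each "experiment" draws 30 observations from the same population and
# records the sample mean. Individual means vary at random, but their
# distribution clusters predictably around the population mean.
population_mean, population_sd, n = 50.0, 10.0, 30
sample_means = [
    statistics.mean(random.gauss(population_mean, population_sd) for _ in range(n))
    for _ in range(2000)
]

print(round(statistics.mean(sample_means), 1))   # close to 50.0
print(round(statistics.stdev(sample_means), 2))  # close to 10 / sqrt(30), about 1.83
```

The spread of the sample means tracks the theoretical standard error, σ/√n, which is the sense in which random individual results nonetheless have a predictable distribution.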


Lind: "it is no easy matter to root out prejudices … it became requisite to exhibit a full and impartial view of what had hitherto been published … by which the sources of these mistakes may be detected. Indeed, before the subject could be set in a clear and proper light, it was necessary to remove a great deal of rubbish"


Basically, this shows the gain associated with having or not having the characteristic in a mass media trial. The vast majority of estimates are one-group pre-post data. Zero equals no change in substance use behavior. Negative means substance use increased between time periods, positive means substance use decreased between time periods.


A principal downside to a linear summary (i.e., thumbs-up/thumbs-down) approach with the multidimensional data being summarized in these reviews is that not all institutions might weigh the factors similarly. Currently there is a multiple-gated approach to the recommendation: if a practice passes each gate, it is recommended. An alternative approach would be to provide that summary statement, but also to standardize each arm of the assessment (effectiveness, applicability, unintended effects, cost, and barriers to implementation). This is in some regards akin to the Consumer Reports approach, and is well explicated by Gary Henry in Graphing Data (1995, pp. 74-81; but see also profile graphs, p. 73). That is, if each measure is standardized 1-5 and a five-pointed star is used to represent each practice, a visual map of each aspect of the practice could be presented, and comparisons between each aspect of the practices easily made across practices.
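The standardization step suggested here can be sketched simply. This hypothetical example, added for illustration, rescales raw scores for one assessment arm onto a common 1-5 range; min-max rescaling is one reasonable choice among several.

```python
def rescale_1_to_5(values):
    """Linearly map raw scores onto a 1-5 scale (min-max rescaling)."""
    lo, hi = min(values), max(values)
    if hi == lo:  # all practices scored the same; put them mid-scale
        return [3.0] * len(values)
    return [1.0 + 4.0 * (v - lo) / (hi - lo) for v in values]

# Hypothetical raw "effectiveness" scores for four practices.
raw = [0.10, 0.25, 0.40, 0.55]
print([round(s, 1) for s in rescale_1_to_5(raw)])  # [1.0, 2.3, 3.7, 5.0]
```

Repeating this for each arm (applicability, unintended effects, cost, barriers) yields five comparable 1-5 scores per practice, which is exactly the data needed to draw the five-pointed star described above.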
