Meta-Analysis: A Systematic Method for Synthesizing Counseling Research

Susan C. Whiston and Peiwei Li

The authors provide a template for counseling researchers who are interested in quantitatively aggregating research findings. Meta-analytic studies can provide relevant information to the counseling field by systematically synthesizing studies performed by researchers from diverse fields. Methodologically sound meta-analyses require careful planning, diligent literature searches, detailed coding of study information, and knowledge of meta-analytic approaches to statistical analyses. The authors provide steps to guide counseling researchers in conducting meta-analytic reviews.

In response to the theme of this special section on getting published in counseling journals, the purpose of this article is to provide an overview of steps in conducting a meta-analysis. Meta-analysis has been increasingly recognized as a methodologically sound approach to synthesizing research (Cooper, 2010; Cooper, Robinson, & Dorr, 2006). The purpose of meta-analysis is to quantitatively aggregate the results of numerous empirical studies on a topic of interest (Erford, Savin-Murphy, & Butler, 2010). It can be a particularly attractive endeavor for researchers and practitioners who have been working within a specific area for some time or for doctoral students who have completed very comprehensive literature reviews. The term meta-analysis was first coined by Gene V Glass in 1976. Meta is a Greek word that means "behind" or "in back of," but Glass (2000) emphasized that meta-analysis "is not the grand theory of research; it is simply a way of speaking of the statistical analysis of statistical analyses." He defined meta-analysis as "the statistical analysis of a large collection of analysis results from individual studies for the purpose of integrating the findings" (Glass, 1976, p. 3).

Counseling researchers may want to consider meta-analytic reviews because their efforts could result in major contributions to the field. As a type of literature review, a meta-analysis aggregates research studies that usually are published in diverse journals (e.g., education, social work, psychology). Therefore, counseling practitioners often appreciate meta-analytic studies in which researchers systematically analyze research findings from diverse journals because practitioners often do not have time to read a wide variety of journals. Furthermore, meta-analytic researchers are increasingly including unpublished studies such as dissertations and theses in their quantitative reviews, which can further expand the comprehensive nature of a meta-analytic review and increase the degree to which the results are useful. Meta-analyses, moreover, can supplement traditional qualitative literature reviews because the process produces effect sizes, which are quantitative indices of the practical significance of an effect. For example, meta-analytic procedures can determine the magnitude of the results of a counseling treatment over no treatment or the level of association between attending counseling sessions and well-being. With the current focus on empirical support for treatment approaches (e.g., evidence-based practices), meta-analysis can provide useful information regarding the degree of empirical support for a type of treatment. This, however, should not be taken to mean that meta-analytic studies are the only source of evidence-based practices.

Durlak (1995) contended that conducting a meta-analysis is analogous to conducting a single scientific experiment in the social or behavioral sciences. As compared with collecting data from participants, in a meta-analysis, the data are collected from individual studies. Hence, rather than doing statistical analyses on the data gathered from multiple participants, the statistical analyses in meta-analyses are conducted on data gathered from multiple studies. Later in this article, we will discuss steps in conducting a meta-analysis, which will include formulating a research question and then finding studies that are specifically related to that question. Similar to other types of quantitative studies, with meta-analyses there are independent and dependent variables. In counseling research, many of the meta-analyses have focused on examining the effectiveness of different interventions, programs, or therapeutic modalities (Erford et al., 2010). In these studies, the dependent variable is some measure of effectiveness (e.g., well-being, level of depression, symptomatology) and the independent variables are study variables (e.g., client age, counselor training, types of treatment). As an example, Allumbaugh and Hoyt (1999) examined the effectiveness of grief therapy. Other meta-analyses have compared different schools of psychotherapy (e.g., Ahn & Wampold, 2001; Miller, Wampold, & Varhely, 2008; Wampold, Minami, Baskin, & Tierney, 2002). As will be discussed in Step 2 (see below), however, meta-analysis can be used to synthesize other types of research, such as whether the personality characteristic of conscientiousness is associated with longevity (Kern & Friedman, 2008). Meta-analysis also can be used to combine information from different studies that were conducted regarding an assessment instrument, such as Helms's (1999) meta-analysis of Cronbach alphas of the White Racial Identity Attitude Scale.

Susan C. Whiston and Peiwei Li, Department of Counseling and Educational Psychology, Indiana University, Bloomington. Correspondence concerning this article should be addressed to Susan C. Whiston, Department of Counseling and Educational Psychology, Indiana University, 201 North Rose Avenue, Bloomington, IN 47405-1006 (e-mail: [email protected]).

© 2011 by the American Counseling Association. All rights reserved.

Journal of Counseling & Development • Summer 2011 • Volume 89 • 273


The following discussion of the steps involved in conducting a meta-analysis is only a primer designed to provide an overview of meta-analytic techniques for counseling researchers who are interested in quantitatively summarizing counseling research studies. This article is not intended as a comprehensive guide, and counseling researchers who decide to conduct a meta-analysis are directed to other sources, such as Borenstein, Hedges, Higgins, and Rothstein (2009), Cooper (2010), Hunter and Schmidt (2004), and Lipsey and Wilson (2001). In particular, Cooper, Hedges, and Valentine's (2009) The Handbook of Research Synthesis and Meta-Analysis is an excellent resource to accompany this overview of meta-analysis.

Step 1: Formulate Research Question(s)

In considering conducting a meta-analysis, the first step involves formulating the research question(s). This first step is critical because it influences all future decisions, such as determining whether a study should be included or whether it should be excluded. In our view, the formulation of meta-analytic research question(s) should start with a counseling researcher's interests. At this point in the process, Cooper (2010) suggested that individuals ask themselves, "What are the constructs that I would like to study?" Unless an individual is interested in the research topic, he or she may not complete the meta-analysis because the technique requires hours of reading and coding a substantial number of studies. Counseling researchers frequently are curious about topics that are well suited to meta-analytic techniques, such as: Does Treatment X help clients with depression? What factors are associated with resiliency? What counselor factors are associated with better client outcomes? Is group counseling more effective than individual counseling for clients with posttraumatic stress disorder?

After a research area of interest has been identified, the counseling researcher should then move to operationally defining the constructs of interest. One of the basic premises of meta-analysis is that it analyzes a series of studies that address an identical conceptual hypothesis (Cooper, 2010). Identical conceptual hypothesis means the studies are not measures of similar concepts (e.g., self-esteem and self-efficacy) but measures of conceptually identical topics of interest. Therefore, the counseling researcher must develop clear operational definition(s) of the central construct(s) to ensure the results from the studies they will be combining are conceptually alike even though researchers may be measuring the construct(s) using different instruments. For many researchers, operational definitions require an initial examination of the literature to see how other researchers have operationally defined the construct of interest. An example of an operational definition for career counseling interventions is Spokane and Oliver's (1983), who defined career interventions as "any treatment or effort intended to enhance an individual's career development or to enable the person to make better career-related decisions" (p. 100). This operational definition can help a counseling researcher select studies for the meta-analysis that involve a treatment or effort, and the outcomes of these treatments or efforts must involve career development or making a career-related decision.

In meta-analysis, there is frequently a main effect question, such as: Are career interventions effective? However, with advances in meta-analysis, this is rarely the sole research question. Often there are questions that include moderator variables to address questions such as: What types of career interventions (e.g., individual or group) are most effective with which types of clients (e.g., adolescents or adults)? Again, the moderator variables need to be operationally defined. For example, does a series of career exercises provided by a school counselor with ninth graders in English classes meet the operational definition of a group intervention? For counseling researchers who are interested in meta-analytic questions, spending time clearly articulating the variables of interest before starting to gather data from studies will often result in a better meta-analytic study.

Step 2: Determine Meta-Analytic Approach That Best Fits

After the counseling researcher has clearly articulated the research question and operationally defined constructs and variables, the next step is to decide which method of quantitatively summarizing studies can best address the research question(s) (Lipsey, 2009). This process involves determining the key metric in meta-analysis, namely, the index of effect size. Effect size serves as a standardized quantitative metric that researchers are able to calculate from the results of individual studies. For most counseling-related research questions, a counseling researcher can consider using one of the following major approaches to categorize the index of effect: the d index, the r index, or odds ratios. It is now common for counseling researchers to report effect sizes in all research studies, and researchers are directed to Trusty, Thompson, and Petrocelli (2004) for a more detailed discussion on reporting effect sizes.

The d index (also referred to as g or ES) or standardized mean difference is commonly applied when the research question is related to group differences. For example, if a counseling researcher is interested in whether Treatment A is better than Treatment B, he or she would use the d index. The counseling researcher would need to find studies in which Treatment A is compared with Treatment B, and the effect size would be calculated using the difference between the groups (the mean of Treatment A minus the mean of Treatment B) divided by their pooled standard deviations on the outcome measure. The effect size for each study comparing Treatment A with Treatment B then becomes a scale-free measure (Cooper et al., 2006) because dependent measures that vary in terms of means and standard deviations are converted to a common metric. If the counseling researcher's question involved using one specific dependent measure, then the simple difference between group means can be used as the index of effect size (Durlak & Lipsey, 1991). However, in most cases, researchers in counseling use different dependent measures (e.g., there are multiple measures of career development), and thus, the counseling researcher would use the d index. An example of a meta-analysis that used the d index is by Whiston, Sexton, and Lasoff (1998), who examined the mean difference between individuals receiving career interventions and those receiving no career interventions.
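The d-index calculation just described can be sketched in a few lines of code. The function below is a minimal illustration of the pooled-standard-deviation formula; the study values are invented for the example, not drawn from any of the cited meta-analyses.

```python
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """d index (standardized mean difference): the difference between
    group means divided by the pooled standard deviation."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# Invented study: Treatment A M = 24.0 (SD = 5.0, n = 30),
# Treatment B M = 20.0 (SD = 5.0, n = 30)
print(round(cohens_d(24.0, 20.0, 5.0, 5.0, 30, 30), 2))  # 0.8
```

Because each study's d is scale free, values computed from different outcome measures end up on a common metric and can be aggregated.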

When the counseling researcher's question involves a relationship (e.g., Is there a correlation between the number of sessions a group member attends and a measure of group effectiveness or counseling outcome?), then the counseling researcher would select the r index, or product-moment correlation index of effect (Durlak, Meerson, & Foster, 2003). To calculate this type of effect size, both the independent and dependent variables of interest need to be continuous variables. An example of a meta-analysis using an r index is one conducted by Martin, Garske, and Davis (2000), who reported a mean correlation of .22 between the therapeutic alliance and positive outcomes. When studies use both continuous and dichotomous dependent measures, biserial and point-biserial correlations (or the phi coefficient) can be used as variations of the r index to express effect size (Durlak & Lipsey, 1991). There are also a few researchers, such as Sheu et al. (2010), who are using path analysis to conduct meta-analyses.
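One common way to aggregate r-index effect sizes is to convert each correlation to Fisher's z, average with inverse-variance weights (n − 3), and back-transform the mean. The sketch below assumes that approach; the correlations and sample sizes are invented for illustration.

```python
import math

def mean_correlation(studies):
    """Weighted mean correlation across studies via Fisher's z transform.
    Each study is (r, n); weights are n - 3, the inverse variance of z."""
    num = den = 0.0
    for r, n in studies:
        z = 0.5 * math.log((1 + r) / (1 - r))  # Fisher r-to-z
        w = n - 3
        num += w * z
        den += w
    return math.tanh(num / den)  # back-transform the mean z to r

# Invented alliance-outcome correlations from three hypothetical studies
print(round(mean_correlation([(0.20, 50), (0.25, 80), (0.22, 120)]), 2))  # 0.23
```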

The odds ratio is another common index of effect when dichotomous (or binary, categorical) outcome measurements are involved in the analysis. It is considered a less intuitive measure of effect size compared with the d index or the r index (Fleiss & Berlin, 2009). Basically, an odds ratio describes the strength of association between two dichotomous variables. An odds ratio of 1 indicates no effect, and the magnitude of the odds ratio describes the degree of effect against no effect, that is, how many times more likely a typical person in a group is to fall into a positive binary outcome category (Durlak et al., 2003). An example of an odds-ratio meta-analysis is Bauer, Tharmanathan, Volz, Moeller, and Freemantle (2009), who compared the response rate of venlafaxine with other antidepressant medications. Odds ratios are often used when only one variable (typically the dependent variable) is dichotomous. In this case, an odds ratio characterizes the change in the odds on the dichotomous dependent variable for a unit change in the independent variable (Trusty et al., 2004). Besides the odds ratio, a few other effect size estimates have been commonly used to handle dichotomous variables, including the difference between two probabilities, the ratio of two probabilities, and the phi coefficient. Fleiss and Berlin (2009) provided detailed statistical descriptions and computation procedures for all four of these measures.
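A 2 x 2 outcome table yields the odds ratio directly. The sketch below uses invented response counts, not the Bauer et al. (2009) data.

```python
def odds_ratio(a, b, c, d):
    """Odds ratio from a 2 x 2 table:
       a = treatment successes, b = treatment failures,
       c = control successes,   d = control failures.
    A value of 1.0 indicates no effect."""
    return (a / b) / (c / d)

# Invented trial: 40 of 60 respond on treatment, 25 of 75 on control
print(odds_ratio(40, 20, 25, 50))  # 4.0
```

An odds ratio of 4.0 says the odds of a positive outcome are four times higher in the treatment group than in the control group.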

The three approaches described above are quite general, and the actual calculation of effect sizes is far more complex. In a meta-analysis, there are methods for combining the d index, r index, and odds ratio, and these three indices can also be converted from one to another (see Borenstein et al., 2009; Fleiss & Berlin, 2009; Thompson, 2002). However, for beginning meta-analytic researchers, we suggest considering which of the three general classes of effect sizes matches their research questions in order to start the process of identifying studies that can be used in their meta-analyses.
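Two of the conversions mentioned above can be sketched as follows, using formulas of the kind given in Borenstein et al. (2009): the d-to-r conversion assumes roughly equal group sizes, and the odds-ratio-to-d conversion uses the logistic-distribution approximation.

```python
import math

def d_to_r(d):
    """Convert a d index to r, assuming approximately equal group sizes:
    r = d / sqrt(d^2 + 4)."""
    return d / math.sqrt(d ** 2 + 4)

def odds_ratio_to_d(or_value):
    """Convert an odds ratio to d via the logistic approximation:
    d = ln(OR) * sqrt(3) / pi."""
    return math.log(or_value) * math.sqrt(3) / math.pi

print(round(d_to_r(0.8), 2))           # 0.37
print(round(odds_ratio_to_d(4.0), 2))  # 0.76
```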

Step 3: Search Literature and Identify Possible Studies

The process of searching for studies that correspond to the research question and that meet the operational definitions of the constructs defined earlier is typically a time-consuming task. The purpose of a meta-analysis is to aggregate all of the research studies in a defined area to make conclusions based on the compendium of research. Incorporating the results of all studies may be difficult for many reasons (e.g., a school counselor just presented the results to the local school board, or the results of a study are not statistically significant and thus the study is not published). Valentine, Pigott, and Rothstein (2010) contended that the terms systematic review and meta-analysis are often used interchangeably, so a meta-analytic study should include a description of the systematic nature of the search for pertinent studies. In this way, individuals can have more confidence in the results of meta-analytic studies when it appears the researchers made concerted efforts to find a good sampling of studies in the area of interest.

Advances in technology have helped meta-analytic researchers identify potential studies. Counseling researchers can use a number of search engines and databases, such as PsycINFO and ERIC, through the libraries of many universities. With these databases, researchers generally use multiple terms or descriptors (e.g., career counseling, career development, interventions, occupational guidance, and occupational choice). Meta-analytic researchers also frequently use the reference lists from pertinent studies and possibly existing reviews of research in the area. One of the decisions a meta-analytic researcher needs to make is whether to include both published and unpublished studies. In the past, many meta-analyses were conducted with only published studies because of the difficulties with identifying and getting copies of unpublished studies. Some university libraries, however, have databases of dissertations and theses (e.g., ProQuest) that make the retrieval of unpublished studies a little easier. The inclusion of unpublished studies strengthens the findings of a meta-analysis because it generally means that the authors have a more representative sample of all of the possible studies that might exist.



Sometimes researchers will wonder about sample size and how many studies they need to conduct a meta-analysis. In most cases, counseling researchers should attempt to find as many studies as possible. Those interested in power analyses to estimate a sufficient sample size for a specific meta-analytic study are directed to Valentine et al. (2010) and Hedges and Pigott (2001). In determining power, the meta-analytic researcher will need to determine whether they are conducting a fixed- or random-effects meta-analysis, which we discuss later. Another technique is the fail-safe N, which is calculated after the average effect size has been computed. The fail-safe N is a calculation of the number of unfound studies with null results that would be needed to reduce the average effect size to the point of nonsignificance (Lipsey & Wilson, 2001).
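One classic computation of this kind is Rosenthal's version of the fail-safe N, sketched below with invented study z values; it asks how many unretrieved zero-effect studies would pull the combined z below the one-tailed .05 criterion (z = 1.645).

```python
def fail_safe_n(z_values, z_alpha=1.645):
    """Rosenthal's fail-safe N: number of unretrieved null-result studies
    needed to reduce the combined result to nonsignificance.
    Combined z = sum(z) / sqrt(k + N); solving at z = z_alpha gives
    N = (sum(z) / z_alpha)^2 - k."""
    k = len(z_values)
    return (sum(z_values) / z_alpha) ** 2 - k

# Invented z values from four hypothetical studies
print(round(fail_safe_n([2.0, 2.5, 1.8, 2.2]), 1))  # 22.7
```

A small fail-safe N relative to the number of located studies suggests the average effect is vulnerable to publication bias.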

Step 4: Determine Inclusion Criteria and Develop the Coding Manual

As indicated previously, the determination of whether a study can be used in a meta-analysis is based primarily on the formulation of the research questions. One criticism of meta-analysis is that researchers combine studies of diverse constructs, resulting in an analysis of "fruits" when it may be more relevant to know about "apples" and "oranges" separately. Thus, the first step in determining whether a study can be included is whether that research study involved the independent and dependent measures that were operationally defined in the first step of the meta-analytic process. Some meta-analytic researchers only select studies in which the dependent measure is assessed with specified instruments for which the meta-analytic researchers have investigated the reliability and validity evidence. Other selection criteria for studies to be included in a meta-analysis may involve whether the clinicians used a treatment manual, the type of control group, whether there was random assignment to treatment groups, minimal sample sizes, a minimum number of counseling sessions provided, whether the treatment was provided in a school setting, or other inclusion criteria that relate to the research question. A final decision of whether a study can be included often involves whether the study contains sufficient information (e.g., means and standard deviations or correlations) to calculate the type of effect size the counseling researcher selected in earlier steps.

Before a counseling researcher can begin recording study information to be used in calculating an average effect size, the counseling researcher should develop a coding manual that will guide the systematic coding of pertinent information from each study that will be used to calculate results. In the process of developing the coding manual, counseling researchers need to consider how they will address issues of independence of effect sizes per study. One of the assumptions of meta-analysis is that the effect sizes are independent (Hedges, 2009). It is not unusual for researchers to use multiple dependent measures (e.g., career maturity and career decidedness), or a researcher could evaluate an intervention at the conclusion of treatment (e.g., level of substance use) and then again 6 months later to see if the positive effects of treatment continued. The problem of nonindependence could also occur if the researcher used the same sample in two different studies. For example, there would be a problem with dependency if meta-analytic researchers included one study published by researchers regarding the relationship between self-efficacy and academic achievement and another study on the relationship between self-efficacy and degree of planning for college with the same sample. There are multiple approaches to addressing issues of dependency, such as selecting the smallest effect size. A common approach to issues of dependency is to average the effect sizes from all of the different dependent measures, but this approach may blur findings if diverse dependent measures are used. Another approach is to select the most relevant or psychometrically sound dependent measures. The issue of dependency needs to be resolved before a researcher starts extracting information from studies and, therefore, is critical in the development of the coding manual.

The primary purpose of the coding manual for the meta-analysis is to guide the counseling researcher's extraction of study information that will provide him or her with interesting data to analyze. As Wilson (2009) indicated, replication is the bedrock of the scientific method, and the coding manual ensures that other coders will record the same data from the studies. The description of a coding manual in a meta-analytic study documents to the reader that a potential replication of the meta-analysis would result in the same findings. A coding manual usually starts with a process for evaluating whether a potential study meets the criteria for inclusion. Furthermore, the coding manual provides instructions for recording information from each study that will be used to calculate effect sizes and the analyses of moderator variables; hence, recording instructions for all potential moderators should be included in the coding manual or protocol. For example, if a counseling researcher had not thought about how effect sizes may vary among men and women and has already extracted information from 50 out of 100 studies, he or she may be disinclined to recode those 50 studies already completed to be able to examine gender differences. Not only do meta-analytic researchers need to identify interesting moderator variables in developing a coding manual, but they also need to consider study quality variables (see Valentine, 2009). One of the criticisms of meta-analysis is that studies of poor quality are given the same weight as more methodologically rigorous studies in the calculation of effect size (Chow, 1987); therefore, study characteristics are often recorded so that analyses can be conducted regarding characteristics such as random assignment, reliability and validity of measures, and participant attrition rate.
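One way to make a coding manual concrete is to mirror it in the structure of each study record. The field names below are hypothetical examples of what a manual might specify, not fields from any published protocol.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class StudyRecord:
    """One coded study; every field should have explicit
    recording instructions in the coding manual."""
    study_id: str
    year: int
    n_treatment: int
    n_control: int
    effect_size_d: float
    random_assignment: bool                 # study-quality variable
    attrition_rate: Optional[float] = None  # study-quality variable
    moderators: dict = field(default_factory=dict)  # e.g., {"pct_female": 60}

# Invented example entry
record = StudyRecord("Smith2004", 2004, 30, 30, 0.45, True)
print(record.effect_size_d)  # 0.45
```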

The coding manual is primarily developed to ensure systematic coding, and the reliability of the coding will be better when the manual is precise, detailed, and comprehensive. A meticulous coding manual reduces the chances of variables being miscoded and, therefore, improves the legitimacy of the results. Usually, the study coders will conduct a trial run of the coding process with a couple of studies and make revisions to the coding manual based on these experiences. These studies will be officially coded later in the process; the intent of this initial coding is only to improve the coding manual.

Step 5: Extract and Code Study Information

To extract the necessary information from each study to conduct a meta-analysis, researchers will need a copy of the entire published or unpublished study. Coders of research studies for a meta-analysis have the advantage of becoming quite knowledgeable about research in the area of the meta-analysis because they read and record detailed information from each of the studies they code.

The reliability and validity of dependent and independent variables are at issue in any experimental study; therefore, the reliability and validity of the data gathered by the coder or coders are also of concern in a meta-analysis. A common problem in meta-analysis is coder drift (Wilson, 2009), wherein subtle changes in the coding process evolve during the laborious coding process and the coding procedure at the end is somewhat different from the initial coding process. In most meta-analytic studies, a portion or all of the studies are double-coded and indicators of reliability are calculated to give readers evidence of whether drift occurred. Researchers typically select procedures such as kappa or intraclass correlations to calculate interrater reliability in meta-analyses (Wilson, 2009). A comprehensive and detailed coding manual and training of coders can often reduce differences in coding and result in good interrater reliability estimates.
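Cohen's kappa, one of the interrater statistics mentioned above, can be computed directly from two coders' category assignments. The ratings below are invented for illustration.

```python
def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: observed agreement between two coders,
    corrected for the agreement expected by chance."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    expected = sum((coder_a.count(c) / n) * (coder_b.count(c) / n)
                   for c in set(coder_a) | set(coder_b))
    return (observed - expected) / (1 - expected)

# Invented double-coding of six studies on a yes/no item
a = ["yes", "yes", "no", "no", "yes", "no"]
b = ["yes", "no", "no", "no", "yes", "no"]
print(round(cohens_kappa(a, b), 2))  # 0.67
```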

As a part of designing the coding process, counseling researchers should also consider the next step, data analyses, and how to structure the database to facilitate those analyses. Some meta-analytic researchers have developed methods by which their coding is recorded directly into a database. There are various software options for conducting a meta-analysis, and counseling researchers need to determine their methods for analyzing data early on so that the coding process leads easily to data entry and data analyses. For example, if the counseling researchers are using the SPSS macros written by Lipsey and Wilson (2001), then the data files must correspond to those macro files.
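A minimal sketch of recording each coded study directly as an analysis-ready row; the field names are illustrative assumptions, not the format required by Lipsey and Wilson's macros:

```python
import csv
import io

# Illustrative coding-manual fields, one row per coded study
fields = ["study_id", "year", "n_treatment", "n_control",
          "effect_size", "treatment_type", "outcome_measure"]
rows = [
    {"study_id": "smith2001", "year": 2001, "n_treatment": 24,
     "n_control": 25, "effect_size": 0.42, "treatment_type": "cbt",
     "outcome_measure": "BDI"},
]

# Write to an in-memory CSV; a real project would write to a file
# that the chosen meta-analysis software can import
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=fields)
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```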

Step 6: Data Analyses

The data analysis in a meta-analysis involves a process by which the information from individual studies is synthesized using various calculations, which produce statistical results. Based on the results of these calculations, statistical inferences are then made about the population from which the study samples are drawn (Cooper, 2010). The statistical analyses in a meta-analysis focus on the variation and distribution of effect sizes and the relationships between effect sizes and moderators of interest. In meta-analytic data analyses, effect size estimates are typically treated as the dependent variable, whereas the moderator variables are considered independent variables (Durlak & Lipsey, 1991). The researcher at a minimum produces results concerning (a) average effect size, (b) confidence intervals for the effect size, (c) tests of significance, and (d) homogeneity analysis. The average effect size is an indicator of the average magnitude of the effect, whether correlational or based on group differences. The next step is to calculate confidence intervals, which examine the degree to which there is variability in effect sizes across the different studies. This step is crucial because confidence intervals typically are used to examine the significance of the effect size and later to begin the process of examining whether there are differences in effect sizes across studies. There are multiple methods for determining both the statistical and practical significance of an average effect size, but a common method is examining the 95% confidence interval and determining whether it includes zero (Lipsey & Wilson, 2001).
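Under a fixed-effect model, these pieces — the weighted average effect size, its 95% confidence interval, and the zero-in-the-interval significance check — can be sketched as follows (the effect sizes and variances are hypothetical):

```python
import math

def fixed_effect_summary(effects, variances):
    """Inverse-variance weighted mean effect size with a 95% CI (FE model)."""
    weights = [1 / v for v in variances]
    mean = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1 / sum(weights))          # standard error of the mean
    ci = (mean - 1.96 * se, mean + 1.96 * se)
    significant = ci[0] > 0 or ci[1] < 0      # CI excludes zero
    return mean, ci, significant

# Hypothetical d-index effect sizes and their sampling variances
d = [0.40, 0.55, 0.25, 0.60]
v = [0.04, 0.08, 0.05, 0.10]
mean, (lo, hi), sig = fixed_effect_summary(d, v)
print(round(mean, 3), round(lo, 3), round(hi, 3), sig)  # → 0.413 0.174 0.652 True
```

Because the interval excludes zero, the pooled effect would be judged statistically significant by the criterion described above.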

Consistent with other statistical procedures, meta-analysis involves sampling variance and probability. In meta-analysis, however, instead of examining whether the results from a sample of people occurred by chance, the counseling researcher is attempting to determine whether the variation among studies is a result of study differences or is what would be expected based on the probability of sampling error. In meta-analysis, this is expressed in tests of homogeneity (Hedges & Olkin, 1985) or comparisons of observed and expected variance (Hunter & Schmidt, 2004). These analyses inform counseling researchers whether the variance is substantial enough to proceed with investigations of whether there is systematic variation related to moderator variables.

Initial Data Analysis Considerations

Before we discuss data analysis related to average effect size, confidence intervals, and tests of homogeneity, it is important to discuss some preliminary considerations related to conducting a meta-analysis. There is a series of decisions that will guide the process of combining effect size estimates from individual studies and the additional data analyses. These sets of decisions influence the extent to which it is possible to explain variation in effect size measures and how confident the counseling researcher can be in determining whether the moderator variables influence effect sizes.

Underlying assumptions of meta-analysis. The first decision counseling researchers need to make before proceeding with data analysis is whether their data meet the following underlying assumptions of meta-analysis: (a) All individual findings are related to the same group differences or the correlations examine the same constructs, (b) comparisons or tests performed in the meta-analysis are independent of each other, and (c) the results from the primary studies are accurate and valid. We have previously discussed, in brief, the importance of these three assumptions, but to summarize: First, a meta-analysis is a synthesis of research in a specific area; therefore, if a researcher includes studies outside that area, then the calculation of effect sizes is "contaminated." Second, the statistical procedures used in meta-analysis assume independence. For example, if some studies were from the same sample and this sample produced unusually high or low effect sizes, then the overall effect size could be skewed if these dependent measures were used. The third assumption concerns the accuracy of the reported results from each study and the belief that the primary researchers made valid assumptions when computing their results (Cooper, 2010). Any deviation from or violation of these assumptions may introduce bias or distortion into the results of the meta-analysis, which must be considered, and adjustments are typically made to correct or minimize the influence of not meeting an assumption. For example, several statistical techniques are discussed in Gleser and Olkin (2009) to adjust for issues of interdependence.

Unit of analysis. The second question a counseling researcher needs to consider is what unit of analysis will be used for each study in calculating d index or standardized mean difference (i.e., group difference), r index or correlation coefficient (i.e., relationship measure), or odds-ratio effect sizes. Related to decisions about unit of analysis are issues of dependence, addressed earlier in this article, when multiple dependent or outcome measures are used in one study (Durlak & Lipsey, 1991). One approach is to use every outcome or dependent measure in each study as the unit of analysis, regardless of the number of measures in a study. This approach often leads to disproportional weighting for studies that use numerous measures and potential interdependencies among effect size measures. Alternatively, some meta-analytic researchers use the average of the effect sizes from multiple outcome measures (see Whiston et al., 1998), but this approach may result in a combination of unrelated measures, and the average effect size could be based on both sound and unsound measures. The third approach is to treat each construct domain as the unit of analysis and keep each dependent or outcome measure separate in the tests of the influences of moderators (see Anderson & Whiston, 2005). Although this approach provides specificity, in some cases there may be too few studies that used those specific dependent or outcome measures to conduct tests of the effects of moderator variables. Lipsey and Wilson (2001) provided a detailed discussion and comparison of these approaches.
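The second (averaging) approach above can be sketched as follows; the study labels and effect sizes are hypothetical:

```python
from collections import defaultdict

def average_within_study(records):
    """Collapse multiple outcome measures per study into one mean effect size,
    so that each study contributes a single unit of analysis."""
    by_study = defaultdict(list)
    for study_id, effect in records:
        by_study[study_id].append(effect)
    return {sid: sum(es) / len(es) for sid, es in by_study.items()}

# (study, effect size) pairs; smith2001 reported two outcome measures
records = [("smith2001", 0.30), ("smith2001", 0.50), ("jones2004", 0.20)]
print(average_within_study(records))  # one effect size per study
```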

Fixed-effect and random-effect models. Another decision a counseling researcher needs to make when conducting a meta-analysis is whether a fixed-effect (FE) or a random-effect (RE) model is more appropriate, because this influences the calculations within the meta-analysis and the inferences that can be drawn from the findings. This decision centers on how the counseling researcher conceptualizes the meta-analysis and the degree to which the researcher can generalize the results (Hedges, 2009). If the counseling researcher only wishes to make inferences about the effect size parameters from the specific set of studies included in the meta-analysis, or sets of studies identical or similar enough to it, then the FE model is appropriate. However, if a counseling researcher wishes to make inferences and generalizations beyond the observed studies and about the hypothetical population from which the studies are drawn, then the RE model should be used. Currently, there are still controversies surrounding this issue (Hedges, 2009). Nevertheless, Hunter and Schmidt (2000) recommended that individuals routinely select the RE model over the FE approach because of the larger biases that the FE model may introduce.

In Chapter 7 of their classic book on meta-analysis, Hedges and Olkin (1985) described the procedures for the FE approach and for analyzing moderator variables using categorical models, general linear models, and multiple regression. The RE models have two major variations: the classical approach and the Bayesian approach. The difference lies in how they conceptualize the random variation considered in the RE models (Raudenbush, 2009). The classical approach is more similar to the FE model; however, the counseling researcher must incorporate a good estimate of the random-effects variance, which is used in calculating the mean effect size, confidence intervals, tests of significance, and tests of homogeneity of effect sizes. Konstantopoulos and Hedges (2009) and Raudenbush (2009) provided detailed descriptions and discussions of the FE and RE models, respectively.
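A sketch of the classical RE approach using the DerSimonian-Laird estimate of the between-study variance — one common way (among others) to obtain the random-effects variance estimate the text mentions; the data are hypothetical:

```python
import math

def dersimonian_laird(effects, variances):
    """Classical random-effects model: estimate the between-study variance
    (tau^2, DerSimonian-Laird) and reweight each study by 1/(v_i + tau^2)."""
    k = len(effects)
    w = [1 / v for v in variances]
    mean_fe = sum(wi * d for wi, d in zip(w, effects)) / sum(w)
    q = sum(wi * (d - mean_fe) ** 2 for wi, d in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)        # truncated at zero
    w_re = [1 / (v + tau2) for v in variances]
    mean_re = sum(wi * d for wi, d in zip(w_re, effects)) / sum(w_re)
    se_re = math.sqrt(1 / sum(w_re))
    return tau2, mean_re, (mean_re - 1.96 * se_re, mean_re + 1.96 * se_re)

# Hypothetical effect sizes with identical sampling variances
tau2, mean_re, ci = dersimonian_laird([0.1, 0.5, 0.9], [0.05, 0.05, 0.05])
```

Because tau^2 adds to every study's variance, the RE confidence interval is wider than its FE counterpart, reflecting the extra uncertainty of generalizing beyond the observed studies.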

Data Analyses and Adjustments for Biases

According to Erford et al. (2010), average effect sizes are either biased or unbiased; biased effects do not account for sample size, whereas unbiased estimates of effect size do. There are a number of methods for adjusting effect sizes to reduce sources of possible bias.

Calculation of effect size. One of the goals in many meta-analyses is to calculate a main effect or overall effect size; as indicated in Step 2, there are three primary types of effect sizes (i.e., d index, r index, and odds ratio). When calculating effect sizes, one needs to consider sources of bias such as differences in sample size and study quality across individual studies. In meta-analysis, counseling researchers need to estimate potential biases and apply weights to adjust for them. For example, statistical theory posits that studies with larger samples provide more accurate estimates of population parameters and allow for more reliable inferences from the findings, whereas studies with small sample sizes (e.g., less than 20) often lead to an overestimation of the population effect (Cooper, 2010). Although sample size is an obvious concern in weighting effect sizes, there are various methods for adjusting overall effect sizes. For example, many researchers using d index or group difference effect sizes use the procedures described by Hedges and Olkin (1985), whereas for r index or correlational effect sizes, researchers would be advised to consult Hunter and Schmidt (2004). Meanwhile, counseling researchers need to consider whether they are using the FE model (see Konstantopoulos & Hedges, 2009) or the RE model (see Raudenbush, 2009). Moreover, counseling researchers also need to examine the effect sizes for all of the studies, identify any that are unusually high or low, and determine the influence of these outliers on the calculation of the overall effect size.
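For the d index, the Hedges and Olkin (1985) small-sample correction and the usual variance approximation used for inverse-variance weighting can be sketched as follows (the study sizes are hypothetical):

```python
def hedges_g(d, n1, n2):
    """Small-sample correction of a d-index effect size (Hedges & Olkin, 1985).
    Uncorrected d tends to overestimate the population effect in small studies."""
    df = n1 + n2 - 2
    j = 1 - 3 / (4 * df - 1)   # correction factor, slightly below 1
    return j * d

def d_variance(d, n1, n2):
    """Approximate sampling variance of a d-index effect size,
    used for inverse-variance weighting."""
    return (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))

# The same observed d = .50 in a small study vs. a larger one
print(round(hedges_g(0.50, 10, 10), 3))    # → 0.479 (correction matters)
print(round(hedges_g(0.50, 100, 100), 3))  # → 0.498 (correction negligible)
```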

Studies included in a meta-analysis also vary in quality in terms of research design, reliability and validity of measurements, and other research factors. For example, theoretically, studies using random assignment provide sounder results than those using quasi-experimental designs (Shadish & Haddock, 2009). The difference between designs can be examined after effect sizes are combined. Also, different studies may use diverse instruments to measure the same construct, and, therefore, the reliability and validity of these measures may vary. Measurement theory has illustrated that r index and d index effect size estimates based on unreliable measures tend to be smaller than those resulting from more reliable measures (Shadish & Haddock, 2009). For adjusting effect sizes for study characteristics or artifacts, interested readers are directed to Schmidt, Le, and Oh (2009).

Confidence intervals. Confidence intervals (CIs) are calculations of the range in which one would find the "true" effect size. In meta-analysis, the CIs are calculated in such a way that one would find the true effect size 95% of the time (Quintana & Minami, 2006). The calculation of the CI will also vary depending on the type of meta-analysis the counseling researcher is conducting and whether an RE or FE model is being used. Not only do CIs provide an indication of anticipated effect size ranges, but they also are used to test significance. In most meta-analytic situations, if the CI does not include zero (e.g., -.50 to -.25 or .33 to .67), then the average effect size is considered significant.

Test of homogeneity of effect sizes. One commonly used test to examine the two sources of variance in the data is homogeneity analysis. It compares the observed variance in the effect sizes with the variance expected under the assumption that sampling error alone is the source of variance; in other words, that the samples come from the same population. The null hypothesis of homogeneity analysis is that the observed variance in effect sizes is not statistically different from what is expected based solely on sampling error. If the null hypothesis is rejected, it means that the variance in results cannot be explained by sampling error alone, or that the effect sizes estimate different population values (Cooper, 2010). In this case, the researcher can then explore sources that may explain the variance, such as study characteristics (e.g., types of treatment, number of sessions, different outcome measures). The analyses of moderator variables can involve tests of homogeneity and other statistical techniques, and interested readers are encouraged to see Cooper et al. (2009), Hedges and Olkin (1985), and Hunter and Schmidt (2004).
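A sketch of the Q statistic that underlies this homogeneity test, with hypothetical effect sizes and variances:

```python
def cochran_q(effects, variances):
    """Cochran's Q: weighted squared deviations of the effect sizes
    around the fixed-effect mean."""
    w = [1 / v for v in variances]
    mean = sum(wi * d for wi, d in zip(w, effects)) / sum(w)
    return sum(wi * (d - mean) ** 2 for wi, d in zip(w, effects))

# Under the null hypothesis, Q follows a chi-square distribution
# with k - 1 degrees of freedom
d = [0.10, 0.45, 0.90, 0.35]
v = [0.05, 0.04, 0.06, 0.05]
q = cochran_q(d, v)
print(round(q, 2))  # compare with the chi-square critical value for df = 3
```

With these hypothetical values, Q is roughly 6.0 on 3 degrees of freedom, below the .05 critical value of 7.81, so homogeneity would not be rejected and moderator analyses would need other justification.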

Step 7: Writing Meta-Analytic Manuscripts

The Publication Manual of the American Psychological Association (6th ed.; American Psychological Association, 2010) includes substantial information about manuscript development and provides specific information that needs to be included in a meta-analytic study. Familiarity with this information before beginning to write the manuscript will assist counseling researchers in developing a manuscript that is more likely to be published in a counseling journal. Another good resource to consult in writing a meta-analysis for publication is Quintana and Minami's (2006) article. We also suggest that counseling researchers model their manuscripts after other well-cited meta-analyses.

Conclusions

Our goal for this article was to provide a brief introduction to meta-analytic techniques and to encourage counseling researchers to consider meta-analysis as a method of quantitatively reviewing literature. The use of meta-analysis has been growing in diverse disciplines (Cooper, 2010), and we suggest that counseling researchers could also benefit from more meta-analyses being published in counseling journals. Meta-analyses provide unique information by quantitatively aggregating the results of numerous studies and can assist counselors in understanding the magnitude of an effect. Not only do meta-analyses assist counselors in understanding an area of research, but the results can also be used to document the effectiveness of counseling interventions and services to governmental officials and legislators, community or school board members, administrators, parents, and even clients. We have purposely avoided statistical formulas in this overview; however, knowledge and understanding of the mathematical foundations of meta-analysis is necessary. For individuals unfamiliar with meta-analysis, we recommend that they begin the process with Lipsey and Wilson's (2001) practical text.

References

Ahn, H., & Wampold, B. E. (2001). Where oh where are the specific ingredients? A meta-analysis of component studies in counseling and psychotherapy. Journal of Counseling Psychology, 48, 251-257. doi:10.1037/0022-0167.48.3.251

Journal of Counseling & Development • Summer 2011 • Volume 89 279

Whiston & Li

Allumbaugh, D. L., & Hoyt, W. T. (1999). Effectiveness of grief therapy: A meta-analysis. Journal of Counseling Psychology, 46, 370-380. doi:10.1037/0022-0167.46.3.370

American Psychological Association. (2010). Publication manual of the American Psychological Association (6th ed.). Washington, DC: Author.

Anderson, L. A., & Whiston, S. C. (2005). Sexual assault education programs: A meta-analytic examination of their effectiveness. Psychology of Women Quarterly, 29, 374-388. doi:10.1111/j.1471-6402.2005.00237.x

Bauer, M., Tharmanathan, P., Volz, H., Moeller, H., & Freemantle, N. (2009). The effect of venlafaxine compared with other antidepressants and placebo in the treatment of major depression: A meta-analysis. European Archives of Psychiatry and Clinical Neuroscience, 259, 172-185. doi:10.1007/s00406-008-0849-0

Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2009). Introduction to meta-analysis. New York, NY: Wiley.

Chow, S. L. (1987). Meta-analysis of pragmatic and theoretical research: A critique. Journal of Psychology: Interdisciplinary and Applied, 121, 259-271.

Cooper, H. (2010). Research synthesis and meta-analysis (4th ed.). Thousand Oaks, CA: Sage.

Cooper, H., Hedges, L. V., & Valentine, J. C. (2009). The handbook of research synthesis and meta-analysis (2nd ed.). New York, NY: Russell Sage Foundation.

Cooper, H., Robinson, J. C., & Dorr, N. (2006). Conducting a meta-analysis. In F. T. L. Leong & J. T. Austin (Eds.), The psychology research handbook (2nd ed., pp. 315-325). Thousand Oaks, CA: Sage.

Durlak, J. A. (1995). Understanding meta-analysis. In L. G. Grimm & P. R. Yarnold (Eds.), Reading and understanding multivariate statistics (pp. 319-352). Washington, DC: American Psychological Association.

Durlak, J. A., & Lipsey, M. W. (1991). A practitioner's guide to meta-analysis. American Journal of Community Psychology, 19, 291-332.

Durlak, J. A., Meerson, I., & Foster, C. J. E. (2003). Meta-analysis. In J. C. Thomas & M. Hersen (Eds.), Understanding research in clinical and counseling psychology (pp. 243-267). Mahwah, NJ: Erlbaum.

Erford, B. T., Savin-Murphy, J. A., & Butler, C. (2010). Conducting a meta-analysis of counseling outcome research: Twelve steps and practical procedures. Counseling Outcome Research and Evaluation, 1, 19-42. doi:10.1177/2150137809356682

Fleiss, J. L., & Berlin, J. A. (2009). Effect sizes for dichotomous data. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 237-253). New York, NY: Russell Sage Foundation.

Glass, G. V. (1976). Primary, secondary, and meta-analysis of research. Educational Researcher, 5, 3-8.

Glass, G. V. (2000). Meta-analysis at 25. Retrieved from http://www.gvglass.info/papers/meta25.html

Gleser, L. J., & Olkin, I. (2009). Stochastically dependent effect sizes. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 357-376). New York, NY: Russell Sage Foundation.

Hedges, L. V. (2009). Statistical considerations. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 37-47). New York, NY: Russell Sage Foundation.

Hedges, L. V., & Olkin, I. (1985). Statistical methods for meta-analysis. New York, NY: Academic Press.

Hedges, L. V., & Pigott, T. D. (2001). The power of statistical tests in meta-analysis. Psychological Methods, 6, 203-217. doi:10.1037/1082-989X.6.3.203

Helms, J. E. (1999). Another meta-analysis of the White Racial Identity Attitude Scale's Cronbach alphas: Implications for validity. Measurement and Evaluation in Counseling and Development, 32, 122-137.

Hunter, J. E., & Schmidt, F. L. (2000). Fixed effects vs. random effects meta-analysis models: Implications for cumulative research knowledge. International Journal of Selection and Assessment, 8, 275-292. doi:10.1111/1468-2389.00156

Hunter, J. E., & Schmidt, F. L. (2004). Methods of meta-analysis: Correcting error and bias in research findings. Thousand Oaks, CA: Sage.

Kern, M. L., & Friedman, H. S. (2008). Do conscientious individuals live longer? A quantitative review. Health Psychology, 27, 505-512. doi:10.1037/0278-6133.27.5.505

Konstantopoulos, S., & Hedges, L. V. (2009). Analyzing effect sizes: Fixed-effects models. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 257-293). New York, NY: Russell Sage Foundation.

Lipsey, M. W. (2009). Identifying interesting variables and analysis opportunities. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 147-158). New York, NY: Russell Sage Foundation.

Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. Thousand Oaks, CA: Sage.

Martin, D. J., Garske, J. P., & Davis, M. K. (2000). Relation of the therapeutic alliance with outcome and other variables: A meta-analytic review. Journal of Consulting and Clinical Psychology, 68, 438-450. doi:10.1037/0022-006X.68.3.438

Miller, S., Wampold, B., & Varhely, K. (2008). Direct comparisons of treatment modalities for youth disorders: A meta-analysis. Psychotherapy Research, 18, 5-14. doi:10.1080/10503300701472131

Quintana, S. M., & Minami, T. (2006). Guidelines for meta-analyses of counseling psychology research. The Counseling Psychologist, 34, 839-877. doi:10.1177/0011000006286991

Raudenbush, S. W. (2009). Analyzing effect sizes: Random-effects models. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 279-315). New York, NY: Russell Sage Foundation.

Schmidt, F. L., Le, H., & Oh, I.-S. (2009). Correcting for the distorting effects of study artifacts in meta-analysis. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 317-333). New York, NY: Russell Sage Foundation.


Shadish, W. R., & Haddock, C. K. (2009). Combining estimates of effect size. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 257-277). New York, NY: Russell Sage Foundation.

Sheu, H., Lent, R. W., Brown, S. D., Miller, M. J., Hennessy, K. D., & Duffy, R. D. (2010). Testing the choice model of social cognitive career theory across Holland themes: A meta-analytic path analysis. Journal of Vocational Behavior, 76, 252-264. doi:10.1016/j.jvb.2009.10.015

Spokane, A. R., & Oliver, L. W. (1983). Outcomes of vocational intervention. In S. H. Osipow & W. B. Walsh (Eds.), Handbook of vocational psychology (pp. 99-136). Hillsdale, NJ: Erlbaum.

Thompson, B. (2002). "Statistical," "practical," and "clinical": How many kinds of significance do counselors need to consider? Journal of Counseling & Development, 80, 64-71.

Trusty, J., Thompson, B., & Petrocelli, J. V. (2004). Practical guide for reporting effect size in quantitative research in the Journal of Counseling & Development. Journal of Counseling & Development, 82, 107-110.

Valentine, J. C. (2009). Judging the quality of primary research. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 129-146). New York, NY: Russell Sage Foundation.

Valentine, J. C., Pigott, T. D., & Rothstein, H. R. (2010). How many studies do you need? A primer on statistical power for meta-analysis. Journal of Educational and Behavioral Statistics, 35, 215-247. doi:10.3102/1076998609346961

Wampold, B. E., Minami, T., Baskin, T. W., & Tierney, S. C. (2002). A meta-(re)analysis of the effects of cognitive therapy versus "other therapies" for depression. Journal of Affective Disorders, 68, 159-165. doi:10.1016/S0165-0327(00)00287-1

Whiston, S. C., Sexton, T. L., & Lasoff, D. L. (1998). Career-intervention outcome: A replication and extension of Oliver and Spokane (1988). Journal of Counseling Psychology, 45, 150-165. doi:10.1037/0022-0167.45.2.150

Wilson, D. B. (2009). Systematic coding. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 159-176). New York, NY: Russell Sage Foundation.

