Information Sharing and Team Performance: A Meta-Analysis

Jessica R. Mesmer-Magnus
University of North Carolina Wilmington

Leslie A. DeChurch
University of Central Florida

Information sharing is a central process through which team members collectively utilize their available informational resources. The authors used meta-analysis to synthesize extant research on team information sharing. Meta-analytic results from 72 independent studies (total groups = 4,795; total N = 17,279) demonstrate the importance of information sharing to team performance, cohesion, decision satisfaction, and knowledge integration. Although moderators were identified, information sharing positively predicted team performance across all levels of moderators. The information sharing–team performance relationship was moderated by the representation of information sharing (as uniqueness or openness), performance criteria, task type, and discussion structure by uniqueness (a 3-way interaction). Three factors affecting team information processing were found to enhance team information sharing: task demonstrability, discussion structure, and cooperation. Three factors representing decreasing degrees of member redundancy were found to detract from team information sharing: information distribution, informational interdependence, and member heterogeneity.

Keywords: group, information sharing, information sampling bias, hidden profile, information processing

Organizations are increasingly assigning complex decision-making tasks to teams rather than to lone individuals. Personnel selection decisions usually require input from a selection committee rather than a single hiring manager; homicide investigations are typically conducted by a group of detectives rather than by a single officer; the assignment of guilt or innocence to an accused criminal is the responsibility of a jury rather than a judge. A primary advantage of using small groups and teams in these situations is to expand the pool of available information, thereby enabling groups to reach higher quality solutions than could be reached by any one individual. Still, superior solutions to complex decision tasks require members to effectively integrate unique, relevant, and often diverse informational sets.

Despite the intuitive importance of effective information sharing (IS) for team decision-making (e.g., Bunderson & Sutcliffe, 2002; Jehn & Shah, 1997), past research has shown teams often deviate from the optimal utilization of information when making decisions; discussion often serves to strengthen individual prediscussion preferences rather than as a venue to share new information (i.e., biased information sampling model; Stasser & Titus, 1985). These results raise a number of questions of significant importance to the research and practice of teams. We used meta-analysis to cumulate empirical findings culled from studies examining various task domains and discussion structures as well as different aspects of IS and performance criteria to address the following questions: First, to what extent does IS impact team performance? Second, what role do moderators play in this relationship (i.e., definition of IS, operationalization of performance criteria, discussion structure, and team task type)? Third, which factors promote (e.g., cooperation) and suppress (e.g., information distribution) IS? Figure 1 summarizes the relationships examined in the current study.

Information Sharing Uniqueness and Openness

Differing theoretical and operational definitions of IS in teams may partially explain discrepant findings reported in the extant literature regarding the role of IS in performance. Most prior work on IS originates with Stasser and Titus's (1985, 1987) biased information sampling model, which demonstrates that groups spend more time discussing shared information (information already known by all group members) than unshared information (information uniquely held by one group member; Stasser & Titus, 1985, 1987). Empirical studies examining biased information sampling have essentially examined what Hinsz, Tindale, and Vollrath (1997) described as the commonality–uniqueness dimension of IS, or "variability in how many group members have access to a piece of information" (p. 54). We refer to these studies as investigations of the uniqueness of IS. A second subset of studies relevant to team information processing has examined aspects of information exchange more broadly, encompassing team communication related to goals, progress, coordination, and the like, independent of the initial distribution pattern of information among team members (Henry, 1995; Jehn & Shah, 1997). We refer to these studies as investigations of the openness of IS. Table 1 presents examples of conceptual and operational definitions adopted in primary studies of uniqueness and openness.

Author note: Jessica R. Mesmer-Magnus, Department of Management, Cameron School of Business, University of North Carolina Wilmington; Leslie A. DeChurch, Department of Psychology, University of Central Florida. A prior version of this article was presented at the 2008 Society for Industrial and Organizational Psychology Conference in San Francisco, CA. We thank Verlin Hinsz, Dennis Devine, Ronald Piccolo, Chris Sager, and Chockalingam Viswesvaran for their constructive feedback and helpful suggestions. We thank Diana Keith and Andrea Saravia for their assistance in gathering articles. Correspondence concerning this article should be addressed to Jessica R. Mesmer-Magnus, Department of Management, Cameron School of Business, University of North Carolina Wilmington, 601 South College Road, Wilmington, NC 28403. E-mail: [email protected]

Journal of Applied Psychology, 2009, Vol. 94, No. 2, 535–546. © 2009 American Psychological Association. 0021-9010/09/$12.00 DOI: 10.1037/a0013773


Empirical studies conducted within either domain demonstrate the importance of effective IS to team performance (Greenhalgh & Chapman, 1998; Schittekatte & Van Hiel, 1996). Conceptually, these two aspects of IS parallel the two basic aspects of teamwork: task and socio-emotional functioning (Hackman, 1987). Uniqueness captures the extent to which teams are utilizing members' distinctive knowledge sets for the team's benefit. Increasing uniqueness means teams are expanding the pool of knowledge available for processing and therefore ought to increase team task performance. Although greater openness does not necessarily imply an increase in the team's available knowledge stock, there are several ways openness could indirectly enhance performance (e.g., by enhancing team socio-emotional functioning, the depth of team information processing, and/or the opportunity for unique information to be shared). Overtly sharing information with teammates promotes positive climactic states (e.g., trust, cohesion), which ought to improve team socio-emotional outcomes and, in turn, team task performance (Beal, Cohen, Burke, & McLendon, 2003).

Both operationalizations of IS ought to relate to team performance, but because of its direct link to team task functioning, uniqueness ought to be more strongly related to performance than openness. In addition to these construct-based reasons, there are also methodological differences within these streams of research that may yield differential relationships to performance (e.g., reliance upon manipulations vs. self-report measures, ad hoc vs. intact teams, objective vs. subjective performance criteria). Therefore, we expect the following:

Hypotheses 1–2: IS will positively predict team performance (H1), whereas IS uniqueness will more strongly predict team performance than will IS openness (H2).

Differential Prediction of Team Performance Criteria

Differing performance criteria may also partially explain discrepant findings. Past investigations link IS to three broad classes of performance criteria, which differ in terms of both their contamination and deficiency: decision effectiveness, objective measures, and subjective measures (Campbell, McCloy, Oppler, & Sager, 1992). Objective measures are least contaminated with performance-irrelevant content (e.g., rater bias) but are deficient in representing the full domain of team performance. Subjective measures are less deficient in representing the team performance domain but are typically more contaminated with rater biases and other non-performance-relevant sources of variance. Decision effectiveness measures are intermediate in terms of contamination and deficiency; by specifying multiple dimensions along which a decision is evaluated, they capture more of the relevant performance domain than do objective measures, and they also suffer from less contamination than do subjective measures. We expect contamination will inflate relationships with subjective criteria and that deficiency will suppress relations to objective criteria. Thus, IS ought to exhibit differential validity with team performance criteria descending as follows: subjective measures, decision effectiveness, and objective measures.

Hypothesis 3: IS will differentially predict performance criteria such that IS will most strongly predict subjective measures, then decision effectiveness and objective measures.

Moderating Role of Task Type

A recurring concern surrounding IS research has been the extent to which IS effects are applicable to task domains other than intellective, hidden profile tasks (i.e., external validity; Mohammed & Dumville, 2001; Winquist & Larson, 1998; Wittenbaum, Hollingshead, & Botero, 2004).

Figure 1. Organizing framework for understanding team information sharing. H = hypothesis.


An intellective task is high on task demonstrability because, on the basis of available information and commonly accepted criteria, a correct answer exists; at the other end of the demonstrability continuum are judgmental tasks, which require groups to come to a consensus (Laughlin, 1980). A hidden profile task is one where the optimal decision choice differs from each team member's initial decision preference and where relevant information is distributed among team members in such a way that only by incorporating the unique knowledge of each member can the team realize the optimal decision (Stasser & Titus, 1985).

The presence or absence of a hidden profile and the level of task demonstrability distinguish four basic task types (i.e., intellective hidden profile, judgmental hidden profile, intellective nonhidden profile, and judgmental nonhidden profile), which differ in terms of the information processing demands required for goal accomplishment. In particular, either the presence of a hidden profile or the demonstrability of a correct solution increases the information processing required to make a high-quality decision. Hidden profile tasks require members to incorporate information that conflicts with their prediscussion preferences in order to make a nonintuitive group choice. Similarly, in addition to requiring teams to reach consensus, intellective tasks have the added processing demands of requiring sufficient information, incorrect members capable of recognizing the correct response if proposed, and correct members with the ability, motivation, and time to demonstrate the correct response to incorrect members (Laughlin, 1996). Therefore, owing to differential information processing requirements, we expect the following:

Hypothesis 4: IS will predict team performance more positively when a hidden profile is present than when there is no hidden profile (H4A) and on intellective as opposed to judgmental tasks (H4B).

Moderating Role of Discussion Structure

Team discussions range in their degree of structure from free-form to highly focused. Past research has examined the impact of a variety of discussion structures on the pooling of information in teams (Wittenbaum & Bowman, 2004). Structured discussion procedures (Stasser, Taylor, & Hanna, 1989), judge–advisor systems (Savadori, Van Swol, & Sniezek, 2001), and dialectical inquiry methods (Devine, 1999) have been investigated as means of improving the amount of information used in decision-making, the logic being that more focused, structured discussions organize the group's retrieval and combination of information, which likely enhances the impact of IS on performance. Thus, we expect the following:

Hypothesis 5: IS will more positively predict team performance when discussion structure is high than when discussion structure is low.

Table 1
Definitions and Operationalizations of Information Sharing

Uniqueness of information sharing

Devine (1999)
  Conceptual definition: "Effectively incorporating the specialized information provided by individual experts." (p. 613)
  Operational definition: Measured common and unique information sharing via observer ratings of videotaped group discussions. Represented as the "sum . . . of information cues of each type mentioned aloud during discussion." (p. 619)

Johnson et al. (2006)
  Conceptual definition: "The degree to which team members share information with each other." (p. 106)
  Operational definition: Measured as the number of times team members sent information they acquired while doing their task to their teammates. The information being sent was known by only one team member.

Stasser & Titus (1987)
  Conceptual definition: The prediscussion distribution of shared and unshared information is "the way the information is distributed among group members before discussion." (p. 82)
  Operational definition: Manipulated the proportion of unshared information available to team members before discussion as either 33% or 66%.

Openness of information sharing

Bunderson & Sutcliffe (2002)
  Conceptual definition: "Conscious and deliberate attempts on the part of team members to exchange work-related information, keep one another apprised of activities, and inform one another of key developments." (p. 881)
  Operational definition: Measured using a 3-item self-report scale: (a) Information used to make key decisions was freely shared among the members of the team, (b) team members worked hard to keep one another up to date on their activities, and (c) team members were kept "in the loop" about key issues affecting the business unit.

Jehn & Shah (1997)
  Conceptual definition: "Making statements to other group members about the task." (p. 777)
  Operational definition: Measured using 3 techniques: (a) Raters content-analyzed audio tapes for the percentage of discussion that consisted of sharing information, (b) the experimenter rated the quality of information sharing in the group based on a videotape, and (c) group members responded to a self-report measure; a sample item was "This group engaged in very open communication."

Miranda & Saunders (2003)
  Conceptual definition: "Refers to oral and written discussion of information among group members." (p. 90)
  Operational definition: Measured the breadth and depth of information sharing. "Breadth . . . was a count of the number of distinct discussion sequences initiated during a session. Depth . . . was the average number of comments or threads in the discussion sequences." (p. 94)



Information Processing

Past research has examined three information processing factors that tend to promote IS: task demonstrability, discussion structure, and cooperation. Highly demonstrable tasks (intellective; Laughlin, 1980; Stasser & Stewart, 1992), structured group discussions (Larson, Christensen, Franz, & Abbott, 1998; Mennecke, 1997; Okhuysen & Eisenhardt, 2002; Stasser, Stewart, & Wittenbaum, 1995), and cooperative group discussions (Greenhalgh & Chapman, 1998; Henningsen & Henningsen, 2003) have been found to increase members' in-depth processing and elaboration of information. Thus, we expect the following:

Hypotheses 6–8: Task demonstrability (H6), discussion structure (H7), and cooperation (H8) will positively predict team IS.

Member Redundancy

Past research has examined three factors that tend to undermine IS in groups: member heterogeneity, informational interdependence, and information distribution (Stasser & Titus, 1985, 1987). These three factors reflect some variant of the extent to which team members are redundant in their informational contributions to the team.[1] To the extent that members are nonredundant, team performance could be enhanced through the effective sharing of information. However, prior research suggests that (a) group members are less willing to share information with individuals they perceive to be different from themselves (Devine, 1999; Miranda & Saunders, 2003; Stasser et al., 1995), (b) teams with more initially correct and therefore informationally independent members tend to share more information (Hollingshead, 1996b; Stasser & Stewart, 1992), and (c) teams spend less time discussing initially distributed (unshared) information than shared information (Stasser & Titus, 1985). Thus, we expect teams high in member redundancy will share more information than teams low in member redundancy.

Hypotheses 9–11: Group member homogeneity (H9), informational independence (H10), and information distribution (H11) will positively predict team IS.

Method

Database

Seventy-two independent studies reported in 71 manuscripts (total number of groups = 4,795; total N = approximately 17,279) examining information sharing (IS) in teams were included in this meta-analysis. To ensure a comprehensive search, studies were located using the following strategies: (a) searching the PsycINFO, ABI Inform, and ERIC databases using appropriate keywords and phrases,[2] (b) searching for articles that cited Stasser, Stewart, and/or Titus's work on information sampling in teams, (c) checking references cited in studies included in this meta-analysis, and (d) requesting related manuscripts presented at annual conferences.[3] Studies were omitted from the meta-analytic database if sufficient information to compute a correlation between IS and a relevant correlate was not reported. Forty-seven of the 72 studies did not report correlations between IS and a relevant correlate but did provide sufficient information to compute a point-biserial correlation (e.g., means and standard deviations for experimental and control groups, t statistics).[4] In six cases, authors reported multiple estimates of the same relationship from the same sample (i.e., two methods of measuring the same conceptualization of IS were examined in relation to relevant correlates). In these cases, a mean correlation was computed to maintain independence (Hunter & Schmidt, 1994, 2004).[5]
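As an illustration only (not the authors' analysis code), the sketch below shows one common way a reported independent-groups t statistic can be converted to a point-biserial correlation and then corrected for the artificial dichotomization of IS, in the general spirit of the Hunter and Schmidt procedures cited in Footnote 4. The function names, the example numbers, and the assumption of a 50/50 split are hypothetical, and the effective-sample-size adjustment mentioned in Footnote 4 is omitted.

    import math
    import statistics

    def t_to_point_biserial(t, df):
        """Convert an independent-groups t statistic to a point-biserial r."""
        return t / math.sqrt(t ** 2 + df)

    def correct_dichotomization(r_pb, p=0.5):
        """Rescale a point-biserial r toward a full +/-1 scale.

        Divides by the attenuation factor phi(z_p) / sqrt(p * (1 - p)), where
        phi is the standard normal density at the cut point implied by the
        split proportion p (p = 0.5 assumes equal-sized groups).
        """
        z = statistics.NormalDist().inv_cdf(p)
        ordinate = math.exp(-z ** 2 / 2) / math.sqrt(2 * math.pi)
        attenuation = ordinate / math.sqrt(p * (1 - p))
        return r_pb / attenuation

    # Hypothetical example: a primary study reporting t(58) = 2.50 for an
    # IS manipulation with equal group sizes.
    r_pb = t_to_point_biserial(2.50, 58)      # approximately .31
    r_full = correct_dichotomization(r_pb)    # approximately .39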

Coding Procedure and Intercoder Agreement

Each study was coded for (a) sample size, (b) number of teams, (c) IS uniqueness versus openness, (d) task type, (e) discussion structure, (f) team performance criterion, (g) correlations between IS and relevant correlates and outcomes, (h) reliability estimates for IS, correlates, and outcomes, and (i) the proportion of discussion that was focused on shared versus unshared information. To ensure coding consistency and construct validity, we jointly developed a coding scheme based upon the conceptual and operational definitions for relevant constructs within the primary studies. We each undertook an independent effort to code the 72 studies that met criteria for inclusion in this study. Initial intercoder agreement across all coded effect sizes was 90%. All instances of disagreement involved the coding of IS operationalization and were resolved through discussion.

Moderators. Two key aspects of IS were adopted in the primary studies (see Table 1): (a) IS uniqueness—discussion of previously unshared information (consistent with the biased information sampling approach; k = 51) and (b) IS openness—breadth of information discussed during group tasks/decision-making (k = 21). Three distinct indicators of team performance were adopted in the primary studies: decision effectiveness (e.g., solution quality, comparison to expert solution, correctness; k = 46), objective measures (e.g., profitability, market growth, computer simulation score; k = 8), and subjective measures (e.g., evaluations of performance; k = 4). As appropriate, tasks used in the primary studies were coded as to whether (a) a hidden profile was present or absent and (b) task demonstrability was high or low, resulting in the identification of four conceptually distinct types of decision-making tasks: hidden profile—intellective (k = 23), hidden profile—judgmental (k = 5), nonhidden profile—intellective (k = 4), and nonhidden profile—judgmental (k = 4). Further, group discussions in the primary studies were coded as either structured (e.g., instructed to share information, told to be vigilant about discussing all information prior to reaching a decision, provided with a discussion format designed to ensure member participation; k = 12) or unstructured (k = 27).

[1] We thank an anonymous reviewer for pointing out this common underlying factor.
[2] For example, (group OR team) AND information sharing, decision-making, discussion, critical OR unshared information, information exchange, hidden profile, and biased information sampling.
[3] We checked conference programs from the Society for Industrial and Organizational Psychology and the Academy of Management meetings over the past 2 years to incorporate research results that had not yet been published. We contacted authors of potentially relevant studies directly via e-mail and received 80% of the manuscripts requested; half of those met criteria for inclusion in this study.
[4] As point-biserial correlations are attenuated (in this case, due to the dichotomization of IS), corrections were made to convert the correlations to a full ±1 scale. We also made adjustments to the sample sizes for the corrected correlations to avoid underestimating sampling error variance using procedures described in Hunter and Schmidt (1990, 2004) and Ones, Viswesvaran, and Schmidt (1993).
[5] Although this is the norm in meta-analysis, average correlations are flawed in that they can result in overestimates of sampling error. Although composite correlations would have been preferable, the majority of primary studies did not report sufficient data to compute them (i.e., correlations among facet measures). Importantly, the correlations we used to compute averages were highly similar, thus minimizing concerns associated with our approach.


Correlates. An examination of the role of task demonstrability and decision structure in promoting IS was possible whenever a primary study examined decision tasks (judgmental vs. intellective) or decision structure (unstructured vs. structured discussions) in the same study and reported its relationship to IS. Cooperation during discussion was typically operationalized using a Likert-type scale assessing team members' perceptions of the team's cooperativeness in sharing information during discussion. Team member similarity was typically operationalized as surface-level similarity (e.g., similarity of function, knowledge) using either the group standard deviation or Likert-type scales of member perceptions. Positive correlations indicate homogenous teams shared more information. Positive correlations for information independence indicate more information was shared in teams where more members knew the correct solution prior to discussion. Finally, positive correlations for information distribution indicate teams discussed more shared than unshared (unique) information.

Analysis

The meta-analytic methods outlined by Hunter and Schmidt (2004) were employed. Corrections were made for sampling error, measure reliability, and, when necessary, attenuation of observed correlations due to dichotomization of IS.[6] Corrections were made for unreliability in both IS and correlate measures whenever possible. When reliability estimates were available only for IS, we corrected for reliability in this construct and made no correction for reliability in the other.[7] Finally, given the possibility of a file-drawer effect (wherein significant findings are more likely to be published; Rosenthal, 1979), we conducted a file-drawer analysis (Hunter & Schmidt, 2004) to estimate the number of studies reporting null findings that would be required to reduce reliability-corrected correlations to a specified lower value (we used ρ = .05).
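For concreteness, the following is a minimal, illustrative sketch of the steps just described (sample-size-weighted aggregation, sampling error variance, an artifact-distribution reliability correction, and the file-drawer analysis); it is not the authors' code, and the reliability values passed to the correction function are placeholders.

    def bare_bones_meta(rs, ns):
        """Sample-size-weighted mean correlation and variance decomposition
        in the style of Hunter and Schmidt (2004)."""
        k = len(rs)
        n_total = sum(ns)
        r_bar = sum(n * r for n, r in zip(ns, rs)) / n_total
        var_obs = sum(n * (r - r_bar) ** 2 for n, r in zip(ns, rs)) / n_total
        n_avg = n_total / k
        var_error = (1 - r_bar ** 2) ** 2 / (n_avg - 1)  # expected sampling error variance
        var_residual = max(var_obs - var_error, 0.0)
        return r_bar, var_obs, var_error, var_residual

    def correct_unreliability(r_bar, mean_rel_is, mean_rel_correlate=1.0):
        """Artifact-distribution correction: divide the mean observed r by the
        mean attenuation factor. Pass mean_rel_correlate = 1.0 when reliability
        is available only for IS (see Footnote 7)."""
        return r_bar / ((mean_rel_is ** 0.5) * (mean_rel_correlate ** 0.5))

    def file_drawer_k(k, rho, criterion=0.05):
        """Number of unlocated null-result studies needed to pull the corrected
        correlation down to the criterion value (.05 in this article)."""
        return k * (rho - criterion) / criterion

    # Example: the overall IS-team performance meta-analysis (k = 43, rho = .42)
    # gives a file-drawer k of about 318, in line with the 319 reported in Table 2.
    print(round(file_drawer_k(43, 0.42)))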

Results

Table 2 reports the results of the meta-analyses examining the role of information sharing (IS) in team outputs. In support of Hypothesis 1, IS positively predicted team performance (ρ = .42, k = 43). IS also positively predicted cohesion (ρ = .20, k = 11), member satisfaction (ρ = .33, k = 3),[8] and knowledge integration (ρ = .34, k = 9). Hypothesis 2 predicted IS conceptualization would differentially predict team performance, such that the uniqueness of the information shared would be more strongly related to team performance than would be the openness of IS. Indeed, the credibility interval (CV) surrounding ρ for the IS–performance relationship was fairly wide (.14, .70), suggesting moderators may exist.[9] Examining the IS–performance relationship by conceptualization of IS shows support for Hypothesis 2; specifically, IS uniqueness was more strongly predictive of team performance than was IS openness (ρ = .50, k = 25 vs. ρ = .32, k = 19). Further, the confidence intervals (CIs) for these estimates of ρ do not overlap, supporting our proposition that the form of IS (uniqueness vs. openness) results in a meaningful difference in team performance. Although not hypothesized, it is interesting that openness of IS was more strongly related to cohesion than uniqueness (ρ = .31, k = 5 vs. ρ = .11, k = 6; the corresponding CIs do not overlap).

Hypothesis 3 predicted operationalization of team performance would moderate the IS–performance relationship. As expected, meta-analyses between IS and each performance criterion revealed differences in the estimates of ρ, such that IS was most strongly related to subjective performance measures (ρ = .51, k = 4), followed by decision effectiveness (ρ = .45, k = 31); the smallest relationship was found for objective measures (ρ = .21, k = 8). Largely nonoverlapping CIs as well as smaller SDρ values for subcategories of performance compared with the overall category provide additional support for performance criteria as moderators of the IS–performance relationship (Judge & Piccolo, 2004).

[6] Corrections for measure reliability were made using artifact distributions. Individual corrections for reliability were not preferable because reliability estimates were not consistently available for correlate/outcome variables (Hunter & Schmidt, 2004). Further, although the Hunter and Schmidt (2004) method also permits corrections for the effects of range restriction, there was no evidence provided of range restriction in IS or its correlates in the samples used in the primary studies.
[7] Resulting corrected correlations from both approaches can be compared across meta-analyses; however, ρ (the reliability-corrected mean correlation) may be slightly underestimated for meta-analyses wherein corrections were made for reliability in only one measure. Superscripts are included in the tables to indicate whether reliability was corrected in one or both measures in a given meta-analysis.
[8] Small k meta-analyses are subject to second-order sampling error (Hunter & Schmidt, 2004). Although second-order sampling error tends to affect meta-analytic estimates of standard deviations more than means, the reader is advised to interpret results of such meta-analyses with caution.
[9] We report both the CVs and the CIs, as each provides unique information about the nature of ρ (Hunter & Schmidt, 2004; Whitener, 1990). Specifically, the CV provides an estimate of the variability of corrected correlations across studies. Wide CVs or those that include zero suggest the presence of a moderator. An 80% CV that excludes zero indicates that more than 90% of the corrected correlations are different from zero (10% lie beyond the upper bound of the interval). The CI provides an estimate of the accuracy of our estimation of ρ (Whitener, 1990); in other words, the CI estimates the variability around ρ due to sampling error. A 90% CI that excludes zero indicates that if our estimation procedures were repeated many times, 95% of the estimates of ρ would be larger than zero (5% would fall beyond the upper limit of the interval).
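To make the distinction drawn in Footnote 9 concrete, here is a small, illustrative calculation (not taken from the article's analysis files) of an 80% credibility interval and an approximate 90% confidence interval around ρ, using the overall IS–team performance values from Table 2 (ρ = .42, SDρ = .22, k = 43, mean observed r = .37, SDr = .22). The confidence-interval standard error below is a simplification and may differ slightly from the procedure the authors used.

    def credibility_interval_80(rho, sd_rho):
        """80% credibility interval: the range expected to contain 80% of the
        corrected correlations across studies."""
        return rho - 1.28 * sd_rho, rho + 1.28 * sd_rho

    def confidence_interval_90(rho, r_bar, sd_r, k):
        """Approximate 90% confidence interval for the mean corrected correlation:
        the standard error of the mean observed r, scaled by the same correction
        applied to r_bar (see Whitener, 1990, for the full treatment)."""
        se = (sd_r / k ** 0.5) * (rho / r_bar)
        return rho - 1.645 * se, rho + 1.645 * se

    # Overall IS-team performance row of Table 2:
    print(credibility_interval_80(0.42, 0.22))           # about (.14, .70), as reported
    print(confidence_interval_90(0.42, 0.37, 0.22, 43))  # about (.36, .48) vs. the (.35, .49) reported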


Table 3 reports the results of meta-analyses testing Hypothesis 4, which predicted IS would more positively predict team performance on intellective hidden profile tasks than on nonhidden profile and/or judgmental tasks. Indeed, IS was more predictive of performance on intellective hidden profile tasks (ρ = .53, k = 23) than for any other task type (for judgmental hidden profile tasks, ρ = .36, k = 5; for intellective nonhidden profile tasks, ρ = .36, k = 4; for judgmental nonhidden profile tasks, ρ = .37, k = 4). However, the associated CIs overlapped, suggesting results should be interpreted with caution, and additional research is needed to disentangle the role of task type.

Table 4 reports the results of meta-analyses testing Hypothesis 5, which predicted discussion structure would moderate the IS–performance relationship such that IS would more positively predict performance when structure was high than low. Results suggest that discussion structure only moderated the IS–performance relationship for uniqueness; IS–uniqueness was more positively related to performance when discussion structure was high (ρ = .46, k = 8 vs. ρ = .34, k = 14), suggesting that structuring discussion enhances the importance of unique IS to team performance.

Table 5 reports the results of meta-analyses examining correlates of IS. Results suggest teams share more information wherein (a) task demonstrability is high (ρ = .45, k = 5), (b) discussion structure is high (ρ = .41, k = 13), and (c) members are more cooperative during discussion (ρ = .57, k = 14; although the CV for this relationship excludes zero, it is fairly wide, suggesting a more complex relationship may exist), thus providing support for Hypotheses 6–8.

Further, although teams are often deliberately composed of diverse members in an effort to enhance decision quality, results suggest that IS was actually greater in teams wherein member similarity is high (ρ = .22, k = 9), which provides support for Hypothesis 9.

Table 2
Information Sharing (IS) and Team Outcomes

Meta-analysis variable                      k      N      r   SDr     ρ   SDρ    80% CV      90% CI     % SEV  % ARTV  FD k
Team performance (all indicators) [a]      43  2,701    .37   .22   .42   .22   .14, .70    .35, .49    24.29   25.97   319
  IS–uniqueness [a]                        25  1,490    .44   .25   .50   .25   .18, .82    .40, .60    18.44   21.03   225
  IS–openness [a]                          19  1,295    .28   .17   .32   .14   .14, .50    .25, .39    45.27   46.21   103
Decision effectiveness [a]                 31  1,917    .40   .23   .45   .22   .17, .72    .37, .53    22.83   24.09   248
  IS–uniqueness [a]                        21  1,220    .45   .27   .47   .25   .15, .79    .37, .57    16.01   16.29   177
  IS–openness [a]                          11    777    .29   .14   .35   .09   .23, .46    .27, .43    66.52   67.32    66
Team performance–objective measures [b]     8    498    .21   .19   .21   .16   .01, .41    .09, .33    38.44   38.49    26
  IS–uniqueness [b]                         2    140    .24   .07   .24     0   .24, .24    .16, .32   100     100        8
  IS–openness [b]                           6    358    .21   .22   .22   .18   −.02, .45   .07, .37    32.35   32.39    21
Team performance–subjective measures [a]    4    286    .42   .09   .51     0   .51, .51    .43, .59   100     100       37
  IS–uniqueness [a]                         2    129    .43   .10   .49     0   .49, .49    .36, .62   100     100       18
  IS–openness [a]                           2    157    .40   .07   .48     0   .48, .48    .38, .58   100     100       18
Cohesion [a]                               11    682    .18   .14   .20   .06   .13, .28    .12, .28    83.69   84.69    33
  IS–uniqueness [a]                         6    369    .10   .13   .11   .04   .06, .16    .01, .21    92.57   92.62     8
  IS–openness [a]                           5    313    .25   .09   .31     0   .31, .31    .23, .39   100     100       26
Satisfaction [a]                            3    213    .28   .18   .33   .17   .12, .54    .13, .53    38.45   38.84    17
Knowledge integration [b]                   9    467    .33   .28   .34   .26   .01, .67    .18, .50    19.75   19.83    53

Note. k = number of correlations meta-analyzed; N = total number of groups; r = sample-size-weighted mean observed correlation; SDr = sample-size-weighted standard deviation of the observed correlations; ρ = sample-size-weighted mean observed correlation corrected for unreliability in both measures; SDρ = standard deviation of ρ; 80% CV = 80% credibility interval around ρ; 90% CI = 90% confidence interval around ρ; % SEV = percent variance due to sampling error; % ARTV = percent variance due to all corrected artifacts; FD k = file-drawer k representing the number of "lost" studies reporting null findings necessary to reduce ρ to .05.
[a] Corrections for reliability were possible for both IS and team performance. [b] Corrections for reliability were possible for only IS.

Table 3
Task Type as a Moderator of the Relationship Between Information Sharing (IS) and Team Performance

Meta-analysis variable       k      N      r   SDr     ρ   SDρ    80% CV      90% CI     % SEV  % ARTV  FD k
Hidden profile tasks
  Intellective [a]          23  1,307    .46   .25   .53   .25   .21, .85    .43, .63    18.14   22.07   221
  Judgmental [a]             5    260    .32   .19   .36   .15   .17, .56    .20, .52    45.02   45.40    31
Nonhidden profile tasks
  Intellective [a]           4    352    .34   .08   .36     0   .36, .36    .29, .43   100     100       25
  Judgmental [a]             4    181    .30   .11   .37     0   .37, .37    .25, .49   100     100       26

Note. Column abbreviations are as defined in the note to Table 2.
[a] Corrections for reliability were possible for both IS and team performance.


Also, as predicted in Hypothesis 10, as members became more informationally independent (i.e., as the proportion of the team that could have reached a correct decision without discussion increased), more information was shared (ρ = .52, k = 4).

Table 6 reports meta-analytic estimates of the size of the team information sampling bias. Specifically, Hypothesis 11 predicted teams would discuss more commonly held than uniquely held information. Indeed, teams tended to spend more discussion time on shared than unshared information (ρ = .69, k = 23).[10] Interestingly, this effect was even stronger when task demonstrability was low (ρ = .86, k = 4 vs. ρ = .62, k = 19; the CIs around these estimates are distinct, suggesting differences in the nature of information shared by task type).

Discussion

Organizations are increasingly relying upon the outputs of knowledge-based project and management teams (Devine, 1999; Sundstrom, 1999). An intuitive expectation is that knowledge-based teams share and ultimately benefit from a greater pool of available information and members' collective processing of that information. However, a seminal study by Stasser and Titus (1985) cast doubt on expectations of groups as effective information processors, and a stream of empirical work has followed. We cumulated 22 years of empirical research on information sharing (IS), some conducted within the Stasser and Titus tradition and some more broadly focused on team process, in order to (a) better understand the correlates and consequences of team IS and (b) explore moderators of the IS–performance relationship. Findings show IS is enhanced by factors that promote information processing (e.g., task demonstrability) and reflect various aspects of member redundancy (e.g., similarity). Further, our results confirm that IS is a clear driver of team performance and that although the effect is moderated, the relationship remains positive across levels of all moderators.

Implications of Definitions of Information Sharing and Performance

Importantly, the manner in which IS has been theoretically and operationally defined moderates the strength of the positive IS–performance relationship. IS defined as uniqueness (i.e., sharing information not commonly held by all team members) is more predictive of team performance than IS defined more broadly as openness (i.e., breadth of information shared). Interestingly, our results suggest the reverse is true when predicting team cohesion; specifically, IS openness was more strongly correlated with team cohesion than IS uniqueness. This pattern of findings is consistent with the idea that the uniqueness and openness aspects of IS parallel the task and socio-emotional functions of teams. Sharing unique information builds the available knowledge stock, directly improving the team's task outcomes. Openness was also related to performance, though less strongly than uniqueness. A plausible explanation for this differential relationship is that openness influences performance indirectly through promoting high-quality relationships and enabling members to have greater trust in one another's informational inputs. Another possible explanation is that discussing information with greater breadth may permit more in-depth information processing, thus enhancing the quality of team decisions. And third, although the current literature has not allowed us to jointly consider openness and uniqueness, perhaps when teams are more open during discussions, the potential that unique information surfaces increases, thus promoting quality performance.

Future research ought to explore uniqueness and openness in combination. Figure 2 proposes a two-dimensional view of IS uniqueness and openness; we do not submit that openness and uniqueness are orthogonal, but rather, at least conceptually, that they are not so perfectly correlated as to represent ends of a single dimension (e.g., team discussion can be high in both openness and uniqueness). We hope this framework spawns research that will ultimately enable a richer understanding of how multiple facets of knowledge utilization impact team effectiveness.

[10] Were equal discussion time devoted to shared and unshared information, we would expect this correlation to be near zero. The reliability-corrected correlation can also be converted to a d statistic (δ) for ease of interpretation. The effect size d captures the same effect as the t statistic, but it is a better measure of effect because it allows for a consistent metric across studies (i.e., does not rely upon sample size for interpretation; Hunter & Schmidt, 2004). A ρ of .69 converts to a δ of 1.38, which captures the difference in average discussion time devoted to shared versus unshared information, and indicates a large effect (Cohen, 1988) of biased IS on group discussion.

Table 4
Discussion Structure as a Moderator of the Relationship Between Information Sharing (IS) and Team Performance

Meta-analysis variable                       k      N      r   SDr     ρ   SDρ    80% CV      90% CI     % SEV  % ARTV  FD k
Structured discussion (IS combined) [a]     12    764    .32   .20   .39   .20   .14, .64    .27, .51    32.00   33.54    82
  IS–uniqueness [a]                          8    405    .38   .13   .46   .02   .44, .49    .36, .56    93.94   98.16    66
  IS–openness [a]                            4    287    .24   .27   .29   .29   −.08, .66   .03, .55    17.33   18.02    20
Unstructured discussion (IS combined) [a]   27  1,585    .33   .12   .36   .02   .32, .38    .31, .41    95.40   97.22   168
  IS–uniqueness [a]                         14    730    .33   .14   .34   .07   .26, .43    .27, .41    77.95   78.78    82
  IS–openness [a]                           13    855    .33   .08   .37     0   .37, .37    .34, .40   100     100       84

Note. Column abbreviations are as defined in the note to Table 2.
[a] Corrections for reliability were possible for both IS and team performance.


Furthermore, although there are construct-based explanations for the observed differential pattern of findings with uniqueness and openness, there are equally plausible methodological explanations. These two streams of research have tended to differ in the extent to which they rely on manipulations (uniqueness) versus self-report retrospective measures (openness). Manipulations may strengthen observed relations via situational strength and weaken them due to artificial dichotomization. Likewise, sources of contamination in self-report measures of openness (e.g., hindsight bias) may inflate correlations with performance, though this would imply the actual relationship of openness to performance is weaker than the current findings suggest. Other methodological explanations exist as well, for example, differential reliance on ad hoc (uniqueness) versus intact teams (openness) and differential use of decision effectiveness as the primary criterion (uniqueness) versus using a more balanced set of performance criteria (openness). The current findings show a clear pattern where uniqueness is more predictive of team performance than openness, but the extent to which these differences are attributable to conceptual versus methodological sources remains an open question.

Performance criterion also moderates the IS–performance relationship; specifically, IS shows stronger effects for subjective and decision-making effectiveness measures than for objective indices. Beal et al. (2003) discovered a similar pattern with cohesion and performance. Hence, both cohesion and IS show stronger relations to behavioral than outcome/results criteria, likely because behaviors are more controllable by teams (Campbell et al., 1992). Future investigations of team processes such as IS ought to explicitly consider controllability (i.e., the extent to which the outcome is within the domain of control of the team) in choosing an outcome metric.

Implications of Task Type and Discussion Structure

A concern in the IS literature revolves around whether the IS–performance relationship holds only for highly demonstrable, hidden profile tasks (Mohammed & Dumville, 2001).

Table 5
Correlates of Team Information Sharing (IS)

Meta-analysis variable               k      N      r   SDr     ρ   SDρ    80% CV      90% CI     % SEV  % ARTV  FD k
Task demonstrability [b]             5    416    .41   .06   .45     0   .45, .45    .40, .50   100     100       40
  IS–uniqueness [b]                  5    414    .41   .06   .46     0   .46, .46    .41, .51   100     100       40
  IS–openness [b]                    2    150    .34   .09   .36     0   .36, .36    .24, .48   100     100       13
Decision structure [a]              13    688    .40   .14   .41     0   .41, .41    .34, .48   100     100       94
  IS–uniqueness [b]                 11    566    .43   .12   .44     0   .44, .44    .37, .51   100     100       86
  IS–openness [a]                    2    122    .25   .07   .27     0   .27, .27    .19, .35   100     100        9
Cooperation during discussion [a]   14  1,028    .49   .29   .57   .32   .16, .97    .42, .72     9.35   11.43   146
  IS–uniqueness [a]                  4    338    .40   .30   .44   .31   .04, .85    .16, .72     9.32    9.96    32
  IS–openness [a]                   10    690    .53   .28   .63   .30   .24, 1.0    .45, .81     9.83   12.24   116
Member similarity [a]                9    565    .19   .18   .22   .14   .04, .41    .10, .34    49.01   49.25    31
  IS–uniqueness [a]                  4    247    .25   .15   .27   .09   .15, .39    .14, .40    66.57   66.66    18
  IS–openness [a]                    5    318    .15   .18   .18   .17   .03, .39    .02, .34    45.78   45.81    13
Informational independence [b]       4    268    .49   .11   .52   .07   .44, .60    .42, .62    70.03   70.05    38
  IS–uniqueness [b]                  3    227    .31   .13   .34   .08   .23, .44    .21, .47    64.25   64.24    18
  IS–openness [b]                    2    112    .65   .05   .69     0   .69, .69    .62, .76   100     100       26

Note. Column abbreviations are as defined in the note to Table 2.
[a] Corrections for reliability were possible for both IS and team performance. [b] Corrections for reliability were possible for only IS.

Table 6
Impact of Knowledge Distribution on Information Sharing (IS) in All Hidden Profile Tasks: Proportion of Discussion Devoted to Shared Versus Unshared Information

Hidden profile task type     k     N      r   SDr     ρ   SDρ    80% CV      90% CI     % SEV  % ARTV  FD k
All [a]                     23   901    .65   .24   .69   .23   .39, .98    .61, .77    15.95   17.82   295
  Intellective [a]          19   690    .58   .22   .62   .20   .37, .87    .54, .70    27.22   29.09   217
  Judgmental [a]             4   211    .86   .19   .86   .19   .63, 1.0    .70, 1.0     3.81    3.81    65

Note. Column abbreviations are as defined in the note to Table 2.
[a] Corrections for reliability were possible for only IS.


Although we found IS had the strongest impact on performance on intellective hidden profile tasks, IS also positively affected performance on less demonstrable as well as nonhidden profile tasks, supporting the generalizability of the effect and Laughlin's (1980, 1996) classic postulate that team performance on more demonstrable (intellective) tasks requires greater information processing than does performance on less demonstrable tasks.

Although discussion structure alone did not alter the strength of the IS–performance relationship, there was evidence of a more complex interaction involving uniqueness and openness. Uniqueness and openness have similar effects on performance in unstructured (free-form) discussions. Yet, in structured discussions, there is more overall variation in effect size estimates, and uniqueness shows a somewhat larger effect on performance than does openness. Comparing the effects of uniqueness within structured versus unstructured discussions suggests that the impact of uniqueness on team performance is magnified in structured discussions. Research is needed to further examine the impact of structure on different aspects of IS.

When Do Teams Share Information?

The current findings point to three situations wherein teams may naturally avoid sharing information at times when it is particularly critical for them to do so. Specifically, teams share more information when (a) all members already know the information (biased information sampling), (b) members are all capable of making accurate decisions independently (informational independence), and (c) members are highly similar to one another (member similarity). These findings suggest that less knowledge-redundant teams, precisely those teams who stand to gain the most from sharing information, actually share less information than do more knowledge-redundant teams. This redundancy effect reflects a divergence in what teams actually do (normatively) and what they should do in order to be maximally effective (prescriptively), and it has particularly meaningful implications for expert decision-making teams, like those employed for emergency response and medical decision-making (Burke, Salas, Wilson-Donnelly, & Priest, 2004). Highly complex task domains typically require specialized, nonredundant experts with dissimilar training and background characteristics to integrate information in order to reach a quality solution. Future research is needed to elucidate reasons for the member redundancy effect (e.g., conformity pressure, social identity, and/or relational motivation) as a first step toward developing interventions to mitigate it.

Current findings regarding knowledge distribution (i.e., an aspect of redundancy) demonstrate the robustness of Stasser and Titus's (1985) biased information sampling effect; cumulating 22 years of subsequent research shows teams deviate markedly from an even balance of time spent discussing both shared and unshared information. Furthermore, information sampling is even more biased toward shared information on judgmentally (compared to intellectively) framed tasks. In practice, framing tasks as intellective rather than judgmental shows promise as a way to enhance the sharing of unique information.

Findings show IS can be enhanced by (a) structuring team discussions, (b) framing team tasks as intellective, and (c) promoting a cooperative team climate. All three factors have been found to enhance teams' in-depth processing of information. Structure appears to have similar effects on information sampling in team discussions, as in personnel selection interviews (Conway, Jako, & Goodman, 1995); structure increases the team's retrieval of decision-relevant information. Similarly, suggesting to teams that they have the necessary knowledge, skills, and abilities to come to a superior solution likely sparks a greater vigilance in seeking out and integrating decision-relevant information. Lastly, promoting a cooperative climate is linked to greater use of informational resources by teams.

Figure 2. Two-dimensional typology of team information sharing and team outcomes.



New Directions in Information Sharing Research

Two exciting new directions are to examine IS across dimensions of team virtuality and team boundaries. Widespread trends toward globalized, digitized work are transforming the way teams communicate. Future research is needed to examine IS and information processing in teams operating under various configurations of team virtuality, that is, "the extent to which team members use virtual tools to coordinate and execute team processes, the amount of informational value provided by such tools, and the synchronicity of team member virtual interaction" (Kirkman & Mathieu, 2005, p. 700). Likewise, the research paradigm to date has explored how individuals in a single team exchange and process information. The increased complexity of team operating environments as networked structures (Marks, DeChurch, Mathieu, Panzer, & Alonso, 2005) raises the issue of how knowledge and information are effectively shared both within and across distinct interdependent teams. Many of the factors that promote IS within a team (e.g., shared identity, trust, and cohesion) are weakened when members must collaborate with members of other teams (Sherif, 1966; Tajfel & Turner, 1986).

Limitations

Although this study makes an important contribution to the team performance literature, it is not without its limitations. First, although we are unaware of any concrete guidelines regarding the minimum number of studies necessary to conduct a meta-analysis, the small number of articles available in some of our meta-analyses heightens concerns about second-order sampling error (Hunter & Schmidt, 2004). Second, there is causal ambiguity in some of the relationships we examined. This is a clear concern with team cooperation; because most primary studies reported correlational relationships, it is equally plausible that cooperative groups share more information and/or that IS improves team cooperation. This is less of a concern for relationships involving task demonstrability or discussion structure because experimental designs were used in the primary studies.
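For readers less familiar with the psychometric framework invoked here, the bare-bones Hunter and Schmidt estimates take approximately the following form; this is a sketch of the standard formulas rather than the exact artifact corrections applied in the present analyses. With k studies reporting correlations r_i based on sample sizes N_i and mean sample size \bar{N}:

\bar{r} = \frac{\sum_i N_i r_i}{\sum_i N_i}, \qquad \hat{\sigma}^2_r = \frac{\sum_i N_i (r_i - \bar{r})^2}{\sum_i N_i}, \qquad \hat{\sigma}^2_e = \frac{(1 - \bar{r}^2)^2}{\bar{N} - 1}, \qquad \hat{\sigma}^2_\rho = \hat{\sigma}^2_r - \hat{\sigma}^2_e.

Because \hat{\sigma}^2_r is itself computed from only k effect sizes, both it and the residual estimate \hat{\sigma}^2_\rho are unstable when k is small; this instability is the second-order sampling error concern noted above.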

A third limitation is that correlations among the four moderators we tested may operate to confound effects. Because we could not directly calculate correlations among the moderators themselves, it is possible that, if they are highly correlated, one factor or another is the primary moderator. This issue was discussed earlier as it concerns the uniqueness/openness and performance criterion moderators; we raise it again here to note that it applies to all of the moderator combinations. As such, these results are best interpreted as supporting the presence of moderators in the IS–performance relationship. The four moderators we examined explain variation in the relationship between IS and performance, but research is needed to disentangle their effects.

Conclusion

Teams are increasingly tasked with making high-stakes decisions (Burke et al., 2004) in settings as varied as hospital operating rooms (e.g., surgical teams), executive boardrooms (e.g., top management teams), and provinces of Iraq (e.g., provincial reconstruction teams). Teams typically possess an informational advantage over individuals: members' diverse personal experiences, cultural viewpoints, areas of specialization, and educational backgrounds bring forth a rich pool of information on which to base decision alternatives and relevant criteria. However, the current findings confirm that although sharing information is important to team outcomes, teams fail to share information when they most need to do so.

References

References marked with an asterisk indicate studies that were included in the meta-analysis.

*Alge, B. J., Wiethoff, C., & Klein, H. J. (2003). When does the medium matter? Knowledge-building experiences and opportunities in decision-making teams. Organizational Behavior and Human Decision Processes, 91, 26–37.

*Atamanik-Dunphy, C., Berger, C., Perez-Cerini, E. I., & McCarthy, J. M. (2006, May). The effect of trust and information sharing on team performance. Poster presented at the 21st annual meeting of the Society for Industrial and Organizational Psychology, Dallas, TX.

Beal, D. J., Cohen, R. R., Burke, M. J., & McLendon, C. L. (2003). Cohesion and performance in groups: A meta-analytic clarification of construct relations. Journal of Applied Psychology, 88, 989–1004.

*Brodbeck, F. C., Kerschreiter, R., Mojzisch, A., Frey, D., & Schulz-Hardt, S. (2002). The dissemination of critical, unshared information in decision-making groups: The effects of pre-discussion dissent. European Journal of Social Psychology, 32, 35–56.

*Bunderson, J. S., & Sutcliffe, K. M. (2002). Comparing alternative conceptualizations of functional diversity in management teams: Process and performance effects. Academy of Management Journal, 45, 875–893.

Burke, C. S., Salas, E., Wilson-Donnelly, K., & Priest, H. (2004). How to turn a team of experts into an expert medical team: Guidance from the aviation and military communities. Quality and Safety in Health Care, 13, 96–104.

*Butler, J. K., Jr. (1999). Trust expectations, information sharing, climate of trust, and negotiation effectiveness and efficiency. Group and Organization Management, 24, 217–238.

Campbell, J. P., McCloy, R. A., Oppler, S. H., & Sager, C. E. (1992). A theory of performance. In N. Schmitt & W. C. Borman (Eds.), New developments in selection and placement (pp. 35–70). San Francisco: Jossey-Bass.

*Campbell, J., & Stasser, G. (2006). The influence of time and task demonstrability on decision-making in computer-mediated and face-to-face groups. Small Group Research, 37, 271–294.

*Chalos, P., & Poon, M. C. C. (2000). Participation and performance in capital budgeting teams. Behavioral Research in Accounting, 12, 199–229.

*Chidambaram, L., & Tung, L. L. (2005). Is out of sight, out of mind? An empirical study of social loafing in technology-supported groups. Information Systems Research, 16, 149–168.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.

Conway, J. M., Jako, R. A., & Goodman, D. F. (1995). A meta-analysis of interrater and internal consistency reliability of selection interviews. Journal of Applied Psychology, 80, 565–579.

*Cruz, M. G., Boster, F. J., & Rodriguez, J. I. (1997). The impact of group size and proportion of shared information on the exchange and integration of information in groups. Communication Research, 24, 291–313.

*Cummings, J. N. (2004). Work groups, structural diversity, and knowledge sharing in a global organization. Management Science, 50, 352–364.

*Dahlin, K. B., Weingart, L. R., & Hinds, P. J. (2005). Team diversity and information use. Academy of Management Journal, 48, 1107–1123.



*Dennis, A. R. (1996a). Information exchange and use in group decision making: You can lead a group to information, but you can't make it think. MIS Quarterly, 20, 433–457.

*Dennis, A. R. (1996b). Information exchange and use in small group decision making. Small Group Research, 27, 532–550.

*Devine, D. J. (1999). Effects of cognitive ability, task knowledge, information sharing, and conflict on group decision-making effectiveness. Small Group Research, 30, 608–634.

*Durham, C. (1997). Effects of interdependence on motivation, inter-item interaction processes and performance. Unpublished doctoral dissertation, University of Maryland, College Park.

*Farmer, S. M., & Hyatt, C. W. (1994). Effects of task language demands and task complexity on computer-mediated work groups. Small Group Research, 25, 331–366.

*Franz, T. M., & Larson, J. R., Jr. (2002). The impact of experts on information sharing during group discussion. Small Group Research, 33, 383–411.

*Galinsky, A. D., & Kray, L. J. (2004). From thinking about what might have been to sharing what we know: The effects of counterfactual mind-sets on information sharing in groups. Journal of Experimental Social Psychology, 40, 606–618.

*Gigone, D., & Hastie, R. (1993). The common knowledge effect: Information sharing and group judgment. Journal of Personality and Social Psychology, 65, 959–974.

*Greenhalgh, L., & Chapman, D. H. (1998). Negotiator relationships: Construct measurement, and demonstration of their impact on the process and outcomes of negotiation. Group Decision and Negotiation, 7, 465–489.

*Greitemeyer, T., Schulz-Hardt, S., Brodbeck, F. C., & Frey, D. (2006). Information sampling and group decision making: The effects of an advocacy decision procedure and task experience. Journal of Experimental Psychology: Applied, 12, 31–42.

*Habig, J. K., & Devine, D. J. (2006). The effects of structured decision-making techniques on information sharing, conflict, effectiveness and viability in teams. Unpublished manuscript, Indiana University and Purdue University Indianapolis.

Hackman, J. R. (1987). The design of work teams. In J. W. Lorsch (Ed.), Handbook of organizational behavior (pp. 315–342). Upper Saddle River, NJ: Prentice Hall.

*Henningsen, D. D., & Henningsen, M. L. M. (2003). Examining social influence in information-sharing contexts. Small Group Research, 34, 391–412.

*Henningsen, D. D., Henningsen, M. L. M., Jakobsen, L., & Borton, I. (2004). It's good to be leader: The influence of randomly and systematically selected leaders on decision-making. Group Dynamics: Theory, Research, and Practice, 8, 62–76.

*Henry, R. A. (1995). Improving group judgment accuracy: Information sharing and determining the best member. Organizational Behavior and Human Decision Processes, 62, 190–197.

*Hightower, R., & Sayeed, L. (1995). The impact of computer-mediated communication systems on biased group discussion. Computers in Human Behavior, 11, 33–44.

Hinsz, V. B., Tindale, R. S., & Vollrath, D. A. (1997). The emerging conceptualization of groups as information processors. Psychological Bulletin, 121, 43–64.

*Hollingshead, A. B. (1996a). Information suppression and status persistence in group decision making: The effects of communication media. Human Communication Research, 23, 193–219.

*Hollingshead, A. B. (1996b). The rank-order effect in group decision making. Organizational Behavior and Human Decision Processes, 68, 181–193.

Hunter, J. E., & Schmidt, F. L. (1990). Dichotomization of continuous variables: The implications for meta-analysis. Journal of Applied Psychology, 75, 334–349.

Hunter, J. E., & Schmidt, F. L. (1994). Estimation of sampling error variance in the meta-analysis of correlations: Use of average correlation in the homogeneous case. Journal of Applied Psychology, 79, 171–177.

Hunter, J. E., & Schmidt, F. L. (2004). Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.). New York: Sage.

*Jehn, K. A., & Shah, P. P. (1997). Interpersonal relationships and task performance: An examination of mediating processes in friendship and acquaintance groups. Journal of Personality and Social Psychology, 72, 775–790.

*Johnson, M. D., Hollenbeck, J. R., Humphrey, S. E., & Ilgen, D. R. (2006). Cutthroat cooperation: Asymmetrical adaptation to changes in team reward structures. Academy of Management Journal, 49, 103–120.

Judge, T. A., & Piccolo, R. F. (2004). Transformational and transactional leadership: A meta-analytic test of their relative validity. Journal of Applied Psychology, 89, 755–768.

*Kim, P. H. (1998). Working under the shadow of suspicion: The implications of trust and distrust for information sharing in groups. Unpublished doctoral dissertation, Northwestern University.

Kirkman, B. L., & Mathieu, J. E. (2005). The dimensions and antecedents of team virtuality. Journal of Management, 31, 700–718.

*Klein, O., Jacobs, A., Gemoets, S., Licata, L., & Lambert, S. M. (2003). Hidden profiles and the consensualization of social stereotypes: How information distribution affects stereotype content and sharedness. European Journal of Social Psychology, 33, 755–777.

*Lam, S. S. K., & Schaubroeck, J. (2000). Improving group decisions by better pooling information: A comparative advantage of group decision support systems. Journal of Applied Psychology, 85, 565–573.

*Larson, J. R., Jr., Christensen, C., Abbott, A. S., & Franz, T. M. (1996). Diagnosing groups: Charting the flow of information in medical decision-making teams. Journal of Personality and Social Psychology, 71, 315–330.

*Larson, J. R., Jr., Christensen, C., Franz, T. M., & Abbott, A. S. (1998). Diagnosing groups: The pooling, management, and impact of shared and unshared case information in team-based medical decision-making. Journal of Personality and Social Psychology, 75, 93–108.

*Larson, J. R., Jr., Foster-Fishman, P. G., & Franz, T. M. (1998). Leadership style and the discussion of shared and unshared information in decision-making groups. Personality and Social Psychology Bulletin, 24, 482–495.

*Larson, J. R., Jr., Foster-Fishman, P. G., & Keys, C. B. (1994). Discussion of shared and unshared information in decision-making groups. Journal of Personality and Social Psychology, 67, 446–461.

*Larson, J. R., Jr., Sargis, E. G., & Bauman, C. W. (2004). Shared knowledge and subgroup influence during decision-making discussions. Journal of Behavioral Decision Making, 17, 245–262.

Laughlin, P. R. (1980). Social combination processes of cooperative, problem-solving groups on verbal intellective tasks. In M. Fishbein (Ed.), Progress in social psychology (Vol. 1, pp. 127–155). Hillsdale, NJ: Erlbaum.

Laughlin, P. R. (1996). Group decision making and collective induction. In E. H. Witte & J. H. Davis (Eds.), Understanding group behavior: Consensual action by small groups (Vol. 1, pp. 61–80). Mahwah, NJ: Erlbaum.

*Liljenquist, K. A., Galinsky, A. D., & Kray, L. J. (2004). Exploring the rabbit hole of possibilities by myself or with my group: The benefits and liabilities of activating counterfactual mind-sets for information sharing and group coordination. Journal of Behavioral Decision Making, 17, 263–279.

Marks, M. A., DeChurch, L. A., Mathieu, J. E., Panzer, F. J., & Alonso, A. (2005). Teamwork in multiteam systems. Journal of Applied Psychology, 90, 964–971.

*McLeod, P. L., Baron, R. S., Marti, M. W., & Yoon, K. (1997). The eyes have it: Minority influence in face-to-face and computer-mediated group discussion. Journal of Applied Psychology, 82, 706–718.

*Mennecke, B. E. (1997). Using group support systems to discover hidden profiles: An examination of the influence of group size and meeting structures on information sharing and decision quality. International Journal of Human–Computer Studies, 47, 387–405.

*Mennecke, B. E., & Valacich, J. S. (1998). Information is what you make of it: The influence of group history and computer support on information sharing, decision quality, and member perceptions. Journal of Management Information Systems, 15, 173–197.

*Miranda, S. M., & Saunders, C. S. (2003). The social construction of meaning: An alternative perspective on information sharing. Information Systems Research, 14, 87–106.

Mohammed, S., & Dumville, B. C. (2001). Team mental models in a team knowledge framework: Expanding theory and measurement across disciplinary boundaries. Journal of Organizational Behavior, 22, 89–106.

*Moye, N. A., & Langfred, C. W. (2004). Information sharing and group conflict: Going beyond decision making to understand the effects of information sharing on group performance. International Journal of Conflict Management, 15, 381–435.

*Okhuysen, G. A., & Eisenhardt, K. M. (2002). Integrating knowledge in groups: How formal interventions enable flexibility. Organization Science, 13, 370–386.

Ones, D. S., Viswesvaran, C., & Schmidt, F. L. (1993). Comprehensive meta-analysis of integrity test validities: Findings and implications for personnel selection and theories of job performance. Journal of Applied Psychology, 78, 679–703.

*Parks, C. D. (1991). Effects of decision rule and task importance on sharing of "unique" information. Unpublished doctoral dissertation, University of Illinois at Urbana–Champaign.

*Phillips, K. W., Mannix, E. A., Neale, M. A., & Gruenfeld, D. H. (2004). Diverse groups and information sharing: The effects of congruent ties. Journal of Experimental Social Psychology, 40, 497–510.

*Phillips, K. W., Northcraft, G. B., & Neale, M. A. (2006). Surface-level diversity and decision-making in groups: When does deep-level similarity help? Group Processes and Intergroup Relations, 9, 467–482.

*Quigley, N. R., Tesluk, P. E., Locke, E. A., & Bartol, K. M. (2007). A multilevel investigation of the motivational mechanisms underlying knowledge sharing and performance. Organization Science, 18, 71–88.

Rosenthal, R. (1979). The "file drawer problem" and tolerance for null results. Psychological Bulletin, 86, 638–641.

*Savadori, L., Van Swol, L. M., & Sniezek, J. A. (2001). Information sampling and confidence within groups and judge advisor systems. Communication Research, 28, 737–771.

*Schittekatte, M., & Van Hiel, A. (1996). Effects of partially shared information and awareness of unshared information on information sampling. Small Group Research, 27, 431–449.

*Schulz-Hardt, S., Brodbeck, F. C., Mojzisch, A., Kerschreiter, R., & Frey, D. (2006). Group decision making in hidden profile situations: Dissent as a facilitator for decision quality. Journal of Personality and Social Psychology, 91, 1080–1093.

Sherif, M. (1966). In common predicament. Boston: Houghton Mifflin.

*Shirani, A. I. (2006). Sampling and pooling of decision-relevant information: Comparing the efficiency of face-to-face and GSS supported groups. Information and Management, 43, 521–529.

*So, W. H., & Williams, K. J. (2004, April). Mood and pooling of unshared information in group decision-making. Paper presented at the 19th annual meeting of the Society for Industrial and Organizational Psychology, Chicago, IL.

*Srivastava, A. (2001). Antecedents and effects of knowledge sharing in teams: A field study. Unpublished doctoral dissertation, University of Maryland, College Park.

*Stasser, G. (1992). Information salience and the discovery of hidden profiles by decision-making groups: A "thought" experiment. Organizational Behavior and Human Decision Processes, 52, 156–181.

*Stasser, G., & Stewart, D. (1992). Discovery of hidden profiles by decision-making groups: Solving a problem versus making a judgment. Journal of Personality and Social Psychology, 63, 426–434.

*Stasser, G., Stewart, D. D., & Wittenbaum, G. M. (1995). Expert roles and information exchange during discussion: The importance of knowing who knows what. Journal of Experimental Social Psychology, 31, 244–265.

*Stasser, G., Taylor, L. A., & Hanna, C. (1989). Information sampling in structured and unstructured discussions of three- and six-person groups. Journal of Personality and Social Psychology, 57, 67–78.

*Stasser, G., & Titus, W. (1985). Pooling of unshared information in group decision making: Biased information sampling during discussion. Journal of Personality and Social Psychology, 48, 1467–1478.

*Stasser, G., & Titus, W. (1987). Effects of information load and percentage of shared information on the dissemination of unshared information during group discussion. Journal of Personality and Social Psychology, 53, 81–93.

*Stasser, G., Vaughan, S. I., & Stewart, D. D. (2000). Pooling unshared information: The benefits of knowing how access to information is distributed among group members. Organizational Behavior and Human Decision Processes, 82, 102–116.

*Stewart, D. D., Billings, R. S., & Stasser, G. (1998). Accountability and the discussion of unshared, critical information in decision-making groups. Group Dynamics: Theory, Research, and Practice, 2, 18–23.

*Stewart, D. D., & Stasser, G. (1995). Expert role assignment and information sampling during collective recall and decision making. Journal of Personality and Social Psychology, 69, 619–628.

*Stewart, D. D., & Stasser, G. (1998). The sampling of critical, unshared information in decision-making groups: The role of an informed minority. European Journal of Social Psychology, 28, 95–113.

*Stewart, D., & Stewart, C. B. (2001). Group recall: The picture-superiority effect with shared and unshared information. Group Dynamics: Theory, Research, and Practice, 5, 48–56.

*Straus, S. G. (1996). Getting a clue: The effects of communication media and information distribution on participation and performance in computer-mediated and face-to-face groups. Small Group Research, 27, 115–142.

Sundstrom, E. (1999). Challenges of supporting work team effectiveness. In E. Sundstrom (Ed.), Supporting work team effectiveness: Best management practices for fostering high performance (pp. 3–23). San Francisco: Jossey-Bass.

Tajfel, H., & Turner, J. C. (1986). The social identity theory of intergroup behavior. In S. Worchel & L. W. Austin (Eds.), Psychology of intergroup relations (pp. 7–24). Chicago: Nelson-Hall.

*Van Hiel, A., & Schittekatte, M. (1998). Information exchange in context: Effects of gender composition of group, accountability, and intergroup perception on group decision making. Journal of Applied Social Psychology, 28, 2049–2067.

*Vaughan, S. (1999). Information sharing and cognitive centrality: Patterns in small decision-making groups of executives. Unpublished doctoral dissertation, Miami University.

Whitener, E. M. (1990). Confusion of confidence intervals and credibility intervals in meta-analysis. Journal of Applied Psychology, 75, 315–321.

*Winquist, J. R., & Larson, J. R., Jr. (1998). Information pooling: When it impacts group decision making. Journal of Personality and Social Psychology, 74, 371–377.

*Wittenbaum, G. M., & Bowman, J. M. (2004). A social validation explanation for mutual enhancement. Journal of Experimental Social Psychology, 40, 169–184.

Wittenbaum, G. M., Hollingshead, A. B., & Botero, I. C. (2004). From cooperative to motivated information sharing in groups: Moving beyond the hidden profile paradigm. Communication Monographs, 71, 286–310.

Received September 20, 2007
Revision received June 4, 2008
Accepted July 25, 2008


