Telematics and Informatics 31 (2014) 543–558

Contents lists available at ScienceDirect

Telematics and Informatics

journal homepage: www.elsevier.com/locate/tele

Satisfaction with outcome and process from web-based meetings for idea generation and selection: The roles of instrumentality, enjoyment, and interface design

0736-5853/$ - see front matter © 2013 Elsevier Ltd. All rights reserved.
http://dx.doi.org/10.1016/j.tele.2013.12.004

* Corresponding author. Tel.: +852 65717744.
E-mail addresses: [email protected] (A. Ivanov), [email protected] (D. Cyr).

Alex Ivanov a,*, Dianne Cyr b

a Department of Media and Communication, City University of Hong Kong, Hong Kong
b Beedie School of Business, Simon Fraser University, Canada

Article history: Received 1 December 2013; Received in revised form 22 December 2013; Accepted 25 December 2013; Available online 8 January 2014

Keywords: Group support systems; Electronic brainstorming; Satisfaction; Teamwork; Goal attainment model; Technology acceptance; Idea generation; Decision-making; Virtual teams; Task enjoyment; Visual aesthetics; Interface design; Motivation; Instrumentality; Hedonic effects; Structural equation modeling; Human–computer interaction

Abstract

This study examines individual group member satisfaction with outcome and process from an idea generation and selection task via a Group Support System. A total of 126 participants formed 20 virtual teams, each completing a 20-minute task, followed by a questionnaire about their experience. The study validated a proposed scale for a new construct and tested several hypotheses using a structural equation model. While the new construct of perceived instrumentality did not directly influence satisfaction with outcome, satisfaction with process was significantly influenced by perceived task enjoyment and interface design aesthetics. Scholars of GSS meeting satisfaction are therefore advised to include hedonic constructs in their research models. The study's qualitative data also indicated that a GSS designed to support evaluability within the group could lead to increased satisfaction with process within the majority of group members.


1. Introduction

Increasing globalization has made virtual teamwork quite common in business organizations, government agencies, and educational institutions (Dasgupta et al., 2002; Saafein and Shaykhian, 2014; Saunders and Ahuja, 2006). Most virtual teamwork, however, is still conducted through email, chat, or teleconferencing (Quan-Haase et al., 2005) rather than taking advantage of specialized e-collaboration technologies, called group support systems (GSS). GSS have been available since the late eighties, but their acceptance by managers and end-users in organizations has been surprisingly low (Dennis and Reinicke, 2004; Roszkiewicz, 2007). GSS structure interaction processes within groups in such a way as to optimize problem formulation, idea generation and evaluation, decision-making, and consensus building (Gallupe et al., 1988). Typically, a GSS


is used to aid in electronic brainstorming and voting (Srite et al., 2007) and for relatively short-term tasks in collocated or virtual teams (Leinonen et al., 2005). This study focuses on virtual teams, defined as self-managed groups of geographically dispersed knowledge workers formed ad hoc to perform an information-processing task via electronic communication media (Curşeu et al., 2008).

Two factors – increased productivity and group member satisfaction – are considered key predictors of success in virtual teamwork (Kahai et al., 2003). Yet, when it comes to GSS use, productivity gains have rarely been accompanied by increases in participant satisfaction (Brown et al., 2010). Kerr and Murthy (2004), for instance, examined satisfaction from a realistic business-consulting task requiring idea generation and evaluation. They found that participants in GSS groups were less satisfied with their team experience and group output compared to participants in non-supported, face-to-face groups. Even GSS that have been judged by end-users to be useful and easy to use – the two key determinants of technology acceptance (Venkatesh, 2000) – have left these same users feeling dissatisfied with the GSS meeting (Briggs et al., 2006).

Although satisfaction has been examined in many GSS studies (Fjermestad and Hiltz, 2000), there is a dearth of research on what actually causes various types of satisfaction in GSS meetings (Brown et al., 2010). Dennis et al. (2003) proposed a model to integrate GSS within the technology acceptance model (TAM), but satisfaction was not included as a dependent variable. Other studies of end-user satisfaction mainly utilize measures of system characteristics and information content (Wixom and Todd, 2005), neglecting the social psychology aspects of teamwork. To our knowledge, the most rigorously tested model of satisfaction as it applies to GSS is the satisfaction attainment theory (SAT) (Briggs et al., 2006; Reinig, 2003). SAT operationalizes meeting satisfaction as a positive affective arousal with respect to meeting outcome and meeting process, and posits this response to be a function of perceived net value of goal attainment (PGA) from the meeting. We observe a weakness to this model, however, in its limited ability to guide researchers and practitioners to successfully hypothesize about the effects of various technological or task structures on meeting satisfaction. Specifically, PGA is a high-order construct in the context of group meetings, where individual and group goals are quite ambiguous. The main theoretical objective of this study, therefore, is to test SAT by replacing PGA with antecedents shown to have explanatory value in the formation of user attitudes as per the TAM approach.

This objective is driven by two research questions. First, what role could a social utility motive play in forming satisfaction with outcome (SO) in a GSS meeting? A construct called perceived instrumentality is operationalized and tested as an antecedent to SO. Second, what role do hedonic motives play in forming satisfaction with process (SP) in a GSS meeting? Perceived task enjoyment is tested as an antecedent to SP, and two system quality characteristics – perceived interface design aesthetics and perceived ease of use – are tested as determinants of perceived task enjoyment.

To carry out this investigation, an experimental study was conducted with twenty virtual teams. Each team met online for a 20-min idea generation and selection task. All 126 participants used an originally developed GSS for this purpose and completed a survey about their experience. A structural equation modelling approach, using partial least squares, was then applied to test several hypotheses. Qualitative data from the session transcripts and two open-ended survey questions was analysed as well.
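Partial least squares (PLS) estimation, as used here, iteratively computes indicator weights for each latent construct and then path coefficients between constructs. The sketch below is not PLS itself but a rough illustration of the idea: equally weighted, standardized composites stand in for latent scores, and a single structural path is estimated as their correlation. All data values and the helper names are invented for illustration.

```python
import math

def standardize(xs):
    """Return z-scores using the sample standard deviation (n - 1)."""
    m = sum(xs) / len(xs)
    sd = math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))
    return [(x - m) / sd for x in xs]

def path_coefficient(x, y):
    """OLS slope of standardized y on standardized x, which equals their
    Pearson correlation: a one-predictor stand-in for a PLS structural path."""
    zx, zy = standardize(x), standardize(y)
    return sum(a * b for a, b in zip(zx, zy)) / (len(zx) - 1)

# Toy composite scores: perceived task enjoyment (ENJ) predicting
# satisfaction with process (SP). Values are illustrative, not study data.
enj = [5.1, 4.2, 6.0, 5.5, 3.8, 4.9, 5.7, 4.4]
sp = [5.3, 4.0, 5.8, 5.6, 3.9, 4.6, 5.9, 4.1]
print(round(path_coefficient(enj, sp), 2))  # a strong positive path
```

Real PLS additionally re-weights indicators until convergence and bootstraps standard errors; dedicated software (e.g. SmartPLS, as commonly used in this literature) is needed for the actual analysis.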

The rest of this paper is structured as follows. In Section 2 we provide some background on idea generation and evaluation tasks and on perceived goals in teamwork contexts. Section 3 presents the research model and hypotheses, while Section 4 outlines the methodology, including the experimental design, task, subjective and objective measures, and technology used. Section 5 provides descriptive statistics and the results from the measurement and structural models. The qualitative data from the study and the quantitative findings and implications are discussed in Section 6. Section 7 concludes with the study's limitations and recommendations for future research.

2. Background

2.1. Idea generation and evaluation

Most group tasks include some form of idea generation, development, evaluation, and selection (McGrath, 1984). Brainstorming, in particular, has become the most prominent method of idea generation in organizations (Dennis and Williams, 2003). It is guided by four principles: (i) do not criticize others' ideas; (ii) include wild or unusual ideas; (iii) generate as many ideas as possible, and (iv) build and expand on others' ideas (Michinov, 2012). However, in a typical brainstorming session at knowledge-based organizations – such as management consultancies, design firms and advertising agencies – generating a large number of raw ideas is rarely the ultimate goal. Instead, team members strive to produce a limited number of good ideas to select for further development and implementation (Rietzschel et al., 2006). Some form of peer evaluation is therefore included near the end of most meetings (Kerr and Murthy, 2004).

Evaluation of ideas can be about their quality or quantity. In most lab studies of group brainstorming, evaluability is quantitative and relates individual or group productivity rate to some subjective or objective standard. Shepherd et al. (1996), for instance, publicly displayed a line graph showing the group's real-time productivity rate in comparison to some mythical average. Similarly, Jung et al. (2005) embedded a bar chart in the GSS interface displaying the idea submission rates of each team member. These interventions boosted participants' productivity in both studies, although in the latter study some group members started to submit lower-quality ideas after realizing the performance feedback was merely quantitative (Jung et al., 2005).


The few empirical studies of qualitative evaluability in GSS have relied on rather artificial interventions. Connolly et al. (1990) varied evaluative tone in the GSS interface, as confederates typed critical or positive comments about the ideas of other (real) group members. Kerr and Bruun (1983) manipulated the perceived relative ability of individual participants by telling them in a pre-test briefing session that they were either the least or most capable in the group. In the current study, which is more concerned with quality rather than quantity of ideas, evaluability occurs by way of in-group voting for the best ideas using the GSS. Our motivation has been to explore a realistic and practical way of improving group work by deploying a novel yet simple GSS interface that managers and systems designers could easily test in the field. The handful of studies that do explore innovative GSS interface design focus on improving cognition and creativity (e.g. Barone and Cheng, 2004; Javadi et al., 2013; Jung et al., 2005), but to our knowledge none has included group member satisfaction as a dependent variable.

2.2. Perceived goals in group work

A goal is an outcome that an individual wishes to achieve. Clearly, we feel satisfaction from achieving such outcomes, but what exactly do knowledge workers wish to achieve in a meeting for idea generation and selection? The answer to this question is somewhat elusive. From the group's perspective, a valued outcome might be about solving the problem in a single meeting, or reaching consensus on a thorny issue. From the individual's perspective, however, valued outcomes may be different altogether. Even if we focused solely on idea generation, and assumed that group members were aware of the four brainstorming rules, much ambiguity still surrounds the notion of a group goal in a brainstorming session. Litchfield (2008) explains:

"The first rule – to generate as many ideas as possible – seems to constitute an obvious goal for quantity; however, it may also trigger goals for matching one's outputs to others. The second – to avoid criticism – suggests a goal to suppress evaluation of alternatives, but it may also prompt a goal to avoid conflict. The third rule – to combine and build – relates to a still broader variety of additional goals, including paying attention to others' contributions, integrating ideas into categories, or even competing with others to suggest the best ideas. Finally, the fourth rule – to encourage free-wheeling – is most directly associated with goals for creativity, but it also might relate to other goals for personal status (e.g., to achieve status through humor or through providing the most creative ideas) or goals about affect (e.g., to put everyone in a good mood with some funny ideas)" (Litchfield, 2008, p. 651).

The ambiguity of goals in brainstorming is also revealed by Goldenberg et al. (2013). Their study found a major difference in session outcomes between groups given a version of the brainstorming rules that emphasized free-wheeling and those given a version that emphasized building upon others' ideas.

3. Research model

3.1. Satisfaction attainment theory

Satisfaction attainment theory (SAT) frames meeting satisfaction as an emotion: an affective arousal with a positive valence on the part of a participant toward a meeting, generating a good feeling about a particular group session (Briggs et al., 2006). Based on goal setting theory, SAT posits perceived goal attainment (PGA) as the causal construct (see Fig. 1). The theory also posits a relationship between satisfaction with outcome and with process. Specifically, SAT claims that participants are likely to attribute any feelings they have about the outcome to what they experienced as a process. While most

Fig. 1. Satisfaction attainment theory.


researchers agree that measuring meeting satisfaction should include items for both outcome and process, not all agree with the direction of the relationship between these two dependent variables (Green and Paul, 2003). We discuss this further in the hypothesis section, which introduces constructs (see Fig. 2) to be tested in place of PGA.

3.2. Instrumentality and satisfaction with outcome

People generally compare themselves to others, mostly in an attempt to arrive at positive self-evaluation and self-esteem maintenance. Not surprisingly, many studies of group work show how conditions of enhanced identifiability and evaluability induce social comparison and upward matching behaviour (Jung et al., 2005; McLeod, 2011; Michinov, 2012). Increased effort on collective tasks is also a function of one's belief that his or her performance is indispensable in obtaining the valued group goal. The psychological mechanism underlying such "social" indispensability can be explained by instrumentality × value models (see Hertel et al., 2008; Karau and Williams, 1993; Kerr and Bruun, 1983). Indeed, a seminal study of brainstorming sessions at the product design firm IDEO notes how these sessions serve as a type of "status auction" among designers and engineers (Sutton and Hargadon, 1996).

Based on these findings, the current study proposes the term perceived instrumentality (INS) as a latent construct, defined as: the degree of importance that a group member ascribes to his or her performance toward achieving valued group outcomes at the end of a meeting. Six items are developed and tested in this study, and are given in Appendix A.

Note that in the context of GSS meetings for idea generation and selection, an instance of exceptionally high instrumentality is likely to come at the expense of others' instrumentality. Obviously not everyone's idea will see the light of day. One's relatively low performance, in fact, may actually help the high performer's instrumentality to stand out. The complexities of group work dynamics and process gains or losses with respect to inferior or superior members are beyond the scope of this paper (see Brown and Paulus, 1996; Hertel et al., 2008). Nevertheless, we can assume that low performers lose in social comparison and social indispensability processes, negatively affecting mood and self-worth (Kemmelmeier and Oyserman, 2001).

The role of one's performance instrumentality becomes clearer in the idea selection phase. The popular Lost at Sea task, for instance, requires group members to rank-order their individual preferences on "what fifteen artefacts could assist with survival if stranded on a life raft in the ocean". Since individual preferences are likely to vary, the final list that the group arrives at is bound to cause different affective responses among team members (Reinig, 2003). Reinig (2003) calls this "relative individual goal attainment" (RIGA), or the degree to which the collective group product matches one's preference. An objective measure, RIGA was positively related to reported satisfaction with outcome (Reinig, 2003).
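Reinig (2003) defines RIGA formally; as an illustration of the idea of matching a group ranking against an individual's preferences, the sketch below scores agreement with a normalized rank distance (Spearman footrule). The scoring rule and the item names are stand-ins, not the paper's or Reinig's exact measure.

```python
def riga_score(individual_rank, group_rank):
    """Illustrative relative-individual-goal-attainment score.

    Both arguments map item -> rank position (1 = most preferred).
    Returns a value in [0, 1]; 1 means the group ranking matches the
    individual's preferences exactly. The normalized footrule distance
    used here is a stand-in for Reinig's (2003) RIGA operationalization.
    """
    n = len(individual_rank)
    # Sum of absolute rank differences (Spearman footrule distance).
    distance = sum(abs(individual_rank[i] - group_rank[i])
                   for i in individual_rank)
    # Maximum footrule distance for n items (ranking fully reversed).
    max_distance = n * n // 2
    return 1.0 - distance / max_distance

# Hypothetical Lost-at-Sea-style rankings for one member vs. the group.
member = {"raft": 1, "mirror": 2, "water": 3, "radio": 4}
group = {"water": 1, "raft": 2, "mirror": 3, "radio": 4}
print(riga_score(member, group))  # 0.5
```

A member whose preference order coincides with the group's final list scores 1.0; the more the group product diverges from the member's preferences, the lower the score, mirroring the intuition that mismatches depress satisfaction with outcome.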

Sufficient argumentation has been given to suggest that performance instrumentality could be a salient motive to the average participant in an idea generation and selection task, and thus perceived as a valued goal in itself. This leads to our first hypothesis:

H1. Perceived instrumentality (INS) will be positively related to satisfaction with outcome (SO) in a GSS meeting

As in Reinig (2003), we also examine the objective instrumentality of each participant. (The measures are discussed in Section 4.4.) It stands to reason that Objective INS should correlate with perceived instrumentality (INS); hence our next hypothesis:

H1b. Perceived instrumentality (INS) will be positively related to objective instrumentality (Objective INS) in a GSS meeting

3.3. Task enjoyment and satisfaction with process

So far the discussion has centred on performance inasmuch as it is instrumental to achieving valuable outcomes (group or individual). Engaging in such performance is driven by utilitarian motives, but hedonic motives can also play a role. Specifically, task enjoyment is performance of an activity for no apparent reinforcement other than the process of performing the activity per se (Venkatesh, 2000). Information Systems researchers have traditionally regarded user enjoyment on the job as wasteful (Dennis and Reinicke, 2004), but some have recently argued for a less normative view of work (Malhotra et al., 2008). Combining goals, challenge, and feedback may produce an energizing mixture, notes Zhang (2008), one that is reminiscent of flow – a state of such complete absorption in the activity that one would even perform it at a cost (Chang, 2013). In a study of a solitary decision support system (Hess et al., 2006), the effects of task involvement explained as much as 60% of the variance in satisfaction with process (SP). (Task involvement and enjoyment are both intrinsic motivation constructs.) A large number of TAM studies (Koufaris, 2002; Park et al., 2014; Venkatesh, 2000) have shown perceived enjoyment to be a strong predictor of attitudes toward using a technology. The construct has been operationalized by the literature in one of two ways: enjoyment with respect to the task (Koufaris, 2002) or to the system (Chin and Gopal, 1995). In the latter – one of the very few explorations of enjoyment in GSS tasks – the authors found perceived system enjoyment to account for 15% of the intention to adopt the GSS. In the current study we measure task-related rather than system enjoyment, as the two other constructs (discussed next) represent system perceptions. Despite these differences, based on the strong effects of perceived task enjoyment (ENJ) exhibited in the above-mentioned studies, we hypothesize that:

Fig. 2. Proposed constructs to test in place of perceived goal attainment (perceived instrumentality, perceived enjoyment, perceived aesthetics, and perceived ease of use, arranged along utilitarian–hedonic and task–system dimensions).

H2. Perceived task enjoyment (ENJ) will be positively related to satisfaction with process (SP) in a GSS meeting

3.4. System quality and the hedonic path

We now move to GSS quality perceptions, or more specifically, perceptions of the graphical user interface. The importance of a well-designed interface has been demonstrated for online consumer contexts, such as websites (Tractinsky, 2004). Its role in GSS, however, has been overlooked. This is understandable, given the utilitarian nature of GSS. Even so, the interface of most GSS has been criticized as unreliable, unimaginative, and awkward (Greenberg, 2007). GSS are typically designed to support rich functionality, often at the expense of aesthetics or ease of use. GroupSystems, for instance, has long been the leading developer of GSS for both industry and lab settings (Austin et al., 2006). Their electronic brainstorming module, note Briggs et al. (2006, p. 49), "boasts 19 configurable features, for a total of 524,288 possible combinations, which take up to a year to master."

Perceived aesthetics of the system interface is one hedonic dimension found to play a role in consumer technology (Zhang and Li, 2004). The construct refers to the emotional appeal of the graphical user interface, or the degree to which the latter is considered pleasing to the eye. This is primarily determined by the choice of colors, shapes, typography, and layout (van der Heijden, 2004). Cyr et al. (2006) examined the role of perceived interface design aesthetics (DES) in forming attitudes toward a mobile browsing service task, and included perceived task enjoyment (ENJ) alongside perceived usefulness in their investigation. The strongest relationship found was for DES → ENJ (.55), explaining as much as 43% of the variance in ENJ. No GSS study to date has tested a theoretical model with a design-related construct; the following hypothesis aims to fill this gap:

H3. Perceived design aesthetics (DES) will be positively related to perceived task enjoyment (ENJ) in a GSS meeting

GSS field studies based on TAM include a construct called relative advantage (Chin and Gopal, 1995). The latter refers to the perceived advantages of the (new) GSS in comparison to the system already in use. Given our student population and educational settings, measuring relative advantage is clearly irrelevant. (For the same reason the current study omits the key TAM construct of perceived usefulness, defined as the belief that using a technology will enhance an individual's job performance.) The current study does include, however, perceived ease of use (EOU) as the second system quality construct in our model. EOU – or the extent to which using the technology is perceived to be free of effort – has been found to be critical in both utilitarian and hedonic systems (van der Heijden, 2004; Venkatesh, 2000). For a mobile browsing task, for instance, DES was positively related to EOU (Cyr et al., 2006). Although EOU is a utilitarian construct, it is influenced by the design of the graphical user interface, for it is these graphics themselves that are the tools to be handled by the user. Although no GSS study to our knowledge has examined the relationship between these two constructs, we proceed to hypothesize that:

H3b. Perceived interface design aesthetics (DES) will be positively related to perceived ease of use (EOU) in a GSS meeting

Finally, the connection of EOU to SP should be easier to see. Consider the notion of affordances, which are actionable properties between an object and an actor. "When easily perceived," notes Zhang (2008, p. 145), "these affordances allow actors to take actions that may satisfy certain needs… We feel interested, attentive, and engaged. We enjoy the process." Indeed, EOU is a process-expectancy variable (Koufaris, 2002), and so is SP, as it clearly refers to the effectiveness of the tools and procedures used in the meeting (see items in Appendix A). We therefore hypothesize:

H4. Perceived ease of use (EOU) will be positively related to satisfaction with process (SP) in a GSS meeting

The hypotheses are graphically shown in Fig. 3. Dashed lines indicate relationships to be examined for possible significance.


4. Methodology

This study can be classified as a free simulation experiment (Jenkins, 1985). In such experiments researchers design a closed setting to mirror the real world and yield values of the independent variables reflecting the natural range of subjects' experiences (Straub, 1989).

4.1. Sample

Participants were recruited from a large undergraduate Business class typical of most North American universities. (All subjects were given course credit for participation.) A group size of seven was considered the most representative of virtual teams engaged in an idea generation and selection task (Fjermestad and Hiltz, 2000). The group sessions were synchronous, as members interacted via their PC web browsers from different computer labs, all equipped with a high-speed Internet connection. Subjects were randomly assigned to groups, forming 20 ad hoc virtual teams with group sizes from six to eight. Participation was identified, as each group member typed a username in the GSS login screen. The mean age of subjects was 21.8, ranging from 19 to 23. Of all 126 subjects, 73 were male. All groups had at least one member of the opposite sex.

4.2. Task

An original task was created to cover both generative (idea generation) and decision-making (idea selection) processes (McGrath, 1984). Advertising the University's undergraduate Business program to high school seniors was considered the most appropriate and representative topic of teamwork in knowledge-based organizations (Kerr and Murthy, 2004). The task was described as follows:

Imagine that [the University's] Business program has been losing applications from high school seniors. Your team should propose ideas for advertising our program in the mass transit. The session will have three phases: brainstorming ideas, distilling these to one per member, and voting for your team's three best ideas.

Fig. 3. The proposed research model.


Group members met online for 20 min, going through the following three phases. First, they generated ideas in a common chat dialog for 10 min. In the second phase, which took 8 min, each participant was assigned a separate text box to type in and develop his or her preferred idea. The third phase was preference-ranking (voting), and took 2 min to complete. Here each group member voted anonymously for their top three most preferred ideas from the pool of seven (six or eight in the case of different group sizes). The voting scheme, as shown and explained in Appendix B, led to a clear visualization of where each individual idea stood in terms of ranking. In line with a realistic marketing scenario, group members were clearly instructed that only the top three individual ideas as voted within the group would count and be entered in the competition. A reward structure was adopted (as in Kahai et al., 2003) whereby $30 (per member) was awarded to the team with the most creative and workable ideas, as judged by two creative directors from local advertising agencies.
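The voting visualization itself is described in Appendix B; the sketch below only illustrates how such ranked top-three ballots might be tallied to pick the qualifying ideas. The 3-2-1 point weighting is an assumption on our part (it is consistent with the note in Section 4.4 that one idea could collect up to 21 points from seven voters), and the ballot contents are invented.

```python
from collections import Counter

def tally_votes(ballots, winners=3):
    """Tally ranked top-three ballots and return the qualifying ideas.

    Each ballot lists one member's chosen ideas from most to least
    preferred. Scoring assumes 3 points for a first choice, 2 for a
    second, and 1 for a third; the study's exact scheme is in Appendix B.
    """
    scores = Counter()
    for ballot in ballots:
        for position, idea in enumerate(ballot):
            scores[idea] += 3 - position  # 3, 2, 1 points by rank
    ranked = scores.most_common()  # ties keep first-seen order
    return [idea for idea, _ in ranked[:winners]], scores

# Hypothetical ballots from three members of a team.
ballots = [
    ["poster", "jingle", "mascot"],
    ["poster", "wrap", "jingle"],
    ["wrap", "poster", "mascot"],
]
top3, scores = tally_votes(ballots)
print(top3)  # ['poster', 'wrap', 'jingle']
```

With seven members each casting a weighted ranking like this, a single idea endorsed first by everyone (including its author) would indeed reach 7 × 3 = 21 points, matching the upper bound mentioned later in the paper.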

4.3. Technology

An original web-based GSS was developed for this study. The design objective was to support the task structure already outlined, as well as to further heighten conditions of identifiability and evaluability. In this endeavour we drew on groupware studies indicating the benefits of showing a fully-integrated view of the information (Barone and Cheng, 2004), as well as on principles of social translucence (Ding et al., 2011; Ivanov and Cyr, 2006). In socially translucent group environments, participants share visibility of their interactions and an understanding of how the spatial nature of the setting enables a process of social computation. Key to our design was using social proxies to identify all contributions, partitioning individual contributions to give a sense of ownership and evaluability in Phase 2, and the graphical voting system in Phase 3 that further enhanced evaluability. The actual GSS interface is shown and described in Appendix B.

4.4. Measures

A questionnaire was administered online after task completion, asking subjects to rate statements on the various constructs in this study. Appendix A shows the instrument, including construct definitions and items used. At the end of the survey, one open-ended optional question asked participants how they felt about the session overall, and a second one queried which aspects of the GSS interface participants liked.

The study also included an objective measure of instrumentality (Objective INS). This sought to ascertain the validity of perceived instrumentality (INS) as a subjective measure. (The seminal study by Connolly et al. (1990) had found some discrepancy between objective and subjective measures of group effectiveness.) Operationalizing Objective INS in an idea generation and selection task is far from straightforward. We can assume that team members whose developed ideas make up the final list of three would feel significantly more instrumental compared to those whose ideas did not make the cut. Yet, as we have already discussed in Section 2, team members can also be instrumental in the initial brainstorming phase by inspiring or building on another's idea that does end up selected. There is also the feeling of acting as team leader via encouraging remarks. All these could be instances of instrumentality, but they require in-depth discourse analysis to quantify. For practical purposes, we merely looked at whether one's final idea was ranked in the top three (i.e. whether it qualified or not), but also included the word count for each member from Phase 1. We ended up dropping this variable when it did not lead to any improved correlation over using just the idea-ranking indicator. Therefore, the final and sole indicator of Objective INS was based only on the voting results. Even so, there were two ways this could be counted: either a binary measure of qualified (1) vs. not qualified (0), or a more exact measure based on the actual number of votes per idea. The latter could range from zero to 21 (if all seven team members, including the author, voted for one idea). We tried this measure as well, but again found no advantage of one coding over the other. Finally, Objective INS was operationalized as High (1) and Low (0) based on whether one's final idea had qualified or not.
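The two candidate codings described above can be derived from the same vote totals. The snippet below shows both side by side for hypothetical ideas and point counts; the idea names and numbers are invented for illustration, and only the binary coding was retained in the study.

```python
def objective_ins(vote_points, qualified_ideas):
    """Two candidate codings of Objective INS for members' final ideas.

    `vote_points` maps each member's developed idea to its point total
    (0..21 with seven voters); `qualified_ideas` holds the top-three
    ideas. Returns (binary, raw): the qualified/not-qualified indicator
    the paper kept, and the raw point count it tried and discarded.
    """
    binary = {idea: int(idea in qualified_ideas) for idea in vote_points}
    raw = dict(vote_points)
    return binary, raw

# Hypothetical seven-member team: point totals per developed idea.
votes = {"poster": 8, "jingle": 5, "mascot": 4,
         "wrap": 2, "flyer": 1, "app": 1, "song": 0}
qualified = {"poster", "jingle", "mascot"}
binary, raw = objective_ins(votes, qualified)
print(binary["poster"], binary["wrap"])  # 1 0
```

The binary coding throws away the margin information in `raw`; the paper reports that keeping that extra detail did not improve the correlation with perceived INS, which motivated the simpler High/Low operationalization.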

5. Results

5.1. Descriptive statistics

An unpaired, two-sample t-test was conducted to check for significant differences in the construct composite means reported by: (i) qualified vs. not qualified members, and (ii) males vs. females. These are shown in Table 1, and will be discussed in Section 6. No significant differences in responses were found in terms of group size.

The experimental conditions of in-group evaluability were expected to create maximum variability of perceived instrumentality (INS). This was successful, as the t-test showed a very significant difference (p < 0.001) in reported INS between qualified and non-qualified members. A word count of the brainstorming discussion per group was also conducted as a basic measure of participation. It ranged from approximately 700 to 1100 words. Individual participation ranged from as little as three to 335 words, with a mean of 111. This will be discussed in Section 6.1.
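The unpaired comparison above can be sketched as a Welch t-statistic; the INS composite scores below are hypothetical, not the study's data:

```python
import math

# Illustrative unpaired two-sample (Welch) t-statistic, as used to compare
# qualified vs. not-qualified members. The scores are hypothetical.
def welch_t(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

qualified = [5.8, 5.2, 5.5, 5.1, 5.9]        # hypothetical INS composites
not_qualified = [4.4, 4.9, 4.2, 4.7, 4.6]
t = welch_t(qualified, not_qualified)
# a large positive t indicates qualified members report higher INS
```

In practice the p-value would be read off the t-distribution with Welch-corrected degrees of freedom, as any statistics package does.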

5.2. Measurement model

All measures except for perceived instrumentality (INS) have been validated in prior studies. Developing measures for a new construct starts off with content validity, drawing representative items from a universal pool (Cronbach, 1971). In the case of INS, each item was phrased after considering relevant terminology from a number of prior studies (cited in Section 3.2).

Table 1. Descriptive statistics showing the construct means.

                    INS      ENJ     EOU   DES    PGA    SO     SP
All (126)           4.93     5.3     5.7   5.1    5.30   5.31   5.27
SD                  1.2      1.1     1.2   1.2    0.99   1.3    1.2
Qualified (49)      5.36***  5.54    5.8   5.26*  5.48*  5.45*  5.41*
Not qualified (77)  4.57     4.92*   5.6   4.76   5.07   5.08   5.08
Male (73)           4.90     5.06    5.76  4.85   5.11   5.24   5.07
Female (53)         4.97     5.54**  5.65  5.22*  5.46*  5.35   5.51*

Note: SD = standard deviation. INS = perceived instrumentality; ENJ = perceived task enjoyment; EOU = perceived ease of use; DES = perceived interface design aesthetics; PGA = perceived goal attainment; SP = satisfaction with process; SO = satisfaction with outcome.
* p < 0.05. ** p < 0.01. *** p < 0.001.

550 A. Ivanov, D. Cyr / Telematics and Informatics 31 (2014) 543–558

An exploratory factor analysis (EFA) was then conducted to test the measurement properties of INS along with the SO and SP dependent variables and the PGA causal construct. An EFA identifies the underlying latent factors explaining the pattern of correlations within a set of measurement items (Gefen et al., 2000). Once this data reduction method identifies a small number of factors that explain most of the variance in the set (typically those with an Eigenvalue exceeding 1.0), the loading pattern of the measurement items is revealed in the statistical output (Hair et al., 1995). An optional second stage may rotate the matrix, creating orthogonal factors and minimizing high loadings of items on the other factors (Chin et al., 2003). Results of our EFA with Varimax rotation using SPSS appear in Table 2, revealing four distinct factors. All items loaded heavily on the factor they were supposed to, and did not load heavily on the remaining factors. An item is considered to load highly if its coefficient is above 0.6, and not highly enough if the coefficient is below 0.4 (Hair et al., 1995). All item loadings met the cut-off criteria. Finally, reliability in EFA (also called internal consistency) is measured with Cronbach's α, which should exceed the .70 threshold recommended by Nunnally (1978). All four constructs exhibited values in the .9 range, indicating very high reliability.
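Cronbach's α, the reliability figure reported in Table 2, can be computed directly from an item-by-respondent matrix. A minimal sketch with a hypothetical 4-item, 6-respondent scale (not the study's data):

```python
# Sketch of Cronbach's alpha (internal consistency).
# alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
def cronbach_alpha(items):
    """items: list of k lists, one per scale item, each with n responses."""
    k = len(items)
    n = len(items[0])
    def var(xs):  # population variance; any consistent variant works
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical 7-point Likert responses: 4 items, 6 respondents.
items = [[5, 6, 4, 7, 5, 6],
         [5, 7, 4, 6, 5, 6],
         [4, 6, 5, 7, 5, 7],
         [5, 6, 4, 6, 4, 6]]
alpha = cronbach_alpha(items)
# highly consistent items yield alpha in the .9 range, as in Table 2
```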

Next, a confirmatory factor analysis (CFA) was performed to test construct validity. CFA differs from EFA in that the number of factors and the loading pattern of items are specified in advance (Gefen et al., 2000). For this stage we employed the partial least squares (PLS) method (Chin et al., 2003), as it simultaneously models the measurement and structural paths. As in the EFA, all scale items should first load significantly on their hypothesized latent constructs and meet the criterion of 0.6 (Hair et al., 1995). This was achieved, as Table 3 shows. Construct validity in CFA is assessed by computing the average variance extracted (AVE) from all scale item loadings, which should be above .75. As shown in Table 4, AVEs for all constructs were above .75, which means that the items used to measure the constructs account for more than 75% of their respective variance. At the same time, low correlations should be exhibited between items of different constructs, indicating discriminant validity (Straub, 1989). Recommended here is examining the square root of the AVE for each construct, which should be larger than the correlations among the constructs (Fornell and Larcker, 1981). As shown, all figures (in bold) exceed the off-diagonal inter-construct correlations, indicating internal consistency (composite reliability).
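The AVE, composite reliability, and Fornell–Larcker figures can be reproduced from the reported loadings. The sketch below uses the six INS item loadings from Table 3 and the INS row of correlations from Table 4, and recovers the table's AVE of .738 and CR of .944:

```python
import math

# AVE = mean of squared standardized loadings;
# CR  = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
def ave(loadings):
    return sum(l * l for l in loadings) / len(loadings)

def composite_reliability(loadings):
    s = sum(loadings)
    return s * s / (s * s + sum(1 - l * l for l in loadings))

ins_loadings = [.833, .792, .883, .862, .881, .898]   # INS items, Table 3
ins_ave = ave(ins_loadings)                            # ~.738
ins_cr = composite_reliability(ins_loadings)           # ~.944

# Fornell-Larcker: sqrt(AVE) must exceed the construct's correlations
# with every other construct (INS's off-diagonal entries in Table 4).
ins_correlations = [.459, .112, .265, .387, .355]
discriminant_ok = math.sqrt(ins_ave) > max(ins_correlations)
```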

Table 2. Results of the exploratory factor analysis, with items from the original model and the newly proposed construct.

Item    Factor 1  Factor 2  Factor 3  Factor 4  Cronbach's α
INS-1   .814      .080      .141      .259      .936
INS-2   .805      −.144     .018      .316
INS-3   .885      .050      .114      .156
INS-4   .867      .162      .099      .049
INS-5   .869      .157      .152      .047
INS-6   .847      .256      .120      .038
PGA-1   .144      .306      .281      .811      .909
PGA-2   .273      .269      .303      .742
PGA-3   .197      .452      .352      .647
PGA-4   .231      .322      .314      .673
SP-1    .074      .310      .793      .320      .928
SP-2    .196      .259      .790      .324
SP-3    .152      .273      .868      .145
SP-4    .143      .301      .774      .268
SO-1    .107      .798      .353      .310      .954
SO-2    .214      .733      .340      .313
SO-3    .094      .882      .279      .231
SO-4    .098      .897      .255      .239

Note: EFA used Varimax rotation; Eigenvalues: 4.67, 3.71, 3.46, 2.88. N = 126. Bold shows heaviest factor loading for item.

Table 3. Results of the confirmatory factor analysis, with all constructs from the proposed research model.

        INS    ENJ    EOU    DES    SO     SP
INS-1   .833   .359   .167   .243   .422   .381
INS-2   .792   .370   .079   .197   .224   .185
INS-3   .883   .351   .069   .186   .306   .284
INS-4   .862   .374   .058   .221   .337   .292
INS-5   .881   .430   .124   .253   .331   .345
INS-6   .898   .465   .075   .248   .357   .325
ENJ-1   .421   .910   .158   .523   .566   .491
ENJ-2   .456   .942   .133   .469   .585   .474
ENJ-3   .386   .918   .127   .487   .553   .485
ENJ-4   .433   .927   .239   .524   .591   .473
EOU-1   .088   .151   .946   .413   .240   .413
EOU-2   .088   .165   .929   .419   .334   .458
EOU-3   .134   .180   .918   .378   .311   .426
DES-1   .237   .493   .450   .931   .264   .385
DES-2   .256   .505   .357   .934   .324   .362
DES-3   .243   .498   .411   .932   .288   .377
SO-1    .269   .519   .270   .249   .889   .564
SO-2    .386   .558   .158   .236   .855   .598
SO-3    .322   .524   .185   .240   .861   .575
SO-4    .281   .532   .199   .257   .857   .588
SP-1    .267   .445   .462   .349   .698   .833
SP-2    .383   .558   .405   .369   .599   .844
SP-3    .322   .461   .457   .373   .633   .794
SP-4    .319   .459   .411   .364   .685   .887

Note: Bold face indicates heaviest factor loading for an item. *** p < 0.001.

Table 4. Construct validity.

       INS    ENJ    EOU    DES    SO     SP     AVE    CR
INS    .859                                      .738   .944
ENJ    .459   .925                               .855   .959
EOU    .112   .179   .931                        .867   .951
DES    .265   .535   .432   .933                 .870   .930
SO     .387   .606   .306   .318   .854          .729   .942
SP     .355   .508   .455   .399   .698   .908   .824   .950

Note: Square roots of the Average Variance Extracted (AVE) are shown on the diagonal (in bold); CR = composite reliability. INS = perceived instrumentality; ENJ = perceived task enjoyment; EOU = perceived ease of use; DES = perceived interface design aesthetics; SP = satisfaction with process; SO = satisfaction with outcome.


Finally, a significant threat to the validity of statistical results comes from common method variance (CMV). A widely known approach for assessing CMV is Harman's single-factor test (Podsakoff et al., 2003). For the current data, this test showed that just over 41% of the variance was due to a single factor, i.e. no single factor accounted for the majority of the variance.

In conclusion, the instrument employed in this study exhibited satisfactory content validity (using measures validated by prior studies); satisfactory convergent validity (as seen from high item loadings); and satisfactory discriminant validity (as seen from low cross-loadings of factor items), with the possible exception of SO and SP, which were borderline. The newly proposed construct, INS, exhibited the highest Eigenvalue and a very high α in the EFA. INS also correlated positively and significantly with the corresponding observed measure, Objective INS.

5.3. Structural model and hypothesis testing

As in the CFA described above, this phase relied on partial least squares (PLS) structural equation modelling (SEM) (Chin et al., 2003). (The SmartPLS software was used for both CFA and SEM.) PLS was chosen over the alternative LISREL, EQS, or AMOS techniques, which are more suited to theory-testing (Gefen et al., 2000); PLS is more appropriate for theory-building with complex predictive models. PLS can be run as long as the number of observations is at least ten times the number of items in the construct with the most indicators, given that no more than three independent variables impact a dependent variable. The SEM paths are shown in Fig. 4. As recommended by Chin et al. (2003), bootstrapping was performed to test the statistical significance of each path coefficient using t-tests. Table 5 summarizes the results with respect to the six hypotheses.
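The "ten times" heuristic mentioned above can be expressed as a quick check (a rule of thumb, not a power analysis; the item counts are those of this study's scales):

```python
# Sketch of the PLS sample-size rule of thumb: N should be at least ten
# times the number of items in the construct with the most indicators.
def pls_sample_ok(n_observations, items_per_construct, factor=10):
    """True if n_observations meets the heuristic."""
    return n_observations >= factor * max(items_per_construct)

# This study: N = 126; the largest scale (INS) has six items,
# PGA/SP/SO/ENJ have four each, EOU and DES have three each.
ok = pls_sample_ok(126, [6, 4, 4, 4, 4, 3, 3])
# 126 >= 10 * 6, so the heuristic is satisfied
```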

INS did not correlate significantly with any of the satisfaction measures. Its path to SO was not at all significant (.05); Hypothesis 1 was thus not supported. Hypothesis 1b examined the correlation between INS as a latent construct and the actual values of instrumentality of participants within their respective teams (Objective INS). This correlation was positive and significant (0.58, with paths to all other constructs either weakly or not at all significant). Hypothesis 1b was thus supported. Hypotheses 2 and 3 (ENJ → SP and DES → ENJ, respectively) tested the effects along a hedonic pathway to SP. Both hypotheses were supported. Hypotheses 3b and 4 (DES → EOU and EOU → SP, respectively) tested the roles of system perceptions in forming SP. Both hypotheses were supported.

Fig. 4. Structural model results (***p < .001).

Table 5. Hypothesis testing.

Hypothesis  Path                  Coefficient  T-value  Supported
H1          INS → SO              .05          1.28     No
H1b         Objective INS → INS   .58***       7.21     Yes
H2          ENJ → SP              .45***       7.19     Yes
H3          DES → ENJ             .47***       6.73     Yes
H3b         DES → EOU             .43***       4.34     Yes
H4          EOU → SP              .38***       2.45     Yes

*** p < 0.001.

PLS does not generate fit indices of the model like other SEM techniques, but calculates R² values representing the amount of variance in each dependent variable explained by the independent variables. The variances accounted for in our model ranged from 18% to 48%, which is consistent with typical PLS results (Cyr et al., 2006). The low R² value for EOU (.18) is understandable given that design aesthetics cannot possibly be a major determinant of such a utilitarian characteristic as ease of use, despite the significant correlation.

ENJ also served as a dependent variable of INS and DES. Together these two significant predictors accounted for 40% of ENJ's variance, and a separate regression showed their individual influence to be the same, at around 20% each. To compare the respective variances in SP explained by EOU vs. ENJ, the PLS model was also calculated without a link from EOU. This lowered the SP variance by 8 points. We therefore conclude that approximately one third of the variance in SP is explained by ENJ.

Finally, the relationship between SO and SP turned out to be highly significant. However, this study did not hypothesize a direction due to the differing interpretations in the literature. We mentioned these in Section 3.1 and address the issue again in Section 7 as a recommendation for future research.

Table 6. Interface preferences (refer to Appendix B for an explanation of GSS features).

GSS feature    #   Representative quote
Dice/ID        19  "I liked the dice icons, that indicated the presence of a member"
Voting         13  "The voting part of the ideas is very exciting"
Chat dialog    11  "The chatroom of course, because that allowed us to generate more ideas"
General EOU    10  "The clear font and big window for the messages"
Aesthetics      5  "I liked the visual look of it"
Ideacell        4  "Instant input of headline was easy to see and use"


6. Discussion

6.1. Qualitative data

Session transcripts and responses to the two open-ended questions were also analysed. Only 2% of respondents mentioned issues that indicate low satisfaction with process. These were due to network connection problems or complaints that peers were not active enough. More than 30% of respondents explicitly indicated perceived enjoyment. A further look at the comments by all respondents regarding the GSS interface was also informative. Six aspects of the interface emerged as objects of affect, as shown in Table 6 (# referring to the number of mentions).

Also of interest were any responses that expressed instances of perceived instrumentality. These were not as numerous as the praise for the interface or the chat, but here are three: "Feels great my slogan was ranked first"; "Seeing how much your idea was accepted"; "I liked how it was easy to distinguish which users were making each comment".

All the above-mentioned qualitative data indicate that overall satisfaction resulted not only from the affective arousal that likely stems from chatting with one's peers, but also from engaging with the task itself, and from the way it was supported by the technology.

As the descriptive statistics showed earlier, participation in the discussions varied widely. Some of the more active members exhibited a leadership style, yet did not dominate the conversation. On a number of occasions we saw ideas that appeared to be very well received and supported within the initial brainstorming (Phase 1) yet were not finally selected among the top-three proposals (Phase 3). There were also instances when team members who barely submitted a word in Phase 1 emerged with their proposals (submitted in Phase 2) ranked among the top three (in Phase 3). These observations, supported upon further inspection of the voting patterns, suggest that no overt instances of confirmation bias took place in these sessions. Participation inequality notwithstanding, a collaborative spirit predominated in most sessions. Supportive and encouraging remarks were frequent; critical ones were rare. Fig. 5 presents a good example of social facilitation from the brainstorming chat of one session. (Note the emergent playful banter alongside a reference to the group goal in the last line.)

The qualitative findings of this study have implications for teamwork managers as well. The majority of participants explicitly reported enjoyment with respect to the social proxies and layout of the GSS interface. Our analysis also compared the responses to the GSS-related open-ended question given by group members with high INS and Objective INS against those with low INS and Objective INS. We conclude that it was not only the winners who liked the voting system. This is also supported by the quantitative data: the correlation between DES and INS was insignificant. These findings suggest that conditions of heightened evaluability and social translucence do not necessarily lead to a "zero-sum" game, as it were, where satisfaction increases for winners tend to be cancelled out by satisfaction decreases for losers.

6.2. Quantitative findings and implications

In this study we sought to examine the potential role that several antecedents might play in determining satisfaction with outcome (SO) and with process (SP) in a GSS meeting. First, the INS construct exhibited good measurement properties in an exploratory factor analysis using SPSS. The six indicators of INS loaded significantly on their respective factor, yielding the highest reliability of all four factors included in the analysis. The INS construct also exhibited good measurement properties in the confirmatory factor analysis conducted with PLS. Also examined was the correlation between INS as a latent construct and the actual values of team members' instrumentality within their respective teams, or Objective INS. This correlation was positive and significant.

Contrary to expectations, INS did not significantly correlate with SO, even though sufficient variability in INS as an independent variable was achieved. Why such a weak relationship between INS and SO? It could be that participants did not explicitly and directly attribute utility to their INS, contrary to our assumptions. Nevertheless, we conducted a post hoc statistical analysis to examine relationships of INS to other constructs for any interaction effects. When SO was regressed on INS individually, with no other antecedents, the resulting correlation was significant; with ENJ added as a mediator, the INS → SO path was drastically reduced to an insignificant 0.1. It appears that ENJ fully mediated the effects from INS to SO.
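The mediation pattern can be illustrated with a standardized OLS sketch built from the Table 4 inter-construct correlations (this is not the PLS estimate, so the controlled coefficient lands near, rather than exactly at, the reported 0.1):

```python
# Standardized two-predictor regression coefficient from correlations:
# beta_x = (r_yx - r_ym * r_xm) / (1 - r_xm^2), controlling for mediator m.
def partial_beta(r_yx, r_ym, r_xm):
    """Standardized coefficient of x on y, controlling for mediator m."""
    return (r_yx - r_ym * r_xm) / (1 - r_xm ** 2)

# Inter-construct correlations taken from Table 4.
r_ins_so, r_enj_so, r_ins_enj = .387, .606, .459

beta_alone = r_ins_so                    # INS -> SO with no other antecedents
beta_with_enj = partial_beta(r_ins_so, r_enj_so, r_ins_enj)
# The direct INS -> SO path shrinks sharply once ENJ is controlled for,
# consistent with full mediation by ENJ.
```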

The study had also hypothesized that EOU would be influenced by DES, and that EOU would influence SP. Both expectations were confirmed, although the variance explained by both paths (DES → EOU and EOU → SP) was merely eight percent. The introduction of ENJ between EOU and SP did not reduce the latter relationship as much as the reduction seen in INS → SO.

Fig. 5. Screenshot from one of the brainstorming chat dialogs.


The influence of EOU on SP, then, albeit relatively small, seems to be a more direct effect than INS → SO. Such a conclusion of a small yet robust influence from EOU on SP makes sense if we consider that a certain minimum level of usability is a prerequisite for a satisfying experience. This line of reasoning is also supported by the few open-ended responses indicating rather low SP, such as "the system kept freezing up".

In conclusion, all the strong correlations of ENJ noted above and supported by qualitative data imply that ENJ must have actually caused a sizeable portion of meeting satisfaction with process, and indirectly contributed to satisfaction with outcome as well. From the qualitative data we already concluded that (i) the discussion was generally perceived as the most fun part overall, but also that (ii) the voting process was the most exciting part of the task. We therefore believe this study has provided sufficient empirical evidence that hedonic motives play a key direct or mediating role in the formation of satisfaction in GSS meetings. Future GSS studies on meeting satisfaction should include, or at least control for, hedonic constructs in their research models.

7. Limitations and future research

Outlining all the limitations of such an interdisciplinary study would require a lengthy chapter. Here we mention the most salient issues, and those that we believe could be resolved by future research. First of all, student groups cannot be expected to have the same vested interests as real employees (de Vreede et al., 2000). Thus, despite the experimental realism in this study, we can at best generalize its findings to groups of volunteers asked to generate and select ideas on an issue relevant to their background.

Second, we should accept that many IS research methods suffer from an inherent limitation: linear regression can only demonstrate that correlations are in accordance with the directions assumed by the theory. Indeed, establishing causation is difficult, as it requires demonstrating association, temporal precedence, and isolation (Gefen et al., 2000). As these are difficult to show in structural models with conceptually coupled constructs, Sun and Zhang (2006) recommend using Cohen's path analysis as an alternative. This, or other analytical techniques, may indeed be worthwhile to consider in future research inclusive of the SO and SP constructs. According to satisfaction attainment theory (Briggs et al., 2006), participants may misattribute the positive affect derived from outcomes to the process instead; hence their SO → SP hypothesis. This, however, is the inverse of findings by Green and Paul (2003) showing that outcomes are mostly a function of process in terms of fairness perceptions. One should also consider how temporal precedence is more likely to occur in an SP → SO relationship. Process, by virtue of its temporal nature, spans the entire session, whereas outcomes are not fully realized until the meeting's end. This would apply especially to voting tasks. Future studies should therefore take into consideration how the nature of the experimental task determines the temporal nature of process and its relationship to outcomes.

Another limitation is that single-test lab experiments cannot accurately identify dynamic issues like learning curves and the novelty factor (Suchman, 1987). We should also keep in mind that group outcomes can only partially be attributed to experimental conditions (Mejias, 2007). The sessions in this study generated substantial enjoyment from peer interaction, but then, socialization is obviously not the main objective in teamwork. Mukahi and Corbitt (2004), for example, found that the level of chatting during a GSS meeting significantly influenced outcome and process satisfaction. Future studies could also control for group cohesion, as this construct has been shown to predict SP (Chin et al., 1999). We did not include group cohesion, or other possible controls, in the instrument due to concerns of survey fatigue.

Perhaps a more effective remedy to group-related artefacts could be the deployment of a GSS interface that does not support actual conversation. Instead, cells for exchanging brief comments, as in the brain-writing technique (Ivanov and Cyr, 2006; Michinov, 2012), could be arranged in place of a chat dialog window. This would resemble a collocated session with participants silently posting sticky notes on a wall for each other to see and comment on. Alternatively, a GSS simulator could be designed and used, rather than a multi-user GSS for naturally interacting real groups. With a GSS simulator, subjects would be recruited and tested individually, yet led to believe they were interacting with remotely located team members. Exposing each participant to the same socio-technical environment may approximate the high internal validity that is normal in studies of solitary IS use. A major challenge, of course, would be to create a realistic script that elicits acceptable responses from the subject. A more feasible recommendation for a follow-up study would be to repeat the experiment comparing the proposed GSS interface with a control version that is not designed to support identifiability and evaluability within the group. Specifically, a 2 × 2 factorial design varying not only the interface (high vs. low identifiability) but the task structure as well (idea generation vs. idea generation and selection) could yield more reliable data.

Future studies should also better address the gender-based differences in responses (refer back to Table 1). These were substantial, but did not manifest in any of the PLS paths. In other words, the DES and EOU perceptions of females were not more predictive of satisfaction than was the case for males. Nor did a detailed parsing of all responses to the open-ended questions reveal notable differences in design preferences between males and females.

Despite all these limitations, the current study attempted to bridge the gap between a normative view of job performance and views from positive psychology. A new construct, perceived instrumentality, was conceptualized and tested in a rigorous research model of latent constructs taken from the literature on technology acceptance. Theories from social psychology were infused into the field of virtual meeting satisfaction, and an enriched research model based on satisfaction attainment theory was tested with an original group support system (GSS) based on the notion of social translucence. The mixed results notwithstanding, the study should serve as a stepping-stone for future interdisciplinary and applied research in the fields of virtual teamwork and satisfaction.

Appendix A. Construct definitions and items

All items were rated on a 7-point Disagree–Agree Likert scale, and appeared in the order given below.

PGA: Perceived goal attainment: the assessment of the perceived benefits expected from attaining the goals, minus the perceived costs of this fulfillment attempt (Briggs et al., 2006).

PGA-1. Today's session was worth the effort that I put into it.
PGA-2. The things that were accomplished in today's session warranted my effort.
PGA-3. The result of this session was worth the time I invested.
PGA-4. The value I received from today's session justified my efforts.

SO: Satisfaction with outcome: the affective arousal with respect to that which was created and accomplished in the meeting (Briggs et al., 2006).

SO-1. I liked the outcome of today's session.
SO-2. When the session was finally over, I felt satisfied with the results.
SO-3. I am happy with the results of today's session.
SO-4. I feel satisfied with the things we achieved in today's session.

SP: Satisfaction with process: the affective arousal with the tools and procedures used in the meeting (Briggs et al., 2006).
SP-1. I feel satisfied with the way in which today's session was conducted.
SP-2. I feel good about today's session process.
SP-3. I feel satisfied with the procedures used in today's session.
SP-4. I feel satisfied about the way we carried out the activities in today's session.

INS: Perceived instrumentality: the degree of importance ascribed by a group member to his or her performance in the meeting toward achieving valued group outcomes (based on Kerr and Murthy, 2004 and Hertel et al., 2008).

INS-1. My individual effort was essential to the team outcome.
INS-2. My contribution in today's task was unique.
INS-3. My contribution to the team outcome was very important.
INS-4. Our team's success depended on my participation.
INS-5. I was instrumental in today's task outcome.
INS-6. I played a key role in today's task.

ENJ: Perceived task enjoyment: the degree to which the task activity is perceived to be enjoyable in its own right, apart from any performance consequences (Chin and Gopal, 1995).

ENJ-1. This task was exciting.
ENJ-2. I had fun with this task.
ENJ-3. It was cool to participate in this task.
ENJ-4. I found the task enjoyable.

EOU: Perceived ease of use: the degree to which using the system is perceived to be free of effort (Venkatesh, 2000).
EOU-1. The interface was easy to use for the task given.
EOU-2. This was a user-friendly interface.
EOU-3. My interaction with this interface was clear and understandable.

DES: Perceived design aesthetics: the visual attractiveness of the graphical user interface used for the task (van der Heijden, 2004; Cyr et al., 2006).

DES-1. The GSS interface was attractive.
DES-2. The overall look and feel of the interface was visually appealing.
DES-3. The interface was professionally designed.


Appendix B. Screenshot from a subject’s PC showing the GSS, followed by description of the key features

[Screenshot callouts: Ideacells, Submit button, Rank plots, Die ID in chat, Voting buttons, User's die ID]

Die ID: After logging into the GSS with a username, participants are assigned icons in the form of colored and numbered dice. The color-coding and numbering of the dice were designed to support maximum identifiability throughout all phases. Die and username are arranged vertically, top to bottom, as prominent labels for the individual contributions of Phase 2, which are typed in Ideacells. Miniature versions of the die icons also identify participants in the brainstorming chat of Phase 1. The graphical voting system is based on the small multiples format. Grids, or Rankplots, display the preference-ranking of Phase 3 as it occurred in real time. An average rating voting system was implemented, whereby a voter has a fixed number of scores that can be assigned to alternatives. Each team member selects their top, second-best, and third-best choices among the individual Ideacells, thus allocating six points (votes), each of which is represented by a single solid square. Three points (squares) go to the top choice, two to the second, and one to the third-best choice (see voting buttons).
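The 3–2–1 point allocation described above can be sketched as follows; the ballots and idea labels are hypothetical:

```python
# Sketch of the average-rating voting scheme: each member allocates
# 3, 2, and 1 points to their first, second, and third choices.
def tally_votes(ballots):
    """ballots: list of (top, second, third) idea ids; returns point totals."""
    points = {}
    for ballot in ballots:
        for idea, score in zip(ballot, (3, 2, 1)):
            points[idea] = points.get(idea, 0) + score
    return points

# Hypothetical seven-member team voting over ideas A-F.
ballots = [("A", "C", "B"), ("A", "B", "C"), ("C", "A", "D"),
           ("A", "C", "E"), ("B", "A", "C"), ("A", "D", "B"),
           ("C", "A", "F")]
totals = tally_votes(ballots)
# With seven voters, an idea's total can range from 0 to 21.
```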

References

Austin, T., Drakos, N., Mann, J., 2006. Web conferencing amplifies dysfunctional meeting practices. Gartner Industry Report. Available from: <http://www.groupsystems.com>. (retrieved 10.05.13.).

Barone, R., Cheng, P., 2004. Representations for problem solving: on the benefits of integrated structure. In: Banissi, E., Börner, K., Chen, C., et al. (Eds.),Proceedings of the 8th International Conference on Information Visualization. IEEE, Los Alamitos, CA, pp. 575–580.

Briggs, R., Reinig, B., de Vreede, G., 2006. Meeting satisfaction for technology supported groups: an empirical validation of a goal-attainment model. SmallGroup Res. 37 (6), 585–611.

Brown, S.A., Dennis, A.R., Venkatesh, V., 2010. Predicting collaboration technology use: integrating technology adoption and collaboration research. J.Manag. Inf. Syst. 27 (2), 9–53.

Brown, B., Paulus, P., 1996. A simple dynamic model of social factors in group brainstorming. Small Group Res. 27 (1), 91–114.Chang, C.C., 2013. Examining users’ intention to continue using social network games: a flow experience perspective. Telematics Inform. 30 (4), 311–321.Chin, W., Gopal, A., 1995. Adoption intention in GSS: relative importance of beliefs. ACM SIGMIS Database 26 (2–3), 42–64.Chin, W., Marcolin, B., Newsted, P., 2003. A partial least squares latent variable modeling approach for measuring interaction effects: results from a Monte

Carlo simulation study and an electronic-mail emotion/adoption study. Inf. Syst. Res. 14 (2), 189–217.Chin, W., Salisbury, W., Pearson, A., Stollak, M., 1999. Perceived cohesion in small groups: adapting and testing the perceived cohesion scale in a small-group

setting. Small Group Res. 30 (6), 751–766.Connolly, T., Jessup, L.M., Valacich, J.S., 1990. Effects of anonymity and evaluative tone on idea generation in computer-mediated groups. Manage. Sci. 36,

689–703.Cronbach, L., 1971. Test validation. In: Thorndike, R. (Ed.), Educational Measurement, second ed. American Council on Education, Washington DC, pp. 443–

507.

A. Ivanov, D. Cyr / Telematics and Informatics 31 (2014) 543–558 557

Curseu, P., Schalk, R., Wessel, I., 2008. How do virtual teams process information? A literature review and implications for management. J. ManagerialPsychol. 23 (6), 628–652.

Cyr, D., Head, M., Ivanov, A., 2006. Design aesthetics leading to m-loyalty in mobile commerce. Inf. Manag. 43 (8), 950–963.
Dasgupta, S., Granger, M., McGarry, N., 2002. User acceptance of e-collaboration technology: an extension of the technology acceptance model. Group Decis. Negot. 11 (7), 87–100.
De Vreede, G., Briggs, R., van Duin, R., Enserink, B., 2000. Athletics in electronic brainstorming: asynchronous electronic brainstorming in very large groups. In: Proceedings of the 33rd Hawaii International Conference on System Sciences, Computer Society Press, pp. 1–10.
Dennis, A., Reinicke, B., 2004. Beta versus VHS and acceptance of electronic brainstorming technology. MIS Q. 28 (1), 1–20.
Dennis, A., Williams, M., 2003. Electronic brainstorming: theory, research, and future directions. In: Paulus, P., Nijstad, B. (Eds.), Group Creativity: Innovation through Collaboration. Oxford University Press, pp. 160–180.
Ding, X., Erickson, T., Kellogg, W., Patterson, D., 2011. Informing and performing: investigating how mediated sociality becomes visible. Personal and Ubiquitous Computing. Springer.
Fjermestad, J., Hiltz, S., 2000. Group support systems: a descriptive evaluation of case and field studies. J. Manag. Inf. Syst. 17 (3), 113–157.
Fornell, C., Larcker, D., 1981. Evaluating structural equation models with unobserved variables and measurement error. J. Mark. Res. 18 (1), 39–50.
Gallupe, R., DeSanctis, G., Dickson, G., 1988. Computer-based support for group problem-finding: an experimental investigation. MIS Q. 12 (2), 277–296.
Gefen, D., Straub, W., Boudreau, M., 2000. Structural equation modeling and regression: guidelines for research practice. Commun. Assoc. Inf. Syst. 4 (7), 2–77.
Goldenberg, O., Larson, J., Wiley, J., 2013. Goal instructions, response format, and idea generation in groups. Small Group Res. 44, 227–256.
Green, D., Paul, S., 2003. Member perception of procedural fairness in GDSS-enabled teams. AIS SIGDSS Pre-ICIS Workshop. Retrieved July 12, 2012, from <http://mis.temple.edu/sigdss/icis03/proceedings/DSSWorkshop03-Green.pdf>.
Greenberg, S., 2007. Toolkits and interface creativity. Multimedia Tools Appl. 32 (2), 139–159.
Hair, J., Anderson, R., Tatham, R., Black, W., 1995. Multivariate Data Analysis with Readings. Prentice-Hall International, Englewood Cliffs, NJ.
Hertel, G., Niemeyer, G., Clauss, A., 2008. Social indispensability or social comparison: the why and when of motivation gains of inferior group members. J. Appl. Soc. Psychol. 38 (5), 1329–1363.
Hess, T., Fuller, M., Mathew, J., 2006. Involvement and decision-making performance with a decision aid: the influence of social multimedia, gender and playfulness. J. Manag. Inf. Syst. 22 (3), 15–54.
Ivanov, A., Cyr, D., 2006. The concept plot: a concept mapping visualization tool for asynchronous web-based brainstorming sessions. Inf. Visualization 5 (3). Palgrave Macmillan, pp. 185–191.
Javadi, E., Gebauer, J., Mahoney, J.T., 2013. The impact of user interface design on idea integration in electronic brainstorming: an attention-based view. J. Assoc. Inf. Syst. 14 (1), Article 2.
Jenkins, M., 1985. Research methodologies and MIS research. In: Mumford, E. et al. (Eds.), Research Methods in Information Systems. Elsevier, Amsterdam, Holland, pp. 103–117.
Jung, J., Schneider, C., Valacich, J., 2005. The influence of real-time identifiability and evaluability performance feedback on group electronic brainstorming performance. In: Proceedings of the 38th Hawaii International Conference on System Sciences, Computer Society Press.
Kahai, S., Sosik, J., Avolio, B., 2003. Effects of leadership style, anonymity, and rewards on creativity-relevant processes and outcomes in an electronic meeting system context. Leadersh. Q. 14 (4–5), 499–524.
Karau, S., Williams, K., 1993. Social loafing: a meta-analytic review and theoretical integration. J. Pers. Soc. Psychol. 65 (4), 681–706.
Kemmelmeier, M., Oyserman, D., 2001. The ups and downs of thinking about a successful other: self-construals and the consequences of social comparisons. Eur. J. Soc. Psychol. 31, 311–320.
Kerr, D., Murthy, U., 2004. Divergent and convergent idea generation in teams: a comparison of computer-mediated and face-to-face communication. Group Decis. Negot. 13 (4), 381–399.
Kerr, N., Bruun, S., 1983. Dispensability of member effort and group motivation losses: free-rider effects. J. Pers. Soc. Psychol. 4 (1), 78–94.
Koufaris, M., 2002. Applying the technology acceptance model and flow theory to online consumer behavior. Inf. Syst. Res. 13 (2), 205–223.
Leinonen, P., Jarvela, S., Hakkinen, P., 2005. Conceptualizing the awareness of collaboration: a qualitative study of a global virtual team. Comput. Support. Coop. Work 14 (4), 301–322.
Litchfield, R., 2008. Brainstorming reconsidered: a goal-based view. Acad. Manag. Rev. 33 (3), 649–668.
Malhotra, Y., Galletta, D., Kirsch, L., 2008. How endogenous motivations influence user intentions: beyond the dichotomy of extrinsic and intrinsic user motivations. J. Manag. Inf. Syst. 25 (1), 267–299.
McGrath, J., 1984. Groups: Interaction and Performance. Prentice Hall, Englewood Cliffs, NJ.
McLeod, P.L., 2011. Effects of anonymity and social comparison of rewards on computer-mediated group brainstorming. Small Group Res. 42 (4), 475–503.
Mejias, R., 2007. The interaction of process losses, process gains, and meeting satisfaction within technology-supported environments. Small Group Res. 38 (1), 156–194.
Michinov, N., 2012. Is electronic brainstorming or brainwriting the best way to improve creative performance in groups? An overlooked comparison of two idea-generation techniques. J. Appl. Soc. Psychol. 42 (1), 222–243.
Mukahi, T., Corbitt, G., 2004. The influence of familiarity among group members and extraversion on verbal interaction in proximate GSS sessions. In: Proceedings of the 37th Hawaii International Conference on System Sciences, Computer Society Press.
Nunnally, J., 1978. Psychometric Theory, second ed. McGraw Hill, New York.
Park, E., Baek, S., Ohm, J., Chang, H.J., 2014. Determinants of player acceptance of mobile social network games: an application of extended technology acceptance model. Telematics Inform. 31 (1), 3–15.
Podsakoff, P., MacKenzie, S., Lee, J.-Y., Podsakoff, N., 2003. Common method biases in behavioral research: a critical review of the literature and recommended remedies. J. Appl. Psychol. 88 (5), 879–903.
Quan-Haase, A., Cothrel, J., Wellman, B., 2005. Instant messaging as social mediation: a case study of a high-tech firm. J. Comput. Mediated Commun. 10 (4).
Reinig, B., 2003. Toward an understanding of the satisfaction with the process and outcomes of teamwork. J. Manag. Inf. Syst. 19 (4), 65–83.
Rietzschel, E., Nijstad, B., Stroebe, W., 2006. Productivity is not enough: a comparison of interactive and nominal brainstorming groups on idea generation and selection. J. Exp. Soc. Psychol. 42 (2), 244–251.
Roszkiewicz, R., 2007. GDSS: the future of online meetings and true digital collaboration? Seybold Rep.: Analyzing Publishing Technol. 7 (1), 13–17.
Saafein, O., Shaykhian, G., 2014. Factors affecting virtual team performance in telecommunication support environment. Telematics Inform. 31 (3), 459–462.
Saunders, C., Ahuja, M., 2006. Are all distributed teams the same? Small Group Res. 37 (6), 662–700.
Shepherd, M., Briggs, R., Reinig, B., Yen, J., Nunamaker, J., 1996. Invoking social comparison to improve electronic brainstorming: beyond anonymity. J. Manag. Inf. Syst. 12 (3), 155–170.
Srite, M., Galvin, J., Ahuja, M., Karahanna, E., 2007. Effects of individuals’ psychological states on their satisfaction with the GSS process. Inf. Manag. 44 (6), 535–546.
Straub, D., 1989. Validating instruments in MIS research. MIS Q. 12 (2), 147–170.
Suchman, L., 1987. Plans and Situated Actions: The Problem of Human–Machine Communication. Cambridge University Press, Cambridge, UK.
Sun, H., Zhang, P., 2006. The role of moderating factors in user technology acceptance. Int. J. Human–Computer Stud. 64 (2), 53–78.
Sutton, R., Hargadon, A., 1996. Brainstorming groups in context: effectiveness in a product design firm. Adm. Sci. Q. 41 (4), 685–718.
Tractinsky, N., 2004. Toward the study of aesthetics in information technology. In: Proceedings of the 25th International Conference on Information Systems, pp. 771–780.


Van der Heijden, H., 2004. User acceptance of hedonic information systems. MIS Q. 28 (4), 695–704.
Venkatesh, V., 2000. Determinants of perceived ease of use: integrating control, intrinsic motivation, and emotion into the technology acceptance model. Inf. Syst. Res. 11 (4), 342–365.
Wixom, B., Todd, P., 2005. A theoretical integration of user satisfaction and technology acceptance. Inf. Syst. Res. 16 (1), 85–102.
Zhang, P., 2008. Motivational affordances: reasons for ICT design and use. Commun. ACM 51 (11), 145–147.
Zhang, P., Li, N., 2004. Love at first sight or sustained effect? The role of perceived affective quality on users’ cognitive reactions to IT. In: Proceedings of the 25th International Conference on Information Systems, ACM, pp. 283–295.

