Complexity of Information Systems Development Projects:
Conceptualization and Measurement Development
Weidong Xia
Department of Information and Decision Sciences
Carlson School of Management, University of Minnesota
321 19th Avenue South, Minneapolis, MN 55455
Phone: (612) 626-9766
Email: [email protected]

Gwanhoo Lee
Department of Information Technology
Kogod School of Business, American University
4400 Massachusetts Avenue, NW, Washington, DC 20016-8044
Phone: (202) 885-1991
Email: [email protected]
Acknowledgments: This study was supported by research grants provided by the University of Minnesota and by the Juran Center for Leadership in Quality. The Information Systems Specific Interest Group of the Project Management Institute sponsored the collection of the survey data. We thank Carl Adams, Shawn Curley, Gordon Davis, Paul Johnson, Rob Kauffman and research workshop participants at the University of Minnesota for helpful comments on earlier versions of the paper.
Complexity of Information Systems Development Projects:
Conceptualization and Measurement Development
Abstract
This paper conceptualizes and develops valid measurements of the key dimensions of
information systems development project (ISDP) complexity. A conceptual framework is
proposed to define four components of ISDP complexity: structural organizational complexity,
structural IT complexity, dynamic organizational complexity, and dynamic IT complexity.
Measures of ISDP complexity are generated based on literature review, field interviews, focus
group discussions and two pilot tests with 76 IS managers. The measures are then tested using
both exploratory and confirmatory data analyses with survey responses from managers of 541
ISDPs. Results from both the exploratory and confirmatory analyses support the four-
component conceptualization of ISDP complexity. The final 20-item measurements of ISDP
complexity are shown to adequately satisfy the criteria for unidimensionality, convergent
validity, discriminant validity, reliability, factorial invariance across different types of ISDPs,
and nomological validity. Implications of the study results to theory development and practice
as well as future research directions are discussed.
Keywords and phrases: Complexity; Information Systems Development Projects; Conceptual
Framework; Scale Development; Exploratory Factor Analysis; Confirmatory Factor Analysis
Introduction
Information systems (IS) development is inherently complex because it must deal not only with technological issues but also with organizational factors that by and large are outside of the
project team’s control [13, 70, 71]. As organizations increasingly depend on IS for one-stop
customer service capabilities and cross-selling opportunities, any new development efforts must
be seamlessly integrated with other existing systems, introducing a system-level complexity that
is often constrained by the existing technology architecture and infrastructure [86]. In addition,
as both information technology and business environments are fast changing, it becomes
increasingly difficult to determine business requirements and to freeze system specifications,
making system development a progressively more dynamic and complex process [83, 92].
The complexity of IS development is manifested by the historically high failure rate of
information systems development projects (ISDPs). In the last four decades, there have been
many reports about ISDP failures in various organizations across industries (e.g., [42, 57]). The
Standish Group reports that US companies spent more than $250 billion each year in the early
1990s on IS projects, with only 16.2 percent considered successful [78]. The Standish Group’s
2001 report indicates that US companies invested four times more money in IS projects in 2000
than they did annually in the 1990s; however, only 28 percent of the projects could be considered
successful [79]. The results suggest that, although much improved, the success rate of IS
projects is still very low. As organizations increasingly invest in IS with the intention to enhance
top-line growth and bottom-line savings, IS project failures have significant organizational
consequences, in terms of both wasted critical resources and lost business opportunities.
Therefore, there is a strong economic incentive for companies to improve IS project
performance.
ISDP failures are not isolated incidents; rather, they recur with a great deal of regularity in organizations of all types and sizes [28, 30, 44, 79]. Many experts believe that
ISDPs are uniquely more complex than other types of projects such as construction and product
development projects. Edward W. Felten, a computer science professor at Princeton University, was recently quoted: "A corporate computer system is one of the most complex things that
humans have ever built" ([4], p. 118). Brooks [13], in his famous book, The Mythical Man-Month,
states that “software entities are more complex for their size than perhaps any other human
construct” and that “software systems differ profoundly from computers, buildings, or
automobiles” (p. 182). He contends that the complexity of software is an essential property, not
an accidental one. Such essential complexity is unique to software development because
software is invisible and unvisualizable, and is subject to conformity and continuous changes.
Brooks concludes, “many of the classical problems of developing software products derive from
this essential complexity” (p. 183).
Projects are temporary organizations within organizations [22]. Different types of
projects demonstrate different contingency characteristics that require different management
approaches. The literature on project complexity in general is still in early development. Some
researchers argue that one of the difficulties in advancing theory development on project
complexity may stem from the conventional approach of developing “one-size-fits-all” theories
(e.g., [5, 27, 77]). In a recent study, Shenhar [75] shows that "one size does not fit all" and calls for
a more contingent approach to the study of projects. Given the call for contingency theories for
different types of projects and the unique characteristics of ISDPs, it is necessary to develop
theories for ISDP complexity, as opposed to theories for general project complexity. By utilizing
the general project complexity concepts, research on ISDP complexity may provide new insights
that will contribute to the general project management literature.
Managing ISDP complexity has become one of the most critical responsibilities of IS
managers and executives [38, 72]. However, IS organizations have displayed great difficulties in
coping with the progressively increasing ISDP complexity [9, 61]. Before we can develop
effective strategies to control and manage ISDP complexity, it is necessary that we understand
the project characteristics that constitute ISDP complexity and be able to use those
characteristics to assess the complexity of an ISDP. However, research on ISDP complexity has
mostly been conceptual and anecdotal. The ISDP complexity construct has often been used
without precise definitions and appropriate operationalizations. To our knowledge, no
systematic frameworks and validated measurements of ISDP complexity have been reported.
This research represents a first step towards conceptualizing and developing measures of
the key dimensions of ISDP complexity. The research is conducted using a systematic four-
phase measurement development process involving multiple data sources and research methods.
The conceptual framework and the initial measures of ISDP complexity are developed based on
literature reviews, field interviews, focus group discussions and two pilot tests with IS managers.
The conceptual framework and measures are then systematically tested using survey data
provided by 541 ISDP managers. The four-component conceptualization of the ISDP
complexity construct is tested through analyzing the factor structures underlying the measures.
The measures are tested by both exploratory and confirmatory analysis techniques, each with a
random split-half sub-sample of the overall sample. A comprehensive set of measurement tests
are conducted on the measures, including reliability, unidimensionality, convergent validity,
discriminant validity, factorial invariance across three types of ISDPs, and nomological validity.
Using multiple data sources, multiple methods and multiple measurement validation criteria helps triangulate the analyses and enhances the quality and credibility of the results.
The development of such conceptual frameworks and measures makes significant
contributions to both theory development and practice. For researchers, well-developed
conceptual frameworks of ISDP complexity provide the basis for consistently defining the
construct across studies so that results of different studies can be meaningfully compared.
Empirically developed theories involving ISDP complexity are not possible without valid measures of ISDP complexity. In addition, theoretical development in ISDP complexity will contribute to
the general project management literature by providing insights about project complexity under a
specific contingent context. For practitioners, sound conceptual frameworks provide a critical
language and lens through which managers can describe and communicate the key dimensions of
ISDP complexity. Operational indicators provide managers with the necessary tools to assess the
complexity of specific ISDPs. The ability to assess ISDP complexity is a prerequisite for
developing effective strategies to control and manage ISDP complexity.
The paper is organized as follows. The next section reviews relevant prior literature and
proposes a conceptual framework that defines the key dimensions and components of ISDP
complexity. The following section describes the four-phase measurement development process
through which the research is conducted. The results of the measurement development and
testing are then discussed. The paper concludes with discussions of the contributions and
limitations of the research as well as directions for future research.
Theoretical Background
In this article, IS development refers to the analysis, design and implementation of IS
applications/systems to support business activities in an organizational context. ISDPs are
temporary organizations that are formed to perform IS development work including new
applications/systems yet to be installed, as well as enhancement of existing applications/systems
[80]. New applications/systems include both in-house application/systems and off-the-shelf
packaged software.
By building on the existing literature and incorporating insights gained from ISDP
managers through field interviews and focus group discussions, we attempt to define the key
dimensions of ISDP complexity and develop operational indicators that capture the most critical
characteristics of the project that reflect the complexity of the system development process.
Recognizing that it is impossible to capture all dimensions and characteristics of ISDP
complexity, our intention is to develop a parsimonious set of measures that provide a starting
point for developing measurement theories of ISDP complexity. The literatures on task
complexity, project complexity, software complexity, and software project risk factors are
particularly relevant to our study and are thus used as the bases for developing our conceptual
framework and measures.
Task complexity
ISDPs are complex organizational tasks. The task complexity literature has identified a
variety of dimensions and task characteristics that constitute task complexity. For example,
Campbell [16] views complexity as a function of four task characteristics. He proposes that
task complexity increases when (1) only one path leads to goal attainment while multiple paths
exist, (2) multiple desired outcomes are required, (3) there exists conflicting interdependence
among paths, and (4) the connection between path activities and desired outcomes cannot be
established with certainty. Wood [93] defines three types of task complexity: component,
coordinative, and dynamic. Component complexity of a task is a function of the number of
distinct acts that need to be executed in the performance of the task and the number of distinct
information cues that must be processed in the performance of those acts. As the number of acts
increases, the knowledge and skill requirements for a task also increase simply because there are
more activities that an individual needs to be aware of and is able to perform. Coordinative
complexity refers to the nature of the relationships between task inputs and task outputs. The
form and strength of the relationships between information cues, acts, and products are aspects of
coordinative complexity. As the requirements for timing, frequency, intensity, and location of
acts become more complex, the difficulty for coordination increases. Dynamic complexity is
caused by changes in the states of the task environments.
Although not explicitly acknowledged by Wood [93], there is a hierarchical relationship
between the three types of task complexity. The overall task complexity is first determined by
the component-level complexity. Then, it is affected by the system-level complexity involving
coordination among components. While both component complexity and coordinative
complexity describe the structural configurations of the task that are relatively static, dynamic
complexity describes the uncertainties caused by changes in the task environments. Performance
of a dynamically complex task requires knowledge about how the component and coordinative
complexities change over time. To some extent, one may view component complexity as a first-order complexity, coordinative complexity as a second-order complexity, and dynamic complexity as a third-order complexity.
Project complexity
There appear to be no validated measures for assessing general project complexity. Measures used in empirical studies have mostly been project specific. It is difficult to develop a general "one-size-fits-all" complexity concept and measures that are applicable to all
types of projects. Most of the commonly identified project complexity dimensions are consistent
with those identified in the task complexity literature, albeit for different contexts and levels of
analysis.
Based on a review of the project complexity literature, Baccarini [2] defines project
complexity in terms of the number of varied elements and the interdependency between those
elements. Following this definition, he proposes two types of project complexity: organizational
complexity and technological complexity. Organizational complexity refers to the number of,
and relationships between, hierarchical levels, formal organizational units, and specializations.
Technological complexity refers to the number of, and relationships between, inputs, outputs,
tasks, and technologies. Turner and Cochrane [84] propose uncertainty as another dimension of
project complexity, which is the extent to which the project goals and means are ill defined and
are subject to future changes. Uncertainty in systems requirements/scope and uncertainty in new
information technologies are examples of goal and mean uncertainties, respectively.
By integrating the dimensions proposed by Baccarini [2] and Turner and Cochrane [84],
Williams [91] defines two distinct aspects of project complexity: structural complexity (the
underlying structure of the project) and uncertainty-based complexity (the uncertain or changing
nature of the project). He contends that uncertainty adds to the complexity of a project so it can
be viewed as a constituent dimension of project complexity. Dvir et al. [26] propose four levels
of task technological uncertainty (low, medium, high and super-high) and three levels of
complexity (an assembly project, a system project and an array project or program). While the
uncertainty-based complexity dimension is based on the level of technological uncertainty at the
initiation stage of the project, the structural complexity dimension is based on the scope or a
hierarchical framework of systems and subsystems.
Software complexity
Software is one of the major outcomes of an ISDP. Numerous frameworks and measures
of software complexity have been proposed in the last two decades. Among the most frequently
cited measures are the number of program statements [88], McCabe’s cyclomatic number [56],
and Halstead’s programming effort [36]. Banker and Slaughter [6] use the number of data
elements per unit of application functionality to measure the total data complexity of an
application. Tait and Vessey [81] measure system complexity in terms of the difficulty in
determining the information requirements of the system, the complexity of processing, and the
overall complexity of the system design. Meyer and Curley [58] define technology complexity
of an expert system as a composite measure of diversity of technologies, database intensity, and
systems integration effort. Based on the basic functions of application software, the function point analysis literature defines a complexity index that takes into consideration the specific
complexity of each software function [31]. In addition, the complexity of the general
development environment of the application software is assessed using fourteen characteristics
of the environment. Example characteristics include complexity levels with regard to data
communication, distributed data processing, transaction rate, online update, complex processing,
multiple sites, and installation ease.
Our review reveals that there is a rich body of research literature on software complexity.
However, software complexity cannot be directly applied to project-level complexity for the following reasons. First, software complexity and ISDP complexity are at two different levels of analysis. Software complexity is at the software code and structure level, which is too limited to
capture the broader organizational and technological factors that constitute IS development
complexity at the project level. Second, a prerequisite for assessing software complexity is the existence of, or detailed knowledge about, the specific software. In the initial stage of an ISDP, the
software often does not exist and systems requirements are often poorly defined. It is difficult to
assess software complexity in the initial stage of an ISDP. However, the software complexity
literature is useful for deriving some measures of the technological dimensions of ISDP complexity
for this research.
Software project risk factors
According to McFarlan [57], failure to assess and manage individual IS project risk is a
major source of software development problems. In the risk management literature, a risk is
defined as the product of (1) the probability associated with an undesirable event and (2) the
consequences of the occurrence of this event [34]. Consistent with this general definition, IS
development risk is commonly defined as the product of the probability of the occurrence of
negative conditions or events and their consequences if they do occur [10, 18, 65, 76, 90].
Recognizing the difficulties associated with accurately estimating the probabilities of
occurrence of negative conditions/events and the consequences of project losses, researchers
have suggested alternative approaches to defining, assessing and managing IS development risk.
In lieu of assessing the probabilities of undesirable conditions/events, for example, Boehm [10]
recommends identifying and assessing risk factors that influence the occurrence of those
negative conditions and events. Barki et al. [7] define IS project development risk as the product
of the uncertainties associated with project risk factors and the magnitude of potential loss due to
project failure. Accordingly, once the risk factors are identified, one can then estimate IS
development risk by assessing the probabilities of their occurrence and the likely loss associated
with project failure. Following this alternative approach, researchers have generated a number of
checklists of IS project risk factors. For example, Boehm and Ross [11] provide a “top-ten”
checklist of IS project risk factors. Barki et al. [7] develop an instrument for assessing various
risk factors that can affect IT projects. Schmidt et al. [70] develop a ranked list of IS project risk
factors.
ISDP risk and ISDP complexity are two related but different concepts. IS project risk
factors are related to the probabilities of the negative conditions or events and the magnitude of
losses associated with project failure. ISDP complexity, on the other hand, refers to the
characteristics of the project that constitute difficulties for the development effort. In the context
of ISDPs, although risk factors and complexity both carry negative connotations,
complexity tends to capture project characteristics that are inherently present in projects whereas
risk factors represent possible events or conditions that cause project failures. The two
constructs are obviously related. As inherent project characteristics, indicators of ISDP
complexity represent the underlying factors that may drive project risks. Recognizing the
conceptual difference and relatedness between the two constructs, in this research, we use the IS
project risk factor literature as a basis for developing our frameworks and measures.
In summary, our review of the IS literature suggests that, although the ISDP complexity
construct has been frequently mentioned, there exist no well-developed frameworks that can be
used to delineate its conceptual meanings. No systematically validated measures for ISDP
complexity have been reported. In the general task and project literature, most studies have used
measures that are specific to the types and contexts of the tasks/projects; no general measures of
task complexity or project complexity have been reported. In the software complexity literature,
a number of frameworks and measurements have been developed, representing one of the most
advanced areas among the literatures we reviewed. The IS project risk literature has primarily
focused on identifying ranked lists of project risk factors. The various literatures provide a
useful basis for developing our frameworks and measures of ISDP complexity. Our literature
review suggests that there are a few commonly suggested dimensions and characteristics of
complexity that can be adapted for conceptualizing and measuring ISDP complexity. For
example, at the task level, complexity may consist of three hierarchical levels: component
complexity, coordinative complexity, and dynamic complexity. At the project level, complexity
can be described as either organizational or technological complexity, either structural or
dynamic complexity, and either component or system complexity.
A Conceptual Framework for ISDP Complexity
In addition to the literature review, in the last three years, we have worked closely with
the CIOs and IT project managers from 14 large US companies to gain insights on the
dimensions and measures of ISDP complexity. We also used focus group discussions with ISDP
managers to identify the most important dimensions and project characteristics that can be used
to conceptualize and assess ISDP complexity. Based on the literature review and on the insights obtained from working with the CIOs and from the interviews and focus group discussions with ISDP managers, we propose a conceptual framework of ISDP complexity (shown in Figure 1). The
framework is composed of two dimensions. The first dimension captures whether the
complexity is about the structural setup or the dynamic aspects of the project; the second
dimension captures whether the complexity is about the organizational aspects or the
technological aspects of the project. In this framework, each dimension suggests two distinct
aspects of ISDP complexity rather than being a continuum-based variable.
============================
Figure 1 about here
============================
The first dimension, structural versus dynamic complexity, is consistent with those
proposed by Wood [93], Turner and Cochrane [84], Williams [91], and Dvir et al. [26]. Typical
ISDPs involve a number of components including the existing systems, infrastructure, new
technology, user units, stakeholders, the project team, vendors, and external service providers.
As the number of components increases, it becomes more difficult to monitor and control the
project. Relationships among these components make it even more difficult to predict the
project’s process and outcome. According to Leveson [51], the problems in building complex
systems today often arise in the interfaces between the various components such as hardware,
software, or human components.
Dynamic complexity refers to the complexity caused by changes in project components
and in their relationships. Changes may result from either the stochastic nature of the
environment or a lack of information and knowledge about the project environment. As these
changes occur, the cause and effect relationship becomes ambiguous and nonlinear. Dynamic
complexity becomes particularly relevant and critical for ISDPs because their environments, both
business and IT, are constantly changing. Conventional management methods are not adequate
in dealing with dynamic complexity, although they can handle structural complexity relatively
well [73].
The second dimension, organizational versus technological complexity, has been widely
accepted in the general project management literature (e.g., [2, 91]) and in the IS software
project risk factor literature [48]. Organizational factors of an ISDP include organizational
structure, business processes, organizational information needs, user involvement, top
management support, and project personnel capabilities. IT factors include not only “hard”
technological components such as hardware, software, and network, but also “soft” technological
components such as project staff’s knowledge, skills, and experiences with technologies. Meyer
and Curley [58] propose that the technological complexity in the context of expert systems
consists of such technological variables as diversity of platforms, diversity of technologies,
database intensity, and systems integration effort. The distinction between the organizational
and technological complexity is important because the two require different project capabilities and thus have different implications for project management.
Based on the two dimensions, we define four components of ISDP complexity:
Structural Organizational Complexity (SORG), Structural IT Complexity (SIT), Dynamic
Organizational Complexity (DORG), and Dynamic IT Complexity (DIT). The SORG component
reflects the nature and the strength of the relationships between the project elements and the
organizational supporting environment, e.g., project resources, support from top management
and users, project staffing, and the skill proficiency levels of the project personnel. The SIT
component captures the coordinative complexity among the IT elements, reflecting the diversity
of user units, software environments, nature of data processing, variety of technology platforms,
need for integration, and the diversity of external vendors and contractors. The DORG
component captures the rate and pattern of changes in the ISDP organizational environments,
including changes in user information needs, business processes, and organizational structures.
It also reflects the dynamic nature of the project’s impact on the organizational environment.
The DIT component measures the pattern and rate of changes in the IT environment of the ISDP,
including changes in IT infrastructure, architecture and software development tools. In the
following sections, we develop measures of ISDP complexity based on this conceptual
framework and validate the framework and the measures using empirical data collected from
ISDP managers.
Measurement Development Process
We used a systematic four-phase process involving a variety of methods to develop and
validate the measurement of ISDP complexity. This four-phase process is developed based on
Churchill [20] and Sethi and King [74]. As shown in Figure 2, the four phases are (1) conceptual
development and initial item generation, (2) conceptual refinement and item modification, (3)
survey data collection, and (4) data analysis and measurement validation.
============================
Figure 2 about here
============================
Phase 1 - Conceptual development and initial item generation
In this phase, the conceptual framework and an initial pool of measurement items were
first developed through literature reviews, field interviews and focus group discussions with
ISDP managers. In developing the measures, whenever possible, we adapted relevant measures
in the literature. A focus group discussion with 45 IS managers using nominal group techniques
was conducted to independently generate a ranked list of ISDP complexity items. The combined
pools of items were then verified and further modified through interviews with a number of IS
senior managers and ISDP project managers. By combining literature reviews with focus group
discussions and a number of rounds of interviews with ISDP managers, we attempted to ensure the
face and content validity of the measures, i.e., to ensure that the measures cover the appropriate
scope/domain of ISDP complexity. A total of 30 items were generated as the initial pool of
measures. To save space, below we discuss the rationales and sources of the final 20 measures
used to capture the four components of ISDP complexity. Appendix A lists the sources of the
final 20 items.
Measures of the first complexity component, structural organizational complexity, are
generally associated with the roles of and relationships among the various stakeholders of the
project. Prior literature has identified important stakeholders in information systems
development, including top management, end users, project managers, and project staff [12, 28,
49, 66]. Five items are used to capture complexity that is related to those stakeholders. Top
management support has been considered one of the most critical success factors in information
system development [53, 82]. As ISDPs often involve conflict of interests across user units and
organizational changes, lack of sustainable commitment and support from top management
makes it difficult to achieve the project objectives. As ISDPs depend on users for requirement
determination and for effective implementation and use, user involvement is critical for avoiding
poorly defined system scope and requirements and for avoiding user resistance in making
changes that are necessary for implementing the system [37, 41, 54]. Lack of a project
manager’s control over the project is a significant aspect of project complexity that may increase
project risk [48]. Appropriately staffing projects with personnel who have the right skills and
backgrounds is a significant factor that reflects the complexity of an ISDP [7, 68, 71].
Measures of the second component of ISDP complexity, structural IT complexity,
capture the complexity associated with (1) the number of components that are directly or
indirectly related to technologies and (2) the cross-functional coordination of these components.
Seven items were used to measure SIT. Two items were used to capture the complexity related
to the multiplicity or heterogeneity of the software development environments and technological
environment [58]. One item was used to assess the complexity associated with the coordination
requirements with multiple external vendors [7, 43, 70, 94]. Three items were used to describe
the complexity related to coordination among the various components, specifically with
managing cross-functional project teams [29], coordinating multiple business units that are
involved in the ISDP [7, 43, 70] and integrating the external systems that interface with the
system under development [58]. One item was used to capture the complexity related to real-
time data processing involved in the new system [31, 62, 69].
Measures of the third component of ISDP complexity, dynamic organizational
complexity, capture the complexity associated with changes in the organization. The
interactions between information systems development and organizational changes are bi-
directional [64]. As such, three items are used to capture changes in the organizational structure
and business processes that affect business requirements. Two items are used to capture the
business changes that are caused by the information systems delivered by the ISDP [8, 71]. In
today’s hypercompetitive business environment, users’ information needs change frequently,
which requires substantial coordination and rework [10, 68, 70]. Information systems and
business processes are closely interweaved in today’s organizations [17, 40]. Business process
changes increase ISDP complexity because they necessitate dynamic alignment between
business processes and information systems under development. In addition, changes in
organizational structure cause changes in the information flows and systems scope of the
project, which in turn increase the complexity of the ISDP.
Measures of the last component of ISDP complexity, dynamic IT complexity, capture the
complexity associated with changes in the technological environments and development tools.
Three items are used to assess DIT. One item was used to measure complexity related to
changes in IT architecture. IT architecture refers to the overall structure of the corporate
information system and consists of the applications and databases for various levels of the
organization [24, 67]. The second item measures changes in IT infrastructure. As application
systems are developed under the constraints of the existing IT infrastructure of the organization
[14, 25, 86], changes in IT infrastructure cause significant uncertainty in ISDPs. The third item
captures changes in systems development tools. Adoption of new development tools during an
ongoing ISDP causes interruptions because the developers must take time to learn the new tools
and adjust their initial analysis and design to suit the new development environments.
Phase 2 – Conceptual refinement and item modification
The framework and initial pool of 30 measures that resulted from Phase 1 were refined and
modified through a sorting procedure and two pilot tests. The sorting procedure was used to
qualitatively assess the face validity and the construct validity of the initial items. Four IS
researchers with an average of eight years of IS work experience participated in the sorting
procedure. Details of the sorting procedure are provided in Appendix B. Overall, 26 measures
were retained after the sorting procedure.
To further validate the relevance, coverage, and clarity of the measurement items, we
conducted two pilot tests. The first pilot test was conducted through one-hour individual
interviews with four IS project managers and three IS researchers. In each interview, the
participant first filled out a questionnaire regarding the importance and relevance of each item to
ISDP complexity. They were then asked to identify items that appeared to be inappropriate or
irrelevant to ISDP complexity. Participants also made suggestions for improving the relevance,
coverage, understandability, and clarity of the items. Five items were dropped based on the
results of the pilot test. Two items were combined into a single item because of their similarity.
After refining the items, we created an online survey questionnaire using the remaining
20 items. To reveal any potential problems or issues with web-based online survey, a second
pilot test was conducted with 15 ISDP managers who had an average of seven years of
experience in ISDP management. The ISDP managers logged on to the web survey using their
individually assigned IDs and filled out the questionnaire based on their experience with the
most recently completed ISDPs they were involved in. After finishing the survey, the managers
provided suggestions for improving the content and the format of the online survey. Overall,
only minor editorial issues related to the format and wordings of the questionnaire were reported
and resolved.
Phase 3 – Survey data collection
The web-based online survey that resulted from the second phase of the research process was used to collect the large-scale data for validating the conceptual framework and the measures
of ISDP complexity. The items were randomized to minimize any bias from the survey method.
Seven-point Likert scales were used for the items measuring ISDP complexity. The source of
the survey respondents was the Information Systems Specific Interest Group of the Project
Management Institute (PMI-ISSIG), which is an international organization with about 15,000
members who are IS project professionals. We used three criteria to select our target respondents: (1)
North American PMI-ISSIG members who (2) were project managers (not specialists such as
programmers or systems analysts), and (3) had managed a recently completed ISDP. The reason
for choosing North American members was to avoid bias and problems that might be caused by
language barriers among non-English-speaking members in other regions.
The PMI-ISSIG sponsored this research by providing their membership information and
promoting the survey to their members. A PMI-ISSIG-sponsored email letter with a hyperlink to
the web-based survey was sent to the target group. To encourage participation, survey
participants were entered into a drawing to receive ten awards of a PMI-ISSIG official shirt and
forty awards of a $25 gift certificate from a well-known online store. A reminder was sent two
weeks after the initial PMI-ISSIG-sponsored email was sent out. A second email reminder was
sent two weeks later.
The total number of potential respondents was 1,740. In total, 565 responses were
received, representing a response rate of 32.5%. Twenty-four incomplete responses were
dropped, resulting in a usable sample size of 541 and a final response rate of 31.1%. Given the
nature of the survey, this response rate is relatively high. Table 1 illustrates the characteristics of
the study sample. The sample represents various industry sectors, including manufacturing, financial services, software, consulting, retailing, transportation, healthcare, and utilities. On
average, companies in the sample had annual sales of $2.55 billion with 14,800 employees.
Three types of ISDPs – in-house new development, packaged software implementation, and
enhancement of existing software – were evenly represented in the sample. On average, projects
in the sample had a budget of $2.1 million, team size of 34 members, and duration of 12 months.
Since the sample represented a broad range of companies and projects, the findings are unlikely
to be biased by the sample.
============================
Table 1 about here
============================
Phase 4 – Data analysis and measurement validation
4.1. Data screening and descriptive analysis
The survey data were carefully screened for unusual patterns, non-response bias, and
outliers. The screening did not reveal any unusual patterns or careless
responses, indicating that the questionnaire’s design was appropriate and the respondents were
serious and careful in completing the questionnaires. To examine non-response bias, we
recorded the dates on which the responses were received. Comparisons of early responses and
later responses on key demographic and complexity item scores did not identify any significant
differences, indicating that response bias is unlikely to be a problem. In addition, using three
standard deviations from the mean as a benchmark, we did not find any outliers on the
complexity item scores.
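For illustration, the screening steps described above could be carried out along the following lines. This is a minimal sketch in Python assuming the responses sit in a pandas DataFrame with hypothetical column names (c1–c20 for the complexity items, response_date for the receipt date); it is not the authors' actual analysis code.

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("isdp_survey.csv")  # hypothetical file of the 541 usable responses
items = [f"c{i}" for i in range(1, 21)]

# Outlier check: flag any item score more than three standard deviations from its mean.
z = (df[items] - df[items].mean()) / df[items].std()
print("Outlying responses:", int((z.abs() > 3).any(axis=1).sum()))

# Non-response bias check: compare early and late responders (median split on
# the receipt date) with an independent-samples t-test on each item.
cutoff = df["response_date"].median()
early, late = df[df["response_date"] <= cutoff], df[df["response_date"] > cutoff]
for item in items:
    t, p = stats.ttest_ind(early[item], late[item], nan_policy="omit")
    if p < 0.05:
        print(f"{item}: early/late responders differ (p = {p:.3f})")
```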
4.2. Sample split
Since the framework and measures of ISDP complexity developed in this study were new, we used a
combination of exploratory and confirmatory factor analysis methods to take advantage of the
strengths of both methods and to facilitate cross-validation. The sample was first split into two
equally-sized sub-samples using random numbers. The first sub-sample was used in the
exploratory factor analysis and the second in the confirmatory factor analysis.
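The split itself is straightforward once the data are shuffled. A minimal sketch, reusing the df frame from the screening sketch above:

```python
# Shuffle once with a fixed seed for reproducibility, then cut into two
# equally-sized halves: one for exploratory and one for confirmatory analysis.
shuffled = df.sample(frac=1, random_state=42).reset_index(drop=True)
half = len(shuffled) // 2
efa_sample = shuffled.iloc[:half]
cfa_sample = shuffled.iloc[half:]
```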
To ensure that the two sub-samples were comparable and unbiased, we used t-tests to
examine the equality between the two sub-samples’ means on the demographic data. As shown
in Table 2, the results indicate that there were no significant differences between the two sub-samples in company size (indicated by the number of employees and annual sales), project size (indicated by the number of project members and project budget), or project duration.
============================
Table 2 about here
============================
4.3. Exploratory factor analysis
Exploratory factor analysis was conducted to examine the factor structure of the measures
and to validate the reliability and construct validity of the measures. A factor analysis using the principal component method with varimax rotation was conducted to determine the
factor structure of the measures. The number of factors was determined based on two criteria:
eigenvalue above 1.0 and a scree plot. The reliability of the measures is indicated by the
Cronbach’s alpha. The convergent validity and discriminant validity are indicated by the factor
structure and factor loadings of the measures.
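To make these steps concrete, the extraction and reliability computations could look like the following. This sketch uses the open-source factor_analyzer package and hypothetical item names and factor groupings (five SORG, seven SIT, five DORG, and three DIT items); it is not the authors' analysis code.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = [f"c{i}" for i in range(1, 21)]  # hypothetical item names

def cronbach_alpha(scale: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = scale.shape[1]
    return k / (k - 1) * (1 - scale.var(ddof=1).sum() / scale.sum(axis=1).var(ddof=1))

# Principal-component extraction with varimax rotation on the EFA half-sample.
fa = FactorAnalyzer(n_factors=4, rotation="varimax", method="principal")
fa.fit(efa_sample[items])

eigenvalues, _ = fa.get_eigenvalues()
print("Factors with eigenvalue > 1:", int((eigenvalues > 1).sum()))
loadings = pd.DataFrame(fa.loadings_, index=items)
print(loadings.round(2))  # inspect for cross-loadings greater than 0.30

# Cronbach's alpha per hypothesized factor (hypothetical item groupings).
groups = {"SORG": items[:5], "SIT": items[5:12], "DORG": items[12:17], "DIT": items[17:]}
for name, cols in groups.items():
    print(name, round(cronbach_alpha(efa_sample[cols]), 2))
```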
4.4. Confirmatory factor analysis
Confirmatory factor analysis with LISREL was used to test the measures that resulted from the exploratory factor analysis. First, in order to investigate the appropriateness of
the measurement model structure, five alternative models were generated and compared based on
the overall goodness of model fit indexes. The five models were: (a) a null model with all
measures uncorrelated to each other, (b) all measures were loaded onto a single first-order factor,
(c) the measures were loaded onto four uncorrelated first-order factors, (d) the measures were
loaded onto four correlated first-order factors, and (e) there existed a second-order factor named ISDP complexity above the four first-order factors. These five alternative models are illustrated
in Figure 3. The four first-order factors correspond to the four components of ISDP complexity
as defined in the conceptual framework. The alternative models were compared using two
groups of goodness-of-fit indexes. The first group of indexes, including the p-value of the χ2 statistic,
the ratio of χ2 to degrees of freedom, the Goodness of Fit Index (GFI), the Adjusted Goodness of
Fit Index (AGFI), the root mean square error of approximation (RMSEA), and the standardized
Root Mean Square Residual (RMR), are absolute indexes because they are sensitive to sample
size. The second group of indexes, including the Comparative Fit Index (CFI) and the Normed Fit
Index (NFI), are relative indexes that are less sensitive to sample size [32, 46]. The existence of
a second-order factor is justified by applying the target coefficient [55].
============================
Figure 3 about here
============================
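To make the competing specifications concrete, the second-order structure (Model 5) can be written out as follows. This is a minimal sketch using the open-source Python package semopy as a stand-in for LISREL, with the hypothetical item names and groupings from the earlier sketches; it is not the authors' LISREL code.

```python
from semopy import Model, calc_stats

# Model 5: a second-order factor (overall ISDP complexity) above the four
# first-order factors, in lavaan-style syntax. Item names are hypothetical.
MODEL_5 = """
SORG =~ c1 + c2 + c3 + c4 + c5
SIT  =~ c6 + c7 + c8 + c9 + c10 + c11 + c12
DORG =~ c13 + c14 + c15 + c16 + c17
DIT  =~ c18 + c19 + c20
COMPLEXITY =~ SORG + SIT + DORG + DIT
"""

model = Model(MODEL_5)
model.fit(cfa_sample)        # the confirmatory half-sample from the split
print(calc_stats(model).T)   # chi2, GFI, AGFI, CFI, NFI, RMSEA, and so on
```

Model 4 would drop the final line and instead let the four first-order factors covary (declared with ~~ statements), and Models 1 through 3 would be specified analogously.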
As a result of the model comparisons, the “best-fitted” measurement model was chosen
for further measurement validation. The unidimensionality and convergent validity of the four
latent components were assessed by specifying a single-factor model for each latent variable.
The reliability was assessed by the composite reliability index that was calculated based on
factor loadings and variances [87]. The discriminant validity of the first-order factors was
assessed using the techniques suggested by Venkatraman [85] and Sethi and King [74].
4.5. Factorial invariance analysis
Factorial invariance analysis was conducted to establish the generalizability of the
measures across three types of ISDPs. After eliminating thirty data cases with missing data on
project type, the remaining data of the overall sample were segmented by project type. The
sample size was 195 for in-house new development projects, 173 for packaged software
implementation projects, and 143 for major enhancement of existing software, respectively. In
conducting the factorial invariance analysis, a baseline model was first established and tested for
model-data fit. A measurement model with invariance of factor loadings was then specified and
tested. If the difference in model fits between the baseline model and the model with invariance
constraints was not significant, invariance of the factorial structure of the measurement across
the three ISDP types was supported.
4.6. Nomological validity
Finally, the nomological (or predictive) validity of the ISDP complexity measure was
examined. A positive association between ISDP complexity and project duration was predicted.
In order to test this predicted relationship, a composite score was calculated for each of the four
factors based on their corresponding items. In addition, an overall ISDP complexity score was
obtained by averaging the four factor scores. Since projects in the sample were completed at the
time of data collection, project duration data was available. A path analysis was used to test
whether the relationship between ISDP complexity and project duration was positive as predicted; a positive relationship would indicate the nomological (predictive) validity of the ISDP complexity measures.
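The composite scoring and the path test reduce to a few lines; in the following sketch an ordinary least squares regression stands in for the simple one-predictor path model, with the hypothetical column names used earlier (duration_months is assumed):

```python
import statsmodels.api as sm

# Composite score per factor = mean of its items; overall = mean of the four factors.
for name, cols in groups.items():          # groups as defined in the EFA sketch
    df[name] = df[cols].mean(axis=1)
df["COMPLEXITY"] = df[list(groups)].mean(axis=1)

# Path from overall complexity to project duration.
fit = sm.OLS(df["duration_months"], sm.add_constant(df["COMPLEXITY"])).fit()
print(fit.params, fit.pvalues)  # a positive, significant slope supports the prediction
```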
Results
Exploratory factor analysis
Construct validity
Exploratory factor analysis using the principal component method with varimax rotation was
used to test the factor structure of the measurement. As shown in Table 3, four factors with
eigenvalues greater than one emerged from the analysis, which can be interpreted as
corresponding to the four components of ISDP complexity. A scree plot test also indicated that
the four-factor structure was reasonable. These four factors collectively explained 55.2 percent
of the variance. There were no cross-loaded items with loadings greater than 0.30. Overall,
these results provide initial empirical support for the conceptual framework and for the convergent and discriminant validity of the measures of ISDP complexity.
============================
Table 3 about here
============================
Reliability
Reliability refers to the internal consistency of measurement items within each construct.
Table 4 reports the corrected item-total correlations for individual items and the Cronbach’s
alphas for the factors. All four factors had Cronbach’s alphas higher than 0.70, indicating
adequate levels of reliability [63]. According to Hair et al. [35], an item is considered to have
an acceptable level of internal consistency if its corrected item-total correlation is equal to or
greater than 0.33. All measures of ISDP complexity demonstrated adequate levels of corrected
item-total correlation. Therefore, no item was eliminated based on the internal consistency
criteria.
============================
Table 4 about here
============================
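As an aside, the corrected item-total correlation reported in Table 4 is each item's correlation with the sum of the other items in its scale, which could be computed as in this sketch (hypothetical SORG item names):

```python
import pandas as pd

def corrected_item_total(scale: pd.DataFrame) -> pd.Series:
    """Correlation of each item with the sum of the remaining items in the scale."""
    return pd.Series({col: scale[col].corr(scale.drop(columns=col).sum(axis=1))
                      for col in scale.columns})

sorg = efa_sample[["c1", "c2", "c3", "c4", "c5"]]
print(corrected_item_total(sorg).round(2))
print("All items >= 0.33:", bool((corrected_item_total(sorg) >= 0.33).all()))
```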
Confirmatory factor analysis
Model-data fits of alternative models
The measurement of ISDP complexity was specified as a second-order model with four
first-order factors (Model 5). As discussed before, four alternative models were tested: (a) a null
model (Model 1), (b) a model with one first-order factor (Model 2), (c) a model with four
uncorrelated first-order factors (Model 3), and (d) a model with four correlated first-order factors
(Model 4). A model is considered to have good model-data fit if the p-value is above .05, the ratio of χ2 to degrees of freedom is smaller than 3, the GFI is above .90, the AGFI is above .80, the RMSEA
is less than .08, the standardized RMR is less than .10, the CFI is above .90, and the NFI is above
.90 [15, 19, 39, 45, 89].
As shown in Table 5, all model-fit indices of Model 4 indicate better fit than those of the
first three competing models. The significance test results shown in Table 6 indicate that Model
4 was significantly better than the first three alternative models. Therefore, the results support
the measurement model structure with four correlated first-order factors.
============================
Table 5 about here
============================
============================
Table 6 about here
============================
Table 6 indicates that the difference between Model 4 and Model 5 was not significant.
Table 5 suggests that most of the model-fit criteria of Model 5 (with a second-order model) were
as good as those of Model 4 (with four correlated first-order factors). These results warrant
further investigation of the existence of a second-order factor using the target coefficient (T)
[55]. The T coefficient can be calculated using the following formula:
T = χ²(baseline model) / χ²(alternative model)
A high T coefficient implies that the second-order factor does not significantly increase
χ2. Since the T coefficient between Model 4 and Model 5 is 0.98, we concluded that there
existed a second-order factor and that the second-order model best represented the data in a more
parsimonious way. Therefore, Model 5 was chosen as the “best-fitted” measurement model for
further validation. The second-order factor can be interpreted as an overall trait of ISDP
complexity. Figure 4 shows the results of the parameter estimations of the second-order model.
============================
Figure 4 about here
============================
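Numerically, the comparison of Model 4 and Model 5 reduces to a χ2 difference test plus the target coefficient. A small sketch with illustrative χ2 values chosen only to reproduce the reported T of 0.98 (the raw statistics are not reported in the text):

```python
from scipy.stats import chi2

def chi2_difference_p(chi2_restricted, df_restricted, chi2_free, df_free):
    """p-value of the chi-square difference test between two nested models."""
    return chi2.sf(chi2_restricted - chi2_free, df_restricted - df_free)

chi2_m4, df_m4 = 196.0, 164   # Model 4 (four correlated first-order factors) - illustrative
chi2_m5, df_m5 = 200.0, 166   # Model 5 (second-order model) - illustrative

T = chi2_m4 / chi2_m5                                   # target coefficient
p = chi2_difference_p(chi2_m5, df_m5, chi2_m4, df_m4)   # Model 5 vs. Model 4
print(f"T = {T:.2f}, chi-square difference p = {p:.3f}")  # T = 0.98, p = 0.135
```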
Unidimensionality and convergent validity
Unidimensionality and convergent validity require that a single latent variable underlies a set of measures [1]. To test unidimensionality and convergent validity, we generated four first-order factor models, each corresponding to one component of ISDP complexity.
The results shown in Table 7 suggest that all four latent variables demonstrated adequate levels
of model fit. Overall, the results indicate that the measures of each of the four ISDP components
satisfy the unidimensionality and convergent validity requirements.
============================
Table 7 about here
============================
Internal consistency reliability
The composite reliability (ρc), which represents the proportion of measure variance attributable to the underlying latent variable, was calculated to assess the reliability of the
measure [87]. One of the advantages of this reliability index is that it is free from the restrictive assumption of equal importance of all indicators on which the Cronbach α is based. Following Werts et al. [87], Venkatraman [85], and Sethi and King [74], the composite reliability was
calculated from the factor loadings of each indicator and error variances using the following
formula:
ρc = (Σ λi)² Variance(A) / ((Σ λi)² Variance(A) + Σ θδ)
Values of ρc in excess of .50 indicate that the variance captured by the measures is
greater than that captured by error components, thus suggesting satisfactory levels of reliability
[3]. The results in Table 8 show that the composite reliability estimates were .74 for SORG, .78
for SIT, .81 for DORG, and .87 for DIT, respectively, suggesting that all four latent variables
have adequate levels of reliability.
============================
Table 8 about here
============================
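For concreteness, the formula translates directly into code. The loadings below are illustrative placeholders, not the study's estimates:

```python
import numpy as np

def composite_reliability(loadings, error_variances, var_a=1.0):
    """rho_c = (sum lambda_i)^2 Var(A) / ((sum lambda_i)^2 Var(A) + sum theta_delta)."""
    explained = np.sum(loadings) ** 2 * var_a
    return explained / (explained + np.sum(error_variances))

lam = np.array([0.71, 0.64, 0.58, 0.66, 0.60])   # standardized loadings (illustrative)
theta = 1.0 - lam ** 2                            # error variances of standardized items
print(round(composite_reliability(lam, theta), 2))  # about 0.78 for these values
```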
Discriminant validity
Discriminant validity assesses the degree to which measures of different components of
ISDP complexity are distinct from each other. The results of the pair-wise tests are shown in Table 9. The results suggest that all six pairs were statistically different, indicating that the four
components of ISDP complexity demonstrated adequate levels of discriminant validity.
============================
Table 9 about here
============================
Analysis of factorial invariance
Tests of factorial invariance examine whether a measure operates equivalently across different
sub-populations [15]. Analysis of factorial invariance is important for establishing the
generalizability of a measurement. The value of a measurement model is greatly enhanced if the
same factorial structure and properties can be replicated across various subpopulations [55].
ISDPs were defined in this study to include three types of system development: in-house new
development, packaged software implementation, and major enhancement of existing software.
As such, it is important to examine whether the factorial structure and properties of the measure are
invariant across the three types of ISDPs.
Tests of factorial invariance take a hierarchical approach. First, a baseline model (Model
A) was established and tested. This baseline model was essentially the second-order model as
specified in Figure 4. However, the difference was that the second-order model was established
separately for each group. This baseline model did not have any invariance constraints across
the three types of ISDPs.
Once the baseline model was established, invariance of first-order factor loadings was
tested (Model B). In this step, the second-order factor loadings were not constrained to be
invariant. The rationale behind this approach was that tests of higher order invariance would
make sense only when there was reasonable invariance among the first-order factors [55].
Model B was compared with Model A to examine if the first-order factor loadings were
invariant. The difference in χ2 between the two models was tested. However, χ2 tests can be so powerful that trivial differences may lead to significant χ2 values. Therefore, other criteria such as
the ratio of χ2 to degrees of freedom and the target coefficient (T) should also be considered. The
χ2 of the baseline model serves as a target for optimum fit.
If the first-order factor loadings turned out to be invariant, a more restricted model with
invariant first- and second-order factor loadings (Model C) was tested. Again, this model was
compared with the baseline model to examine if the second-order factor loadings were invariant.
If the χ2 tests and other criteria did not indicate an invariant factorial structure for Model C, it would suggest that the second-order factor loadings are not invariant, given that the first-order factor loadings had already been shown to be invariant.
The factorial invariance analysis results shown in Table 10 suggest that the first-order
factor loadings were invariant because the χ2 difference was not significant. In addition, the ratio
of χ2 to degrees of freedom was reasonable and the target coefficient was very high, suggesting
good overall model-data fit. Similarly, the second-order factor loadings appeared to be invariant
because the χ2 difference between Model C and the baseline model was not significant. The
target coefficient was also very high, indicating good fit. In sum, we concluded that the factorial
structure of the second-order measurement model of ISDP complexity was invariant across the
three types of ISDPs. Therefore, the results provide the initial empirical evidence of the
generalizability of the measurement model across the three types of ISDPs.
============================
Table 10 about here
============================
Nomological validity
Nomological (or predictive) validity assesses whether a construct measured by the new
measures is associated with other constructs whose measures are known to be valid, as the theory
would predict. In this study, we attempted to analyze the predictive validity of the ISDP
complexity measures by testing a hypothesized positive relationship between ISDP complexity
and project duration. Since our purpose was not to test theory, we provided only the necessary
justification for the hypothesized relationship without considering other constructs.
Our proposed positive relationship between ISDP complexity and project duration is
justified by the argument that project complexity imposes more workload and thus causes longer
project duration [21, 33, 59]. For example, Meyer and Utterback [59] found that technological
complexity as measured by the number of technologies in the development effort was positively
associated with the absolute development time.
Table 11 shows the results of the path analyses of the impacts of (1) overall ISDP
complexity and (2) the four components of ISDP complexity on project duration, respectively.
The results indicate that all four factors of ISDP complexity as well as the overall ISDP
complexity positively affected project duration. Therefore, the prediction was supported by the
data. We concluded that the measurement of ISDP complexity demonstrated adequate
nomological validity in predicting project duration.
============ Table 11 about here ============
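To illustrate how the two models reported in Table 11 can be estimated, the following Python sketch fits Models I and II by ordinary least squares on synthetic data. The variable names, the synthetic data-generating process, and the use of statsmodels are all our assumptions for illustration, not part of the original analysis; the overall complexity score is approximated here by the mean of the four component scores.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 467  # sample size reported in Table 11

    # Synthetic component scores standing in for the four complexity factors
    components = ["sorg", "sit", "dorg", "dit"]
    df = pd.DataFrame({c: rng.normal(4.0, 1.0, n) for c in components})
    df["duration"] = 120 + 25 * df[components].sum(axis=1) + rng.normal(0, 90, n)

    # Model I: overall ISDP complexity (approximated by the component mean)
    df["overall"] = df[components].mean(axis=1)
    model1 = sm.OLS(df["duration"], sm.add_constant(df["overall"])).fit()

    # Model II: the four first-order components entered jointly
    model2 = sm.OLS(df["duration"], sm.add_constant(df[components])).fit()

    print(model1.rsquared_adj, model1.params)
    print(model2.rsquared_adj, model2.params)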
Discussion and Conclusions
The results of both the exploratory and confirmatory data analyses suggest that the 20-
item measure of ISDP complexity developed in this research exhibits adequate measurement
properties. The exploratory factor analysis produced the factor structure we hypothesized,
providing initial empirical support for our conceptualization of ISDP complexity as a
four-component construct. In addition, the confirmatory factor analysis results suggest that
the hypothesized measurement model had an adequate level of goodness of fit. They also suggested
the existence of a second-order factor, which can be interpreted as overall ISDP complexity.
The measures were shown to satisfy criteria related to unidimensionality, convergent validity,
discriminant validity, internal consistency reliability, factorial invariance across three types of
projects, and nomological validity.
Contributions to theory development and methodology
This research makes significant contributions to theory development. We believe
that ISDP complexity will be an important construct in the vocabulary of IS researchers and
practitioners for two reasons. First, an increasing portion of IS activities in
organizations is organized around projects; projects therefore constitute an important context
and unit of analysis for research. Second, constantly changing information technology and
business environments, coupled with the growing need for IT application integration, will cause
the level of ISDP complexity to continue to increase, making the management of complexity
critical to IS success. Given this increasing significance of ISDP complexity, it is timely
and important to develop a conceptual framework and a valid, reliable measure of the construct.
Although there are other related constructs such as software complexity, general project
complexity, and task complexity, they are not substitutes for ISDP complexity. Task complexity
and project complexity are too general to tap into the unique context of ISDPs. Software
complexity is too limited and narrow to assess the various aspects of ISDP complexity. The new
measure developed in this research overcomes these limitations and covers a wide range of the
domain of the ISDP complexity construct with enhanced specificity.
In addition, by defining four distinct components of ISDP complexity, this research
enables researchers to theorize about the construct more precisely. As Baccarini [2] argues,
because complexity is multi-dimensional, it is important when referring to project complexity to
state clearly the type of complexity being dealt with. The conceptual framework and measure of
ISDP complexity developed in this research will enable researchers to build and test theories that
explain the determinants and impacts of ISDP complexity. Depending on their study purposes,
researchers can select either the second-order factor or the first-order factors of ISDP complexity
as focal constructs for developing theories related to ISDP complexity.
This research employed a combination of exploratory and confirmatory data analyses in
developing and testing the measure. Such a research lifecycle, consisting of both
exploratory and confirmatory methods, ensures both the relevance and the rigor of the instrument
development process, which in turn enhances the validity of the measurement.
Practical implications
The results of our study also have important practical implications. Although the
importance of assessing and managing the complexity of ISDPs has been widely recognized,
organizations are not well equipped to cope with these challenges. As Kotter [50] suggests,
managing structural and dynamic complexities has become a key responsibility of managers and
executives. This research provides a much-needed language and measurement tool that
managers can use to describe and communicate ISDP complexity. First, the empirically
validated four-component framework of ISDP complexity serves as a useful language for
defining and communicating ISDP complexity. Using this framework, project managers can
clearly define the specific aspects and components of ISDP complexity that they must consider
and manage. Second, the measures developed in this study can be used to assess and manage the
complexity of ISDPs in the early planning stages and during implementation. Without such an
assessment tool, it would be difficult for project managers to identify areas of concern and take
appropriate action. Complexity has been found to influence the selection of project
inputs, including project organizational form, budget, manpower, expertise, and the experience
requirements of the project team [2, 47]. Therefore, being able to accurately assess ISDP
complexity enables organizations to better plan, coordinate, and control their projects.
In addition, a valid and reliable measurement tool would allow organizations to learn
from past experiences and establish a knowledge base of organizational techniques that have
proven effective in dealing with different aspects of ISDP complexity. Used together,
the assessment tool and the knowledge base enable organizations to develop the critical
capabilities needed for planning and controlling their ISDPs.
The second-order factor and the first-order factors (the four components) can serve
different purposes in practice. The second-order factor is useful for communicating the overall
level of ISDP complexity to users and business unit managers. It is also useful for overall
project planning in the early stages of the project lifecycle. In contrast, the four first-order
factors (the four components) of ISDP complexity can be used to facilitate detailed assessments
and communications within the project team. They are useful for identifying specific problem
areas, thus enabling managers to strategically manage and control the most important aspects or
components of ISDP complexity during project implementation.
Limitations of the study
Some caution should be exercised when interpreting the study findings and applying the
measure developed in this research. In testing the nomological or predictive validity of the
measurement, the same respondent provided information about both the independent and the
dependent variables, which raises the potential for common source bias. Since the projects in
the sample had all been recently completed, performance measures such as delivery time, cost,
and functionality were known and thus likely less subjective. Future research is needed to
further test the nomological validity of the measurement using different sources of information
for the independent and dependent variables.
If independent sample sources had been used for the exploratory factor analysis and the
confirmatory factor analysis, the value of cross-validating the measure would have been even
greater. However, split halves from the same sample source also have advantages in that they
eliminate sampling errors resulting from different sample sources. In addition, the random
assignment of the data cases to the two sub-samples minimizes potential biases that might be
caused by differences between the two sub-samples. Nevertheless, future research using
different sample sources is needed to overcome the limitations caused by the use of a single
sample source.
Directions for future research
This research represents a first step toward building theories that provide insights into
the conceptualization and measurement of ISDP complexity. The framework and measures
developed in this study can help organizations better understand ISDP complexity and provide
initial tools for assessing and managing the complexity of their ISDPs. Building on this
study, future research may investigate the organizational determinants of ISDP complexity.
By creating effective strategies, methods, and coping mechanisms to control and manage the
organizational factors that influence ISDP complexity, organizations can then minimize
unnecessary complexity and effectively manage necessary complexity, enhancing the success
rate of their ISDPs.
In addition, future research may investigate the patterns through which the four
components of ISDP complexity affect such dependent variables as project success and
organizational performance. Another promising direction is to conduct longitudinal studies of
ISDP complexity. The levels and impacts of ISDP complexity may vary across the stages of a
project's lifecycle; understanding these dynamics can help managers cope with complexity at
different points in time during project implementation.
Finally, it would be important to examine project portfolio complexity. Organizations
typically run more than one ISDP at a time, and optimizing a single project may produce
sub-optimization that hinders the performance of the overall project portfolio. Understanding
complexity at the portfolio level therefore enables IT organizations to achieve efficiency and
effectiveness at the portfolio level in addition to the individual project level. Our hope is that
this research serves as a starting point, stimulating researchers to develop theories for
understanding and managing ISDP complexity and, ultimately, stopping the dollar drain of
ISDP failures.
REFERENCES
1. Anderson, J.C., and Gerbing, D.W. Structural Equation Modeling in Practice: A Review and Recommended Two-Step Approach. Psychological Bulletin, 103, 3 (1988), 411-423.
2. Baccarini, D. The Concept of Project Complexity-A Review. International Journal of Project Management, 14, 4 (1996), 201-204.
3. Bagozzi, R.P. An Examination of the Validity of Two Models of Attitude. Multivariate Behavioral Research, 16, (1981), 323-359.
4. Baker, S. Where Danger Lurks: Spam, Complexity and Piracy Could Hinder Tech's Recovery. Business Week (August 25 2003), 114-118.
5. Balachandra, R., and Friar, J.H. Factors for Success in R&D Project and New Product Innovation: A Contextual Framework. IEEE Transactions on Engineering Management, 44, (1997), 276-287.
6. Banker, R.D., and Slaughter, S.A. The Moderating Effects of Structure on Volatility and Complexity in Software Enhancement. Information Systems Research, 11, 3 (2000), 219-240.
7. Barki, H.; Rivard, S.; and Talbot, J. Toward an Assessment of Software Development Risk. Journal of Management Information Systems, 10, 2 (1993), 203-225.
8. Barki, H.; Rivard, S.; and Talbot, J. An Integrative Contingency Model of Software Project Risk Management. Journal of Management Information Systems, 17, 4 (2001), 37-69.
9. Benamati, J., and Lederer, A.L. Coping With Rapid Changes in IT. Communications of the ACM, 44, 8 (2001), 83-88.
10. Boehm, B.W. Software Risk Management: Principles and Practices. IEEE Software, 8, 1 (1991), 32-41.
11. Boehm, B.W., and Ross, R. Theory-W Software Project Management: Principles and Examples. IEEE Transactions on Software Engineering, 15, 7 (1989), 902-916.
12. Briggs, R.O.; De Vreede, G.J.; Nunamaker Jr., J.F.; and Sprague Jr., R.H. Special Issue: Information Systems Success. Journal of Management Information Systems, 19, 4 (2003), 5-8.
13. Brooks, F.P., Jr. The Mythical Man-Month. Reading, MA: Addison-Wesley, 1995.
14. Byrd, T.A., and Turner, D.E. Measuring the Flexibility of Information Technology Infrastructure: Exploratory Analysis of a Construct. Journal of Management Information Systems, 17, 1 (2000), 167-208.
15. Byrne, B.M. Structural Equation Modeling with LISREL, PRELIS, and SIMPLIS: Basic Concepts, Applications, and Programming. Mahwah, N.J.: Lawrence Erlbaum Associates, 1998.
16. Campbell, D.J. Task Complexity: A Review and Analysis. Academy of Management Review, 13, 1 (1988), 40-52.
17. Caron, J.R. Business Reengineering at CIGNA Corporation: Experiences and Lessons Learned from the First Five Years. MIS Quarterly, 18, 3 (1994), 233-250.
18. Charette, R.N. Software Engineering Risk Analysis and Management. New York: Intertext, 1989.
19. Chin, W.W., and Todd, P.A. On the Use, Usefulness, and Ease of Use of Structural Equation Modeling in MIS Research: A Note of Caution. MIS Quarterly, 19, 2 (1995), 237-246.
20. Churchill, G.A., Jr. A Paradigm for Developing Better Measures of Marketing Constructs. Journal of Marketing Research, 16, 1 (1979), 64-73.
21. Clark, K.B. Project Scope and Project Performance: The Effect of Parts Strategy and Supplier Involvement on Product Development. Management Science, 35, 10 (1989), 1247-1263.
22. Cleland, D.I., and King, W.R. Systems Analysis and Project Management. New York: McGraw-Hill, 1983.
23. Davis, F.D. Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Quarterly, 13, 3 (1989), 319-339.
24. Dickson, G.W., and Wetherbe, J.C. The Management of Information Systems. New York, NY: McGraw-Hill, 1985.
25. Duncan, N.B. Capturing Flexibility of Information Technology Infrastructure: A Study of Resource Characteristics and Their Measure. Journal of Management Information Systems, 12, 2 (1995), 37-57.
26. Dvir, D.; Lipovetsky, S.; Shenhar, A.; and Tishler, A. In Search of Project Classification: A Non-Universal Approach to Project Success Factors. Research Policy, 27 (1998), 915-935.
27. Eisenhardt, K.M., and Tabrizi, B.N. Accelerating Adaptive Processes: Product Innovation in the Global Computer Industry. Administrative Science Quarterly, 40, (1995), 84-110.
28. Ewusi-Mensah, K. Critical Issues in Abandoned Information Systems Development Projects. Communications of the ACM, 40, 9 (1997), 74-80.
29. Faraj, S., and Sproull, L. Coordinating Expertise in Software Development Teams. Management Science, 46, 12 (2000), 1554-1568.
30. Field, T. When Bad Things Happen to Good Projects. CIO Magazine (Oct. 15, 1997).
31. Garmus, D., and Herron, D. Function Point Analysis: Measurement Practices for Successful Software Projects. Addison-Wesley, 2001.
32. Gefen, D. It is Not Enough to be Responsive: The Role of Cooperative Intentions in MRP II Adoption. The DATA BASE for Advances in Information Systems, 31, 2 (2000), 65-79.
33. Griffin, A. The Effect of Project and Process Characteristics on Product Development Cycle Time. Journal of Marketing Research, 34 (1997), 24-35.
34. Haimes, Y.Y. Total Risk Management. Risk Analysis, 11, 2 (1991), 167-171.
35. Hair, J.F.; Anderson, R.E.; Tatham, R.L.; and Black, W.C. Multivariate Data Analysis with Readings. Sydney, Australia: Prentice Hall, 1995.
36. Halstead, M.H. Elements of Software Science. New York: Elsevier North-Holland, 1977.
37. Hartwick, J., and Barki, H. Communication as a Dimension of User Participation. IEEE Transactions on Professional Communication, 44, 1 (2001), 21-36.
38. Hopper, M.D. Complexity: The Weed that Could Choke IS. Computerworld, 30, 28 (July 8, 1996), 37.
39. Hu, L.T., and Bentler, P.M. Evaluating Model Fit. In Hoyle, R.H. (ed.), Structural Equation Modeling: Concepts, Issues, and Applications. Thousand Oaks, CA: Sage Publications, 1995, pp. 76-99.
40. Huizing, A., and Koster, E. Balance in Business Reengineering: An Empirical Study of Fit and Performance. Journal of Management Information Systems, 14, 1 (1997), 93-118.
41. Ives, B., and Olson, M.H. User Involvement and MIS Success: A Review of Research. Management Science, 30, 5 (1984), 586-603.
42. Jesitus, J. Broken Promises? Foxmeyer's Project Was A Disaster. Was The Company Too Aggressive or Was It Misled? Industry Week (Nov. 3 1997), 31-37.
43. Jiang, J.J., and Klein, G. Information System Success as Impacted by Risks and Development Strategies. IEEE Transactions on Engineering Management, 48, 1 (2001), 46-55.
44. Johnson, J. Chaos: The Dollar Drain of IT Project Failures. Application Development Trends, 2, 1 (1995), 41-47.
45. Jöreskog, K.G., and Sörbom, D. LISREL7: A Guide to the Program and Applications. Chicago, IL: SPSS Inc., 1989.
46. Jöreskog, K.G., and Sörbom, D. LISREL VIII User's Guide. Mooresville, IN: Scientific Software, Inc., 1993.
47. Kearney, J.K.; Sedlmeyer, R.L.; Thompson, W.B.; Gray, M.A.; and Adler, M.A. Software Complexity Measurement. Communications of the ACM, 29, 11 (1986), 1044-1050.
48. Keil, M.; Cule, P.E.; Lyytinen, K.; and Schmidt, R.C. A Framework for Identifying Software Project Risks. Communications of the ACM, 41, 11 (1998), 76-83.
49. Keil, M.; Tiwana, A.; and Bush, A. Reconciling user and project manager perceptions of IT project risk: a Delphi study. Information Systems Journal, 12, 2 (2002), 103-119.
50. Kotter, J.P. What Leaders Really Do. Harvard Business Review, 68, 3 (1990), 103-111.
51. Leveson, N.G. Software Engineering: Stretching the Limits of Complexity. Communications of the ACM, 40, 2 (1997), 129-131.
52. Lyytinen, K.; Mathiassen, L.; and Ropponen, J. Attention Shaping and Software Risk - A Categorical Analysis of Four Classical Risk Management Approaches. Information Systems Research, 9, 3 (1998), 233-255.
53. Markus, M.L. Power, Politics, and MIS Implementation. Communications of the ACM, 26, 6 (1983), 430-444.
54. Markus, M.L., and Keil, M. If We Build It, They Will Come: Design Information Systems that People Want to Use. Sloan Management Review (1994), 11-25.
55. Marsh, H.W., and Hocevar, D. Application of Confirmatory Factor Analysis to the Study of Self-Concept: First and Higher Order Factor Models and Their Invariance Across Groups. Psychological Bulletin, 97, 3 (1985), 562-582.
56. McCabe, T.J. A Complexity Measure. IEEE Transactions on Software Engineering, SE-2, 4 (1976), 308-320.
57. McFarlan, F.W. Portfolio Approach to Information Systems. Harvard Business Review (September-October 1981), 142-150.
58. Meyer, M.H., and Curley, K.F. An Applied Framework for Classifying the Complexity of Knowledge-Based Systems. MIS Quarterly, 15, 4 (1991), 455-472.
59. Meyer, M.H., and Utterback, J.M. Product Development Cycle Time and Commercial Success. IEEE Transactions on Engineering Management, 42, November (1995), 297-304.
60. Moore, G.C., and Benbasat, I. Development of an Instrument to Measure the Perceptions of Adopting an Information Technology Innovation. Information Systems Research, 2, 3 (1991), 192-222.
61. Murray, J.P. Reducing IT Project Complexity. Information Strategy: The Executive's Journal, 16, 3 (2000), 30-38.
62. Nilsen, K. Adding Real-time Capabilities to Java. Communications of the ACM, 41, 6 (1998), 49-56.
63. Nunnally, J.C. Psychometric Theory. New York: McGraw-Hill, 1967.
64. Orlikowski, W.J., and Robey, D. Information Technology and the Structuring of Organizations. Information Systems Research, 2, 2 (1991), 143-169.
65. Rainer, R.K.; Snyder, C.A.; and Carr, H.H. Risk Analysis for Information Technology. Journal of Management Information Systems, 8, 1 (1991), 129-147.
66. Ravichandran, T., and Rai, A. Quality Management in Systems Development: An Organizational System Perspective. MIS Quarterly, 24, 3 (2000), 381-415.
67. Richardson, G.L., and Jackson, B.M. A Principles-Based Enterprise Architecture: Lessons from Texaco and Star Enterprise. MIS Quarterly, 14, 4 (1990), 385-403.
68. Ropponen, J., and Lyytinen, K. Components of Software Development Risk: How to Address Them? A Project Manager Survey. IEEE Transactions on Software Engineering, 26, 2 (2000), 98-112.
69. Schmidt, D.C. Middleware for Real-time and Embedded Systems. Communications of the ACM, 45, 6 (2002), 43-46.
70. Schmidt, R.; Lyytinen, K.; Keil, M.; and Cule, P. Identifying Software Project Risks: An International Delphi Study. Journal of Management Information Systems, 17, 4 (2001), 5-36.
71. Scott, J.E., and Vessey, I. Managing Risks in Enterprise Systems Implementations. Communications of the ACM, 45, 4 (2002), 74-81.
72. Scott, K. Battle Complexity to Add Profitability. InformationWeek, 700 (September 14, 1998), 18ER.
73. Senge, P.M. The Fifth Discipline. New York, NY: Doubleday, 1990.
74. Sethi, V., and King, W.R. Development of Measures to Assess the Extent to which an Information Technology Application Provides Competitive Advantage. Management Science, 40, 12 (1994), 1601-1627.
75. Shenhar, A.J. One Size Does Not Fit All Projects: Exploring Classical Contingency Domains. Management Science, 47, 3 (2001), 394-414.
76. Sherer, S.A. Measuring the Risk of Software Failure: A Financial Application, In Proceedings of the Tenth International Conference on Information Systems, Boston, 1989, pp. 237-245.
77. Souder, W.E., and Song, X.M. Contingency Product Design and Marketing Strategies Influencing New Product Success and Failure in U.S. and Japanese Electronics Firms. Journal of Product Innovation Management, 14, (1997), 21-34.
78. Standish Group. The Chaos Report, 1994.
79. Standish Group. The Chaos Report, 2001.
80. Swanson, E.B., and Beath, C.M. Reconstructing the Systems Development Organization. MIS Quarterly, 13, 3 (1989), 293-307.
81. Tait, P., and Vessey, I. The Effect of User Involvement on System Success: A Contingency Approach. MIS Quarterly, 12, 1 (1988), 91-108.
82. Thong, J.Y.L.; Yap, C.S.; and Raman, K.S. Top Management Support, External Expertise and Information Systems Implementation in Small Businesses. Information Systems Research, 7, 2 (1996), 248-266.
83. Truex, D.P.; Baskerville, R.; and Klein, H. Growing Systems in Emergent Organizations. Communications of the ACM, 42, 8 (1999), 117-123.
84. Turner, J.R., and Cochrane, R.A. Goals-and-Methods Matrix: Coping with Projects with Ill-Defined Goals and/or Methods of Achieving Them. International Journal of Project Management, 11 (1993), 93-102.
85. Venkatraman, N. Strategic Orientation of Business Enterprises: The Construct, Dimensionality, and Measurement. Management Science, 35, 8 (1989), 942-962.
86. Weill, P., and Broadbent, M. Leveraging the New Infrastructure: How Market Leaders Capitalize on Information Technology. Boston, MA: Harvard Business School Press, 1998.
87. Werts, C.E.; Linn, R.L.; and Joreskog, K.G. Intraclass Reliability Estimates: Testing Structural Assumptions. Educational and Psychological Measurement, 34, (1974), 25-33.
88. Weyuker, E.J. Evaluating Software Complexity Measures. IEEE Transactions on Software Engineering, 14, 9 (1988), 1357-1365.
89. Wheaton, B.; Muthen, B.; Alwin, D.; and Summers, G. Assessing Reliability and Stability in Panel Models, In Heise, D. (ed.), Sociological Methodology, San Francisco, CA: Jossey-Bass, 1977, pp. 84-136.
90. Wideman, R.M. Risk Management. Project Management Journal, 14, 4 (1986), 20-26.
91. Williams, T.M. The Need for New Paradigms for Complex Projects. International Journal of Project Management, 17, 5 (1999), 269-273.
92. Winklhofer, H. Information Systems Project Management During Organizational Changes. Engineering Management Journal, 14, 2 (2002), 33-37.
93. Wood, R.E. Task Complexity: Definition of the Construct. Organizational Behavior and Human Decision Processes, 37, 1 (1986), 60-82.
94. Wozniak, T.M. Significance vs. Capability: "Fit for Use" Project Controls. In American Association of Cost Engineers International, Dearborn, MI, 1993, pp. A.2.1-8.
Tables and Figures
Table 1. Characteristics of the Overall Study Sample (n=541)

Characteristics of organizations
  Industry: Consulting 6.3%; Finance/Insurance 20.6%; Government 9.2%; Healthcare 5.9%; Manufacturing 13.7%; Retail 5.3%; Software 9.7%; Telecom/network 5.0%; Transportation 4.0%; Utility 7.4%; Other 12.9%
  Company annual sales: Less than $100 million 26.0%; $100 million – $1 billion 31.2%; Over $1 billion 42.8%
  Number of employees: Less than 1,000 26.6%; 1,000 – 10,000 40.5%; Over 10,000 32.9%

Characteristics of projects
  Type of project: In-house new development 38.1%; Packaged software implementation 33.9%; Enhancement of existing software 28.0%
  Number of project members: Less than 10 25.0%; 10 – 50 55.4%; Over 50 19.6%
  Project budget: Less than $100,000 17.5%; $100,000 – $1 million 41.8%; Over $1 million 40.7%
  Project duration: Less than 6 months 24.8%; 6 – 12 months 40.9%; Over 12 months 34.3%
Table 2. Comparability Test of the Two Randomly-split Sub-samples (n1=270, n2=271)

Variable                     Sub-sample 1 mean   Sub-sample 2 mean   t        p
Number of employees          16,041              13,604              0.932    0.352
Annual sales (dollars)       2.48 billion        2.63 billion        -0.291   0.771
Number of project members    33                  35                  -0.501   0.616
Project budget (dollars)     2.19 million        2.06 million        0.301    0.764
Project duration (days)      363                 351                 0.517    0.605
Table 3. Exploratory Analysis – Factor Structure and Loadings (n1=270)

Item          Factor 1 (SIT)   Factor 2 (DORG)   Factor 3 (DIT)   Factor 4 (SORG)
ISDPC16       .787
ISDPC25       .729
ISDPC30       .693
ISDPC23       .618
ISDPC19       .583
ISDPC11       .566
ISDPC4        .538
ISDPC12                        .823
ISDPC1                         .766
ISDPC26                        .747
ISDPC24                        .692
ISDPC14                        .602
ISDPC7                                           .881
ISDPC17                                          .873
ISDPC22                                          .844
ISDPC27                                                            .763
ISDPC6                                                             .701
ISDPC29                                                            .629
ISDPC9                                                             .623
ISDPC21                                                            .619
Eigenvalue    4.421            2.761             2.084             1.772
% of variance 22.106           13.806            10.422            8.860

Note: 1. Items in the questionnaire were randomly ordered to avoid biases; the item labels in this table reflect the order in which the items appeared in the questionnaire. 2. Factor loadings less than 0.30 are not shown; no item had a cross-factor loading greater than 0.30.
Table 4. Exploratory Analysis - Reliability (n1=270)

Factor   Item      Corrected item-total correlation   Cronbach's α
SORG     ISDPC29   .445
         ISDPC6    .458
         ISDPC21   .436
         ISDPC27   .555
         ISDPC9    .424                               0.71
SIT      ISDPC19   .442
         ISDPC11   .425
         ISDPC16   .640
         ISDPC23   .486
         ISDPC4    .374
         ISDPC25   .588
         ISDPC30   .574                               0.78
DORG     ISDPC12   .667
         ISDPC1    .642
         ISDPC26   .576
         ISDPC14   .496
         ISDPC24   .572                               0.81
DIT      ISDPC22   .728
         ISDPC7    .846
         ISDPC17   .833                               0.90
Table 5. Confirmatory Analysis - Model Fits of Alternative Models (n2=271)

Criteria (threshold)   Model 1:   Model 2:     Model 3: Four         Model 4: Four         Model 5: Second-order
                       Null       One-factor   first-order factors   first-order factors   model with four
                                               (uncorrelated)        (correlated)          first-order factors
χ2                     2653.01    1802.06      395.39                335.45                341.31
d.f.                   190        170          170                   164                   166
χ2/d.f. (<5.0)         13.96      10.60        2.33                  2.05                  2.06
GFI (>.90)             0.50       0.60         0.87                  0.89                  0.89
AGFI (>.80)            0.45       0.51         0.84                  0.86                  0.86
RMSEA (<.08)           0.219      0.189        0.070                 0.062                 0.063
RMR (<.10)             0.22       0.16         0.11                  0.067                 0.071
CFI (>.90)             0.00       0.34         0.87                  0.90                  0.90
NFI (>.90)             0.00       0.31         0.80                  0.82                  0.82
Table 6. Confirmatory Analysis - Differences between Alternative Models (n2=271)

                      Model 2 vs. Model 1   Model 3 vs. Model 2   Model 4 vs. Model 3   Model 5 vs. Model 4
Difference in χ2      850.95                1406.67               59.94                 5.86
Difference in d.f.    20                    0                     6                     2
Significance level    p<.01                 p<.01                 p<.01                 n.s.
Table 7. Confirmatory Analysis - Unidimensionality/Convergent Validity (n2=271)

Factor    No. of indicators   χ2      d.f.   χ2/d.f.   GFI    AGFI   RMR     CFI
SORG      5                   16.34   5      3.27      0.98   0.93   0.040   0.95
SIT       7                   82.84   14     11.83     0.92   0.84   0.065   0.88
DORG      5                   36.41   5      5.28      0.95   0.85   0.054   0.92
DIT (a)   3                   33.16   19     1.75      0.97   0.94   0.041   0.98
Note: (a) This model is saturated because the number of indicators is 3, so its fit indexes are not available in isolation; the fit indexes reported for this factor were produced from a two-factor model including DIT and SORG.

Table 8. Confirmatory Analysis - Composite Reliability ρc (n2=271)

Factor   No. of indicators   ρc
SORG     5                   0.736
SIT      7                   0.784
DORG     5                   0.812
DIT      3                   0.869
Note: ρc = (Σλi)² Var(A) / [(Σλi)² Var(A) + Σθδ]
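As an illustration of the composite reliability formula in the note to Table 8, the following Python sketch computes ρc from a factor's standardized loadings; the loading values are hypothetical, and with standardized loadings Var(A) is taken as 1.0 and each error variance θδ as 1 minus the squared loading.

    def composite_reliability(loadings, error_vars, factor_var=1.0):
        # Werts, Linn and Joreskog [87]: (sum li)^2 Var(A) / ((sum li)^2 Var(A) + sum theta)
        squared_sum = sum(loadings) ** 2 * factor_var
        return squared_sum / (squared_sum + sum(error_vars))

    # Hypothetical standardized loadings for a three-indicator factor
    loadings = [0.84, 0.82, 0.80]
    error_vars = [1 - l ** 2 for l in loadings]
    print(round(composite_reliability(loadings, error_vars), 3))  # ~0.860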
Table 9. Confirmatory Analysis - Discriminant Validity of Four First-Order Factors (n2 =271)
Construct pair   ML estimate φ   t-value   Constrained model χ2 (d.f.)   Unconstrained model χ2 (d.f.)   χ2 difference
SORG - SIT       0.02            0.20      183.73 (54)                   148.22 (53)                     35.51**
SORG - DORG      0.11            1.59      129.11 (35)                   95.25 (34)                      33.86**
SORG - DIT       0.37            3.67**    49.74 (20)                    33.16 (19)                      16.58**
SIT - DORG       0.20            2.47*     190.83 (54)                   159.63 (53)                     31.20**
SIT - DIT        0.22            2.84**    135.81 (35)                   115.43 (34)                     20.38**
DORG - DIT       0.23            3.30**    91.35 (20)                    65.34 (19)                      26.01**
Note: * p<.05; ** p<.01.
Table 10. Confirmatory Analysis - Invariance of the Second-Order Model across Three Types of ISDP (n=511)

Model A (baseline, no invariance constraints across the three types of ISDPs): χ2 = 914.20, d.f. = 498, χ2/d.f. = 1.84, T = 1.00
Model B (Model A with invariant item loadings on the first-order factors): χ2 = 950.01, d.f. = 530, χ2/d.f. = 1.79, χ2 difference with Model A = 35.81 (d.f. = 32), n.s., T = 0.96
Model C (Model B with invariant structural coefficients between the first- and second-order factors): χ2 = 950.01, d.f. = 538, χ2/d.f. = 1.77, χ2 difference with Model A = 35.81 (d.f. = 40), n.s., T = 0.96
Table 11. Nomological Validity – Regression Analysis Results (n=467)

Model I (adjusted R2 = 0.128): Project duration = Overall ISDP complexity (β = 0.361**)
Model II (adjusted R2 = 0.126): Project duration = Structural organizational complexity (β = 0.155**) + Structural IT complexity (β = 0.156**) + Dynamic organizational complexity (β = 0.122**) + Dynamic IT complexity (β = 0.142**)
Note: * p<.05; ** p<.01.
Table B1. Results of the Sorting Procedures

Target category   Placed in target   Placed elsewhere/unclear   Total placements   Target %
SORG              33                 11                         44                 75%
SIT               36                 8                          44                 82%
DORG              17                 3                          20                 85%
DIT               12                 0                          12                 100%
Total item placements: 120; hits: 98; overall hit ratio: 82%
Table C1. Covariance matrix of the (total) sample
ISDPC 19 5.293
ISDPC 14 0.481 2.969
ISDPC 29 0.582 0.267 3.179
ISDPC 6 0.531 0.359 1.065 4.561
ISDPC 11 1.063 0.594 0.224 0.000 4.184
ISDPC 16 1.472 0.560 0.124 0.164 1.288 3.728
ISDPC 23 0.994 0.561 0.047 0.220 1.187 1.240 3.117
ISDPC 21 0.198 0.021 1.014 1.357 -0.012 -0.312 0.010 3.639
ISDPC 24 0.966 1.577 0.605 0.380 0.422 0.522 0.597 0.406 4.471
ISDPC 4 0.710 0.505 -0.342 0.058 0.655 0.833 1.179 -0.195 0.548 2.478
ISDPC 25 1.492 0.587 0.275 0.217 1.108 2.605 1.032 -0.120 0.627 0.863 3.800
ISDPC 27 0.571 0.093 1.456 1.407 0.111 0.039 0.120 1.280 0.296 -0.257 0.197 3.184
ISDPC 9 0.125 -0.042 0.980 0.818 0.000 -0.134 -0.062 1.330 0.331 -0.551 -0.090 1.063 3.027
ISDPC 30 1.617 0.381 0.084 0.253 1.192 1.636 1.335 0.009 0.597 0.796 1.681 0.106 -0.125 3.377
ISDPC 12 0.228 1.442 0.362 0.314 0.349 0.222 0.603 0.241 1.651 0.171 0.369 0.187 0.185 0.350 3.567
ISDPC 1 0.204 0.743 0.262 0.262 0.060 0.154 0.178 0.152 1.942 0.175 0.461 0.264 0.223 0.343 1.921 2.832
ISDPC 26 0.252 1.011 0.337 0.532 0.304 0.136 0.558 0.420 1.450 0.119 0.290 0.501 0.302 0.458 2.096 1.543 3.212
ISDPC 22 0.692 0.114 0.409 0.128 0.473 0.414 0.266 0.225 0.571 -0.055 0.580 0.214 0.210 0.594 0.639 0.658 0.688 2.690
ISDPC 7 0.796 0.052 0.711 0.480 0.444 0.372 0.316 0.578 0.695 -0.043 0.664 0.644 0.561 0.775 0.570 0.737 0.582 2.009 3.219
ISDPC 17 0.768 0.109 0.694 0.605 0.344 0.430 0.450 0.499 0.616 -0.057 0.616 0.582 0.463 0.767 0.658 0.705 0.671 1.892 2.635 3.259
ISDPC19 ISDPC14 ISDPC29 ISDPC6 ISDPC11 ISDPC16 ISDPC23 ISDPC21 ISDPC24 ISDPC4 ISDPC25 ISDPC27 ISDPC9 ISDPC30 ISDPC12 ISDPC1 ISDPC26 ISDPC22 ISDPC7 ISDPC17
                   Structural                          Dynamic
Organizational     Structural Organizational           Dynamic Organizational
                   Complexity (SORG)                   Complexity (DORG)
Technological      Structural IT Complexity (SIT)      Dynamic IT Complexity (DIT)

Figure 1. A Conceptual Framework of ISDP Complexity
Figure 2. A Four-Phase Process of Measure Development and Validation

Phase 1 – Conceptual development and initial item generation
- Literature review: frameworks; existing measures
- Field interviews (12): new measures; insights
- Focus groups (45): nominal group process; new measures
Phase 2 – Conceptual refinement and item modification
- Sorting procedure (4): qualitative assessment of construct validity
- Pilot test 1 (7): assessment of content validity
- Pilot test 2 (15): refinement and test of online survey
Phase 3 – Survey data collection
- Online survey of select PMI-ISSIG members (541 valid responses)
Phase 4 – Data analysis and measurement validation
- 4.1 Data screening and descriptive analysis: assessment of non-response bias; screening for outliers
- 4.2 Sample split: split the sample into two random sub-samples (n1=270 and n2=271); test comparability of the sub-samples (t-tests on key sample characteristics)
- 4.3 Exploratory analysis: exploratory factor analysis (convergent validity, discriminant validity); reliability (Cronbach's alpha); MTMM correlation analysis
- 4.4 Confirmatory analysis: test model-data fits of alternative models; parameter estimates of the selected "best" model; convergent validity/unidimensionality; discriminant validity; reliability
- 4.5 Factorial invariance analysis: test invariance/equivalence of the measurement model (factorial structure and loadings) across three types of ISDPs
- 4.6 Test of nomological validity: test relationships between ISDP size, complexity and performance measures

Note: Numbers in parentheses indicate the number of project managers involved.
(a) Model 1: The Null Model
(b) Model 2: One First-Order Factor Model
(c) Model 3: Four First-Order Factor Model (uncorrelated)
(d) Model 4: Four First-Order Factor Model (correlated)
(e) Model 5: The Second-Order Factor Model
Figure 3. Alternative Models Tested in the Confirmatory Analysis
Figure 4. Parameter Estimates of the Second-Order Model (Model 5, n2 =271)
Appendix A. Measures and their Reference Sources
Component   Item      Item description (reference)

SORG
  ISDPC6    The project manager did not have direct control over project resources
  ISDPC9    Business users provided insufficient support and involvement [43, 48, 70]
  ISDPC21   There was no sufficient commitment/support from the top management [7, 43, 48, 52, 70]
  ISDPC27   There was no sufficient/appropriate staffing for the project [10, 48, 52, 68, 70]
  ISDPC29   The project personnel did not have required knowledge/skills [7, 8, 43, 48, 52, 68, 70, 71]

SIT
  ISDPC4    The project team was cross-functional [7, 8]
  ISDPC11   The system involved real-time data processing [31]
  ISDPC16   The project involved multiple software environments [58]
  ISDPC23   The project involved coordinating multiple user units [8, 43, 70]
  ISDPC25   The project involved multiple technology platforms [58]
  ISDPC30   The project involved a lot of integration with other systems [7, 8, 58]
  ISDPC19   The project involved multiple external contractors and vendors [7, 43, 70, 94]

DORG
  ISDPC1    The end-users' organizational structure changed rapidly
  ISDPC12   The end-users' business processes changed rapidly
  ISDPC14   Implementing the project caused changes in the users' business processes [8, 71]
  ISDPC24   Implementing the project caused changes in the users' organizational structure [8, 71]
  ISDPC26   The end-users' information needs changed rapidly [10, 68, 70]

DIT
  ISDPC7    IT architecture that the project depended on changed rapidly [70]
  ISDPC17   IT infrastructure that the project depended on changed rapidly [71]
  ISDPC22   Software development tools that the project depended on changed rapidly

Note: Items without references were obtained from field interviews and focus group discussions with IS project managers.
Appendix B. Sorting Procedure and Results
A sorting procedure was used to qualitatively assess the face validity and the construct
validity of the initial 30 items that were generated in Phase 1 of the research process. Four IS
researchers with an average of eight years of IS work experience participated in the sorting
procedure. Each item in the initial pool was printed on a 3 × 5-inch index card. In the sorting
procedure, each judge was asked to carefully read each card and place it in one of the four
components of ISDP complexity. An additional category, "too ambiguous/unclear," was
included so that the judges could place a card in that category if they felt the card did not
belong to any of the four pre-defined categories. Prior to actually sorting the cards, the judges
read a standard set of instructions. To make sure that the judges understood the sorting
procedure, they first did a sorting exercise with the well-known 12-item ease of use and usefulness
instrument [23]. All judges completed this exercise successfully, indicating that they had a
clear understanding of the sorting procedure and were able to perform it appropriately. Each judge
then individually sorted the ISDP complexity item cards. After completing the sorting procedure,
the judges explained why they placed any cards in the "too ambiguous/unclear" category.
Following Moore and Benbasat [60], we calculated the overall item placement ratio.
This ratio represents how well the judges were able to sort the items into the target constructs. In
total, the judges classified 98 items into the target categories and 22 items into other categories
(shown in Table B1), resulting in an overall placement ratio of 82%. This indicates that the
items were, in general, placed as intended. Four items in the SORG category were commonly
placed in the "too ambiguous/unclear" category and were dropped. As a result, 26 items
remained after the sorting procedure.
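As a worked illustration of the placement-ratio calculation adopted from Moore and Benbasat [60], the following Python sketch reproduces the per-category and overall hit ratios from the counts in Table B1; the dictionary layout is our own.

    # (hits, total placements) per target category, from Table B1
    placements = {"SORG": (33, 44), "SIT": (36, 44), "DORG": (17, 20), "DIT": (12, 12)}

    for category, (hits, total) in placements.items():
        print(f"{category}: {hits}/{total} = {hits / total:.0%}")

    overall_hits = sum(h for h, _ in placements.values())    # 98
    overall_total = sum(t for _, t in placements.values())   # 120
    print(f"Overall hit ratio: {overall_hits / overall_total:.0%}")  # 82%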