
A CRITICAL ASSESSMENT OF THE CAPABILITIES OF FIVE MEASURES FOR ICT DEVELOPMENT

Robert J. Kauffman

Director, MIS Research Center, and Professor and Chair [email protected]

Ajay Kumar Doctoral Program

[email protected]

Information and Decision Sciences Department Carlson School of Management, University of Minnesota

Minneapolis, MN 55455

Last revised: March 18, 2005 _____________________________________________________________________________

ABSTRACT A variety of single-item composite indices for measurement of information and communication technologies (ICTs) at the country level are currently used to gauge the extent of technological development in developing countries. However, the values that are obtained from these measures typically are not comparable for the purpose of development and growth policy analysis. To remedy this, in 2003 the United Nations World Summit on the Information Society called for the creation of an ICT Development (Digital Opportunity) Index. In this article, we survey and critique existing approaches for measurement of ICTs at the country level and identify five streams of measures that inform our efforts to identify the appropriate bases for the design of more effective metrics. They include: discrete non-economic measures, economic measures, technology adoption and diffusion measures, single-item index measures and digital divide measures. We suggest that the overall impact of ICTs can be accounted for in terms of four distinct dimensions: economy-related, society-related, knowledge-related and the temporal dimension. Based on the state of diffusion of ICTs, we categorize single-item index measures in terms of those that measure ICT-readiness, those that assess ICT intensity, and those that gauge the impacts of ICTs. We identify a number of issues and concerns that need to be addressed relative to the single-item index measures. Our evaluation concludes that there is no single-item index that will be able to measure the overall impact of ICTs. As a result, we recommend that the object of an ICT Development Index should be measurement of the impacts of ICTs.

Keywords and phrases: Adoption, diffusion, digital divide, economic perspectives, e-readiness, ICTs, information technologies, measurement, single-item index, technology policy. ______________________________________________________________________________ Acknowledgments. We thank Shu-Chun Ho and Angsana Techatassanasoontorn for helpful discussions of related issues on the international diffusion of ICTs and cross-national IT value assessment. We also appreciated input at an early stage of the development of this paper from students in the IDSc 8511 Doctoral Seminar at the Carlson School of Management, University of Minnesota in Fall 2004. Ajay Kumar acknowledges fellowship support from the Graduate School of the University of Minnesota.


INTRODUCTION

The issue of the measurement of information and communication technologies (ICTs) at the

country level and the technological readiness of nation states has been attracting the attention of

researchers and policymakers worldwide, as innovative new technologies such as digital wireless

telephony and the Internet become more widely diffused. As defined by the World Bank’s

(2003) “Comprehensive Development Framework,” ICTs consist of hardware, software,

networks and media for collection, storage, processing, transmission, and presentation of

information (including voice, data, text and images). 1 This simple and unconstrained definition

of ICTs encompasses the oldest as well as the newest ICTs, and, as a result, promotes the widest

participation of countries across the globe. However, an appropriate metric for gauging the

impacts of ICTs needs to be robust enough to account for the diversity of nations, as well as the

“digital divides” which exist among nations in their usage of ICTs. Many different ICT

technologies and applications create a variety of kinds of value. Different measurement

approaches have tried to capture the different aspects of ICTs. These measurements have not

been comparable for the most part, leading to a lack of clarity on how ICTs should be measured.

Part of this confusion arises from a lack of integration of different measures in a single

framework.

Recognizing an urgent need for improving the measurement capabilities for ICT investment,

adoption and impact, in December 2003 the United Nations (UN) World Summit on the

Information Society (2003) called for the development of an ICT Development (Digital

Opportunity) Index. An extract of this UN working group’s decision is as follows:

1 The Comprehensive Development Framework is a process developed by the World Bank to facilitate poverty reduction in countries. It emphasizes the interdependence of development, in terms of social, structural, human, governance, environmental, economic, and financial elements. For additional details, the interested reader should see World Bank (2005).


“A realistic international performance evaluation and benchmarking (both qualitative and quantitative), through comparable statistical indicators and research results, should be developed to follow up the implementation of the objectives, goals and targets in the Plan of Action, taking into account different national circumstances.

(a) In cooperation with each country concerned, develop and launch a composite ICT Development (Digital Opportunity) Index. It could be published annually, or every two years, in an ICT Development Report. The index could show the statistics while the report would present analytical work on policies and their implementation, depending on national circumstances, including gender analysis.

(b) Appropriate indicators and benchmarking, including community connectivity indicators, should clarify the magnitude of the digital divide, in both its domestic and international dimensions, and keep it under regular assessment, and tracking global progress in the use of ICTs to achieve internationally agreed development goals, including those of the Millennium Declaration.”

Consequently, in June 2004 during the 11th United Nations Conference on Trade and

Development (UNCTAD), an international, multi-stakeholder partnership for the purposes of

measuring ICT development was launched (UNCTAD, 2004). 2 However, most international

agency policymakers, government officials and informed academicians believe that there are no

effective composite measures for ICTs.

Measurement studies on the impacts of ICTs at the macro-level have the potential to play an

important role in the formulation of effective policies in any country’s development plan. To

this end, our paper addresses the following key research questions:

• What are the different approaches that are currently in use for measuring various aspects

of ICTs at the country level? What are their strengths and weaknesses? What role can

the theory of measurement play in their evaluation?

• Do different measurement approaches have a common thread going through them? If so,

2 The partners in this initiative include the International Telecommunication Union (ITU), the Organization for Economic Cooperation and Development (OECD), UNCTAD, and the United Nations Educational, Scientific and Cultural Organization (UNESCO) Institute for Statistics. The partners also include members from the UN Regional Commissions (UNRC). These include the Economic Commission for Latin America and Caribbean (UNECLAC), the Economic and Social Commission for Western Asia (UNESCWA), the Economic and Social Commission for Asia Pacific (UNESCAP), the Economic Commission for Africa (UNECA), and the Information and Communication Technologies (UN ICT) Task Force. The World Bank is also an active member.


what common properties of ICTs underlie the different approaches to measurement?

• What should be the objectives of the UN’s ICT Development (Digital Opportunity)

Index? Do the currently available indices meet the appropriate objectives?

• What other issues arise with respect to the current measurement approaches, especially

the single-item index measures of ICTs at the country level?

We expect that it will be important to fully think through the associated measurement issues

related to ICTs. Beyond the well known maxim that “what can’t be measured can’t be

managed,” there is the more critical issue of ensuring that the focus of efforts to improve

country-level technology infrastructure and development be properly directed, so that the

available resources are effectively applied, and the desired impacts are achieved. Of specific

concern is the contrast between the role of quantitative and objective information, and subjective

and qualitative information regarding ICTs—as well as what Churchman (1959) calls “precise”

and “vague” information. For example, the nature and scope of the digital components of

economies are matters of interest to all nations, as they devote increasing shares of their scarce

resources to ICTs for which current output measures are poor. Economic information relative to

ICTs is essential for effective investment and business decisions, and for governments to formulate sound monetary policies or tax proposals (Brynjolfsson and Kahin, 2000).

With this brief introduction to set the tone of our inquiry, the remainder of the paper is

organized as follows. In the second section we discuss several perspectives that are relevant to

measurement issues as they pertain to ICTs. The third section discusses the existing literature

related to different measurement approaches for ICTs, and identifies several different

measurement index types. In the next section we raise issues and concerns related to single-item


composite index measures, and make a focused assessment of how to go about creating effective

measures for ICT impacts. We conclude with a statement of the main contributions to practice in

measurement and policymaking, as well as to the community of interests represented in

universities and research agencies.

THE DEVELOPMENT OF ICT MEASURES: PRELIMINARY PERSPECTIVES

Most observers would agree that the relevant measurements of ICTs need to be taken over

time, to provide a full enough picture of their evolution to support in-depth understanding of how

the growth and development outcomes in an economy were obtained. In addition, they would

recognize the importance of different aspects of the ICTs, as a basis for different impacts. With

these thoughts in mind, we discuss some preliminary perspectives that we believe are applicable

in the development of measures for ICTs at the country level.

Three Basic Dimensions of ICTs: Economy, Society and Knowledge

Considering the range of measurement approaches that are possible, it is appropriate to

formulate a basis for understanding how to evaluate the different ones that are currently used. In

our view, ICTs have three inherently different but related basic dimensions: an economic

dimension, a social dimension and a knowledge dimension. These dimensions seem to us to be

the most obvious when we consider their role in evaluating ICT impacts.

The Economic Dimension (Dimension 1). Over the years, there has been a significant

debate on the “productivity paradox” and the extent to which investments in ICTs impact

economic measures in the economy. 3 The economic dimension of ICT impacts is the most easily

3 The impact of ICT on productivity has been studied by a number of researchers. Please refer to Brynjolfsson and Yang (1996) for a comprehensive survey of the literature up to the mid-1990s. The international dimension of the productivity paradox is discussed by Dewan and Kraemer (1998). For a recent view of the literature that considers IT value in e-business environments, the interested reader should see Mahmood, Kohli and Devaraj (2004).


agreed upon of any that may compete for the attention of government officials and policymakers.

There also is widespread acceptance of the fact that ICTs cause impacts which can be measured

in relatively straightforward economic terms. For example, Gurbaxani and Whang (1991)

reported that the use of ICTs leads to a reduction in the cost of communication, and

Brynjolfsson, et al. (1994) have shown that ICT-driven automated assembly-line technologies

may lead to smaller firms. 4

The Social Dimension (Dimension 2). The social dimension of ICT impacts is also widely

recognized. One way by which ICTs improve quality of life is through the convenience of the

online services which they can support. The sorts of benefits that are ascribed to ICTs include

increased quality, variety, customer service, speed and responsiveness, as discussed by

Brynjolfsson and Yang (1996). For example, Banker and Kauffman (1988) refer to the

convenience afforded by twenty-four hour ATMs as a quality improvement. ICTs also extend

the reach of health, education and public information to otherwise inaccessible far-flung areas,

leading to the improvement of the quality of life in these areas. Such impacts of ICTs on the

quality of life are suggestive of the social dimension of ICTs.

The Knowledge Dimension (Dimension 3). There is also a knowledge dimension of ICT

impacts. The World Bank (2004b) uses a four-pillar knowledge assessment method at the country level, which is representative of the dimension that is mapped

out here. Berndt and Malone (1995) have argued that government agencies need to spend more

effort measuring new forms of value created by ICTs, such as capabilities for knowledge

4 The economic impacts of ICTs are many and wide-ranging, and we do not have sufficient space to cover the details here in exhaustive fashion. The main point that we would like to make is that ICTs have economic impact in multiple functional disciplines, as well as in society in general. For additional background, we encourage the interested reader to look at the following references: Srinivasan et al. (1994) for electronic data interchange and logistics impacts, Lehr and Lichtenberg (1998) for government services impacts, Menon (1999) for ICT impacts analysis in the healthcare industry, Rao (2004) for coverage of ICT impacts in microfinance, Zhu and Kraemer (2002) for manufacturing sector impacts, and a second paper by Zhu and Kraemer (2005) for retail industry impacts.


creation. The knowledge potential associated with ICTs has facilitated unforeseen progress in

science, as is evidenced in frontier areas such as biotechnology, bioinformatics, space and

astronomy, and the nuclear and nano-sciences.

The Statistical Indicators for the New Economy (SINE) Program of the Statistical Office for

the European Communities (2000) classifies indicators of the “New Economy” into four areas: the

technology domain, the industry domain, the economy domain and the social domain. This

classification maps fairly well into the three dimensions of ICTs that we discussed earlier: the

technology indicators correspond to the knowledge dimension; the industry and economy indicators map to the economic dimension; and the social indicators relate to the social dimension. A

conclusion that might be drawn from these observations is that a complete measurement

approach for ICTs ought to involve all three dimensions: economy, society and knowledge.

Interrelationships among the Three Dimensions

The three dimensions of ICTs—economy, society and knowledge—are closely interrelated.

As discussed above, ICTs have direct impacts on each of these dimensions. The direct impact of

ICTs on one dimension also is likely to be reflected as indirect impacts on other dimensions,

because of the extent to which the three dimensions are interrelated. For example, ICTs that lead

to greater productivity create an economic impact. But higher productivity further leads to the

availability of additional resources for investment in R&D, and this enables innovation and

technological change which will impact the knowledge dimension. In addition, the availability of

surplus resources may also lead to expenditures on leisure and an improvement in the quality of

life. This is an impact in the social dimension. Overall, we believe that direct impacts of ICTs


on any one dimension will create indirect impacts on the other two dimensions. 5 (The

interrelationships of ICT impacts are depicted in Figure 1.) A complete measurement approach

for ICT impacts, therefore, must incorporate direct and indirect impacts. 6

Figure 1. A Representation of Three Dimensions of ICTs: Economy, Society, Knowledge

The Temporal Dimension of ICT Measurement: A Stage-Based Diffusion Perspective

As with the measurement of work group, organizational, industry and market performance,

the assessment of the performance of ICTs at the country level requires the consideration of the

temporal dimension. Measurement of ICTs should permit relevant, accurate, cost-effective and

meaningful assessment of the extent to which ICTs evolve over time in a country, affecting other

aspects of the country, its economic growth and development, and its technological readiness to

take additional steps forward. We depict this with three stages of diffusion of ICTs at the

5 The UN Development Programme conceptualizes ICTs as impacting economic growth and social advancement (UNDP, 2001). Our conceptualization expands on the UNDP’s perspective to show a more complete set of interrelationships among the economic, social and knowledge dimensions of ICTs. 6 We further expect that it will be important to refer to the relevant literature to determine how to measure some of the other sub-dimensions that map into economy, society and knowledge, as a means for effective assessment.

Figure 1 elements: ICTs at the center, linked to the economic dimension (productivity growth, employment, trade), the society dimension (resources for social development, better human capital) and the knowledge dimension (resources for R&D, and advances in medicine, agriculture and telecommunications).


country level of analysis, in the context of an S-curve. See Figure 2 below. 7

Figure 2. Measuring Different Diffusion Stages of ICTs

Figure 2 elements: an S-curve of ICT indicators plotted against time, with Stage 1 (ICT readiness), Stage 2 (ICT intensity) and Stage 3 (ICT impacts).

Source: UNCTAD (2001). UNCTAD refers to these stages in the context of e-commerce only. However, these stages are applicable to any ICT technology.

The three stages of ICT evolution are as follows:

• The ICT Readiness Stage (Stage 1). Stage 1 is called the ICT readiness stage

(UNCTAD, 2001). When a technology is new to a country or a region, the readiness of

its people to adopt it is a crucial issue. The related measures also must capture readiness

in terms of a country’s businesses, infrastructure and economy. The kinds of

measurement that need to be supported are primarily exploratory inquiry and

preliminary assessment, as well as comparative analysis.

• The ICT Intensity Stage (Stage 2). Stage 2 is called the ICT intensity stage (UNCTAD,

7 For additional research that adopts this general point of view, the reader should see the doctoral dissertation research of Angsana Techatassanasoontorn (2006). She employs a state-based approach to the measurement and estimation of the cross-national diffusion of digital wireless phones, using data from 48 countries during the 1992 to 1999 time period. In lieu of using conceptual boundaries for operationalizing the definitions of the different states or stages, her approach is to formalize them using regularity boundaries. A regularity boundary can be established based on the empirical regularities of the adoption data around a specific point in time, when different kinds of technology adoption behaviors are observed (e.g., adoption in the presence of no prior adoption, adoption in the presence of growing externalities, mature adoption with knowledge of the full extent of adoption, etc.). Additional details also can be found in Kauffman and Techatassanasoontorn (2005a and 2005b).


2001). As adoption increases, the intensity of adoption, that is, the intensity with which ICT-

related activities (such as business-to-consumer and business-to-business e-commerce

and e-government initiatives, etc.) are undertaken becomes more relevant. The

measurement of ICT is likely to be of interest when ICTs are becoming more prevalent in

a country, but have not yet reached the point where they are fully diffused and adopted,

and when there are beliefs that a digital divide has been created. The kinds of

measurement activities that are likely to be appropriate in this stage will be formalized

inquiry and main effects analysis. They will involve enhancing measurement

effectiveness, assessing key impacts, and conducting issue-specific sensitivity analysis, as

well as forecasting future ICT impacts, based on within-country over-time and between-

country contemporaneous-time comparative analysis.

• The ICT Impacts Stage (Stage 3). Stage 3 is the ICT impacts stage (UNCTAD, 2001).

This stage reflects the maturation of ICT investment and implementation activities. It is a

stage when it becomes possible to gauge the range of impacts that ICTs can have on

national economies and the business activities being carried out in those countries. The

measurement approaches that are most often observed will be refinements to formal

analysis, as well as retrospective analysis. The measurement of ICT impacts is likely to

be of greatest interest in countries where ICTs are very well developed, but it is unclear

whether those countries are obtaining the greatest advantages from them.

These stages of diffusion help us to gauge the different measurement needs for ICTs. Based

on our representation of the S-curve-shaped diffusion of ICTs, we also have classified the

metrics for ICTs in terms of the three observed stages. The reader should note that it is

important to consider our classification as representative rather than definitive. It reflects our


interpretation based on exploratory analysis of the measurement approaches that we identified in

this research. We expect that there will be natural overlaps that occur between the stage

descriptions (ICT readiness and ICT intensity, and ICT intensity and ICT impacts, for example),

and the associated measurement approaches.

A SURVEY OF EXISTING MEASUREMENT APPROACHES FOR ICT

How well developed are the measurement approaches that one can use to gauge the extent to

which the readiness, intensity and impacts of ICT can be identified? We identify five different

streams of measurement for ICTs at the country level. The first category relates to discrete, non-

economic measures for ICTs. In this category, the measures are specified based on data that are

collected with respect to quantitative physical parameters, such as the number of telephone lines,

televisions, Internet users, or personal computers, etc. This group of measures captures

information about variables that do not have immediate economic content. The second category

consists of measurements of economic measures related to ICTs, which are based on studies of

different aspects of the economy. Studies that measure technology in a manner that is primarily related to growth, productivity, investment and employment are included in this category of

measures. The third covers technology adoption and diffusion. These studies characterize effects

from differential availability and adoption of ICT products and services. The fourth relates to

single-item index measures of ICTs, and the fifth includes digital divide measures.

Discrete Measures

Discrete measures involve measurement of quantifiable physical entities associated with

ICTs. Examples of such measurements are the number of telephone lines, the number of

personal computers, and economic measures, such as tariffs for telephone calls, investments in

ICTs, and export of ICT goods and trade in ICT services. Discrete indicators capture valuable


raw data. There are hundreds of discrete indicators on which ICT-related issues can be measured

by national statistical organizations. Global comparisons are somewhat more difficult to

accomplish on the basis of country-level or regional-level discrete indicator measures, however.

They often will not be able to provide effective assessment outside of their own national or regional

context. The main issues relate to standards for data definitions, and the lack of standardization

in the process of measurement. As a result, the degree of comparability of data across countries

may not be perfect. However, some useful metrics can be based on these discrete indicators.

To give the reader an idea of the broad applications of discrete measures, we present

information on some of the key organizations that use them. (See Table 1.)

Table 1. Organizations Using Discrete Indicators of ICTs

International Telecommunications Union (ITU, 2003): 80 sets of telecommunication indicators covering telephone network size and dimension, mobile services, quality of service, traffic, staff, tariffs, revenue and investment.

World Bank (2004a): 800 indicators, including 24 indicators under “Information and Technology” and 11 under “Communications.” Indicators such as exports in high technology products, health indicators, educational indicators and gender equality-related indicators appear under other categories.

OECD Communication Outlook (OECD, 2003): Data on the communications sector and on policy frameworks used in Organization for Economic Cooperation and Development (OECD) countries.

PingER Project (Stanford Linear Accelerator Center): Measurements of Internet performance by a set of specified monitoring centers across the world.

Netcraft SSL Server Survey: Measures of secure servers in the global networking context.

Internet Systems Consortium (ISC, 2004): Tracks growth of Internet domain hosts globally on a biennial basis.

National Telecom and Information Admin (NTIA, 2002): Tracks computer, Internet and broadband usage in the United States.

Economic Measures

The common economic factors of interest with regard to the measurement of ICT are

productivity, growth, trade and employment. Many studies have tried to establish a clear

relationship between investments in ICTs and productivity, but no effective

measurement of this relationship has been accomplished. This is partly due to the lack of

standards for data collection to support accurate productivity assessment—even among the most


industrialized countries. Obtaining such data from developing countries is even more difficult.

So comparing the effectiveness of developing and developed countries’ ICTs has been hard.

Another measurement that is associated with technology investment is the effect of

computers on the measured growth of gross domestic product (GDP), using the Contributions to

Percent Change Methodology (CPCM) developed by the Bureau of Economic Analysis of the

United States (Moulton, 2000). CPCM estimates the direct contribution of final sales of

computers to real GDP growth. In the absence of productivity data, CPCM is an indicator of the

impact of computers on growth in GDP. However, Moulton’s study is limited to computers and

does not touch upon other ICT investments, and so it too has only limited applicability.
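To make the arithmetic behind such contribution-to-growth figures concrete, the following sketch illustrates a simplified calculation in the spirit of CPCM. It approximates a component's contribution as its nominal share of GDP multiplied by its real growth rate; the official methodology relies on chain-weighted index arithmetic, so this is only an illustration, and all numbers shown are hypothetical.

```python
# Simplified, illustrative contribution-to-growth calculation in the spirit of
# CPCM (Moulton, 2000). All numbers are hypothetical; the official method uses
# chain-weighted (Fisher) index arithmetic rather than this approximation.

def contribution_to_gdp_growth(nominal_share, real_growth_rate):
    """Approximate a component's contribution (in percentage points) to real
    GDP growth as its nominal GDP share times its own real growth rate."""
    return nominal_share * real_growth_rate

# Hypothetical example: final sales of computers are 1.5% of nominal GDP and
# grow 30% in real terms, while the rest of the economy grows 2.5%.
computers = contribution_to_gdp_growth(nominal_share=0.015, real_growth_rate=30.0)
rest = contribution_to_gdp_growth(nominal_share=0.985, real_growth_rate=2.5)

print(f"Contribution of computers: {computers:.2f} percentage points")
print(f"Contribution of all else:  {rest:.2f} percentage points")
print(f"Approximate real GDP growth: {computers + rest:.2f} percent")
```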

Apart from productivity, the impacts of ICT investments on growth, trade, and employment

are widely recognized. However, several measurement issues arise when attempts are made to

compare ICT investment across countries. In many countries, for example, expenditures on ICT

products are considered as investments only if the products can be physically isolated. As a

result, when ICT investments are made in equipment, most of the expenditures are expensed as a form

of intermediate consumption. This implies that ICT investment may be underestimated. With

varying measurement practices in different countries, the bias created may differ depending on

how intermediate consumption and investment are treated in each country’s accounts. Second,

treating expenditures on software as a capital expenditure in the national account of a country is

a recent phenomenon even in developed countries. Many other countries are yet to make the

necessary amendments in their accounting practices in this regard. This also limits opportunities

for effective comparison based on software expenditures. The third issue relates to the use

of different price indices for valuing GDP in different countries. This is particularly important

for products such as computers, for which quality also has changed significantly in a short time.


If we take the improved performance into account, computer prices have fallen very rapidly. To

account for this quality improvement, some countries apply hedonic measures, while others do

not, resulting in biases in comparisons (OECD, 2002).

The comparison of ICT trade statistics suffers from a lack of data. This is because ICT goods

and services do not have the same level of detailed classification in international trade as is

available for other industrial goods and services. Thus, ICT sector exports and imports have to be

derived based on the definition of “ICT sector” that is used. But this definition also varies from

country to country and limits comparability. Further, data for both imports and exports for

individual countries include imported goods that are subsequently re-exported. This also may

influence indicators of countries’ relative trade performance (OECD, 2002).

Various studies have found that investments in ICTs have an unclear impact on employment.

This becomes even more apparent in the context of outsourcing. Katz and Krueger (1999) show that

employment of more educated workers and non-production workers increased rapidly within

industries and businesses in the U.S. during the 1980s and the 1990s. In addition, the increasing

utilization of more-skilled workers has a strong positive correlation with capital intensity and the

introduction of new technologies (Doms, et al. 1997). Based on his review of fifty years of

employment statistics, Katz (2000) concludes that in the United States, the high relative demand

for skills has been driven by skill-biased technological changes ranging from electrification to

computerization. A limitation in the cross-national measurement of impacts on employment is

the lack of an internationally agreed upon list of ICT-related occupations. 8

Measures for Technology Adoption and Diffusion

We next summarize the findings of a number of technology adoption and diffusion studies at

8 An international classification of occupations exists, based on the International Standard Classification of Occupations of the International Labor Office (ISCO 88), but it has not been widely implemented.


the country level. (See Table 2.)

Table 2. Studies Relating to ICT Adoption and Diffusion

Caselli and Coleman (2001): Imports of computer equipment as a determinant of computer technology adoption, associated with human capital, manufacturing trade openness with OECD nations, high investment rates, good property rights protection, a small share of agriculture in the economy, the size of government in GDP, and the share of manufacturing impacting computer adoption.

Chinn and Fairlie (2004): Counts of PC and Internet use per capita depend on per capita income, telephone infrastructure, and regulatory quality (especially market-friendly policies).

Dasgupta et al. (2001): Internet hosts per telephone mainline: positively related to policy and the percentage of urban population; negatively related to the baseline value; not related to per capita income.

Dewan et al. (2004): The digital divide for IT penetration per capita and IT penetration per GDP is narrowing over time and successive technology generations. IT penetration is positively correlated with income. Other demographics have different effects at different stages of IT adoption.

Gruber and Verbove (2001): Internet use is correlated with GDP per capita, access cost, competition, education, and English proficiency.

Kauffman and Techatassanasoontorn (2005a): Mobile-commerce diffusion is affected by country characteristics, digital and mobile industry characteristics, and regulatory policies. Factors driving diffusion from the introduction to the partial diffusion state are different from those driving diffusion from partial diffusion to the maturity state.

Norris (2000): A new media index (for information over Internet use) and an old media index (radio, TV sets, newspaper readership) are highly correlated.

Oxley and Yeung (2001): Internet host penetration is positively associated with physical communication infrastructure, rule of law, and credit card use, but negatively associated with telephone services.

Pohjola (2003): Determinants of real spending on computer hardware per capita and computer use are level of income, relative price of computers and stock of human capital.

Quibria et al. (2002): Numbers of PCs and Internet use per capita depend on GDP, education levels, and infrastructure.

Robinson and Crenshaw (2002): Internet penetration is driven by development level, political freedom, and educational attainment. Foreign direct investment as a percent of GDP has a positive effect on Internet capacity, and economic disarticulation has a negative effect.

Wallsten (2003): Internet users and Internet hosts are positively correlated with regulation of ISPs and final user price.

Wolcott et al. (2001): The GDI framework posits six dimensions for determining Internet diffusion: connectivity infrastructure, organizational infrastructure, sophistication of use, pervasiveness, geographic dispersion and sectoral absorption, with an open-ended list of indicators.

Wong (2001): ICT adoption, measured by telephone main lines, personal computers and Internet use, is related to GDP per capita and competitiveness.

These studies identify the factors relevant to adoption of ICTs in a country. We note that the

factors are from each of the economic, social and knowledge dimensions. Apart from identifying

factors which affect technology adoption, these studies also demonstrate the use of different

frameworks for ICT diffusion. The important models in this regard are Rogers’ (1985) classical

diffusion of innovations model, Kwon and Zmud’s (1987) six-stage model for the organizational

innovation implementation process, Bass’ (1969) model of product diffusion, and Tornatzky and


Fleischer’s (1990) technological, organizational and environmental context framework.
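To illustrate how one of these diffusion frameworks can be operationalized, the sketch below simulates the Bass (1969) product diffusion model, which generates the S-shaped adoption curves underlying the stage-based perspective discussed earlier. The parameter values are hypothetical and are not drawn from any of the studies cited above.

```python
# Minimal sketch of the Bass (1969) diffusion model: new adoptions at time t
# come from innovation (coefficient p) and imitation (coefficient q), and
# cumulative adoption traces the familiar S-curve. Parameter values are
# hypothetical, chosen only to illustrate the shape of the curve.

def bass_diffusion(p, q, m, periods):
    """Return cumulative adopters per period for market potential m."""
    cumulative = 0.0
    path = []
    for _ in range(periods):
        new_adopters = (p + q * cumulative / m) * (m - cumulative)
        cumulative += new_adopters
        path.append(cumulative)
    return path

# Hypothetical parameters: innovation p, imitation q, market potential m.
curve = bass_diffusion(p=0.03, q=0.38, m=100.0, periods=15)
for t, n in enumerate(curve, start=1):
    print(f"period {t:2d}: {n:5.1f} adopters per 100 potential adopters")
```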

Single-Item Index Measures

An early paper by Edgeworth (1925) offers a definition of an index number as “a number

[that] shows by its variations the changes in a magnitude which is not susceptible either [to]

accurate measurement itself [or] to direct valuation in practice” (UNCTAD, 2003, p. 16). We

define a single-item index measure of ICT as a measure using two or more items that are

combined to indicate an ICT variable which does not provide a meaningful measurement by

itself. The variable, in our case, may relate to ICT readiness, intensity or impacts. A single-item

index measure will provide a score for the variable being measured, by combining different

items, based on some underlying theoretical framework. Single-item index measures are the

basis for benchmarking and monitoring progress over time from a policy perspective. There are

only a few academic papers on the measurement of ICTs on single-item index measures (e.g.,

Corrocher and Ordanini, 2002; Wolcott et al., 2001). Press (2000, p. 5) points out that single

indicators have limitations. An index of multiple measures “may be more robust than a [single]

indicator in measuring a qualitative concept” (UNCTAD, 2003, p. 16).

Most research on country-level ICT measurements relates to technology adoption and

diffusion, and to economic factors. The work on single-item index measures has come largely

from international organizations. These include the World Summit on the Information Society

(WSIS), UNCTAD, OECD and ITU, among others. Approached thoughtfully, this kind of

benchmarking is a useful input to policy analysis. It can allow more informed and insightful

study of policy and, ultimately, promote better, faster and more effective ICT development

(UNCTAD, 2003). In the stages that we described in the prior section of this paper, single-item

index measures must address the unavoidable need to serve multiple uses in typical real world settings,


involving assessments for policymaking purposes.

Single-item index measures can be constructed to focus on certain issues to a greater extent

than others (e.g., the growth of the Internet in a country or the extent to which private homes are

connected, as opposed to the degree of Internet shopping use or capital investments by e-

commerce and bricks-and-clicks firms, or the price of sixty minutes of digital wireless phone

services vs. the number of landlines for traditional telephony). Moreover, we also expect that

some natural single-item index measures will measure somewhat less tangible aspects of some

indicators. For example, it may be difficult to concretely characterize the extent of ICT

infrastructure with one dimension in terms of something like “the degree of ICT infrastructure

development.” Instead, the degree of ICT infrastructure development may be measured over

different indicators, including phone lines, broadband capacity, PCs in use, number of Internet

hosts, and other ICT infrastructure-related indicators. Indeed, some natural indicators are

intangible, unobservable and do not have standard measurement units. Measures for

improvements in the quality of life due to ICTs are an example. Thus, we see that single-item

index measures will have to contend with difficulties that are both obvious and significant.

Agencies that are engaged in the monitoring of international development and economic

growth in different countries and regions around the world have evolved many single-item

indices to measure aspects other than ICTs. The most successful of these single-item index

measures convey useful methodological information in the development of single-item index

measures of ICT. We cite two examples in this regard. One of the most well known measures of

national development is the Human Development Index (HDI), which was first proposed by the

United Nations Development Programme (UNDP) in 1990 (UNDP, 2004). The HDI measure integrates

social indicators and economic indicators to measure the overall development of a country. The


indicators used are: life expectancy; the adult literacy rate; the combined primary, secondary and

tertiary gross enrollment ratio; and per capita GDP based on purchasing power parity.

Another index that is used at the country level is the technology achievement index (TAI),

which was proposed in a recent Human Development Report by the UNDP (2001). TAI

aggregates information on the performance of a country in creating and diffusing technology,

and in building a human skills base. This single-item index measure gauges national achievement

on the basis of four factors. The first factor is technology creation, measured by the per capita

number of patents granted to residents, and by receipts of royalties and license fees from abroad

per capita. The second factor is the diffusion of recent innovations, measured by the number of

Internet hosts per capita, and the share of medium and high-technology exports as a percentage

of total goods export. The third factor is the diffusion of older innovations, measured by per

capita telephones (including both landlines and cellular phones) and electricity consumption per

capita. Last is human skills, as measured by a combination of the mean number of years of

schooling in the population age fifteen and above, and the gross tertiary science enrollment ratio.

There are a number of single-item index measures that treat ICT readiness, a category

represented by many assessment tools, which have been developed, applied and tested. These fit

best with the ICT readiness stage (Stage 1) in Figure 2. However, these have been used for

countries that are technologically developed to the extent that they are probably already in the

ICT intensity or ICT impacts stage (Stages 2 and 3 in Figure 2). One possible justification for

this is the rapid change of technology. A country will often be in the intensity stage or the

impacts stage for one technology but may only be in the readiness stage for another.

ICT Readiness-Related Single-Item Index Measures

An example of an ICT readiness-related single-item index measure is the network readiness


index (NRI). This was developed by the Center for International Development (Kirkman et al.,

2002; Datta and Jain, 2004) at Harvard University, which reported on its estimation for 75

countries in 2002-2003, and 102 countries in 2003-2004. Network readiness is defined as a

nation’s or a community’s degree of preparation to participate in and benefit from ICTs. The

2003-2004 version of NRI includes three components: environment, readiness and usage.

Environment is further divided into equally weighted sub-indices: the market environment sub-

index (based on nine indicators), political and regulatory environment sub-index (seven

indicators) and infrastructure environment sub-index (five indicators). Readiness is similarly

subdivided into the individual readiness sub-index (ten indicators), the business readiness sub-

index (five indicators) and the government readiness sub-index (three indicators). Usage is also

formed based on equal weights of an individual usage sub-index (four indicators), a business usage

sub-index (three indicators) and a government usage sub-index (two indicators).
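The hierarchical, equally weighted aggregation described above can be expressed in a few lines of code. The sketch below uses hypothetical sub-index scores; the NRI's indicator-level normalization and data sources are not reproduced, and the assumption that the three components are themselves equally weighted is ours for the purpose of illustration.

```python
# Sketch of hierarchical equal-weight aggregation in the style of the NRI:
# sub-indices average into components, and components average into the index.
# Scores are hypothetical placeholders on an arbitrary 1-7 scale; the real
# NRI's indicator-level normalization is not reproduced here.

def equal_weight_average(scores):
    return sum(scores) / len(scores)

components = {
    "environment": {"market": 4.2, "political_regulatory": 3.8, "infrastructure": 4.5},
    "readiness":   {"individual": 4.0, "business": 4.6, "government": 3.9},
    "usage":       {"individual": 3.5, "business": 4.1, "government": 3.2},
}

component_scores = {
    name: equal_weight_average(list(subindices.values()))
    for name, subindices in components.items()
}
# Assumed for illustration: the overall index averages the three components.
overall_score = equal_weight_average(list(component_scores.values()))

print(component_scores)
print(f"Overall index score: {overall_score:.2f}")
```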

Since 2000, the Economist Intelligence Unit (EIU) (2003) also has developed its annual “e-

readiness ranking” for sixty of the world’s largest economies. The ranking measures the extent

of a country’s e-business environment based on a collection of factors that indicate how

amenable a market is to Internet-based opportunities. Nearly 100 quantitative and qualitative

economic criteria, organized into six distinct categories, are included. The assessment

characterizes a country’s technology infrastructure, its general business environment, the degree

to which e-business is being adopted by consumers and companies, the social and cultural

conditions that influence Internet usage, and the availability of services to support e-businesses.

The EIU (2004) also uses another composite measure called e-learning readiness. It

indicates a country’s ability to produce, use and expand Internet-based learning—both informal

and formal—at work, at school, in government and throughout society. The related


measurement matrix includes nearly 150 qualitative and quantitative criteria divided into four

categories: education, industry, government and society. Each of these categories is further

divided into four components: connectivity (the quality and extent of Internet infrastructure),

capability (a country’s ability to deliver and consume e-learning, based on literacy rates, and

trends in training and education), content (the quality and pervasiveness of online learning

materials) and culture (behaviors, beliefs and institutions that support e-learning development

within a country). This index primarily targets the knowledge dimension of ICTs.

Another ICT readiness index measure, the digital access index (DAI), was developed by the

ITU (2003), and measures the overall ability of individuals in a country to access and use new

ICTs. It was estimated for 178 countries in 2003. This index is built around five main factors:

infrastructure, affordability, knowledge, quality and usage. These factors are measured using eight economy, society and knowledge indicators: broadband subscribers per 100 inhabitants; Internet users per 100 inhabitants; fixed telephone subscribers per 100 inhabitants; mobile

cellular subscribers per 100 inhabitants; Internet access as percentage of gross national income

(GNI) per capita; adult literacy; combined primary, secondary and tertiary school enrollment

level; and international Internet bandwidth per capita. 9

The World Times information society index (Welch, 1999) captures infrastructural readiness

for information societies in terms of computer infrastructure, Internet infrastructure, social

infrastructure and information infrastructure, with an additional 23 underlying variables. This

index has been calculated for the 55 richest countries. The index uses indicators for economy,

society and knowledge, representing all three of our primary dimensions. 9 A minimum and maximum value is assessed for each indicator and the countries are ranked on these maximum-minimum values. If the maximum value for broadband subscribers per 100 inhabitants is 30 and the minimum is zero, then a country with 20 broadband subscribers per 100 inhabitants will be given a score of 0.667. The scores on all indicators are summed up based on the weights specified to obtain an overall index score.
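The scoring rule in the footnote translates directly into a min-max normalization followed by a weighted sum. The sketch below reuses the broadband example from the footnote; the remaining indicator values, goalposts and weights are hypothetical.

```python
# Min-max scoring and weighted aggregation as described for the DAI:
# each indicator is rescaled between a fixed minimum and maximum value,
# then the rescaled scores are combined using the specified weights.
# Bounds, weights and most values below are hypothetical illustrations.

def minmax_score(value, minimum, maximum):
    return (value - minimum) / (maximum - minimum)

# (value, minimum, maximum, weight) per indicator for one hypothetical country.
indicators = {
    "broadband_subs_per_100": (20.0, 0.0, 30.0, 0.25),   # footnote example: 20/30 = 0.667
    "internet_users_per_100": (45.0, 0.0, 85.0, 0.25),
    "adult_literacy_pct":     (92.0, 0.0, 100.0, 0.25),
    "school_enrollment_pct":  (78.0, 0.0, 100.0, 0.25),
}

index_score = sum(
    weight * minmax_score(value, lo, hi)
    for value, lo, hi, weight in indicators.values()
)
print(f"Composite index score: {index_score:.3f}")
```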


The knowledge economy index (KEI) is a result of the “Knowledge Assessment Method” devised by the World Bank (2004b) to assist countries in analyzing their capabilities for participating in

the knowledge revolution. It focuses on those areas of the economy and society that directly

benefit from knowledge and learning. KEI is defined as the average of the performance scores

of a country based on four knowledge economy pillars: economic incentive regime, education,

innovation, and ICTs. The scores for each of these pillars are based on pillar-defining variables.

Overall the KEI construct uses 76 underlying structural and qualitative variables.

Another representative composite index measure is the executive index of the Massachusetts

Innovation Economy (Massachusetts Technology Collaborative, 2003). It tracks nine industry

clusters and seventeen economic indicators that benchmark the state of Massachusetts’s strengths

and weaknesses against six other leading technology states in the United States. It measures

progress in terms of three key factors: results (outcomes for people and businesses, including job growth, rising average wages and export sales), innovation process (dynamic interactions that translate resources into results, including idea generation, commercialization, entrepreneurship, and business innovation) and resources (inputs to the innovation economy including human resources,

technology capital, and infrastructure and other investments). The indicators are primarily from

the economy and knowledge dimensions, with somewhat greater representation of the latter.

In July 2001, the Ministry of the Information Industry of the People’s Republic of China

launched an index of China’s IT development: the national informatization quotient (NIQ) (Jin and

Chengyu, 2002). NIQ is a composite index based on twenty indicators in six dimensions. The

dimensions are: development and application of information resources, information network

construction, application of information technologies, information industry development, human

resources of informatization, and environment for informatization development. The different


dimensions are weighted based on expert opinions. The framework has been applied to measure

the informatization of several different Chinese provinces. The index uses indicators from all

three dimensions: economy, society and knowledge.

Index Measures for the Digital Divide

The next group of index measures that we consider in this section consists of measures

for the digital divide. A digital divide is a gap describing different levels of intensity of use of IT

between two groups in a country, or between two countries. Since most discussions of the digital

divide have focused on the social and economic context, these indices tend to overlook the

knowledge dimension. The related index measures typically represent the Intensity Stage in

Figure 2, but due to the overlaps that we discussed earlier, we still expect to see countries in all

stages that are experiencing digital divide problems.

Orbicom, the Network of UNESCO Chairs in Communications, advocates a framework for

measuring the digital divide that develops concepts such as information density and information use

(Sciadas, 2003). Information density refers to a country’s ICT-related capital and labor stocks

and is indicative of productive capacity. Information use refers to the consumption of ICT-

related outputs. The aggregate of the two represents the information state of a country. The capital and labor measures involve eight and four indicators, respectively, while ICT consumption

involves ICT uptake and ICT intensity of use. The former is measured with four indicators and

the latter with three indicators. The information state index has been applied to

measure the information states of 139 countries, representing all stages of technology diffusion.

Selhofer and Husing (2002) suggest a method of measuring the digital divide at an aggregate

level by defining a digital divide index (DDI) which focuses on disadvantaged groups of society.

They identify likely knowledge gaps that are associated with the digital divide, and study four


socio-economic factors: gender, age, income and education. DDI is defined as a weighted sum

of the following indicators: percentage of computer users, percentage of computer users at home,

percentage of Internet users and percentage of Internet users at home. Since it focuses on

computer and Internet use, it does not encompass the diversity of ICT applications across

countries. The framework has not been globally applied, probably due to its limited usability.
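As a rough sketch of how a weighted-sum index of this kind could be computed, the snippet below combines the four usage indicators listed above for a disadvantaged group relative to the population average. The weights and data are hypothetical, not Selhofer and Husing's published values.

```python
# Minimal sketch of a DDI-style weighted sum. The four indicators follow the
# text; the weights and the example data are hypothetical.

weights = {
    "computer_users_pct":      0.25,   # weight (assumed)
    "computer_users_home_pct": 0.25,
    "internet_users_pct":      0.25,
    "internet_users_home_pct": 0.25,
}

# Hypothetical values: usage rate of a disadvantaged group divided by the
# population average, expressed as an index (100 = parity).
group_vs_average = {
    "computer_users_pct":      72.0,
    "computer_users_home_pct": 65.0,
    "internet_users_pct":      58.0,
    "internet_users_home_pct": 50.0,
}

ddi = sum(w * group_vs_average[k] for k, w in weights.items())
print(f"Digital divide index (100 = parity): {ddi:.1f}")
```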

Corrocher and Ordanini (2002) propose a digitization model for the digital divide. A composite

digitization index is based on six factors, each with sub-indicators: markets, diffusion,

infrastructures, human resources, competitiveness and competition. Principal component

analysis is used to aggregate the indicators; however, the method has not been widely applied.
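The fragment below illustrates, on synthetic data, how principal component analysis can collapse a set of correlated indicators into a single composite score; it is a generic sketch, not a reproduction of Corrocher and Ordanini's model.

```python
# Sketch of PCA-based aggregation of country-level indicators into a
# composite digitization score. The data matrix is synthetic.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic matrix: 30 countries x 6 indicators (markets, diffusion,
# infrastructures, human resources, competitiveness, competition).
base = rng.normal(size=(30, 1))
X = base + 0.3 * rng.normal(size=(30, 6))   # correlated indicators

X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=1)
composite = pca.fit_transform(X_std).ravel()

# Rescale to 0-100 for readability
score = 100 * (composite - composite.min()) / (composite.max() - composite.min())
print("Variance explained by first component:",
      round(float(pca.explained_variance_ratio_[0]), 2))
print("First five country scores:", np.round(score[:5], 1))
```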

Another framework is due to the MOSAIC Group, as part of the “Global Diffusion of the

Internet (GDI) Project” (Wolcott, et al., 2001). The GDI framework consists of six discrete-

valued dimensions: pervasiveness, geographic dispersion, sectoral absorption, connectivity

infrastructure, organizational infrastructure, and sophistication of use. The framework has been

applied to 25 countries across the three stages—readiness, intensity and impacts—and provides

useful information on the digital divide, as well as on the adoption and diffusion of ICTs.10 (For this reason, we also included it in Table 2, which emphasizes adoption and diffusion measures.)

10 The analysis framework does not provide a ranking. Rather, the framework supports an analysis of the diffusion of the Internet within a country. Different countries' diagrams can be superimposed onto each other, but cross-country comparisons are not made on the basis of quantitative information alone.

DISCUSSION: LIMITATIONS OF THE ICT MEASURES

All the measurement approaches referred to in the previous section can be mapped onto the

four-dimensional framework discussed earlier. These dimensions define a “complete” set of

properties of ICTs. A “complete” approach to measurement, thus, will involve specifying values

for all of the dimensions. We note that most measurement approaches measure one or more of


these dimensions, but not all four.

Through our analysis of the various measurement approaches, we have been able to note

some of their strengths and weaknesses. In particular, we noted that there are limitations

associated with the discrete indicator measures, the technology adoption and diffusion measures,

and the measures of economic factors. They focus on one or more aspects of ICTs, but are

unable to measure and effectively represent the richness of ICTs in all their key dimensions. For

the most part, the discrete indicator measures represent raw data which needs to be converted

into information. To achieve an understanding of the impacts of ICTs requires measurement

methodologies that go beyond the facts, and reveal why certain kinds of impacts are occurring.

Our assessment of the non-composite measures also suggests that they will have limited

utility, especially the measures for technology adoption and diffusion. Traditional studies of

technology diffusion typically stop when a user adopts a single technological innovation (e.g.,

Grigorovici et al., 2002). But most ICTs do not represent a single innovation. Instead, they

represent a cluster of related technologies, so a single variable does not capture the richness of what is happening. Even though these studies do establish a link between ICT adoption and the factors that influence adoption, they do not indicate how strong the links are, which diminishes their value for policy making. In spite of these limitations,

diffusion studies nevertheless provide valuable theoretical foundations and an empirical basis for

identifying factors which affect the adoption of the various innovations that underlie the ICTs.

The factors identified by these studies also are relevant in the development of composite

measures of ICT intensity. The biggest limitation of the economic measures, however, is their

frequent use in isolation, which leads to the omission of social and knowledge dimensions.

The different single-item index measures, discussed in the previous section, focus on


different attributes of ICTs, not just a single attribute. The choices of factors and indicators

contained in these measures vary. For example, the number of indicators used ranges from eight

for the digital access index (ITU, 2003 and 2004) to more than 100 for the e-readiness index of

the EIU (2003). Such a large number of indicators are likely to lead to overlapping information

that would be hard to reconcile in the context of econometric models and estimations for the

purposes of associational and causal explanations. A useful factor analysis could be done to

isolate the key constructs and to identify those indicators that are redundant.

Since the single-item index measures aim to measure unobservable constructs and quantities,

reliability and validity are important tests which they should meet. In standard social sciences

research methodology (e.g., Trochim, 2005), reliability indicates how likely it is that the same result will be obtained if the same scale is used again for measurement. Validity tests whether the measurement reflects what it is meant to reflect, and is

of three types: content validity, external validity and construct validity. However, the lack of commonality in the choice of factors and indicators among the different single-item index measures may be due to the lack of a common conceptual basis, and it raises questions about both

reliability and validity. For example, the factors used by the network readiness index are

environment, readiness and usage (Datta and Jain, 2004), while those used by the digital access

index are infrastructure, affordability, knowledge, quality and usage (ITU, 2003). These are

obviously quite different from each other. The problem is further exacerbated when some metrics

define their own factors of measurement, making it difficult to relate these constructs to any of

the constructs that are used in other metrics. Two examples are the informatization quotient of

Jin and Chengyu (2002), and the info-state of Sciadas (2003). Unfortunately, the available


literature on these metrics does not provide any indications that these measures have effectively

met the criteria of appropriate reliability and validity tests.
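One standard reliability check that could be applied to the indicators underlying any of these composites is Cronbach's alpha. The sketch below computes it on synthetic data; it is offered as an illustration of the kind of test that could be reported, not as an evaluation of any specific index.

```python
# Sketch of an internal-consistency (reliability) check for a composite
# index: Cronbach's alpha over a countries x indicators matrix. Data are synthetic.

import numpy as np

def cronbach_alpha(X):
    """X: countries x indicators matrix. Returns Cronbach's alpha."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)
    total_var = X.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(50, 1))                      # one underlying construct
indicators = latent + 0.5 * rng.normal(size=(50, 8))   # eight noisy indicators

print(f"Cronbach's alpha: {cronbach_alpha(indicators):.2f}")
```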

The temporal dimension affects ICT measurements, particularly in the ICT-impacts stage.

(See Stage 3 of Figure 2). Prior research at the organizational level has shown that, because of

the unusual complexity and novelty of IT, firms may require some experience before becoming

proficient. Curley and Pyburn (1982), Brynjolfsson and Hitt (1996), and Loveman (1994) also

found lagged IT impacts in organizational productivity. These results are at the organizational

level, where we observe the accumulation of individual effects, which lead to aggregate level

impacts. However, none of the existing metrics take this into account.
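As a hedged illustration of how the temporal dimension could be incorporated, the sketch below estimates a simple distributed-lag specification on synthetic data; the lag length and variable names are assumptions for illustration, not a specification drawn from the cited studies.

```python
# Sketch of a distributed-lag specification for ICT impacts:
# outcome_t = b0 + b1*ict_t + b2*ict_{t-1} + b3*ict_{t-2} + error.
# All data are synthetic; the two-year lag is an assumption for illustration.

import numpy as np

rng = np.random.default_rng(2)
T = 40
ict = rng.normal(10, 2, size=T)      # hypothetical ICT investment series
noise = rng.normal(0, 1, size=T)

# Synthetic data-generating process: impacts arrive mostly with a lag
outcome = 1.0 + 0.2 * ict
outcome[1:] += 0.5 * ict[:-1]
outcome[2:] += 0.8 * ict[:-2]
outcome += noise

# Build the lagged design matrix and estimate by ordinary least squares
y = outcome[2:]
X = np.column_stack([np.ones(T - 2), ict[2:], ict[1:-1], ict[:-2]])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("Estimated effects [const, current, lag-1, lag-2]:", np.round(coef, 2))
```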

The method adopted for summation of different indicators also impacts the overall value of

the single-item index. A single-item index integrates different kinds of indicators, including

qualitative parameters with quantitative parameters, and hard data with soft data. The national

informatization quotient (NIQ) (Jin and Chengyu, 2002) uses a subjective expert evaluation for

determining the weights of the different factors which are summed to give the final index value.

The network readiness index (NRI) (Datta and Jain, 2004) integrates the three sub-indices, with

each getting an equal weight. Each indicator within a sub-index is also given equal weight.

However, since the number of indicators for different sub-indices is different, the effective

weight which an indicator has on the overall index varies.
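The effective-weight issue can be seen with a small calculation: if three sub-indices receive equal weight but contain different numbers of equally weighted indicators, the weight of an individual indicator on the overall index differs across sub-indices. The indicator counts below are hypothetical, not the NRI's actual counts.

```python
# Illustration of how equal sub-index weights imply unequal effective
# per-indicator weights when sub-indices hold different numbers of
# indicators. Indicator counts are hypothetical.

sub_indices = {"environment": 20, "readiness": 30, "usage": 15}
sub_index_weight = 1.0 / len(sub_indices)

for name, n_indicators in sub_indices.items():
    effective = sub_index_weight / n_indicators
    print(f"{name:12s}: {n_indicators:2d} indicators, "
          f"effective weight per indicator = {effective:.4f}")
```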

Another relevant issue is whether the different factors being summed are all indicators that

relate to the same measurement dimension. Different approaches have been adopted in this

regard. Some metrics adopt linear summation; other authors question whether this is appropriate. The kind of example at issue is whether the effect of women's participation in the economy is additive to the density of the installed base of telephones or to the


“brain drain.” Selhofer and Husing (2002) use a time-and-distance method for integrating

different factors, while the MOSAIC Group's GDI framework builds its measurement on six dimensions (Wolcott et al., 2001).

The selection of indicators also needs to account for differences in measurement practices in

different countries across the globe. While the statistical agencies in developed countries have

developed systems to collect ICT-specific discrete indicators, the developing countries are

lagging behind in this effort, and the lag varies from one country to another. This hampers

accuracy and comparability of measurement. To illustrate this point, we note some specific

examples that relate to the two indices: the national informatization quotient (NIQ) (Jin and

Chengyu, 2002) and the network readiness index (NRI) (Datta and Jain, 2004). According to Jin

and Chengyu (2002), NIQ uses the proportion of industry investment in information to total industry investment as one of its indicators. Data availability and measurement of ICT

investment vary across countries, especially for the measurement of software investments,

whether value deflators and quality adjustments are applied, and other issues that raise questions

about measurement consistency (OECD, 2002).

The NIQ index also uses an indicator for value-added by the information industry to GDP.

Examining the real contribution of ICT to value-added would require an analysis based on

measurement of deflated prices of the associated outputs. But this is problematic in the case of

ICTs, since the measurement of prices of ICT-related outputs is complicated by significant

quality improvements. Second, measuring output in the telecommunication industry alone is

also problematic. Some countries use consumer price indices for phone rates as value-added

deflators. Others use physical quantity indices for calls, telexes and other services to measure


volume changes in output. And still other countries use a composite index of producer price

indices for the relevant components (OECD, 2002).
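To illustrate why the choice of deflator matters, the sketch below converts the same nominal value-added series to real terms with two different price indices; all figures are invented and bear no relation to any country's accounts.

```python
# Sketch: the same nominal ICT value-added deflated with two different
# price indices gives different real growth rates. All figures are invented.

nominal_value_added = [100.0, 112.0, 126.0]       # three years, current prices

cpi_phone_rates      = [1.00, 1.02, 1.05]         # consumer price index for phone rates
quality_adj_deflator = [1.00, 0.90, 0.80]         # quality-adjusted ICT output deflator

def real_series(nominal, deflator):
    """Convert a nominal series to real terms with the given deflator."""
    return [n / d for n, d in zip(nominal, deflator)]

for label, deflator in [("CPI for phone rates", cpi_phone_rates),
                        ("quality-adjusted deflator", quality_adj_deflator)]:
    real = real_series(nominal_value_added, deflator)
    growth = 100 * (real[-1] / real[0] - 1)
    print(f"{label:26s}: real growth over period = {growth:.1f}%")
```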

Another indicator used by NIQ is the proportion of information industry R&D expenditure to total R&D expenditure (Jin and Chengyu, 2002). Since ICT R&D expenditures are not separately identified in countries' accounts, their estimation must be done using

approximations. Besides, major weaknesses exist with data on R&D services which are not

adequately differentiated in many countries. Similarly, some of the indicators used by the NRI (Datta and Jain, 2004) lead to comparison problems for network readiness. For

example, ICT manufactured exports and ICT service exports require export-related data for the

ICT sector of a country. However, existing classifications do not permit exact export-related

data to be extracted. Further, NRI (Datta and Jain, 2004) also uses “households online” as one of

its indicators. But existing statistics on ICT use by households may run into problems of

international comparability because of the different structural composition of households across

countries (OECD, 2002).

NRI also uses firm-level technology adoption as one of its indicators. Technology diffusion

varies with business size and industry, so that the indicators based on the overall proportion of

the businesses using a technology can give rise to misleading international comparisons (Datta

and Jain, 2004). NRI’s use of the utility patents indicator also is likely to create difficulties for

cross-national comparisons (Datta and Jain, 2004). Patent-related indicators have some

weaknesses. For instance, many inventions are not patented and the propensity to patent differs

across countries and industries. Another drawback is related to differences in patent regulations

among countries that hamper comparability. Also, the distribution of patents according to their

value is skewed: many patents have no commercial application (and hence little value), while


only a few have great value (OECD, 2002). The measurement of e-commerce used in this index introduces complexities such as the transaction-multiplier effect noted by Mesenbourg (2001). As a result, e-commerce measurement figures are pessimistic: not all transactions are completely accounted for (Grigorovici et al., 2002).

We also note that the existing single-item index measures focus on measuring ICT-readiness

(Stage 1 in Figure 2) and digital divides (which relate to Stages 1 and 2 in Figure 2). There is no

composite single-item index measure which measures ICT impacts (Stage 3 of Figure 2). One of

the biggest concerns of policy makers is optimal allocation of limited resources. This requires

estimation of returns on a given allocation of resources for ICT development which, in turn,

demands measurement of ICT-related impacts. The need for having an index for measuring

impacts is not specific to ICTs. In other domains, indices such as the Human Development Index

(HDI) and the Technology Achievement Index (TAI) measure impacts in their respective domains. Furthermore,

the UN WSIS declaration of December 2003 stated that the goal was to be able to track global

progress in the use of ICTs to achieve internationally agreed upon development goals, including

those of the Millennium Declaration. To achieve this goal, an appropriate measure could only be

one which measures the impacts of ICTs on development.

There is significant overlap in Stages 1, 2 and 3 of ICT diffusion; at any given time a country may need to measure ICT readiness, ICT intensity, ICT impacts or any combination of these. However, an index that is constructed primarily to measure ICT readiness or

ICT intensity should not be expected to measure ICT impacts. For example, the ICT readiness

index measures inputs to the system. However, an appropriate measure of ICT impacts ought to

be based on the output of the system—just the opposite end of the production spectrum. For

example, France has been slow to move away from legacy EDI systems in business-to-business


procurement, but it has a high number of computer users per capita (Brousseau and Chaves, 2004). This characteristic would be reflected in the measurement of ICT impacts, and not in ICT readiness or ICT

intensity measures. Another reason why the measurement of ICT readiness and ICT intensity

indices is unlikely to be helpful relative to the assessment of ICT impacts is that these

indices exclude the production aspects of ICTs in the economy. According to Wong (2001), in

1996 about 57% of global component production, 63% of consumer electronics production, 51%

of control and instrument electronics production, 48% of electronic data processing, and 32% of

telecommunications equipment was produced in Asia. Another example is India, for which the

contribution of software and services exports to overall "invisible receipts" (non-merchandise trade receipts)

was around 73% in 2003-2004. Such production-related facts are important elements of the

overall picture that characterizes ICT impacts.

Considering that there is no single-item index measure of ICT impacts and that the existing

measures cannot be used for this purpose, our conclusion is that the object of measurement for

the ICT Development Index should be to measure the impact of ICTs on the development of the

country. More importantly though, we recognize through this research that our quest for

effective measures of ICT impacts ought to focus on multi-dimensional measures, and not single-

item measures.

CONCLUSIONS

Measurement of the value of IT at the firm level is challenging in itself. Measuring the value of ICTs at the country level is even more demanding. As a result, a number of measurement approaches have been used for measuring ICTs at the country level. This article has provided an overview of the different approaches and has synthesized them on the basis of a broad four-dimensional framework. To the best of our knowledge, no prior work has attempted to achieve this broad


synthesis. We believe that our overview will aid in developing improved metrics in the future.

The measurement of ICT readiness, intensity and impacts at the macro-level of a country has

been neglected in IS research, even though it is very useful for the purposes of national and

international policy making. Most of the academic work in the area has been to measure

economic factors, and adoption and diffusion-related factors. Relatively little attention has been

given to single-item index measures for ICTs. As a result, unlike single-item index measures

developed in other fields, such as public health, financial accounting, and corporate finance,

single-item index measures for ICTs tend to be insufficiently grounded in theory. This provides

important opportunities for academic research.

Our present research points to two essential features that characterize ICT measurement at

the macro-level. First, the measurement needs appear to vary over time, depending on when

measurement is made and the location of the technology on the adoption and diffusion curve.

This suggests the need for different measures for different stages of ICT adoption. Second, at any

stage of ICT adoption, a complete representation of ICTs is possible on three inter-related

dimensions—economy, society and knowledge—and consequently any composite index measure

needs to account for all three dimensions.

Our review points out some gaps that exist related to single-item index measures for the

impacts of ICTs. While there are many measures for ICT readiness, measurement of ICT

impacts has been neglected. Considering that impact measurement is of prime interest to policy

makers, we stress the need to develop better measures for ICT impacts. Indeed, empirical

validation of existing measures, development of scientifically rigorous metrics for ICT-impacts

measurements, and estimation of the predictive value of these index measures in assessing growth

and development are some of the areas of research that our investigation has opened up.


REFERENCES

Banker, R.D., and Kauffman, R.J. 1988. “Strategic Contributions of Information Technology: An Empirical Study of ATM Networks.” In J. DeGross and M. Olson (Eds.), Proceedings of the Ninth International Conference on Information Systems, Minneapolis, MN, 141-150.

Bass, F.M. 1969. “A New Product Growth Model for Consumer Durables.” Management Science, 15(5), 215-227.

Berndt, E.R. and Malone, T.W. 1995. “Information Technology and the Productivity Paradox: Getting the Question Right.” Guest Editor’s Introduction to Special Issue, Economics of Innovation and New Technology, 3(3-4), 177-182.

Brousseau, E., and Chaves, B. 2004. “Diffusion and Impact of E-Commerce: The French.” Working Paper, Center for Research on Information Technology and Organizations, University of California, Irvine, CA. Available on Internet at crito.uci.edu/pubs/2004/franceGECIII.pdf. Last accessed on March 13, 2005.

Brynjolfsson, E., and Kahin, B. 2000. “Introduction.” In E. Brynjolfsson, and Kahin, B. (Ed.), Understanding the Digital Economy, MIT Press, Cambridge, MA.

Brynjolfsson, E., and Hitt, L. 1996. “Paradox Lost? Firm-Level Evidence on the Returns to Information Systems Spending.” Management Science, 42(4), 541-559.

Brynjolfsson, E. and Yang, S. 1996. “Information Technology and Productivity: A Review of Literature.” Advances in Computers, 43, Academic Press, New York, NY, 179-214.

Brynjolfsson, E., Malone, T.W., Gurbaxani, V., and Kambil, A. 1994. “Does Information Technology Lead to Smaller Firms?” Management Science, 40(12), 1628-1644.

Caselli, F. and Coleman, W. J. 2001. “Cross-Country Technology Diffusion: The Case of Computers.” National Bureau of Economic Research, Cambridge, MA.

Chinn, M.D., and Fairlie, R.W. 2004. “The Determinants of the Global Digital Divide: A Cross-Country Analysis of Computer and Internet Penetration.” Discussion Paper No. 881, Economic Growth Center, Yale University, New Haven, CT. Available on the Internet at papers.ssrn.com/sol3/papers.cfm? abstract_id=510182. Last accessed on March 13, 2005.

Churchman, C.W. 1959. “Why Measure?” Chapter 4 in C.W. Churchman and P. Ratoosh (Eds.), Measurement, Definitions and Theories, John Wiley and Sons, New York, NY.

Corrocher, N., and Ordanini, A. 2002. “Measuring the Digital Divide: A Framework for Analysis of Cross-Country Differences.” Journal of Information Technology, 17(1), 9-19.

Curley, K.F., and Pyburn, P.J. 1982. “Intellectual Technologies: The Key to Improving White-Collar Productivity.” Sloan Management Review, Fall 1982, 31-39.

Dasgupta, S., Lall, S., and Wheeler, S. 2001. “Policy Reform, Economic Growth, and the Digital Divide: An Econometric Analysis.” Working paper, Development Research Group, World Bank, Washington, DC.

Datta, S., and Jain, A. 2004. “The Networked Readiness Index 2003-2004: Overview and Analysis Framework”, Center for International Development, Harvard University, Cambridge, MA. Available on the Internet via www.weforum.org/pdf/Gcr/GITR_2003_2004/Framework_Chapter.pdf. Last accessed on March 17, 2005

Dewan, S., Ganley, D., and Kraemer, K.L. 2004. “A Country-Level Analysis of the Digital Divide:


Measurements and Determinants.” Journal of the Association of Information Systems, forthcoming. An earlier version of this paper was presented at “The Impact of the Digital Divide on Management and Policy: Determinants and Implications of Unequal Access to Information Technology,” 2005 MISRC/CRITO Symposium on the Digital Divide, Management Information Systems Research Center, Carlson School of Management, University of Minnesota, Minneapolis, MN August 28-29.

Dewan, S., and Kraemer, K.L. 1998. “International Dimensions of the Productivity Paradox.” Communications of the ACM, 41(8), 56-62.

Doms, M., Dunne, T., and Troske, K.R. 1997. “Workers, Wages, and Technology.” Quarterly Journal of Economics, 112(1), 253-290.

Economist Intelligence Unit. 2003. “The 2003 E-Readiness Rankings.” White Paper, The Economist Group in association with IBM. Available on the Internet at graphics.eiu.com/files/ad_pdfs/eReady_2003. pdf. Last accessed on March 13, 2005.

Economist Intelligence Unit. 2004. “The 2003 E-Learning Readiness Rankings.” EIU Executive Briefing, The Economist Group, February 23. Available on the Internet at eb.eiu.com/index.asp?layout= show_article&search_text=E+learning+readiness&article_id=296936829. Last accessed on March 13, 2005.

Edgeworth, F.Y. 1925. “The Plurality of Index Numbers.” Economic Journal, 35, 379-388.

Grigorovici, D.M., Schement, J.R., and Taylor, R.D. 2002. “Weighing the Intangible: Towards a Framework for Information Society Indices.” Working Paper, E-Business Research Center Working Paper, Pennsylvania State University, University Park, PA. Available on the Internet via www.smeal.psu.edu/ebrc/publications/res_papers/. Last accessed on March 13, 2005.

Gruber, H., and Verbove, F. 2001. “The Diffusion of Mobile Telecommunications Services in the European Union.” European Economic Review, 45(3), 577-589.

Gurbuxani, V., and Whang, S. 1991. “The Impact of Information Systems on Organizations and Markets.” Communications of the ACM, 34(1), 59-73.

Internet Systems Consortium (ISC). 2004. “Internet Domain Survey.” Technical Report (CD or DVD), Redwood City, CA. Available on the Internet at www.isc.org/index.pl?/ops/ds/. Last accessed on March 13, 2005.

International Telecommunications Union (ITU). 2004. World Telecommunication Indicators Database, 8th Edition. Geneva, Switzerland. Available on the Internet via ITU Electronic Bookshop at www.itu.int/ITU-D/ict/publications/world/world.html. Last accessed on March 13, 2005.

International Telecommunication Union (ITU). 2003. World Telecommunication Development Report 2003: Access Indicators for the Information Society, 7th Edition. Geneva, Switzerland. Available on the Internet at www.itu.int/ITU-D/ict/publications/wtdr_03/. Last accessed on March 13, 2005.

Jin, J., and Chengyu, X. 2002. “The Digital Divide in Terms of National Informatization Quotient: The Perspective of Mainland China.” In Proceedings of the International Conference on Digital Divides: Technology and Politics in the Information Age,” Hong Kong Baptist University, Hong Kong, SAR, China.

Katz, L.F. 2000. “Technological Change, Computerization and the Wage Structure.” Introduction to E. Brynjolfsson, and B. Kahin (Eds.), Understanding the Digital Economy, MIT Press, Cambridge, MA.

Katz, L.F., and Krueger, A.B. 1999. “The High Pressure U.S. Labor Market of the 1990s.” Working Paper No. 416, Industrial Relation Section, Princeton University, Princeton, NJ. Available on the Internet at www.irs.princeton.edu/pubs/pdfs/416.pdf. Last accessed on March 13, 2005.

Kauffman, R.J., and Techatassanasoontorn, A.A. 2005a. “International Diffusion of M-Commerce: A


Coupled-Hazard State-Based Approach.” Information Technology and Management (forthcoming). Also appeared in the Proceedings of the Fall 2002 INFORMS Conference on Information Systems and Technology, San Jose, CA, November 2002.

Kauffman, R.J., and Techatassanasoontorn, A.A. 2005b. “Does One Standard Promote Faster Growth? An Econometric Analysis of the International Diffusion of Wireless Technology.” Working paper, Carlson School of Management, University of Minnesota, Minneapolis, MN.

Kirkman, G. S., Osorio, C. A., and Sachs, J. D. 2002. “The Networked Readiness Index: Measuring the Preparedness of Nations for the Networked World.” Chapter 2 in Information Technologies Group (Eds.), Readiness for a Networked World: A Guide for Developing Countries, Center for International Development, Harvard University, Cambridge, MA.

Kwon, T.H., and Zmud, R.W. 1987. “Unifying the Fragmented Models of Information Systems Implementation.” In R.J. Boland Jr., and R.A. Hirschheim (Eds.), Critical Issues in Information System Research, John Wiley and Sons, New York, NY, 227-251.

Lehr, W., and Lichtenberg, F.R. 1998. “Computer Use and Productivity Growth in U.S. Federal Government Agencies, 1987-92.” Journal of Industrial Economics 46(2), 257-279.

Loveman, G.W. 1994. “An Assessment of the Productivity Impact of Information Technologies.” In T.J. Allen and M.S. Scott Morton (Eds), Information Technology and the Corporations of the 1990s: Research Studies, Oxford University Press, New York, NY, 84-110.

Mahmood, M. A., Kohli, R., and Devaraj, S. 2004. “Measuring the Business Value of Information Technology in E-Business Environments,” Journal of Management Information Systems, 21(1), 11-16.

Massachusetts Technology Collaborative. 2003. “Executive Index of the Massachusetts Innovation Economy,” Westborough, MA. Available on the Internet at 64.233.167.104/ search?q=cache: fQiIBz61YPUJ:www.cityofboston.gov/bra/gbtf/documents/MTC%2520IndexInnovEconomy2003.pdf+%E2%80%9CExecutive+Index+of+the+Massachusetts+Innovation+Economy%E2%80%9D&hl=en, and via the website of the Massachusetts Technology Collaborative, www.masstech.org. Last accessed on March 13, 2005.

Menon, N. 1999. The Impact of Information Technology: Evidence from the Healthcare Industry, Garland Publishing, New York, NY.

Mesenbourg, T.L. 2001. “Measuring Electronic Business: Definitions, Concepts and Underlying Measurement Plans,” United States Census Bureau, Washington, DC. Available on the Internet at www.census.gov/epcd/www/ebusines.htm. Last accessed on March 13, 2005.

Moulton, R.M. 2000. “GDP and the Digital Economy: Keeping up with the Changes.” In E. Brynjolfsson, and B. Kahin (Eds.), Understanding the Digital Economy, MIT Press, Cambridge, MA.

National Telecommunications and Information Administration (NTIA). 2002. “A Nation Online,” U.S. Department of Commerce, Economics and Statistics Administration, National Telecommunication and Information Administration, Washington, D.C. Available on the Internet via www.ntia.doc.gov/ntiahome/ dn/html/anationonline2.htm. Last accessed on March 13, 2005

Norris, P. 2000. “The Worldwide Digital Divide: Information Poverty, the Internet and Development,” Working paper, John F. Kennedy School of Government, Harvard University, Cambridge, MA. Also presented at the 2000 Annual Meeting of the Political Studies Association of the United Kingdom, School of Economics and Political Science, London, UK, April 10, 2000. Available on the Internet at 64.233.167.104/search?q=cache:hjstjZ4YPSAJ:ksghome.harvard.edu/~.pnorris.shorenstein.ksg/acrobat/psa2000dig.pdf+%E2%80%9CThe+Worldwide+Digital+Divide:+Information+Poverty,+the+Internet+and+Development%22&hl=en. Last accessed on March 13, 2005.

Organization for Economic Cooperation and Development (OECD). 2003. “OECD Communications


Outlook, 2003 Edition.” White Paper, OECD Code 932003021P1, Directorate for Science, Technology and Industry, OECD Publications, Paris, France. Available on the Internet via www.oecd.org/document/ 32/0,2340,en_2649_34223_2514080_1_1_1_1,00.html. Last accessed on March 13, 2005.

Organization for Economic Cooperation Development (OECD). 2002. “Measuring the Information Economy.” White Paper, OECD Publications, Paris, France. Available on the Internet at www.oecd.org/ dataecd/16/14/1835738.pdf. Last accessed on March 13, 2005.

Oxley, J.E., and B. Yeung. 2001. “E-Commerce Readiness: Institutional Environment and International Competitiveness.” Journal of International Business Studies, 32(4), 705-723.

Pohjola, M. 2003. “The Adoption and Diffusion of ICT Across Countries: Patterns and Determinants.” Chapter 4 in D. C. Jones (Ed.), The New Economy Handbook, Elsevier Science and Technology Books, Amsterdam, Netherlands.

Press, L. 2000. “The State of the Internet: Growth and Gaps.” Paper presented at the The Internet Global Summit: Global Distributed Intelligence for Everyone, 10th Annual Internet Society Conference, Yokohama, Japan, July 18-21. Available on the Internet at www.isoc.org/inet2000/ cdproceedings/8e/8e_4.htm. Last accessed on March 13, 2005.

Quibria, M.G., Ahmed, S.N., Tschang, T., and Reyes-Macasaquit, M. 2002. “Digital Divide: Determinants and Policies with Special Reference to Asia.” ERD Working Paper Series No. 27, Asian Development Bank, Manila, Philippines.

Ragupathi, W, and Tan, J. 2002. “Strategic IT Applications in Health Care,” Communications of the ACM, 45(2). 56-62.

Rao, M. , March 15, 2004. “The Impacts of ICTs in Microfinance.” EM Wires. Available on the Internet at www.electronicmarkets.org/files/cms/67.php. Last accessed on March 17, 2005.

Robinson, K.K., and Crenshaw, E.M. 2002. “Cyber-Space and Post-Industrial Transformations: A Cross-National Analysis of Internet Development.” Social Science Research, 31(3). 334-363.

Rogers, E.M. 1985. Diffusion of Innovations, Free Press, New York, NY.

Sciadas, G. (Ed.) 2003. “Monitoring the Digital Divide … and Beyond.” Technical Report, Orbicom International Secretariat, Montreal, Canada. Available on the Internet at www.orbicom.uqam.ca/ projects/ddi2002/2003_dd_pdf_en.pdf. Last accessed on March 13, 2005.

Selhofer, H., and Husing, T. 2002. “The Digital Divide Index: A Measure of Social Inequalities in the Adoption of ICT.” Working paper, Statistical Indicators Benchmarking the Information Society (SIBIS), Information Society Programme, European Commission. Available on the Internet at www.empirica.biz/ sibis/files/Huesing_Selhofer_DDIX_2002.pdf. Last accessed on March 13, 2005.

Srinivasan, K., Kekre, S., and Mukhopadhyay, T. 1994. “Impact of Electronic Data Interchange Technology On Jit Shipments.” Management Science, 40(10), 1291-1304.

Statistical Office of the European Communities (EUROSTAT). 2000. “SINE: Statistical Indicators for the New Economy.” Available on the Internet via epp.eurostat.cec.eu.int/portal/page?_pageid= 1090,1137397&_dad=portal&_schema=PORTAL. Last accessed on March 13, 2005.

Techatassanasoontorn, A. 2006. “The State-Based and Contagion Theories of Technology Diffusion.” Unpublished doctoral dissertation, Carlson School of Management, University of Minnesota, Minneapolis, MN.

Tornatzky, L.G., and Fleischer, M. 1990. The Processes of Technological Innovation, Lexington Books, Lexington, MA.

Trochim, W.M. 2001. Research Methods Knowledge Base, 2nd Edition. Atomic Dog Publishing, Cincinnati, OH.

United Nations Conference on Trade and Development (UNCTAD) 2003. “Information and Communication Technology Development Indices.” United Nations Conference on Trade and Development Report, UNCTAD/ITE/IPC/2003/1. Available on the Internet at www.unctad.org/ en/docs/iteipc20031_en.pdf. Last accessed on March 13, 2005.

United Nations Conference on Trade and Development (UNCTAD). 2004. “UNCTAD XI Multi-Stakeholder Partnerships.” Note, UNCTAD Secretariat, TD/400, May 21. Available on the Internet at www.unctad.org/en/docs/td400_en.pdf. Last accessed on March 13, 2005.

United Nations Conference on Trade and Development (UNCTAD). 2001. Electronic Commerce and Development Report 2001. United Nations, New York, NY and Geneva, Switzerland. Available on the Internet at r0.unctad.org/ecommerce/docs/edr01_en/edr01_en.pdf. Last accessed on March 13, 2005.

United Nations Development Programme (UNDP). 2004. Human Development Report: Cultural Liberty in Today’s Diverse World. S. Fukuda-Parr (Ed.), Human Development Research Office. Available on the Internet at hdr.undp.org/reports/global/2004/. Last accessed on March 13, 2005. Also published by Oxford University Press, New York, NY.

United Nations Development Programme (UNDP). 2001. Human Development Report: Making New Technologies Work for Human Development. S. Fukuda-Parr (Ed.), Human Development Research Office. Available on the Internet at hdr.undp.org/reports/global/001/en/. Last accessed on March 13, 2005. Also published by Oxford University Press, New York, NY.

Wallsten, S. 2003. “Regulation and Internet Use in Developing Countries.” World Bank Policy Research Working Paper No. 2979, World Bank, Washington, DC. Available on the Internet at papers.ssrn.com/sol3/papers.cfm?abstract_id=366100. Last accessed on March 13, 2005.

Welch, W.H. 1999. “The Information Society Index: Emerging Virtual Have and Have Not Countries.” Presentation Slides, Stanford University, December 1. Available on the Internet at www.stanford.edu/class/las194/WebPages99/WWelchPres/. Also available via the World Paper, www.worldpaper.com. Last accessed on March 13, 2005.

Wong, P. 2001. “ICT Production and Diffusion in Asia.” Discussion Paper No. 2001/8, World Institute for Development Economics Research, United Nations University, Tokyo, Japan.

Wolcott, P., Press, L., McHenry, W., Goodman, S.E., and Foster, W. 2001. “A Framework for Assessing the Global Diffusion of the Internet.” Journal of the Association of Information Systems, 2(6). Available on the Internet at mosaic.unomaha.edu/2001_GDI_Framework.htm. Last accessed on March 13, 2005.

World Bank. 2005. “Ten Things You Should Know about CDF.” Available on the Internet at web.worldbank.org/WBSITE/EXTERNAL/PROJECTS/STRATEGIES/CDF/0,,contentMDK:20072662~menuPK:60746~pagePK:139301~piPK:139306~theSitePK:140576,00.html. Last accessed on March 17, 2005.

World Bank. 2004a. World Bank Development Indicators 2004. Annual Report, Washington DC. Available on the Internet at publications.worldbank.org/ecommerce/catalog/product?item_id=631625. Last accessed on March 13, 2005. Also WDI Online Individual Subscription at publications.worldbank. org/ecommerce/catalog/product-detail?product_id=631625&.

World Bank. 2004b. “Knowledge Assessment Methodology (KAM) Home Page.” Washington, DC. Available on the Internet at info.worldbank.org/etools/kam2005. Last accessed on March 13, 2005.

World Bank. 2003. “ICT and MDGs: A World Bank Group Perspective,” World Bank Global ICT Department, Washington, DC, December. Available on the Internet at info.worldbank.org/ict/assets/ docs/mdf_Complete.pdf. Last accessed on March 17, 2005.


World Summit on Information Society (WSIS). 2003. “Plan of Action.” Document WSIS-03/GENEVA/DOC/5-E, International Telecommunications Union (ITU) Secretariat, Geneva, Switzerland. Available on the Internet at www.itu.int/dms_pub/itu-s/03/wsis/doc/S03-WSIS-DOC-005!!PDF-E.pdf. Last accessed on March 13, 2005.

Zhu, K., and Kraemer, K.L. 2002. "E-Commerce Metrics for Net-Enhanced Organizations: Assessing the Value of E-Commerce to Firm Performance in the Manufacturing Sector." Information Systems Research, 13(3), 275-295.

Zhu, K., and Kraemer, K.L. 2005. "Post-Adoption Variations in Usage and Value of E-Business by Organizations: Cross-Country Evidence from the Retail Industry." Information Systems Research, 16(1), forthcoming.

