
Methodological Issues in Experimental IS Research: Experiences and Recommendations

By: Sirkka L. Jarvenpaa, Gary W. Dickson, and Gerardine DeSanctis
Department of Management Sciences
University of Minnesota
Minneapolis, Minnesota

Abstract

Within the last ten years the use of experimental methodology in information systems (IS) research has substantially increased. However, despite our experience with experimentation, studies continue to suffer from methodological problems. These problems have led to an accumulation of conflicting results in several areas of IS research. Moreover, future research studies will keep producing contradictory results unless researchers begin to answer questions of task and measurement validity before reporting their experimental findings. This article discusses common methodological problems in experimental IS studies and, through a description of a series of graphics experiments at the University of Minnesota, illustrates the particularly acute problem of low internal validity. Suggestions are offered to experimental IS researchers on how some of these common problems can be alleviated or even avoided, particularly in studies on the use of managerial graphics.

Keywords: MIS research, research methodology, computer graphics

ACM Categories: A.0, H.1.0, H.4.2, I.3.0.

Introduction

Experimental research has become one of the most popular forms of information systems (IS) research. The popularity and viability of an experimental approach was undoubtedly given impetus by the "Minnesota Experiments" [14], which were performed throughout the early 1970s. The researchers at Minnesota found the experimental methodology to be useful in investigating cause and effect relationships between attributes of the decision maker, the nature of the decision environment, the characteristics of an information system, and decision performance. Since the mid-seventies an increasing number of researchers have adopted the experimental approach in accumulating knowledge on the design, development, and management of information systems. Despite this surge of interest in experimentation during the past 10 years, the research suffers from methodological weaknesses, particularly from the problems of reliability and internal validity. Reliability refers to errors in measurement, whereas internal validity deals with improper manipulation of experimental treatments. The existence of these problems has led to conflicts in reported results [e.g., see 13, 22, 25].

To illustrate the situation, consider one area of IS research that is struggling due to conflicting results from experimental studies — the study of information presentation formats. Several researchers have generated results that suggest that graphs are no better than tables in presenting information [19, 26, 27, 34, 36, 51]. On the other hand, several experiments have provided evidence that graphics are not only preferred by managers but also lead to better decision performance in some situations [2, 18, 38, 53, 54]. As a further illustration, three studies using interpretation accuracy as a dependent variable were in disagreement. Feliciano, et al. [17] concluded that graphs are easier to interpret than tables. Lusk and Kersnick [28] found the reverse, and later Tullis [48] demonstrated that there was no difference in interpretation accuracy between graphs and tables.

What is happening here? It is the authors' contention that there are a variety of factors leading to this confusing situation. One important problem is that a variety of tasks are used across experimental studies, and it very likely is inappropriate to compare results in one task environment with those in another. A second problem is that the quality of information presentation across different experiments varies greatly. One study, for example, may use high quality graphics whereas another employs graphics of questionable quality. Differences in subject characteristics, experimental conditions, settings, and design likewise make conclusions about the performance of various presentation media difficult to draw. However, the main reason for inconsistent results, as argued in this article, is that the results being reported are based upon experiments in which internal validity is questionable. In other words, the experimenter has not carefully insured that the outcomes being observed are truly a function of experimental manipulation of the independent variables, or that the experiments employed adequately and accurately measure what is intended. Seldom do researchers present the reader with any discussion of how the task and measurement devices were tested.

Problems in IS Experimental Research

Taylor and Benbasat [47] describe a series of methodological problems plaguing one area of experimentally based IS research — the role of cognitive style in decision making. These problems, which generally apply to much IS experimental research, are: (1) lack of theories for guiding research efforts, (2) proliferation of measuring instruments, many of which lack reliability and validity, and (3) inappropriate research designs. In addition to these we add another problem, (4) inconsistency in the task that serves as the basis for an experiment. Since the resolution of all four of these problems is a prerequisite for advances in scientific endeavor, we will explore the nature of each problem and introduce some potential remedies. Special attention will be given to the last problem. Some of the discussion is unique to research in managerial graphics, but much of it is generally applicable to experimental IS research.

This article is directed toward information systems researchers in general, but may be of particular interest to those contemplating experimental research in the study of managerial graphics. We will use the latter area, one in which we are initiating a program of research, to illustrate our position. The article will use three experiments as the basis for arguments concerning the methodological issues, particularly those involving internal validity. Under normal circumstances the reader might expect to see the results of such experiments as the basis for anywhere from one to three papers. We are not choosing this approach since we question the validity of the results we would be reporting, and it is our feeling that this would simply add to the confusing situation now present in the literature. We begin by discussing the common problems of experimental validity, highlighting the problems of task design and measurement. We then describe the process we are employing to address issues of internal validity in experimental studies of managerial graphics. The article concludes by providing some cautions and guidelines for experimental IS researchers.

Lack of underlying theory

As is true in other types of experimental IS research, computer graphics research lacks any adequately developed theoretical basis. The absence of theory contributes to the current state of inconclusive results in the literature because researchers lack a common ground for developing experimental hypotheses and interpreting results. The current trend in graphics research has been to perform one-shot, ad hoc studies without any significant effort to build upon the work of others and achieve a state of relatedness among studies. It is the authors' opinion that only through streams of directed research can investigators achieve an acceptable level of understanding of phenomena and ultimately formulate an underlying theory. The first step toward directed or "programmatic" research is the building of a framework that defines the boundary for research to be conducted. As Jenkins states, "...we need programs of research in MIS — programs of research based on a framework or frameworks, that provide a useful structure and focus on manageable subsets of the MIS field" [25, p. xi].


Jenkins further points out that research under directed programs should be (1) purposeful — not random, (2) cumulative and self-correcting — studies should build on the previous work, and (3) replicable — assumptions, tools, and procedures are clearly and explicitly communicated.

Proliferation of measuring instruments

Another factor that has contributed to weak and inconclusive results is the use of a great number of differing measuring instruments, many of which may have problems with reliability and validity. Research on presentation modes certainly suffers from this condition. For example, the research dealing with memory for information has used both recall (e.g., [31, 51]) and recognition measures (e.g., [20]). As a further illustration, "interpretation accuracy" has been measured by the accuracy with which data values displayed in different formats can be estimated [11, 12, 35], by the magnitude of errors found in interpretation tasks [5], by the percentage of correct responses to questions ranging from simple retrieval to complex decision making [48], by observing a trend in the presented data [42], and by comparing points or reading single points [43]. The use of different measures, even on the same construct variables, inevitably causes incomparable results and, therefore, leads to research labelled "conflicting."

Furthermore, the literature on presentation modes does not usually indicate whether instruments have been tested for adequate reliability and validity. In order to accumulate knowledge based on sound scientific inquiries, the relevancy or validity of any instrument must be assured before relationships between measures of independent and dependent variables can be assessed. However, it must be acknowledged that the development of relevant and valid measurements is a very difficult process. To illustrate this point consider the measurement of a simple construct such as forecast accuracy in a situation in which subjects are asked to forecast three periods beyond a set of historical data. How is accuracy to be measured? By the average total forecast error? By the average of the absolute value of the forecast error? By the mean square error? Should any, a subset, or all of these measures be applied to each period forecast, all periods, or to only the first period (accuracy of forecasting the first period influences subsequent forecasts)? How does one compare a forecast based upon tabular data (which is precise) with graphical data (which, because of plotting inaccuracy, may be approximate)? Does one measure of forecast error do a better job in terms of "getting close" to the actual value? If so, what defines close? The problem is that this illustration is one of the simplest that we have confronted. Many of the measurement situations encountered can be significantly more complex and confusing.
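To make the choice concrete, the following sketch computes the three candidate error measures named above for a single hypothetical three-period forecast; all data values are invented for illustration only.

```python
# Three competing accuracy measures for the same hypothetical forecast.
actual   = [120.0, 135.0, 150.0]   # observed values (hypothetical)
forecast = [118.0, 140.0, 144.0]   # a subject's forecast (hypothetical)

errors = [f - a for f, a in zip(forecast, actual)]

mean_error     = sum(errors) / len(errors)                  # average total error
mean_abs_error = sum(abs(e) for e in errors) / len(errors)  # mean absolute error
mean_sq_error  = sum(e * e for e in errors) / len(errors)   # mean square error

print(mean_error, mean_abs_error, mean_sq_error)
# -1.0, 4.33..., 21.67...: the signed average nearly cancels out, while the
# squared measure heavily penalizes the large period-3 miss, so the three
# measures can rank two forecasters differently.
```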

Again, it is the authors' opinion that only through a program of research which is based upon standard tests can investigators hope for comparable studies. A set of measuring instruments, applicable and easily adaptable to a large number of experiments, must be developed. The construction of a set of good measuring instruments is a learning process which is lengthy, costly, and realistically feasible only when costs can be spread over several studies. Moreover, a research program provides a setting where the testing for reliability and validity of measuring instruments is almost a natural by-product of the studies themselves.

Inappropriate research designs

Research designs are considered inappropriate when (1) they do not address an important problem in the field, or (2) they are lacking in various forms of experimental control. In the area of graphics and information presentation some studies have restricted their measurements to constructs that have no direct link to decision maker productivity (e.g., user preference). Such inquiries do little to address the central issue of how to more effectively use computer graphics in IS decision making. Beyond the problem of irrelevant dependent variables, many experiments have been highly simplistic and included only one kind of independent variable. Studies which have examined two or more variables and their interactions are almost nonexistent. For example, at least 10 studies have compared bar charts with circle diagrams, and another 10 have compared monochrome with color visuals. No study, however, has examined both graph type and color within the same experiment.

The second problem with research designs has been the lack of control. In particular, the control of variables which affect decision making, other than the presentation format, has been rudimentary in many experimental studies. Such studies raise questions about whether findings are less a function of presentation format than of other factors such as the situation, task content, personal background, etc. The condition of the task overwhelming the manipulation of presentation variables will be aptly demonstrated in the discussion of our series of experiments.

Two issues related to the lack of experimental control are: (1) non-equivalency of stimulus materials, and (2) subjects' greater familiarity with one presentation medium over another. Quality as well as content differences in the tabular and graphical presentation material probably have a significant impact on performance. The greater familiarity of subjects with tabular reports as contrasted to graphical reports may give the former a natural advantage which, in turn, may be moderated by the task [19].

To alleviate the research design problems addressed above, we suggest the following:

— Researchers should move away from univariate designs toward multivariate designs.

— Decision maker productivity, as opposed to viewer preference, needs further study as a dependent variable.

— To avoid confounding results, researchers should try to measure or control factors that are known from previous research to influence decision performance.

— Researchers should take proper actions to verify that the quality of their graphs is close to the quality of their tables or, as Ives [24] suggests, to at least make sure that both represent the highest quality possible.

— Researchers should clearly document the criteria used in generating the stimulus material, as well as any tradeoffs they have made that may have caused substantial content differences between presentation media.

— Training and learning effects of different media should be either manipulated or controlled so as to avoid unwanted familiarity biases with tabular versus graphical presentation.

Diversity of experimental tasks

The experimental task refers to the activity in which a subject is asked to participate in the course of an experiment. Task pertains not only to what the subject actually does, but also to the context, or surrounding environment, in which the activity occurs. Graphics researchers have performed their studies in a multitude of task environments ranging from the employment of graphics in tracking military flight paths [21] to the use of graphics in aiding trust investment managers in day-to-day decisions [18]. The use of diverse and often unrelated and incomparable task situations makes the integration of findings across studies difficult because subjects' performance may be more a consequence of the task environment than of the use of graphics. In fact, there is wide agreement that the characteristics of the task in which the subject is involved are a prime determinant of human decision making [16, 32, 41].

Several experiments have provided confirming evidence that the effectiveness of the display format is highly dependent on, or sensitive to, the task at hand [1, 5, 7, 42, 43, 50, 54]. Thus, the type of presentation mode may have a relatively small effect on decision making performance compared to the task or task context. If this is true, then the results from many studies may be interpreted solely as a function of the task. Future research efforts will keep producing contradictory results unless researchers develop some type of taxonomy of tasks and start interpreting the results within the taxonomy. The development of a generally applicable taxonomy of tasks is a major research endeavor which can be achieved only through a long-term research program. A stream of studies is needed which considers characteristics of tasks such as complexity, task content, task difficulty, and task attributes (e.g., interpretation accuracy, trend spotting).


In our research we have confronted a related problem which is insufficient subject understanding of the experimental task. This problem leads to situations in which investigators are not able to determine what subjects are actually responding to in the experimental setting. Although this problem has been mentioned with regard to experimental IS studies using games, in which the decision environment can become highly complex (e.g., see [39]), it has not received the attention it deserves. It is our contention that much of the experimental IS research is plagued by this difficulty. What this means is that many results (or alternatively, non-results) are presented that are nothing more than the generation of random error, or "artifacts" of an experimental exercise. In these cases, the setting or task used for the research is not internally valid. This is a subject which deserves further elaboration.

On the issue of internal validity

Internal validity is in contrast only to external validity and not construct, content, or face validity. Cook and Campbell define internal validity as "the approximate validity with which we infer that a relationship between two variables is causal or that the absence of a relationship implies the absence of cause" [8, p. 37]. Thus, the lack of internal validity results in the inability to make any statements about cause and effect relationships and thereby invalidates the experiment.

In order to avoid internal validity problems, researchers should examine the cognitive processes used by some of the subjects in performing an experimental task. Task analysis involves separating task performance into its components so that subjects' psychological processes are revealed as they perform a task. Process tracing and debriefing of subjects are methods used to conduct task analysis. These procedures provide the investigator with vital information about the complexity and difficulty of a task and facilitate discovery of the methods and reasoning subjects use in a task. Unless a researcher is certain as to what subjects are responding to in the task, the findings may be accidents, or may be due to factors other than the experimental manipulation, e.g., graphics.

Summary

We have argued for research on the managerial use of graphics that: (1) is programmatic and framework-based, (2) uses meaningful and tested measuring instruments, (3) uses appropriate research designs, (4) is based upon fully understood tasks, and (5) is internally valid. We will now set out to demonstrate how much effort is involved in attempting to achieve these objectives. We want to emphasize a disclaimer at this point — we have only begun to achieve these optimistic goals, and the work we will describe is only a step in what we believe is the right direction.

The Experimental Program

Recently a research group at the University of Minnesota began to study the managerial use of computer graphics. This group decided to follow a comprehensive programmatic approach in order to correct some of the problems from which other experimental IS research has suffered. For example, the Minnesota Experiments were framework-based but exhibited many of the other problems identified in the previous section. What follows is a description of a series of experiments which, over their duration, have illustrated how one ought to do this type of research rather than generated results showing how to effectively use managerial graphics. In particular, issues of internal validity and measurement have been stressed.

The research group, supported in part by a grant from the Society for Information Management (SIM), was interested in the relationships between graphical decision aids, task complexity, and decision making performance. Our first step was to choose an appropriate task environment and, within that environment, to define three levels of complexity.

Task development and experimental design

The issue of defining complexity turned out to be, in itself, a complex activity. Simon's conceptualization of complexity [44, 45] was adopted.


Simon identified two main contributors to complexity: (1) the number of elements in a system and (2) the degree and nature of the interactions among elements. For this experiment we selected only one of the factors (the number of elements in a system) to be manipulated to reflect levels of task complexity. We operationalized this construct in the task by manipulating the number of variables on which subjects received information from one experimental group to another.

The next step was to construct a case that would provide a task setting. First, an industry was selected based upon the criterion that products, operations, and markets of firms in that industry should be easy for subjects to understand. The business forms industry met this criterion. We focused on marketing operations since this area would be likely to benefit from the usage of graphics. Successful marketing management depends on identifying and examining trends and relationships of different marketing variables, and graphics are claimed to present trend and relationship information more effectively than other methods (e.g., [29, 30, 46]). To get a general setting for a case, the marketing operations of several business forms firms were studied. Other sources that provided vital information for the case were trade publications, annual and stock reports on business forms firms, and personal correspondence with a national trade association. The gathering of industry data aided in developing a realistic case. In addition, the case was written following the guidelines proposed for good case writing by Bennett and Chakravarthy [3] and Reynolds [40].

The final case was three pages long and it described the industry, the company's current state, its products, and markets. The subject was to play the role of a consultant asked to help the CEO find the reason for falling profits at a time of increasing sales. The cause of the problem (unknown to the subjects) was the incorrect allocation of the salesforce among the firm's three markets. To be successful in the task, the subject was to determine that the salesforce was spending most of its time calling on customers in a market area with high sales revenues but severely declining profits; sales efforts should have been concentrated on a market that was smaller in terms of sales revenue but was highly profitable. We created several distractors that, we assumed, successful subjects would be able to reject as the problem after careful analysis. The distractors for each market area included such things as product pricing relative to the competition and advertising expenditures. Overall business conditions is an example of a distractor that applied to all three market areas. Within this task environment, three different situations were defined that varied by complexity level. The number of distractors determined the degree of task complexity (low = 3, medium = 6, high = 9).
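As one concrete reading of this manipulation, the sketch below parameterizes the three task versions purely by distractor count. The distractor names follow the examples given in the text; everything else (the identifiers, the truncated pool) is hypothetical.

```python
# Hypothetical parameterization of the three task versions: complexity is
# operationalized solely as the number of distractors shown to the subject.
TRUE_CAUSE = "salesforce misallocated across the firm's three markets"

DISTRACTOR_POOL = [
    "product pricing relative to the competition",  # per-market distractor
    "advertising expenditures",                     # per-market distractor
    "overall business conditions",                  # applies to all markets
    # the medium and high versions would draw six more from a larger pool
]

COMPLEXITY = {"low": 3, "medium": 6, "high": 9}  # distractors per version

def build_task(level: str) -> dict:
    """Assemble one task version with the required number of distractors."""
    n = COMPLEXITY[level]
    return {"true_cause": TRUE_CAUSE, "distractors": DISTRACTOR_POOL[:n]}

print(build_task("low"))
```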

The data for salesforce effort and distractors were presented in simple bar charts and grouped bar charts. The graphs were generated by a researcher who had no prior training in the techniques of presenting data in a graphical form. To help overcome this lack of experience, a "user-friendly" mainframe graphics software package was used for graphics generation. The graphs were carefully constructed to attempt to make the content and quality of different graph treatments as equivalent as possible. Nevertheless, problems occurred; for example, the grouped bar charts had somewhat "fuzzy" labels.

In addition to the graphs, questionnaires were constructed to gather information on (1) the backgrounds of subjects, (2) motivation of subjects, (3) subjects' satisfaction with the graphs, (4) the perceived complexity of the problem solving task, and (5) subjects' interpretation accuracy in reading the graphs.

The purpose of the first two measuring instruments, the background and post-experiment questionnaires, was to control for the subject's marketing experience, managerial experience, educational level, previous familiarity with graphical tools, and motivation level in performing the experimental tasks. Lack of control of these variables has contributed to "no effect" results in several earlier studies reported in the literature (e.g., [26]).

Subjects' satisfaction with the graphical aids was appraised because earlier researchers had found the success of an MIS system to be correlated with user satisfaction (e.g., [37]). We tried to measure whether differences in satisfaction existed when different graph formats were employed under varying task complexity levels. The "MIS Satisfaction Questionnaire" by Jenkins [25] was modified for the experiment. The subjects were asked to rate their overall satisfaction with the graphs, as well as the readability and usefulness of the graphs, on seven-point scales. Jenkins' questionnaire was originally developed according to Nunnally's recommendations [33], and since the instrument had been validated before in an experimental setting, further validation was not considered necessary.

The fourth instrument was constructed to validate that the experimental tasks did in fact have different levels of complexity. Subjects were asked to rate the complexity of the problem finding task on a ten-point scale in comparison to a "base task," where the complexity of the base task was 5. The base task was a simple forecasting task that subjects undertook after the problem solving task.

The fifth instrument, an interpretation accuracy test, was developed to measure how well subjects could identify patterns, relationships, and exact data points from graphs. Its purpose was to appraise how accurately data was understood by subjects when displayed in a graphical form. Interpretation accuracy is a prerequisite to correct problem comprehension and improved decision quality. To assess the construct validity of this test, a sixth instrument, the Spatial Relations section of the Differential Aptitude Tests (one of the most widely used and validated spatial relations tests), was given to each subject. We hypothesized that subjects whose performance was superior in the spatial aptitude test would also perform better than average on the interpretation accuracy test.
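The paper does not say how this convergent-validity check was computed; a simple Pearson correlation between the two score sets is one plausible reading. A minimal sketch with invented scores:

```python
# Correlate interpretation accuracy quiz scores with DAT spatial relations
# scores as a convergent-validity check. All scores below are invented.
from scipy.stats import pearsonr

interpretation_scores = [14, 18, 11, 20, 16, 13, 19, 15]  # quiz (hypothetical)
spatial_scores        = [31, 42, 25, 47, 36, 28, 45, 33]  # DAT (hypothetical)

r, p = pearsonr(interpretation_scores, spatial_scores)
print(f"r = {r:.2f}, p = {p:.3f}")
# A strong positive r would support the validity of the quiz; the weak
# relationship the authors actually observed led them to revise it.
```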

Preliminary testing

The experiment was pretested on two doctoral students, a faculty member, and an MIS practitioner. The subjects completed all the tasks in the experiment and also underwent a debriefing session in which they were asked to give a rationale for their answers in the problem finding task. The results were encouraging because three of the four subjects identified the correct rationale for finding the problem. The results of preliminary testing led us to believe that we had developed a good experimental task.

Experiment I

Design of Experiment I

The purpose of this experiment was to determine if the effectiveness in the use of graphical output depends upon the complexity of the task being performed by the decision maker. The study had two independent variables: (1) display format, and (2) task complexity. The display formats were simple bar charts and grouped bar charts. There were three levels of task complexity — low, medium, and high. There were four dependent variables: (1) decision performance, (2) interpretation accuracy, (3) self-reports of satisfaction with the displays, and (4) decision confidence. The experimental hypotheses were as follows:

1. There will be differences in decision performance at varying task complexity levels.

2. There will be differences in decision performance when different graph formats are employed.

3. There will be differences in interpretation accuracy scores when different graph formats are employed.

4. There will be differences in satisfaction ratings when different graph formats are employed.

5. There will be differences in satisfaction ratings under varying task complexity levels.

6. There will be differences in decision confidence over varying levels of task complexity.

7. There will be differences in decision confidence when different graph formats are employed.

8. Subjects using grouped bar charts in the high complexity task will perform better in a problem solving task than subjects using simple bar charts.

9. Subjects using simple bar charts in the low complexity task will make better quality decisions than subjects using grouped bar charts.

The subjects were 63 graduate students at the University of Minnesota, 43 of whom were in the MBA program with an MIS concentration. On the average, the students were 28 years old, had 41 months of full time, business-related work experience, and had successfully completed at least one marketing course.

The subjects participated on a voluntary basis and received course credit of 5% of the final grade in the course from which they were solicited for the study. All subjects were given an option of either completing a short class assignment or participating in the study. In addition to course credit, subjects received monetary prizes for good performance. The subject who performed best received $20, the second best $10, and the three following performers $5 each.

Procedure

During a two week period seven experimental sessions were conducted, with the number of subjects per session varying from 4 to 12. To discourage discussion about the experiment with subjects who were scheduled for later sessions, each participant was asked to sign a confidentiality agreement. Each subject completed a background questionnaire and then was given a maximum of 15 minutes for reading and answering questions related to the case description. The questions were asked to ensure that subjects had read and comprehended the case.

The problem finding task was given immediately after the case reading. Since time pressure in performing a task can affect the complexity of the task [52], it was very important to create a setting in which the high, medium, and low-complexity subject groups felt equally pressured with regard to time. A timing scheme for the problem-finding task was developed as a result of experience gained in pretesting. Subjects working on the low complexity task had 10 minutes; the medium complexity group had 15 minutes; and the high complexity group had 20 minutes. Within the allotted time, subjects studied the graphs and gave a written description of what they felt was the problem. They were then furnished with a list of potential problems. The list forced subjects to select a factor that they felt had contributed the most to the declining profitability of the business forms company. For the selected alternative, subjects also provided a subjective probability estimate on a scale from 0 to 1 of how confident they were about their choice.

The subjects next were asked to complete a 10-minute "base" forecasting task, followed by a questionnaire on the relative complexities of tasks, a satisfaction questionnaire, a 20-minute interpretation accuracy quiz, and a 25-minute spatial ability test. At the end of the session, subjects reported their motivation level in performing the tasks. Overall, the experiment took from one-and-a-half to two hours.

Results from experiment I

The analysis of experimental data did not reveal a consistent pattern of effects due to graphical and task complexity treatments. In fact, only the hypothesis on the differences in satisfaction ratings under varying task complexity levels was significant at a .05 level. A weak relationship was found between performance on the high complexity task and the type of bar chart (see Table 1).
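The paper reports only significance levels, not the tests behind them. For a 3 (complexity) x 2 (format) between-subjects design like this one, a two-way ANOVA is the conventional analysis; the sketch below runs one on simulated data, and every name and value in it is hypothetical.

```python
# Conventional two-way ANOVA for a 3 (task complexity) x 2 (display format)
# between-subjects design. Data are simulated; this illustrates the analysis,
# it does not reproduce the authors' procedure.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1985)
n_per_cell = 10
complexity = np.repeat(["low", "medium", "high"], 2 * n_per_cell)
display = np.tile(np.repeat(["simple_bar", "grouped_bar"], n_per_cell), 3)
# Hypothetical satisfaction ratings on the seven-point scale
satisfaction = rng.normal(4.5, 1.0, size=complexity.size).clip(1, 7)

df = pd.DataFrame({"complexity": complexity,
                   "display": display,
                   "satisfaction": satisfaction})
model = ols("satisfaction ~ C(complexity) * C(display)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects plus interaction
```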

Problems resulting from experiment I

The results of the experiment raised concerns about the validity of the measures and tasks used. Beyond the nonsignificant experimental results, observations in the experimental sessions, subjects' comments, and the data collected concerning the validity of the measuring instruments and tasks all pointed to problems in the measuring instruments, research design, and the problem-finding task.

Our concerns about the measuring instruments focused on the validity of the interpretation accuracy quiz as well as the questionnaire on the complexity of the problem-finding task. The data collected for the validation of the interpretation accuracy test indicated a weak relationship with the spatial aptitude scores. This suggested the need for improving the test questions and possibly the quality of the graphs used as well. The experimental results also showed that there were no significant differences between the low and high complexity groups in their perceptions of task complexity. This raised concerns as to whether the instrument designed to measure perceived complexity was invalid, or whether there had been ineffective operationalization of the levels of complexity in the experimental task.

Table 1. Major Hypotheses and Results for Experiment I

1. There will be differences in decision performance at varying task complexity levels. (Result: no effect)

2. There will be differences in decision performance when different graph formats are employed. (Result: no effect)

3. There will be differences in interpretation accuracy scores when different graph formats are employed. (Result: no effect)

4. There will be differences in satisfaction ratings when different graph formats are employed. (Result: no effect)

5. There will be differences in satisfaction ratings under varying task complexity levels. (Result: significant*)

6. There will be differences in decision confidence under varying levels of task complexity. (Result: no effect)

7. There will be differences in decision confidence when different graph formats are employed. (Result: no effect)

8. Subjects using grouped bar charts in the high complexity task will perform better in a problem solving task than subjects using simple bar charts. (Result: significant**)

9. Subjects using simple bar charts in the low complexity task will perform better in a problem solving task than subjects using grouped bar charts. (Result: no effect)

*Test for statistical significance at the 0.05 level.
**Test for statistical significance at the 0.1 level.

Recall that the experiment involved the manipulation of three discrete levels of a complexity variable that, in theory, is continuous and can take on values ranging from low to high. The insignificant differences detected between the rankings of the high and low complexity groups (on perceived task complexity) led us to conclude that the versions of the task did not significantly vary with regard to task complexity. In other words, the levels of complexity in the three versions of the task were either all high or all low.
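A manipulation check of this kind can be stated compactly. The paper does not name the test used to compare the groups' perceived-complexity ratings; an independent-samples t-test is one standard choice, sketched here with invented ratings.

```python
# Manipulation check: did the "high" group actually perceive the task as
# more complex than the "low" group? Ratings (ten-point scale, base task
# fixed at 5) are invented for illustration.
from scipy.stats import ttest_ind

low_group_ratings  = [6, 7, 5, 6, 7, 6, 8, 6]
high_group_ratings = [7, 6, 7, 8, 6, 7, 6, 7]

t, p = ttest_ind(high_group_ratings, low_group_ratings)
print(f"t = {t:.2f}, p = {p:.3f}")
# A nonsignificant p, as the authors found, points to a failed complexity
# manipulation rather than to a true null effect of the treatments.
```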

Other research design concerns were: (1) potential weak manipulation of stimulus complexity in the graphical formats, (2) inadequate monetary rewards, and (3) poor quality of stimulus materials. Since the display variable was a categorical variable, no measure of how different the formats really were could be given. It was possible that the actual difference between simple bars and grouped bars is minor with regard to presenting data. Also, the monetary rewards in the experiment may have been too low to motivate the subjects to perform well. Many of the subjects did not appear to be "trying hard" during the experimental sessions. Finally, the quality of the graphs used as stimulus materials in the experiment may have been poor, thus contributing to insignificant findings. Although the researcher who prepared the graphs was knowledgeable about computer programming and took a course on how to use the graphics package, she had no formal training in graphic techniques. There were small layout problems, and the labels in the grouped bar charts were fuzzy because of the low-resolution plotter used to generate hardcopy graphs.

In addition, the researchers were uncertain about what subjects were actually responding to in the problem-finding task. When the performance data was analyzed, no consistent patterns in subjects' performance could be identified. In fact, subjects seemed to have guessed what the problem was. Perhaps the task was too difficult for subjects to handle or contained confusing or misleading information.

In sum, the researchers seriously questioned the validity of the task and the measures used in the experiment. Therefore, we decided to conduct another experiment which would focus on identifying remedies for our methodological problems, rather than on generating results on using graphics.

Experiment II

This study tested whether the results of Experiment I were caused by poor graphs or misleading or confusing information in the task. The research design was simplified by reducing the number of levels in the independent variables. Task complexity now had only two levels, and display format had one level. To help isolate the problems of poor quality and inappropriate layouts, 20 experimental subjects (all graduate students) received information in a tabular format. To examine whether the task contained misleading or missing information, subjects were asked to document what pieces of information they used from the tables and how they used the information in the task.

No significant performance difference was found between the high and low complexity tasks. The analysis of experimental data further indicated that the graphs had not been the problem in the first experiment. Subjects in the second experiment using tables performed as poorly as the participants in the first experiment using graphs. These results convinced us that the main problem was the fact that subjects just were not able to find the problem. However, what we did not know was whether the poor performance resulted from poor problem solving skills on the part of the subjects, or whether the task itself was misleading. Poor problem solving skills appeared as a likely reason because after the first two experiments we had realized that the actual task demands were much greater than originally intended. We suspected that even the low complexity treatment was a highly complex task because subjects had to relate variables to each other in order to be successful in the task.

Experiment III

The purpose of the third experiment was to determine whether there was misleading or confusing information in the task or whether the problem was poor problem solving skills on the part of the subjects. To accomplish this, 17 managers with an average of 10 years of managerial experience were used as subjects. It was assumed that if they could manage the task we could conclude that the task was valid, but too difficult for graduate students. To verify whether the managers successfully coped with the task, each subject was debriefed for 10 minutes at the end of the experimental session.

The results of the third experiment confirmed serious problems with the task. In particular, we found that managers identified the right problem, but often for the wrong reasons; and moreover, some subjects identified the wrong problem, although they correctly interpreted information portrayed through high resolution graphics. Obviously, the task was not providing the basis for answering our research question on the relationships among task, presentation format, and decision performance. As a result, a major revision of the task was undertaken.

Revision of the task

To revise the task we solicited the help of a marketing professor at the University of Minnesota. Following his recommendations the case was revised to include more precise statements of the company's operations, and several marketing concepts were clarified that may have been confusing for subjects. In addition, the data was completely redeveloped, resulting in a less complicated task in which the data patterns and relationships were easily detectable. In contrast to the previous task, the problem behind the declining profitability of the fictitious business forms company now clearly stands out (we think). Also, a graphic artist's recommendations were solicited on the graphics generated from the revised data, and small changes were made in the graph layouts.

The revised material is currently undergoing pretesting. Twenty-one graduate students have completed the revised task at this time, and 15 of them have detected the right problem in the problem-finding task.

Summary

We have gone through several experiments in searching and testing for a valid task and measurements. During this endeavor we have discovered that developing an effective task and accurate measurements is a lengthy and costly process that can best be described as an iteration of design and testing stages. After the initial design is completed, a series of pilot studies and refinements must follow. Through this multiple revision of the design, the internal validity of the study should increase, and moreover, the researchers should be more capable of estimating how valid the study is.

In the course of improving our experimental task environment we identified several problems. In Table 2, we have summarized these and other problems which we feel must be addressed when attempting to design valid studies. The problems are organized into four categories: (1) research strategy, (2) measuring instruments, (3) research designs, and (4) experimental tasks. In each case, suggestions are offered on how to alleviate or even avoid the problem. Many of the suggestions, however, require considerable "front-end" preparation (e.g., the search for a task should be a basis for a series of studies). Hence, it is important to incorporate suggestions early in the experimental design stage.

Another issue that has not received sufficient attention, but is an essential part of the experimental process, is the construction and conduct of pilot studies. Pilot experiments should not be quick-and-dirty preruns, performed a few days before the actual data collection. Rather, pilot studies should be carefully planned to ensure that a sufficient number of subjects has been recruited, that subjects are representative of the subject population to be used in the study, and that there is adequate time to revise the experiment and even run additional pilots prior to data collection. In particular, the selection of a convenient rather than a representative sample of pilot subjects is an easy trap in which to fall. This happened in our preliminary testing. We selected subjects who had shown interest in the study, and as a result we ended up with a group of high achievers that were likely to perform better than the average graduate student who participated in the experiment. This resulted in encouraging but faulty data on the validity of the experimental task. Also, extensive debriefings should be conducted, or protocols collected from the pilot subjects, to find out what subjects are doing when performing the tasks. This will help point out erratic components in decision making that may disguise the effect of treatments.

[Table 2. Common problems in experimental IS research and recommended remedies, organized by research strategy, measuring instruments, research designs, and experimental tasks. The table itself did not survive scanning.]

Even if all possible precautions are taken, a researcher may never design a perfect study because validity is a relative measure and therefore can only be estimated. In contrast to reliability, for which there are several statistical measures to help in the estimation process, no such measures exist for internal validity. Hence, we have to live with "rules of thumb" when trying to assess validity. Our goal is to adjust the task and the data until at least 50% of the experimental subjects solve the problem properly. Only then will we be able to investigate the impact of graphical aids on the decision process. The development of these rules of thumb is crucial in addressing a problem that we believe has been especially severe in experimental IS research using decision-making tasks. Take notice of how it became evident, only after considerable checking, that the subjects were unable to perform the experimental task. Researchers must do considerable front-end work to properly position the task difficulty if they are to be confident that the results observed are, in fact, due to the experimental manipulation.
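One way to operationalize the 50% rule of thumb is a one-sided binomial test on pilot results. The test choice is our illustration, not the authors' procedure, though the 15-of-21 figure comes from the revised-task pretest reported earlier.

```python
# Check whether an observed pilot solve rate credibly exceeds the 50%
# rule-of-thumb threshold. 15 of 21 subjects solved the revised task.
from scipy.stats import binomtest

result = binomtest(k=15, n=21, p=0.5, alternative="greater")
print(f"solve rate = {15 / 21:.2f}, p = {result.pvalue:.3f}")
# 15/21 is roughly 0.71; a small p suggests a majority of the subject
# population can now solve the task, so treatment effects become testable.
```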

Conclusion

The primary difficulty of experimental IS research is usually considered to be external validity [9]. Too often it is assumed that laboratory experiments, by their nature, are internally valid. This article challenges that assumption and attempts to convey the message that assuring internal validity in experimental research requires time and great care. To ensure a valid study, several pilot studies must be carefully conducted, and the issues of task and measurement validity must be addressed. We surmise that it may be a two or three year process to get the task and instrumentation correct, with a number of protocols and trial experiments conducted during this period. Outstanding discussions on internal validity by Campbell and Stanley [4], Cook and Campbell [8], and others [10, 23] are recommended for assisting researchers in avoiding internal validity and other design-related problems.

References

[1] Bell, J. "The Effect of Presentation Form on the Use of Information in Annual Reports," Management Science, Volume 30, Number 2, February 1984, pp. 169-185.

[2] Benbasat, I. and Schroeder, R.G. "An Experimental Investigation of Some MIS Design Variables," MIS Quarterly, Volume 1, Number 1, March 1977, pp. 37-49.

[3] Bennett, J.B. and Chakravarthy, B. "What Awakens Student Interest in a Case," Harvard Business School Bulletin, March/April 1978, pp. 12-14.

[4] Campbell, D.T. and Stanley, J.C. Experimental and Quasi-experimental Designs for Research, Rand McNally, Chicago, Illinois, 1963.

[5] Carter, L.F. "An Experiment on the Design of Tables and Graphs Used for Presenting Numerical Data," Journal of Applied Psychology, Volume 31, 1947, pp. 640-650.

[6] Carter, L.F. "Relative Effectiveness of Presenting Numerical Data by the Use of Tables and Graphs," U.S. Department of Commerce, Washington, District of Columbia, 1948.

[7] Carter, L.F. "Study of the Best Design of Tables and Graphs Used for Presenting Numerical Data," U.S. Department of Commerce, Washington, District of Columbia, 1948.

[8] Cook, T.D. and Campbell, D.T. Quasi-Experimentation: Design & Analysis Issues for Field Settings, Houghton Mifflin, Boston, Massachusetts, 1979.

[9] Courtney, J.F., DeSanctis, G. and Kasper, G.M. "Continuity in MIS/DSS Laboratory Research: The Case for a Common Gaming Simulator," Decision Sciences, Volume 14, Number 3, 1983, pp. 419-439.

[10] Cronbach, L.J. and Meehl, P.E. "Construct Validity in Psychological Tests," Psychological Bulletin, Volume 52, Number 4, July 1955, pp. 288-302.

[11] Croxton, F.E. and Stein, H. "Graphical Comparisons by Bars, Squares, Circles, and Cubes," Journal of the American Statistical Association, Volume 27, 1932, pp. 54-60.

[12] Croxton, F.E. and Stryker, R.E. "Bar Charts versus Circle Diagrams," Journal of the American Statistical Association, Volume 22, 1927, pp. 473-482.

[13] DeSanctis, G. "Computer Graphics as Decision Aids: Directions for Research," Decision Sciences, Volume 15, Number 4, 1984, pp. 463-487.

[14] Dickson, G.W., Senn, J.A. and Chervany, N.L. "Research in Management Information Systems: The Minnesota Experiments," Management Science, Volume 23, Number 9, May 1977, pp. 913-923.

[15] Dickson, G.W., Chervany, N.L. and Kozar, K.A. "An Experimental Gaming Framework for Investigating the Influence of Management Information Systems on Decision Effectiveness," MISRC-WP-72-12, MIS Research Center, University of Minnesota, 1972.

[16] Einhorn, H.J. and Hogarth, R.M. "Behavioral Decision Theory: Processes of Judgment and Choice," Annual Review of Psychology, Volume 32, 1981, pp. 53-88.

[17] Feliciano, G.D., Powers, R.D. and Bryant, E.K. "The Presentation of Statistical Information," Audio Visual Communication Review, Volume 11, Number 13, 1963, pp. 32-39.

[18] Gerrity, T.P. "Design of Man-Machine Decision Systems: An Application to Portfolio Management," Sloan Management Review, Volume 12, Number 2, Winter 1971, pp. 59-75.

[19] Ghani, J.A. "The Effects of Information Representation and Modification on Decision Performance," Unpublished Doctoral Dissertation, University of Pennsylvania, 1981.

[20] Goldstein, A.G. and Chance, J.E. "Visual Recognition Memory for Complex Configurations," Perception and Psychophysics, Volume 8, Number 2B, 1970, pp. 237-241.

[21] Grace, G.L. "Application of Empirical Methods to Computer-based System Design," Journal of Applied Psychology, Volume 50, 1966, pp. 442-450.

[22] Huber, G.P. "Cognitive Style as a Basis for MIS and DSS Designs: Much Ado about Nothing?" Management Science, Volume 29, Number 5, May 1983, pp. 567-579.

[23] Isaac, S. and Michael, W.B. Handbook in Research and Evaluation, Edits Publishers, San Diego, California, 1971.

[24] Ives, B. "Graphical User Interfaces for Business Information Systems," MIS Quarterly, Special Issue, 1982, pp. 15-47.

[25] Jenkins, A.M. MIS Design Variables and Decision Making Performance: A Simulation Experiment, UMI Research Press, Ann Arbor, Michigan, 1983.

[26] Lucas, H.C. "An Experimental Investigation of the Use of Computer-based Graphics in Decision Making," Management Science, Volume 27, Number 7, July 1981, pp. 757-768.

[27] Lucas, H.C. and Nielsen, N. "The Impact of the Mode of Information Presentation on Learning and Performance," Management Science, Volume 26, Number 10, October 1980, pp. 982-993.

[28] Lusk, E.J. and Kersnick, M. "The Effect of Cognitive Style and Report Format on Task Performance: The MIS Design Consequences," Management Science, Volume 25, Number 3, August 1979, pp. 787-798.

[29] Martin, J. Design of Man-Machine Dialogues, Prentice Hall, Englewood Cliffs, New Jersey, 1973.

[30] McEwan, C.E. "Computer Graphics: Getting More from a Management Information System," Data Management, Volume 19, Number 7, July 1981, pp. 30-32.

[31] Nawrocki, L.H. "Alphanumeric versus Graphic Displays in a Problem-Solving Task," U.S. Army Behavior and Systems Research Laboratory, Technical Research Note 227, Arlington, Virginia, September 1972.

[32] Newell, A. and Simon, H.A. Human Problem Solving, Prentice Hall, Englewood Cliffs, New Jersey, 1972.

[33] Nunnally, J.C. Psychometric Theory, McGraw-Hill Book Company, New York, New York, 1967.

[34] Peterson, B.K. "The Effect of Tabular and Graphic Presentation on Reader Retention, Reader Reaction and Reader Time," Unpublished Ed.D. Dissertation, Northern Illinois University, 1982.

[35] Peterson, L.V. and Schramm, W. "How Accurately are Different Kinds of Graphs Read," AV Communication Review, Volume 2, 1955, pp. 178-189.

[36] Powers, M., Lashley, C., Sanchez, P. and Shneiderman, B. "An Experimental Comparison of Tabular and Graphic Data Presentation," Computer Science Technical Report Series TR-1142, University of Maryland, February 1982.

[37] Powers, R.F. and Dickson, G.W. "MIS Project Management: Myths, Opinions, and Reality," California Management Review, Volume 15, Number 3, Spring 1973, pp. 147-156.

[38] Prokop, J. "An Investigation of the Effects of Computer Graphics on Executive Decision Making in an Inventory Control Environment," Unpublished Ph.D. Dissertation, University of North Carolina, 1969.

[39] Remus, W. "An Empirical Investigation of the Impact of Graphical and Tabular Data Presentations on Decision Making," Management Science, Volume 30, Number 5, May 1984, pp. 533-542.

[40] Reynolds, J.T. "Cases Which Meet the Students Need," Academy of Management Proceedings, R.L. Taylor, M.J. O'Connell, R.A. Zawacki, D.D. Warrick (eds.), Kansas City, Missouri, August 11-14, 1976, pp. 48-52.

[41] Sage, A.P. "Behavioral and Organizational Considerations in the Design of Information Systems and Processes for Planning and Decision Support," IEEE Transactions on Systems, Man, and Cybernetics, Volume SMC-11, Number 9, September 1981, pp. 640-678.

[42] Schutz, H.G. "An Evaluation of Formats for Graphic Trend Displays," Human Factors, Volume 3, Number 3, 1961a, pp. 99-107.

[43] Schutz, H.G. "An Evaluation of Methods for Presentation of Graphic Multiple Trends," Human Factors, Volume 3, Number 2, 1961b, pp. 108-119.

[44] Simon, H.A. "The Architecture of Complexity," General Systems Yearbook, Volume 10, 1964, pp. 63-76.

[45] Simon, H.A. The Sciences of the Artificial, Massachusetts Institute of Technology Press, Cambridge, Massachusetts, 1969.

[46] Takeuchi, H. and Schmidt, A.H. "New Promise of Computer Graphics," Harvard Business Review, Volume 58, January/February 1980, pp. 122-131.

[47] Taylor, R.N. and Benbasat, I. "Cognitive Styles Research and Managerial Information Use: Problems and Prospects," Joint National Meeting of the Operations Research Society of America and The Institute of Management Sciences, Colorado Springs, Colorado, November 1980.

[48] Tullis, T.S. "An Evaluation of Alphanumeric, Graphic, and Color Information Displays," Human Factors, Volume 23, Number 5, 1981, pp. 541-550.

[49] Wainer, H. and Reiser, M. "Assessing the Efficacy of Visual Displays," Proceedings of the American Statistical Association, Social Statistics Section, Volume 1, 1976, pp. 89-92.

[50] Washburne, J.N. "An Experimental Study of Various Graphic, Tabular and Textural Methods of Presenting Quantitative Material," Journal of Educational Psychology, Volume 18, Number 6, September 1927, pp. 361-376.

[51] Watson, C.J. and Driver, R.W. "The Influence of Computer Graphics on the Recall of Information," MIS Quarterly, Volume 7, Number 1, 1983, pp. 45-53.

[52] Wright, P. "The Harassed Decision Maker: Time Pressures, Distraction, and the Use of Evidence," Journal of Applied Psychology, Volume 59, Number 5, October 1974, pp. 555-561.

[53] Zmud, R.W. "An Empirical Investigation of the Dimensionality of the Concept of Information," Decision Sciences, Volume 9, Number 2, April 1978, pp. 187-195.

[54] Zmud, R.W., Blocher, E. and Moffie, R.P. "The Impact of Color Graphic Report Formats on Decision Performance and Learning," Proceedings of the Fourth International Conference on Information Systems, C.A. Ross, E.B. Swanson (eds.), Houston, Texas, December 15-17, 1983, pp. 179-193.

About the Authors

Sirkka Jarvenpaa is a Ph.D. candidate in MIS at the University of Minnesota. For the past two years she has been involved in research concerning the use of graphics in business decision making. Her dissertation examines the effects of work tasks and graphical information display on decision-making strategies and performance. The research is funded by the Society for Information Management.

Gary Dickson, Professor of MIS, is one of the co-founders of the MIS program and the MIS Research Center at the University of Minnesota. The author of over 50 articles and two books related to MIS, he is the founder and first Senior Editor of the MIS Quarterly.

Gerardine DeSanctis is an Assistant Professor of MIS at the University of Minnesota. Her current research interests are in graphical decision aids, group decision support, and implementation of information systems.
