
UNLV Retrospective Theses & Dissertations

1-1-2008

Evaluating the use of system dynamics for improving stakeholder decision making

Marcia Lynne Turner University of Nevada, Las Vegas

Follow this and additional works at: https://digitalscholarship.unlv.edu/rtds

Repository Citation
Turner, Marcia Lynne, "Evaluating the use of system dynamics for improving stakeholder decision making" (2008). UNLV Retrospective Theses & Dissertations. 2858. http://dx.doi.org/10.25669/7maq-mtz7

This Dissertation is protected by copyright and/or related rights. It has been brought to you by Digital Scholarship@UNLV with permission from the rights-holder(s). You are free to use this Dissertation in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s) directly, unless additional rights are indicated by a Creative Commons license in the record and/or on the work itself. This Dissertation has been accepted for inclusion in UNLV Retrospective Theses & Dissertations by an authorized administrator of Digital Scholarship@UNLV. For more information, please contact [email protected].

EVALUATING THE USE OF SYSTEM DYNAMICS FOR IMPROVING

STAKEHOLDER DECISION MAKING

By

Marcia Lynne Turner

Bachelor of Arts University of San Diego

1988

Master of Arts University of Nevada, Las Vegas

1997

A dissertation submitted in partial fulfillment of the requirements for the

Doctor of Philosophy Degree in Environmental Science Department of Environmental Studies Greenspun College of Urban Affairs

Graduate College University of Nevada, Las Vegas

December 2008

UMI Number: 3352190

Copyright 2008 by Turner, Marcia Lynne

All rights reserved.

INFORMATION TO USERS

The quality of this reproduction is dependent upon the quality of the copy submitted. Broken or indistinct print, colored or poor quality illustrations and photographs, print bleed-through, substandard margins, and improper alignment can adversely affect reproduction.

In the unlikely event that the author did not send a complete manuscript and there are missing pages, these will be noted. Also, if unauthorized copyright material had to be removed, a note will indicate the deletion.

UMI Microform 3352190

Copyright 2009 by ProQuest LLC.

All rights reserved. This microform edition is protected against

unauthorized copying under Title 17, United States Code.

ProQuest LLC 789 E. Eisenhower Parkway

P.O. Box 1346, Ann Arbor, MI 48106-1346

Copyright by Marcia Lynne Turner 2008 All Rights Reserved

Dissertation Approval
The Graduate College University of Nevada, Las Vegas

November 6, 2008

The Dissertation prepared by

Marcia Lynne Turner

Entitled

Evaluating the Use of System Dynamics for

Improving Stakeholder Decision Making

is approved in partial fulfillment of the requirements for the degree of

Doctor of Philosophy in Environmental Science

College Faculty Representative

Examination Committee Chair

Examination Committee Member

Examination Committee Member

Dean of the Graduate College


ABSTRACT

Evaluating the Use of System Dynamics for Improving Stakeholder Decision Making

By

Marcia Lynne Turner

Dr. Krystyna Stave, Examination Committee Chair
Associate Professor and Graduate Coordinator
Department of Environmental Studies
University of Nevada, Las Vegas

When lay stakeholders are involved in complex environmental decision making,

the ensuing decision does not always effectively solve the problem of focus. This is partly because the standard facilitation methods commonly used to manage such efforts

frequently fail to promote thorough and rational decision analysis. A review of classical

and behavioral decision theory, stakeholder research and standard facilitation practices

suggests that standard facilitation methods tend to enable behavioral decision making

strategies which oversimplify decision making tasks, rather than employing classical

rational strategies which stress a more thorough decision analysis and maximization of

decision outcomes.

To test this hypothesis, I conducted a comparative experiment involving 196

stakeholders who attended a solid waste management public meeting in Los Angeles.

Participants were randomly assigned to a control group and an experimental group. The control

group was facilitated with standard methods and the experimental group was facilitated


with a more classically rational method, specifically system dynamics-based facilitation.

Pre- and post-intervention surveys were administered to measure participants’ ability to

identify effective solutions, their level of focus on the presented materials, and their level

of procedural satisfaction. I hypothesized that the experimental group would score higher

in each of these areas.

The results supported my first two hypotheses by showing that the experimental facilitation was better at helping participants identify more effective outcomes and maintain

a greater focus on relevant information. However, the results failed to support the third

hypothesis that the experimental group would have a higher level of procedural

satisfaction than the control group. Instead, the results showed that the standard

facilitation methods used in the control group were better at promoting participant

satisfaction and self-confidence than were the system dynamics methods.

If the objective of stakeholder involvement in complex environmental decision

making is the development of effective decisions to solve pressing environmental

problems, this experiment shows that system dynamics-based facilitation is an effective

tool for managing stakeholder involvement. The results also show that the identification

of effective solutions does not guarantee participant satisfaction and confidence.


TABLE OF CONTENTS

ABSTRACT................................................................................................................................. iii

LIST OF FIGURES ..............................................................................................................vii

LIST OF TABLES.................................................................................................................... viii

ACKNOWLEDGEMENTS................................................................................................. ix

CHAPTER 1 PROBLEM ......................................................................................................... 1
Legislative Mandates ........................................................................................................... 2
Pragmatic Motivation .......................................................................................................... 3
Examples of Failure to Implement Effective Solutions ..................................................... 5
Example #1: Mass-Transit Development Diluted and Delayed .................................... 5
Example #2: Freeway Development Delayed and Defeated ......................................... 7
General Research Question ................................................................................................. 9
Classical and Behavioral Decision Theory ....................................................................... 10
Classical Decision Making Theory Overview .............................................................. 10
Behavioral Decision Making Theory Overview ........................................................... 14
Implications of Decision Theory ....................................................................................... 22
Analysis of Standard Group Decision Making Facilitation Practice .............................. 28
Hypothesis ........................................................................................................................... 36

CHAPTER 2 APPROACH ...................................................................................................... 37
Overview of System Dynamics-Based Facilitation ......................................................... 38
Definition of Problem .................................................................................................... 40
Identification of Problem Causes .................................................................................. 41
Construction and Validation of Model ......................................................................... 44
Model Use ....................................................................................................................... 45
Policy Analysis ............................................................................................................... 46
Analysis of System Dynamics-Based Facilitation Adherence to Ideal .......................... 48
Related Research ................................................................................................................ 50

CHAPTER 3 METHOD .......................................................................................................... 54
Experimental Procedures ................................................................................................... 54
Experimental Controls ....................................................................................................... 57
Experimental Setting .......................................................................................................... 59
Conference Schedule and Agenda ..................................................................................... 61
Small-Group Work Session Assignment .......................................................................... 62
Leverage Point Evaluation Criteria ................................................................................... 66
Group Facilitation Intervention ......................................................................................... 67
Measurement Instrument ................................................................................................... 69
Demographic and Descriptive Questions ..................................................................... 70
Pre- and Post-Intervention Survey Questions Design .................................................. 73

CHAPTER 4 RESULTS .......................................................................................................... 83
Results Overview ................................................................................................................ 83
Demographics and Descriptions .................................................................................... 84
Questions Related to Research Hypotheses .................................................................. 93

CHAPTER 5 DISCUSSION .................................................................................................. 111
General Summary and Implications of Results ............................................................. 111
Discussion of Results Related to Hypothesis 1 .............................................................. 112
Discussion of Results Related to Hypothesis 2 .............................................................. 118
Discussion of Results Related to Hypothesis 3 .............................................................. 121
Strengths and Limitations ................................................................................................ 129
Strengths ........................................................................................................................ 129
Limitations .................................................................................................................... 131
Suggestions for Future Research ..................................................................................... 136
Confirm Effectiveness of System Dynamics-Based Facilitation with Public Stakeholders ........ 136
Study the Effectiveness of System Dynamics at Different Points Along a Spectrum of Involvement Intensity ........ 138
Study the Effectiveness of Traditional Facilitation Outcomes Independently, Not in Comparison with System Dynamics ........ 139
Conclusion ......................................................................................................................... 140

APPENDIX ............................................................................................................................ 143
Standard Facilitation Process Analysis ........................................................................... 143
Pre-Intervention Survey ................................................................................................... 159
Post-Intervention Survey .................................................................................................. 163
Demographic and Descriptive Data ................................................................................ 169
Data Related to Hypothesis 1 .......................................................................................... 183
Data Related to Hypothesis 2 .......................................................................................... 185
Data Related to Hypothesis 3 .......................................................................................... 187

BIBLIOGRAPHY.................................................................................... 193

VITA.......................................................................................................................................... 229


LIST OF FIGURES

Figure 1. Causal Loop Diagram ............................................................................................. 43
Figure 2. Solid Waste Integrated Resource Planning ........................................................... 60
Figure 3. Recycling Loop ....................................................................................................... 63
Figure 4. SWIRP Model ......................................................................................................... 69
Figure 5. Demographic and Descriptive Responses ............................................................. 93
Figure 6. Findings of Significant Difference Associated with Hypothesis 1 .................... 118
Figure 7. Findings of Significant Difference Associated with Hypothesis 2 .................... 120
Figure 8. Findings of Significance Related to Hypothesis 3 .............................................. 129


LIST OF TABLES

Table 1. Summary of Rational Group Decision Making Process Steps ............................. 24
Table 2. Analysis of Standard Group Decision Making Facilitation Process Steps .......... 31
Table 3. Comparative Analysis of Level of Adherence ........................................................ 49
Table 4. Demographic and Descriptive Survey Questions ................................................... 72
Table 5. Hypothesis 1 and Related Survey Questions .......................................................... 76
Table 6. Hypothesis 2 and Related Survey Questions .......................................................... 78
Table 7. Hypothesis 3 and Related Research Questions ....................................................... 81
Table 8. Demographic and Descriptive Questions ................................................................ 85
Table 9. Number of Past SWIRP Meetings Attended ........................................................... 86
Table 10. Recycling Behavior ................................................................................................ 86
Table 11. Years Living in LA ................................................................................................. 87
Table 12. Zip Code/Regional "Wasteshed" ........................................................................... 87
Table 13. Sex ............................................................................................................................ 88
Table 14. Education Level ...................................................................................................... 88
Table 15. Age ........................................................................................................................... 89
Table 16. Housing Type .......................................................................................................... 89
Table 17. Own or Rent ............................................................................................................ 89
Table 18. Number in Household ............................................................................................ 90
Table 19. Income ..................................................................................................................... 90
Table 20. Systemic Value Coding Key .................................................................................. 96
Table 21. Summary of Statistical Analysis of Hypothesis 1 Questions ............................ 100
Table 22. Ranking Scale for Hypothesis 2 ........................................................................... 101
Table 23. Summary of Statistical Analysis of Hypothesis 2 Questions ............................ 104
Table 24. Summary of Statistical Analysis of Hypothesis 3 Questions ............................ 108
Table 25. Sample Participant Feedback Regarding What Did Not Go Well .................... 132


ACKNOWLEDGEMENTS

Dr. Krystyna Stave introduced me to system dynamics-based facilitation some

years ago and ever since, I’ve had a hunch that it was a tool which could help improve

stakeholder participation in environmental decision making. Dr. Stave has mentored me

at each stage of my quest to test my hunch and I am sincerely thankful to her for sharing

her knowledge, expertise and friendship over the years.

1 would also like to acknowledge and thank my Examination Committee

Members Dr. Timothy Farnham, Dr. Anthony Ferri, and Dr. Jerry Simich for their time,

their sage advice and their kindness. Thank you too, to Chancellor James Rogers, Dr.

Etienne Rouwette and Dr. Maurizio Trevisan, and the late Dr. Hal Rothman for their

support and encouragement.

Without the help of Dr. Stave, Steve Coyle, Ruth Abbe and the leadership at the City of

Los Angeles, I would not have been able to conduct such a robust experiment. I am very

grateful to them for their willingness to enable and assist with this research project. I also

appreciate the assistance of the following system dynamics facilitators who helped make this

experiment possible: Dan Andersen; Mike Dwyer; Stephanie Fincher; Nick Grenier; Leah

Hare; Megan Hopper; Emy Laija; Michael Matulis; Ashley Rosia; Surbhi Sharma;

Heather Skaza; Simon Wade; Jennifer Ward; Henry Weckesser.

And finally, a special thanks to my husband Daniel Turner, my stepchildren

Hunter and Nathan, and my parents MaryLou and Jerry Holmberg for their patience,

support and love.


CHAPTER 1

PROBLEM

When government agencies initiate decision making processes to solve complex

environmental problems, they often solicit public stakeholder input. There are good

reasons to involve stakeholders, including federal mandates and pragmatic

considerations. However, such stakeholder involvement processes often do not result in

the selection of effective decision outcomes. This is due in part to the failure of

commonly-used group facilitation techniques and approaches to promote a thorough and

rational decision analysis. A rational decision analysis should weigh and balance

technical, financial and environmental feasibility along with social acceptability to

identify the solutions with the greatest potential to effectively solve the problem at hand

once implemented. If stakeholder group facilitation processes do not keep the

participants focused on the task of rational decision making, the effectiveness of the

ultimate decision can suffer, which leaves the pressing environmental problem

unresolved.

In an analysis of 161 cases, Bingham (1986) found that public decisions in

environmental mitigation issues were not implemented in 20% of cases involving site-

specific issues and in 59% of cases involving policy action. While such cases could have

failed due to obstacles to implementation, it is also possible that the decision making


processes themselves failed to help the participants identify solutions that could be

implemented. In either case, failure to implement an effective solution is problematic

because it leaves potentially pressing environmental problems unresolved.

The purpose of my analysis was to examine how such stakeholder involvement

efforts could be better facilitated to promote a more rational decision analysis, and to

study why standard group decision making facilitation methods often fail to do so.

Legislative Mandates

Public participation in governmental decision making can be traced to federal

mandates in the 1940s, with the enactment of the Administrative Procedures Act (APA)

of 1946 (Beierle and Cayford, 2002; Creighton, 1999; Gale, 2006). In the days of

President Roosevelt’s “New Deal,” the scope of the executive branch influence, and the

size and scope of governmental agencies expanded (Shapiro, 2006; Gale, 2006).

Legislation was crafted to limit the influence of governmental agencies in response to

these expansions. The APA was passed to ensure that steps would be taken to inform the

public about, and involve them in, the task of rulemaking (Shapiro, 2006).

The APA specifically required rulemaking agencies to provide public notice of the rulemaking effort, provide an opportunity for public representation at hearings, keep records of the hearings, and hold public hearings

(Gale, 2006). The APA also included provisions that enabled the courts to withhold

agency findings that they deemed to be "...1) arbitrary and capricious, 2) unconstitutional,

3) in excess of legislative mandate; 4) made without observing procedures required by

law; 5) unsupported by substantial evidence...," (Garson, 1998, p. 1). These court

provisions gave the public recourse if an agency failed to meet the standards set forth in

the APA.

Other key pieces of federal legislation, which include public involvement

mandates, that have been enacted since the APA in 1946 include the Water Pollution

Control Act (1948), National Housing Act (1954), Air Pollution Control Act (1955),

Economic Opportunity Act “War on Poverty” (1964), Wilderness Act (1964),

Demonstration Cities and Metropolitan Development Act "Model Cities" (1966),

Freedom of Information Act (1966), National Environmental Policy Act (1969),

Environmental Quality Improvement Act (1970), Federal Advisory Committee Act

(1972), Endangered Species Act (1973), Government in the Sunshine Act (1977),

Nuclear Waste Policy Act of 1982 (1982), Emergency Planning and Community Right to

Know Act (1986), and Administrative Dispute Resolution Act (1996). While this list is

not exhaustive, it illustrates that virtually no major governmental act is exempt from

giving the public an opportunity to participate in governmental action.

Pragmatic Motivation

Stakeholder involvement is also pragmatic because involving stakeholders can

help to improve the quality and sustainability of outcomes (Creighton, 1980). Among the

benefits of involving the public in governmental decision making are a series of "social

goals” identified by Beierle and Cayford (2002) in their study of 239 public involvement

cases. These goals include, “ ... incorporating public values into decisions... improving

the substantive quality of decisions ... resolving conflicts among competing interests

....building trust in institutions ....educating and informing the public” (p. 14).

Striving for such goals can help to improve the quality of the decision outcome

and the likelihood for its implementation. By promoting “high quality deliberation,” such

involvement can help to improve participants’ ability to make more fully informed

decisions (Williamson & Fong, 2004). In turn, this can improve the potential

effectiveness of the decision outcome in helping to solve the problem of focus. It can also

help improve decision effectiveness by ensuring that new and different perspectives or

issues that may not have otherwise been considered are included in the decision analysis

(Allen, 1998).

Failure to involve the public can lead to strong public opposition to a proposed

action. Prior to the institutionalization of legislative mandates, some public agencies

adopted a decide-announce-defend (DAD) attitude in which they would make decisions

without public knowledge or input and then announce the decision at the time of

implementation. The public quickly became wise to these subversive strategies and

developed sophisticated strategies for halting progress on such projects (Beierle &

Cayford, 2002).

While the DAD strategies have become obsolete, the sophisticated public

involvement skills for challenging proposed governmental action to address

environmental problems have persisted. In addition to the well-known opposition

attitudes of Not-In-My-Back-Yard (NIMBY), Kiefer (2008) outlines other similar

strategies such as Not-Over-There-Either (NOTE), Not-In-Anyone’s-Back-Yard

(NIABY), Build-Absolutely-Nothing-Anywhere-Near-Anyone (BANANA), and even

Not-on-Planet-Earth! (NOPE) (p. 1). These opposition attitudes often manifest

themselves as obstructionist behavior, which can inhibit constructive discussion

regarding how best to solve the problem at hand. Sometimes such behavior stems from

selfish interests, but other times stakeholders challenge government action for more

altruistic reasons. Failure to sufficiently address either type of stakeholder challenge can

result in failure to identify an agreeable solution, or tentative agreement on a diluted

solution to resolve a pressing environmental problem.

Examples of Failure to Implement Effective Solutions

While many public involvement efforts result in the implementation of effective

decision outcomes, research has shown that some outcomes of such processes are never

implemented. Beierle and Cayford (2002) studied a number of public involvement cases

and assigned a score to each case representing the likelihood that the final decisions

would be implemented. They studied 61 public decision-making efforts recommending a

change in policy, law, or regulation. Thirty percent of cases received a medium to low

score in degree of implementation. Similarly, 51% of the 90 cases analyzed for

recommendations for site-specific action received a medium to low implementation

score. The following two examples illustrate how failure to implement a solution, or

failure to develop a comprehensive solution, prevented the agency from solving its

complex transportation-related problem.

Example #1: Mass-Transit Development Diluted and Delayed

Due to the unprecedented growth in Clark County Nevada over the past decade,

and the failure of transportation infrastructure to keep pace with that growth, traffic

congestion has become a major problem throughout the region. In seeking to alleviate

this congestion problem, the Clark County Regional Transportation Commission (RTC)

was considering whether it should, and how it could most effectively enhance its mass

transit operations throughout the region. However, the RTC wanted to solicit input from

the public prior to making a decision.

In 2005, the RTC convened a Citizen Advisory Committee (CAC) comprised of a

diverse and representative group of public stakeholders to address this issue. The purpose

was to provide CAC participants with the relevant information about the potential

alternative mass transit modes and routes under consideration to help reduce congestion.

This CAC met for a number of months and received presentations on a variety of related

issues. Interactive and hearty debate was encouraged throughout the process.

In the end, a majority of the CAC participants were able to agree upon a

comprehensive region-wide combination of mass transit solutions, which included the

development of light rail services in the southeastern portion of the region. However, two

members of the CAC who lived in a neighborhood adjacent to this light rail alignment in the

southeast opposed this alignment and adopted a NIMBY attitude. They worked to delay

implementation of the overall mass transit project.

At one point in the CAC process, these two members emailed information

countering the data provided by the agency to other CAC participants in an effort to

persuade them to oppose the alignment along their neighborhood. Other CAC members

responded negatively to this approach. The CAC Chairman ultimately sent out an email

to participants stating:

“As the Committee Chair, I believe for the sake of good order I need to step

forward and ask everyone to please not get caught up in our passion of the

moment. I would ask of all of us that we just stay the course and use our

meetings to debate and exchange thoughts and ideas” (G. Johnson, personal

communication, December 12, 2005).

In the end, these two members successfully fought to exclude the southeast

alignment from consideration, and advocated for the delay of the implementation of light

rail development in other sectors o f the region. The headline in the Las Vegas Sun

newspaper during CAC deliberations read, “Not in My Back Yard: Proposals for

Improving Transportation Don’t Fly in Henderson” (June 12, 2006). When the CAC

recommendations ultimately went to the RTC officials for consideration, the RTC

officials voted to dilute the scope and delay the implementation of the project. The

headline in the Las Vegas Sun announcing the final decision by the RTC read, “Light

Rail Option is Derailed” (March 3, 2007).

The agency missed the opportunity to implement a more robust set of

recommendations supported by the majority of CAC participants by failing to sufficiently

address the biases of two participants. The agency's ultimate goal of alleviating traffic

congestion was not sufficiently met because the final decision reduced the scope of the

project and delayed its implementation.

Example #2: Freeway Development Delayed and Defeated

The Hatton Canyon freeway development project proposed by the California

Department of Transportation (Caltrans) in Carmel, California, was initiated in the 1930s

to help solve growing traffic congestion and routing problems in the region. The purpose

of this project was to alleviate congestion-related problems by building a new freeway

through the Carmel Valley to improve traffic flow. However, despite agency efforts to solicit public input through a variety of standard public participation facilitation methods,

it appears that Caltrans ignored the public feedback it received and was unwilling to

consider altering its preferred project proposal. As a result, the public debate lasted over

54 years and ultimately ended in defeat. The defeat was not due to failure to build a

freeway in Hatton Canyon, but rather due to Caltrans’ inability to find a mutually-

acceptable and effective solution that could be implemented to solve the traffic

congestion problems.

Caltrans took an all-or-nothing approach and was therefore unable to address the conflicts between the participants' and the agency's positions, and was unwilling to rethink or revise the

scope of their proposed project to find a solution to the problem. In addition to the

tremendous amount of time and money wasted on the part of both the agency and the

citizens over this 54-year period, the issue was entangled in a long lasting and costly legal

challenge (Carmel-by-the-Sea v. U.S. Department of Transportation, 1996).

In the end, a group of stakeholders who opposed Caltrans' proposal eventually

helped to initiate legislation (California Senate Bill 45, 1997) to prevent the development

of the project. This legislation ultimately convinced the Governor of California to declare

the project officially defeated, which resulted in a transfer of the Hatton Canyon freeway

right-of-way from Caltrans to the Department of Parks and Recreation “ .. .for the purpose

of developing a state park....” (Governor Gray Davis Press Release, August 1, 2001).

One community stakeholder summed up Caltrans’ inability to negotiate a

mutually-agreeable solution in the following way:

“I think this [Hatton Canyon Freeway issue] is a good illustration of....the failure

of the agency to work cooperatively with the local populace. In this case, had

Caltrans not adopted the stance it did, basically stonewalling any community


efforts at design modification, it is likely that the impasse would not have

developed and some modified form of the improvements would have been built.

However, Caltrans created a war by their intransigent stance and only because of

great effort on behalf of the local citizenry, they lost,” (F. P. Lloyd, personal

communication, May 6, 2002).

Over the course of 54 years, traffic congestion in the region got worse, road

construction got more expensive, and Caltrans and the stakeholders wasted countless

amounts of time and money. Instead of collaboratively identifying a way to solve the

congestion-related problem, the Caltrans stakeholder involvement effort did more to promote animosity towards the agency than it did to identify a solution to the problem it

was charged to resolve.

General Research Question

Rational decision analysis should include a thorough weighing and balancing of

technical, financial and environmental feasibility, while also considering the social

acceptability of the alternative solutions. The function of the facilitation process is to

keep all participants, agency representatives and stakeholders, focused on the task of

engaging in a rational decision analysis to identify solutions with the greatest potential

effectiveness to solve the problem at hand once implemented. Without a thorough and

rational decision analysis, solutions are diluted or defeated and therefore fail to

sufficiently resolve the complex environmental problem of focus. The research and

examples listed above show that limitations to rational decision making can inhibit the

identification of solutions with a higher level of potential effectiveness in solving the

problem at hand. In my 20 years as a public participation practitioner and participant in

environmental decision making, I've seen many instances in which facilitators struggled to keep a group focused on the task of rational decision analysis and watched the quality of the ensuing decision suffer as a result. My personal observations and my recent research lead me to ask: what should facilitators be doing differently to improve the level of rational decision analysis in stakeholder group decision making efforts?

Classical and Behavioral Decision Theory

To better understand why there is such a high rate of failure to develop effective

solutions in group decision making efforts I studied decision making theory and standard

stakeholder group facilitation practices. I first reviewed decision making literature to

better understand how people, especially in groups, make decisions in theory and

practice. This review included an analysis of the differences between classical and

behavioral decision theory and an analysis of standard group decision making processes.

By “standard,” I mean those facilitation processes most commonly employed by group

decision making professionals.

Classical Decision Making Theory Overview

Classical decision-making theory describes the steps that would be taken to make

fully rational decisions to maximize a decision outcome (Shafer, 1996). It assumes that

decision makers have access to all relevant information they need to make a good

decision and that they possess the mental capability to process the information to define

probable utility. This theory assumes that decision makers focus on: “ ... identifying

problems or opportunities; identifying goals and objectives; identifying alternative


solutions; gathering data; evaluating alternatives; and choosing the best alternative” (Club

Managers Association of America [CMAA], 1991).

Such theories have their origins in utility and probability theories. Classical

decision theory can be traced back to “Utility” theory presented by Bentham (1789) and

Mill (1863). Mill (1863) describes utility as “the greatest good for the greatest number.”

Utility theory describes how human decision makers evaluate consequences to identify a

solution that produces maximum utility. Mill (1863) called such a decision maker the

“Economic Man,” a hypothetical decision maker who is both omnipotent and omniscient

and able to maximize the utility of decision outcomes while minimizing effort.

Probability theorists, such as Bayes (1763), explained that decision makers assess

the probable utility of various alternative courses of action and maximize utility by

choosing among them. Bayes' theorem is a means of calculating conditional probabilities

(Joyce, 2003). Bernoulli (1738) claims that decision makers identify the “expected

utility” of alternative solutions by judging the possible utility of each probable outcome

in an effort to determine the highest probability of selecting the best option.
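The two formal ideas invoked here can be stated compactly. The following is a standard textbook formulation, not notation drawn from this dissertation: Bayes' theorem updates the probability of a hypothesis H after observing evidence E, and expected utility weights the utility of each possible outcome of an alternative a by its probability, with the classical prescription being to choose the alternative of highest expected utility.

\[
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}, \qquad
EU(a) = \sum_{i} p_i(a)\,u(o_i), \qquad
a^{*} = \arg\max_{a} EU(a)
\]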

One example of a rational decision theory is Dewey's (1910) description of the

five-step decision making process of "reflective thinking." For Dewey, reflective thinking

means “ .. .turning a topic over in various aspects and in various lights so that nothing is

overlooked - almost as one might turn a stone over to see what its hidden side is like or

what is covered by it" (p. 57). Therefore, reflective thinking refers to thorough analysis.

As with scientific inquiry, Dewey’s theory explains that a decision maker who employs

the steps of reflective thinking will achieve a more optimal outcome than those who do

not.


Dewey’s (1910) reflective thinking involves the following five “logically distinct”

steps, which include identifying: “(1) a felt difficulty [the problem of focus]; (2) its

location and definition; (3) suggestions of possible solutions; (4) development of

reasoning of the bearing of the suggestion; (5) further observation and experimentation

leading to its acceptance or recognition that it is the conclusion of belief or disbelief" (p.

72). Dewey's theory of reflective thinking is grounded in the process of scientific inquiry

and general logical theory.

This concept of reflective thinking has been further refined in research on the

effectiveness of communication in group decision making conducted by Gouran and

Hirokawa (1983). This research describes the following decision making process steps in

functional decision making: show correct understanding of the issue to be resolved;

determine the minimal characteristics any alternative, to be acceptable, must possess;

identify a relevant and realistic set of alternatives; examine them carefully in relation to

each previously agreed-upon characteristic of an acceptable choice; and select the

alternative that analysis reveals to be the most likely to have desired characteristics

(Gouran et al., 1993). These steps assume that participants are motivated, the choice is

not obvious, and relevant information is available; however, a decision maker who

adheres to these steps will be more likely to rationally evaluate the problem to identify

the best possible solution to address and resolve the problem at hand.

Janis and Mann’s (1977) “decisional conflict” provides an additional refinement

to the rational decision making theory. This research articulates some of the ways in

which decision makers limit the scope of their decision analysis. Based on this theory and

a review of the related literature on decision performance, Janis and Mann identified a set


of “ideal” procedural steps to describe how rigorous decision makers behave. These ideal

procedural criteria include:

• Thoroughly canvasses a wide range of alternative courses of action.

• Surveys the full range of objectives to be fulfilled and the values implicated by the

choice.

• Carefully weighs whatever he knows about the costs and risks of negative

consequences, as well as positive consequences that could flow from each alternative.

• Intensively searches for new information relevant to further evaluation of alternatives.

• Correctly assimilates and takes account of any new information or expert judgment to

which he is exposed, even when the information or judgment does not support the

course of action he initially prefers.

• Re-examines the positive and negative consequences of all known alternatives,

including those originally regarded as unacceptable, before making a final choice,

(p. 11).

These process steps focus on the link between “vigilant” or thorough decision

analysis and effective outcomes. They describe the steps that effective decision makers

make in selecting the most effective outcome. Janis and Mann (1977) contend that failure

to follow these ideal steps will prevent a decision making process from resulting in a

successful outcome.

In all three of these approaches to classical decision making, the ability of

decision makers to live up to these rational decision analysis requirements rests on a

number of core assumptions. These group decision making assumptions are: (1) all

participants are motivated to make the best choice, (2) the choices are not obvious, (3) the


group's resources are better than any one individual member's abilities, (4) the task is

specific, (5) the relevant information is provided, (6) the participants have sufficient

mental capacity to complete the task, and (7) communication is an essential element of

success (Gouran et al., 1993).

Classical theory is also based on the assumptions that all goals are agreed upon

and not in conflict, all alternatives and consequences can be and are completely

evaluated, all critical data are available and accessible, decision makers are instinctively

seeking to maximize outcomes, decision evaluation criteria are agreed to by all and all

are seeking to optimize outcomes, and that all participants are capable of and willing to

be rational (Higgins, 1991). However, many researchers believe that classical decision

theory does not accurately reflect the way in which people make decisions because its

underlying assumptions are unrealistic.

Behavioral Decision Making Theory Overview

In contrast to classical decision theory, behavioral decision making theory claims

humans cannot make and often do not actually attempt to make fully rational decisions

(Hogarth, 1987). For instance, Orasanu and Cormolly (1993) found that classical

approaches largely ignore dynamic decision-making setting issues such as the fact that

problems are often “ ...ill-structured; information is incomplete, ambiguous or changing;

goals are shifting, ill-defined or competing; decisions occur in multiple event feedback

loops; time constraints exist; stakes are high; and many participants contribute to the

decision” (p. 19). Through the observation of actual human decision making behavior,

behavioral-decision theorists have found that due to a variety of natural limitations,

humans do not actually try to maximize decision outcomes.


For instance, Simon’s (1957) concepts of satisficing and bounding of rationality

are seminal theories describing the irrational tendencies of human decision makers.

Simon points out that classical, rational decision makers are expected to review all

alternatives in “panoramic fashion,” they consider the “whole complex” of consequences

for each alternative, and they use criteria to single out the best alternative (p. 80).

However, he contends that such rationality requires complete knowledge and a keen

ability to anticipate consequences (p. 81). He concludes that because real decision makers

have limits to their knowledge of relevant information and their ability to mentally

process information and anticipate consequences, they are not able to fully comply with

rational decision making standards (p. 40).

In contrast to the maximization tendencies of Mill's (1863) Economic Man, Simon (1957) proposes a hypothetical "Administrative Man," who Simon claims has a tendency to satisfice because he does not have "...the wits to maximize..." when making decisions (p. xxiv). Satisficing is described as the outcome of a decision making process in which the decision maker efficiently selects alternatives that are "good enough" rather than ones that would maximize the decision outcome. Satisficing replaces rational decision making because it limits the breadth and depth of the analysis of all alternatives prior to making a decision.

Where Mill's Economic Man attempts to consider all of the real-world complexities, Simon's Administrative Man oversimplifies the scope of the analysis of the issue to a more manageable set of information. This is what Simon (1957) calls "bounding" rationality. According to Arnold and Feldman (1986), bounded rationality implies that because decisions are always incomplete and based on inadequate


information, it is impossible to identify all possible alternative solutions, it is impossible

to completely analyze alternatives because we cannot possibly predict all possible

consequences; therefore, it is impossible to maximize or optimize decision outcomes.
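The contrast between maximizing and satisficing can be sketched in a few lines of code. This is an illustrative sketch only; the alternatives, utility values, and aspiration threshold below are hypothetical and are not drawn from this study.

    # Contrast between a maximizing (classical) and a satisficing (behavioral) chooser.
    # All names and values here are illustrative assumptions, not data from this study.

    def maximize(alternatives, utility):
        # Classical "Economic Man": evaluate every alternative and return the best one.
        return max(alternatives, key=utility)

    def satisfice(alternatives, utility, threshold):
        # Simon's "Administrative Man": accept the first alternative that is "good enough".
        for option in alternatives:
            if utility(option) >= threshold:
                return option
        return None  # no alternative met the aspiration level

    # Hypothetical utilities for illustration only.
    options = ["expand landfill", "curbside recycling", "waste-to-energy", "source reduction"]
    scores = {"expand landfill": 0.4, "curbside recycling": 0.7,
              "waste-to-energy": 0.8, "source reduction": 0.9}

    print(maximize(options, scores.get))        # source reduction (best overall)
    print(satisfice(options, scores.get, 0.6))  # curbside recycling (search stops early)

The maximizer scores every option before choosing; the satisficer stops at the first option above its aspiration level, which is the shortcut behavior Simon describes.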

Subsequent to Simon’s identification of satisficing and bounding of rationality,

researchers began to develop additional theories to describe other simplification strategies

in decision making behavior. In the early 1970s, Tversky and Kahneman (1974) proposed

a theory of heuristics and biases to describe how decision-makers make judgments under

uncertainty. They argued that decision-makers often rely on heuristic behaviors which

involve the use of short-cuts, or rules-of-thumb to oversimplify and reduce the overall

complexity of their decision-making task (Tversky & Kahneman, 1974). They contend

that heuristic behavior interferes with the ultimate effectiveness of the decision outcomes

because it produces systematic error or particular biases when a decision maker engages

in predicting potential outcomes of a decision making process (Tversky & Kahneman,

1974).

Simon (1957), Miller (1956), and Vennix (1999) explain that humans have

limited information processing capacity, which means that humans can naturally

comprehend certain amounts or levels of complexity of information. To avoid stretching

themselves beyond their capabilities, humans naturally use heuristic strategies to stick to

what they know, or what is easiest for them to understand. Hogarth (1987) explains that

as a result of heuristic tendencies, humans naturally try to reduce the amount of effort

they must exert in making decisions. If decision makers do not have all the necessary

information to make a fully informed decision and they are not willing to seek additional

information, the quality of the ensuing decision is likely to be suboptimal.


According to Tversky and Kahneman (1971), one of the primary ways in which people use heuristic strategies in making decisions is through a strategy they call representativeness. They define representativeness as a thought process used by decision makers, in which they judge the merits of an alternative by the degree to which it resembles something familiar to them. They are biased towards things that represent what they already know, because it makes it easier for them to predict the related outcome of the decision. Cohen (1993) likens this to making a decision based on a stereotype or prototype rather than objectively analyzing the facts of each situation as unique. This implies that decision makers are not always open to new concepts and instead seek to support concepts that reinforce known commodities.

Another heuristic strategy Tversky and Kahneman (1973) describe is the concept of availability. Availability refers to how easy it is for a decision maker to recall or access relevant information, or how easy it is for them to recognize, imagine, or understand the details of the decision event (Tversky & Kahneman, 1974). They say that the breadth and depth of the discussion is severely limited by how easy it is to access or process information.

A third heuristic strategy described by Tversky and Kahneman (1974) is the concept of anchoring and adjustment, in which decision makers adjust their thinking plus or minus a few degrees from their current baseline understanding of the issue, or their "anchor position." As Beach, Barnes, and Christensen-Szalanski (1986) explain, when this happens the final outcome does not deviate much from the baseline anchor position. Lichtenstein, Fischhoff, and Phillips (1982) and Cohen (1993) caution that anchoring and adjusting often results in decision makers feeling overconfident in the results, when in


fact the ultimate effectiveness of the decision can be limited by this decision making

strategy.

Other research highlights additional limitations to rationality in decision making. In conducting research on unsuccessful decision making, Janis and Mann (1977) developed a theory of decisional conflict. This theory highlights the conflicting feelings decision makers often experience, which interfere with their decision analysis. "The most prominent symptoms of such conflicts are hesitation, vacillation, feelings of uncertainty, and signs of acute emotional stress whenever the decision comes within the focus of attention" (Janis & Mann, 1977, p. 46). When a decision making participant experiences such decisional conflict, they are more likely to exhibit indifference, closed-mindedness, bias, procrastination, and indiscriminant goals (p. 204).

In addition to these patterns of inertia which interfere with making progress towards making an effective decision, Janis and Mann (1977) explain that decision makers also demonstrate coping patterns of defensive avoidance or hypervigilance, which involve avoiding conflict by changing the subject, shifting responsibility, or bolstering the support for a less-than-optimal option. Janis' (1972) groupthink concept is yet another suboptimal way in which decision makers deal with decisional conflict. The core concepts of groupthink (Janis & Mann, 1977) describe the following dysfunctional group decision-making behavior, which interferes with the development of effective decision outcomes:

1) an illusion of invulnerability... which creates excessive optimism and encourages taking extreme risk... 2) collective efforts to rationalize in

order to discount warnings which might lead the members to reconsider


their assumptions.. .3) an unquestioned belief in the group’s inherent

morality, inclining the members to ignore the ethical or moral

consequences of their decisions... 4) stereotyped views of rivals and

enemies as too evil.. .or as too weak.. .5) direct pressure on any member

who expresses strong arguments against any of the group’s stereotypes,

illusions or commitments... 6) self-censorship of deviations from the

apparent group consensus... 7) a shared illusion of unanimity... 8) the

emergence of self-appointed "mindguards" - members who protect the

group from adverse information that might shatter their shared

complacency about the effectiveness and morality of their decision (p.

130).

In groupthink, groups of individuals employ collective strategies of defensive

avoidance by seeking concurrence through joint rationalization of a suboptimal decision.

A good example of how groupthink can negatively affect group decision effectiveness is

when Neville Chamberlain and his staff failed to heed warnings about Hitler that were

contrary to their rationalized position in 1937 (Janis & Mann, 1977, p. 130).

Another challenge to rational decision making is the natural limitations of one’s

perspectives or mental models of how the world works. Johnson-Laird (1983) explains

these limitations in a general theory of inference based on "mental models." This theory

contends that humans use their mental models of how they think the world works to draw

inferences when making decisions. Because these mental models are limited by our

subjective interpretation of the world around us, they are often incomplete or incorrect.


As such, the limitations of our mental models can skew or inhibit the inferences we draw

in making decisions.

Craik (1943) described the basic concepts of mental models and their role in

decision making by explaining that a human carries, “ ...a ‘small scale model’ of external

reality and of its own possible actions within its head, it is able to try out various

alternatives, conclude which is the best of them, react to future situations before they

arise, utilize the knowledge of past events in dealing with the present and future" (p. 3).

However, our mental models often do not accurately reflect external reality. As such,

making decisions based on incomplete or incorrect mental model-based inferences can

inhibit the effectiveness of the decision outcome in addressing the core problem the effort

sought to resolve.

Various researchers have characterized the nature of mental models. Johnson-

Laird et al. (1998) describe mental models as an internal mirror of the external thing

they represent. Forrester (1961) describes mental models as, “the mental image of the

world around us that we carry in our heads” (p. 49). Ideally, mental models should be a

true facsimile of the thing they represent. For instance De Kleer and Brown (1983) point

out that ideally a model “should be consistent, corresponding, and robust” (p. 167).

However, Forrester (1961) points out that mental models are not necessarily accurate.

Norman (1983) explains that mental models are incomplete, our ability to “run” our

models is limited, mental models are unstable, and they are unscientific, superstitious,

and parsimonious.

McDaniel (2003) found that the primary characteristics of mental models include

the idea that mental models do not always match reality. They tend to oversimplify


reality and humans tend to ignore the limitations of mental models, and instead make

decisions based on mental models as though they were fully reflective of reality. Others

such as Oatley (1996) and Oakhill (1996) found that mental models are incomplete or

inaccurate. According to Forrester (1971) and Richardson and Pugh (1981), mental

models can be deficient because they are “fuzzy,” meaning that they are a generally

unclear facsimile of the thing they are intended to represent.

Hogarth (1987) contends that mental models are affected by hindsight and

memory bias. Hutchins (1990) explains that mental models are influenced by routines

and habits. Miller (1951) and Forrester (1994) believe that cognitive processing

limitations inhibit mental models. Byrne (1996) claims imagination can also limit the

accuracy of mental models. As Sterman (1994), Brehmer (1992), Kleinmuntz (1993), and

Vennix (1999) explain, mental models can also be affected by the fact that people often

ignore feedback information.

Senge (1990) explains that we are often unaware of our mental models or the

effect they have on the way we behave. Anderson, Howe, and Tolmie (1996) explain that

mental models do not have to “be wholly accurate nor correspond completely with what

they model in order to be useful” (p. 252). Larsen, McInerney, Nyquist, Santos, and

Silsbee (1996) explain that these mental model flaws create inaccurate abstractions,

which can negatively affect decision analysis.

The limitation of not having a correct and complete mental model of a situation

interferes with the accuracy of the decision analysis and the degree to which participants

share a common view of the problem or solutions. In turn, this can negatively affect the effectiveness of the decision making outcome in solving the problem of focus. If the


participants' points of view cause them to have incorrect or incomplete levels of understanding of the causes of the problem or the relative effectiveness of solutions, and these limitations are not sufficiently addressed, it is less likely that they will identify the best alternative to solve the problem. If individual group decision making participants' perspectives or understanding of the causes of the problem and consequences of alternative solutions are different or conflicting, and such divergence is not sufficiently addressed, it is unlikely that the group will reach a mutually acceptable decision outcome. Addressing and resolving mental model limitations or differences during group decision making facilitation is essential in fostering the identification of the best alternative to solve the problem the group was assembled to resolve.

Implications of Decision Theory for Stakeholder Group Decision Making

While classical decision making theorists believe that decision makers can, should, and do behave rationally to maximize decision outcomes, behavioral decision making theorists claim that decision makers cannot and most often do not even attempt to rationally maximize decision outcomes (Lipshitz, 1993). The limitations to rationality resulting from bounding of rationality and satisficing (Simon, 1957), heuristics and biases (Tversky & Kahneman, 1974), conflict behavior (Janis & Mann, 1977), and the myriad limitations presented by incomplete and incorrect mental models (Johnson-Laird, 1983; Norman, 1983; Oatley, 1996) make it difficult for individual and group decision makers to completely process and correctly interpret relevant information when making decisions.


The research and examples listed above show that limitations to rational decision making can inhibit the identification of solutions that would be more effective, once implemented, in solving the problem of focus. Given these limitations, what should facilitators do differently to improve the level of rational decision analysis in stakeholder group decision making efforts?

A review of Dewey (1910), Gouran et al. (1993), and Janis and Mann (1977) identifies the process steps facilitators should ideally follow to promote rational decision analysis. Each of these researchers provides a list of specific process steps they believe are necessary for promoting rational decisions. Dewey (1910) and Gouran et al. (1993) emphasize the early phases of decision analysis, in which the problem is defined and articulated; Janis and Mann (1977) emphasize the later phases of decision analysis, in which the solutions are generated and evaluated prior to making a decision. For this study, I analyzed and summarized the specific process steps identified by these researchers to develop an aggregate list of ideal rational facilitation process steps. This list of 10 ideal process steps, along with the specific process steps identified by these researchers, is presented in Table 1. The list of 10 ideal process steps is used throughout this study as a means of determining the degree to which facilitation methods are likely to promote rational decision analysis.


[Table 1. Ideal Group Decision Making Facilitation Process Steps. The table, which spans several pages in the original, maps the specific process steps recommended by Dewey (1910), Gouran et al. (1993), and Janis and Mann (1977) onto the 10 aggregate ideal process steps: identify, discuss problem and goals; define problem; identify problem causes; generate alternative solutions; collect data; establish criteria for solution effectiveness; analyze alternative solutions against criteria; identify consequences; evaluation, discussion; and make decision.]

The summary of the ideal group decision making process steps from theory

includes the following:

1. Identify, discuss the problem and goals.

2. Define the problem.

3. Identify problem causes.

4. Generate alternative solutions.

5. Collect data.

6. Establish criteria for solution effectiveness.

7. Analyze alternative solutions against criteria.

8. Identify consequences.

9. Evaluation, discussion.

10. Make decision.

It is important to identify and discuss the problem and ensure that the goals and

objectives of group participants are aligned. It is also important to define the problem,

and to identify and reach agreement on the undesirable trend that the group wants to address. Next, it is critical to define the causes of the problem so that the group does not address only the symptoms and leave the root problem to fester. Promoting an open-minded

brainstorming of a complete range of alternative solutions is very important so that the

decision makers sincerely canvass all possible solutions in their quest to solve the

problem instead of just looking at solutions with which they are familiar.

As the group is defining the problem or as it is assembling the alternative

solutions, the group should also gather data to help them make more fact-based and less

anecdotal judgments when making decisions. In addition, they should establish criteria


for judging the effectiveness of alternatives. They should determine what characteristics an alternative should have to be considered effective, and then apply those characteristics to all alternatives equitably to test their relative merits in solving the problem. This analysis of the alternatives against the criteria should shed light on the consequences of the alternatives. Understanding the potential consequences of alternative actions helps to assess the effectiveness of an alternative in solving the problem; it also highlights whether there are any negative, unintended consequences that should be avoided.

Finally, a rational decision making process should involve evaluation and

discussion of the policy options under consideration. A great deal of data can be collected

throughout the process, but the stakeholders must still make judgments and negotiate

their differences before making a final decision. Once the rational analysis is complete,

the stakeholders make a decision on which alternative will best solve the problem.

Analysis of Standard Group Decision Making Facilitation Practice

I reviewed the prescribed facilitation procedures from several fields related to

group decision making in order to analyze the degree to which standard group decision

making facilitation methods adhere to this list of ideal group decision making process

steps. The references I reviewed came primarily from the literature on decision making, group process, public participation, and decision performance, as well as from online government and management group facilitation "how-to" manuals. By canvassing this wide array of decision making references, I attempted to gather a comprehensive list of

the “standard operating procedures” that group facilitators commonly employ. My goal

was to identify as many references as possible in which the author actually listed a


specific set of process steps they recommended for facilitating a group decision making

process. I identified 44 distinct references that listed specific group facilitation process

steps. I then listed each reference and its group decision making process steps in a table,

and compared each one to the list of ideal criteria generated from the review of the

classical decision theory. The objective of this analysis was to determine the frequency

with which any of the unique process steps identified in the 44 standard facilitation

references adhered to the 10 ideal process steps. Table 2 summarizes these results.
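To make the nature of this tabulation concrete, the following sketch (in Python) illustrates the frequency calculation in principle. It is an illustration only: the three coded sources and the steps attributed to them are hypothetical placeholders, not entries from the actual set of 44 references.

# Minimal sketch of the frequency tabulation described above. The sources and
# their coded steps are hypothetical placeholders; the actual analysis coded 44
# references against the 10 ideal process steps.

IDEAL_STEPS = [
    "identify/discuss problem and goals", "define problem", "identify problem causes",
    "generate alternative solutions", "collect data", "establish criteria",
    "analyze alternatives against criteria", "identify consequences",
    "evaluation/discussion", "make decision",
]

# Each source maps to the set of ideal steps its published process covers.
coded_sources = {
    "Source A": {"define problem", "generate alternative solutions", "make decision"},
    "Source B": {"define problem", "collect data", "generate alternative solutions",
                 "evaluation/discussion", "make decision"},
    "Source C": {"identify/discuss problem and goals", "define problem",
                 "generate alternative solutions", "make decision"},
}

n_sources = len(coded_sources)
for step in IDEAL_STEPS:
    count = sum(step in steps for steps in coded_sources.values())
    print(f"{step:40s} {count}/{n_sources} ({100 * count / n_sources:.0f}%)")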

As this table illustrates, the standard facilitation methods analyzed showed that the following three steps were the most commonly used in the 44 sources evaluated: 95% of sources involved a step to define the problem; 91% had a step for generating alternatives; and 77% included a step for making decisions. There is a significant gap between the frequency of these three most common steps and the next most frequent step identified in these processes. The fourth most frequent step involves identifying and analyzing the goals and the problem (43%). Only 34% of standard processes recommended collecting data in their decision making process. Only 32% of standard methods involved analyzing alternatives, and only 18% made an effort to identify consequences of alternative solutions. Just 27% of standard processes established criteria against which solutions would be judged to determine their ultimate effectiveness in achieving the stated goals, and a mere 9% devoted effort to evaluating and discussing the options, yet 77% involved a step that required making a decision.

In conducting this analysis I based the evaluation on the actual wording for each

process step as listed in the source material. It is possible that the authors intended to


imply a broader function than the stated objective (e.g. “decision making” implies that

some evaluation is conducted). However, to maintain a consistent evaluation of all

sources and prevent speculation on unstated intent, I used the exact wording provided by

each source as the basis of comparison to the 10 ideal process steps.

These results show that such processes tend to enable behavioral decision-making

tendencies, because they skip over key steps that would force participants to more

thoroughly evaluate options. Such tendencies are more likely to reinforce existing

knowledge rather than strive for a new level of understanding, as in representativeness

heuristics described by Tversky and Kahneman (1974). These results could indicate that these standard processes tend to support existing mental models and to ignore incorrect or incomplete models, as described by Johnson-Laird (1983), Oakhill (1996), Garnham (1996), and others, rather than strive to improve them.

The fact that only 34% of references include process steps that involve gathering new data for the decision analysis suggests that such processes reinforce heuristic behavior that limits the scope of decision analysis. This analysis supports Tversky and Kahneman's (1974) theory that decision makers use availability heuristics in making decisions. In other words, the decision maker is more likely to use the information readily available instead of collecting new data, even if the new data are essential for promoting a higher quality decision. This relatively low level of data collection also suggests that these processes enable bounding of rationality as described by Simon (1957), by failing to rationally evaluate the causes of the problem and the consequences of the alternative solutions before making a decision.


[Table 2. Analysis of Standard Group Decision Making Facilitation Processes. The table, which spans several pages in the original, codes each of the 44 standard facilitation references against the 10 ideal process steps (identify, discuss problem and goals; define problem; identify problem causes; generate alternative solutions; collect data; establish criteria for solution effectiveness; analyze alternative solutions against criteria; identify consequences; evaluation, discussion; make decision), marking with an X the steps each reference includes.]

In general, the low frequency of each step between the generation of alternatives and the making of a decision (i.e., collecting data, 34%; establishing criteria, 27%; analyzing alternatives, 32%; identifying consequences, 18%; and evaluation and discussion, 9%) indicates that such processes are more likely to use the satisficing strategies described by Simon (1957) to pick the easy solution rather than working to identify the best solution.

These results also suggest that such processes are not thoroughly analyzing

alternative solutions. As Janis and Mann (1977) caution, this type of deficiency in the

decision analysis process can inhibit the quality of the final decision. By skipping from

generating alternatives to making decisions and forgoing a thorough and rational analysis

of the alternatives under consideration, how do the decision makers know that their

decisions will be effective in solving the problem? These steps are critical for ensuring that participants have the help they need to process complex information, for improving

participants’ mental models of the issue, and for revealing and addressing any conflicts

among participants or areas of discomfort for individual participants. If such issues are

not addressed during the decision making process, they will likely surface later and

prevent the implementation of the final outcomes and leave the problem unresolved.

While 18% of the standard processes analyzed consciously involved a step designed to identify the cause of the problem, 82% did not. To thoroughly define a

problem, one must identify its causes. Failure to thoroughly define the problem and its

causes can inhibit the identification of the most effective solutions to resolve the root

problem, rather than its symptoms. In addition, in my experience in working with groups

of diverse stakeholders, failure to reach agreement among participants on the definition

of the problem and its root causes makes it difficult to reach consensus on a solution.


Hypotheses

These data show that standard facilitation methods do not follow the ideal group

decision making facilitation process steps closely. By not adhering closely to these steps, standard facilitation processes reinforce behavioral decision making strategies instead of promoting a more classical, rational approach to decision analysis. This lack of adherence limits the potential for developing effective solutions that sufficiently resolve the problem of focus.

This analysis has led me to hypothesize the following:

• Hypothesis 1: Participants in group decision making facilitation processes that

adhere more closely to the ideal group decision making facilitation process steps will

identify more effective solutions to resolve the stated problem, than will participants

in groups using standard facilitation methods.

• Hypothesis 2: Participants in group decision making facilitation processes that adhere

more closely to the ideal group decision making facilitation process steps will stay

more focused on relevant information related to the stated problem, than will

participants in groups using standard facilitation methods.

• Hypothesis 3: Participants in group decision making facilitation processes that

adhere more closely to the ideal group decision making facilitation process steps will

be more satisfied with the interpersonal dynamics, process, and outcome of the group

decision making experience, than will participants in groups using standard

facilitation methods.


CHAPTER 2

APPROACH

Classical decision making theory describes how a decision maker would rationally evaluate a problem and identify the most effective solution to maximize decision outcomes. Behavioral decision theorists have found that decision makers often cannot and do not even try to rationally maximize the decision making outcome. Simon (1957), Tversky and Kahneman (1974), Janis and Mann (1977), and others describe the various strategies used by decision makers to oversimplify information processing tasks, avoid interpersonal conflicts, and settle for suboptimal solutions.

Analysis of the work of Dewey (1910), Janis and Mann (1977), and Gouran et al.

(1993) revealed lists of ideal criteria for the process steps that should be undertaken in group decision making facilitation to promote more thorough and effective solutions to sufficiently resolve the problem of focus.

In analyzing the examples of failed stakeholder involvement and reviewing the public involvement literature, I found that standard facilitation of stakeholder groups often fails to result in effective decision outcomes. In analyzing 44 standard group facilitation processes, I also found that such processes do not adhere closely to the ideal group decision making facilitation criteria.

Given this analysis, my approach to examining the general research question of

how stakeholder involvement facilitation methods could facilitate better, more effective


outcomes, was to compare the relative effectiveness of standard and non-standard group decision making facilitation techniques. My hypotheses are that facilitation methods which adhere more closely to the ideal group decision making facilitation criteria would be more likely to result in the identification of more effective solutions and the promotion of a higher level of focus and procedural satisfaction among participants, than standard facilitation processes. The group facilitation method I used as a basis of comparison in this study is based on the use of system dynamics simulation modeling. The following is an overview of the system dynamics-based group decision making facilitation process.

Overview of System Dynamics-Based Facilitation

System dynamics is a more classical approach to the facilitation of group decision making, in that it takes a very rational approach to organizing and managing the decision analysis in an effort to solve a particular problem. System dynamics seeks to understand the causes of the problem and the consequences of alternative solutions. System dynamics is an endogenous approach to problem solving, meaning that it assumes that problems are caused by the interactions of connected parts of a system, called the system structure. According to Sterman, "A fundamental principle of system dynamics states that the structure of the system gives rise to its behavior" (2000, p. 28). In other words, the underlying structure, or the relationships between interconnected parts of a system, influences the way in which the system behaves. To correct a problem or an undesirable behavior, system dynamics practitioners seek to define and understand the underlying structure of the system that is creating the undesirable behavior, and to identify and test ways in which to intervene on the structure to change the problematic behavior. By


carefully articulating and defining the problematic behavior in a systems context, system

dynamics facilitators help ensure that the decision makers are addressing the causes rather than the symptoms, that they are correctly interpreting the structure of relationships within the system that is enabling the problem, and that the diverse participants of the group have a common level of understanding about the nature of the problem. By carefully articulating the problem in this way, the decision makers are better able to identify where and how to intervene to change the behavior of the system.

Once the structure of the system is defined, the system dynamics process uses computer simulation modeling to replicate the network of causes and effects in the system surrounding the problem. System dynamics models enable decision makers to test the relative effectiveness of alternative solutions prior to making a decision (Forrester, 1961). By illustrating the distinct elements of the problem situation and identifying the relationships among these elements, system dynamics modeling helps decision makers to take a more holistic, systems-thinking approach to solving the problem at hand (Sterman, 2000).
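To make the idea of replicating a causal structure in a simulation concrete, the following minimal sketch (in Python) shows a single-stock model with entirely hypothetical parameter values; it is an illustrative toy, not a model used in this study or drawn from the cited literature.

# Minimal stock-and-flow sketch with hypothetical parameters. One stock
# (population) drives travel demand; congestion emerges from the ratio of
# demand to a fixed road capacity, reproducing a worsening trend over time.

population = 1_000_000          # stock (people)
growth_rate = 0.04              # fraction of the population added per year
vehicle_miles_per_person = 20   # daily vehicle miles traveled per person
road_capacity = 25_000_000      # daily vehicle miles the road network can carry

for year in range(2008, 2031):
    demand = population * vehicle_miles_per_person
    congestion = demand / road_capacity       # values above 1 mean demand exceeds capacity
    print(f"{year}: population={population:,.0f}  congestion index={congestion:.2f}")
    population += population * growth_rate    # integrate the stock one year forward

Plotting the congestion index from a run such as this produces the kind of behavior-over-time reference graph described in the problem definition step later in this chapter.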

Kim (1999) states that systems thinking is "a school of thought that focuses on recognizing the interconnections between individual parts of a system and synthesizing them into a unified view of the whole" (p. 19). However, Vennix (1996) says people often have difficulty taking a system perspective because they "tend to think in simple causal chains rather than networks of related variables" (p. 3). In addition, system behavior is often difficult to anticipate because a change in one part of the system can cause unanticipated changes in other parts of the system (Stave, 2003). Without taking counterintuitive systemic behavior into account, decision makers may be more likely to


inadvertently select an alternative that makes things worse or causes a new problem in another part of the system, instead of solving the problem at hand (Sterman, 2000). The system dynamics approach helps decision makers to better understand the system structure and behavior through the use of computer simulation and the facilitation of interactive discussion, which helps decision makers to anticipate consequences and understand the tradeoffs among alternative solutions under consideration (Richardson and Pugh, 1981). Vennix (1996) says this helps them to be better able to "design robust policies to alleviate the problems in the system" (p. 49).

By helping decision makers carefully define the problem, clearly define its

causes, and construct, validate, and use the simulation model to test the relative

effectiveness of alternative solutions, the system dynamics-based facilitation process

takes a more classical approach to helping decision makers stay more focused on

selecting the most effective solution to the problem at hand. The following is a general

overview of the primary system dynamics group facilitation process steps.

Definition of Problem

The first step in system dynamics group facilitation involves the identification and

definition of the problem of focus. This step is critical because it ensures all participants

have a similar view of the problem and a common understanding of why it is problematic

(Vennix, 1996). System dynamics is suitable for problems that are dynamic, that is,

where the problem is defined as an undesirable trend over time (Sterman, 2000). In

Stave’s (2002) analysis of a system dynamics-based facilitation involving public

stakeholders in a transportation decision making process, the dynamic problem was that


due to unprecedented and continued growth in Clark County, Nevada, traffic congestion and the related air quality impacts had become a problem that was getting worse over time.

As part of defining the problem, the system dynamics facilitator helps the group identify the factors that are contributing to the problem. In so doing, the system dynamics facilitator begins to illustrate the elements of the system structure. This helps participants to begin to understand that the problem is not an isolated event, but rather is caused by a number of dynamic events.

In the case described in Stave (2002), the group brainstormed the various elements of the congestion problem, such as population growth, amount of road capacity, and use of mass transit. As they began to identify the individual elements, they began to see how interconnected these elements really are. For instance, population and road capacity are related: road capacity in Las Vegas was sufficient 15 years ago, before the population doubled, but it is no longer sufficient.

As the problem of focus is defined, the system dynamics modeling facilitator

produces a graphical representation of the behavior of a problem variable in the form of a

“behavior-over-time” (BOT) graph (Sterman, 2000). The purpose of a BOT graph is to “capture the history or trend of one or more variable over time” (Kim, 1999, p. 19). This provides a reference graph against which alternative solutions can be measured, to help determine the degree to which the solutions effect change in the system to correct the

problematic trend (e.g. Vennix, 1996).

Identification of Problem Causes

System dynamics-based facilitation also helps group decision makers to carefully

identify the causes of the problem in addition to thoroughly defining the problem


(Richardson & Pugh, 1981). By helping group decision makers clearly understand the

things that contribute to the problematic behavior of the system, it is easier for them to

identify the most appropriate areas in which to intervene in the system to correct the

problem (Meadows, 1991).

By helping the group to collectively define the causes of the problem, the system

dynamics facilitator can foster the elicitation of participants’ beliefs about the problem

(Andersen et al., 1997). This helps to reveal areas in which participants’ mental models

are incorrect, incomplete, or conflicting (Richardson & Pugh, 1981). If incorrect or incomplete mental models are addressed, these limitations are less likely to interfere with the ultimate quality of the decision outcome. Likewise, resolving any mental model conflicts can help promote a common understanding of the problem and its causes, and foster greater alignment among participants regarding the assessment of the relative effectiveness of alternative solutions (van den Belt, 2000).

One of the ways in which system dynamics facilitators seek to elicit and align

participants’ understanding of the problem’s causes is through conducting a causal-loop

diagram exercise. A causal-loop diagram is a brainstorming exercise in which the elements of the problem situation are listed on flipcharts. As each element is listed, lines are drawn to illustrate the connections among individual system elements. Next, a plus (+) or minus (-) sign is drawn to indicate whether the connection among elements represents a positive or negative relationship (Sterman, 2000). A positive loop is self-reinforcing, and a negative loop is self-correcting (Sterman, 2000). Understanding these dynamic

relationships among system elements helps prevent decision makers from ignoring


feedback loops within the system when making decisions (Vennix, 1996). Below is a

sample of a causal loop diagram from Stave’s (2002) transportation case study (p. 152).

[Figure 1. Causal Loop Diagram (from Stave, 2002, p. 152). Variables in the diagram include population, perceived attractiveness of Las Vegas, total travel demand, use of mass transit and alternative modes, street and highway capacity, number of lane miles, congestion (volume/capacity), system-wide average speed, volume of personal vehicles, vehicle occupancy, CO per vehicle mile, CO generated, and the CO budget.]
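The polarity bookkeeping behind a causal loop diagram can also be illustrated with a small sketch. The loop below is a hypothetical simplification of a subset of the relationships in Figure 1; multiplying the signs of the links around the loop classifies it as reinforcing (net positive) or balancing, that is, self-correcting (net negative).

# Sketch: representing causal links as signed edges and classifying one loop.
# The variables and signs below are a simplified, hypothetical subset of Figure 1.

links = {
    ("population", "total travel demand"): +1,
    ("total travel demand", "congestion"): +1,
    ("congestion", "perceived attractiveness of Las Vegas"): -1,
    ("perceived attractiveness of Las Vegas", "population"): +1,
}

loop = ["population", "total travel demand", "congestion",
        "perceived attractiveness of Las Vegas", "population"]

polarity = 1
for source, target in zip(loop, loop[1:]):
    polarity *= links[(source, target)]

# A net positive product indicates a reinforcing loop; negative indicates a balancing loop.
print("reinforcing" if polarity > 0 else "balancing")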

Illustrating the problem in causal loops helps participants begin to visualize the

relationships among system elements. The more participants understand the causal-

feedback loops, the better able they will be to improve their understanding of the problem

(Sterman, 2000). It also improves their ability to anticipate the consequences of

alternative solutions (Forrester, 1971). In addition, the discipline involved in defining the

causes of the problematic behavior in the system also keeps participants from prematurely making a decision before fully understanding the causes of the problem


(Stave, 2002). As such, it helps to prevent the selection of suboptimal decisions resulting from strategies such as hypervigilance, as described by Janis and Mann (1977), and satisficing, as described by Simon (1957).

Construction and Validation of Model

The results of the problem and cause definition phases identify the underlying causal structure of the problem, which serves as the basis for the construction of the formal system dynamics computer model. Again, the structure of the system is what is causing the undesirable behavior or problem. If the model is to be used to test alternative solutions to correct this behavior, the model must accurately reflect the structure of the underlying system that is creating or enabling the behavior. As such, the model must be validated prior to testing alternative solutions, to ensure that the model's output creates an accurate representation of the undesirable behavior. This validation process builds trust in the integrity and authenticity of the model's assumptions (Vennix, 1996). It also promotes shared ownership in the model, which helps participants to feel more vested in the model's output (Akkermans & Vennix, 1997). Sterman (2000) points out that model validation does not happen in a single event, but rather occurs gradually as the participants interact with and use the model to test their assumptions.
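One simple behavior-reproduction check of the kind implied here can be sketched as follows. The historical and simulated values are hypothetical placeholders, and in practice validation also involves tests of model structure and assumptions rather than a single numerical comparison.

# Sketch of a behavior-reproduction check: comparing simulated output against a
# historical reference series. All numbers below are hypothetical placeholders.

historical = [0.62, 0.66, 0.71, 0.77, 0.84]   # observed problem variable, by year
simulated  = [0.61, 0.67, 0.72, 0.78, 0.82]   # model output for the same years

# Mean absolute percentage error between the model output and observed behavior.
mape = sum(abs(s - h) / h for s, h in zip(simulated, historical)) / len(historical)
print(f"mean absolute percentage error: {mape:.1%}")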

System dynamics-based facilitation often involves the group in the development

of the model. In group model building, the participants of the decision making activity

are directly involved in constructing the model (Vennix, 1996). The benefit of group

model building is that because participants have actually created the model, they are less

suspicious about its assumptions (Sterman, 2000). In decision making efforts in which

there is not sufficient time to involve participants in a group model building effort, the


system dynamics facilitators develop a simulation model prior to the decision-making

effort for use by the stakeholders. This makes model validation a bit more challenging,

but every bit as essential as with group model building.

Model Use

In system dynamics-based facilitation a computer model is usually used to test the

relative effectiveness of alternative solutions in meeting the objective criteria defined in

the problem definition stage (Richardson & Andersen, 1995). The alternative solutions

are referred to as leverage points, or places in the system in which a specific intervention

could change the structure and behavior throughout the system (Meadows, 1991).

The system dynamics simulation model improves participants’ mental models of

the problem through helping them to understand feedback loops in the problem situation

(Andersen & Richardson, 1994). Simulation also helps give participants an opportunity to

measure the effectiveness of each alternative against the previously defined criteria to test the relative merits of alternative policy interventions (Andersen et al., 1997). It is during this stage that the decision makers gain a better understanding of the consequences and tradeoffs

among alternative solutions (Richardson & Pugh, 1981).

In testing the effectiveness of these interventions, the simulation model can at

times reveal what Meadows (1991) refers to as "backward intuition," or cases in which we expect the system to behave one way when in fact it behaves in the opposite way. Stave (2002) refers to these moments of realizing the unexpected behavior of a system as gaining "insight through surprise" (p. 159). These "ah ha" moments create new insight about the

problem situation (Akkermans & Vennix, 1997).


According to Stave (2002), simulation provides instant feedback that helps

participants “revise and retest their ideas” (p. 144). It provides new information that helps

people to improve their understanding and rethink their previously held paradigms about

how the world works (Meadows, 1991). It also promotes a new level of openness to learning and a willingness to refine one's mental models (Rouwette & Vennix, 2006). For Senge (1990), this new insight promotes what he calls "metanoia," or a willing shift of mind based on new information.

Policy Analysis

As the group begins to analyze the output from the system dynamics simulation model, they are encouraged to evaluate and discuss the meaning of the model's output as they develop policy recommendations. This process is intended to promote a high level of interaction and discussion. However, because the discussion centers on the objective output of the model, it creates a more neutral platform for discussion (Stave, 2002). This helps to prevent defensive routines or face-saving behaviors from interfering with objective decision analysis. It can also help prevent decisional conflict, as described by Janis and Mann (1977), or groupthink, as described by Janis (1972), from derailing the focus of the discussion or disrupting the interpersonal dynamics of the group. Van den Belt (2000) refers to system dynamics as “mediated

modeling” because of its ability to constructively facilitate productive discussion among

decision makers.

The objectivity of the model also helps to prevent biases or selective perception from interfering in the analysis of alternatives (Hogarth, 1987; Fischhoff, 1975). Because the model does most of the difficult work in calculating the technical assessment of each individual or combined set of alternatives, system dynamics modeling also helps to prevent various types of heuristic behavior described in the review of the behavioral decision-making literature (Tversky & Kahneman, 1974).

In the policy analysis stage, the system dynamics facilitator focuses on helping the group to design and evaluate various solution scenarios (Richardson & Pugh, 1981). Sterman (2000) states that in this stage the group focuses on understanding the consequences of implementing the various scenarios by conducting a "what if... analysis" and "sensitivity analyses" to determine the implications of various policy options and to identify the appropriate level of an option to implement to achieve the desired goal (p.

86). The group sees the output from the various model runs, and discusses and evaluates

the resulting data. They can retest alternatives already run to verify the output. They can

test new individual or combined alternatives to evaluate the solutions in new ways.

According to Luna-Reyes and Andersen (2003), in this stage the facilitators “ ... generate

discussion among actors about the meaning of both the results of the policy experiment

and the stories generated by the model” (p. 291).

As the group works through the system dynamics process steps, they share a

common and interactive experience, they share their ideas, they gain insight into the perspectives of other participants, and they learn from the output of the model. The model

output provides them with feedback on the relative effectiveness of alternative solutions.

They can use the model to conduct a sensitivity analysis by changing the degree of a

particular solution, such as calculating the effect of adding 25%, 50%, or 75% more road

capacity in solving traffic congestion. If participants question the output of the model,

they can also change the model’s assumptions and rerun the model, such as changing the


assumption of population growth used in anticipating the traffic congestion trend over time. The transparent nature of the modeling process also helps to document the process (Stave, 2002). This transparency also provides the appropriate checks and balances to prevent manipulation of the data or assumptions used in the model.
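A sensitivity analysis of this kind can be sketched by re-running a model over a range of policy levels. The example below reuses the hypothetical single-stock congestion model sketched earlier in this chapter and is illustrative only; it does not reproduce any model used in this study.

# Sketch of a sensitivity analysis: re-running the hypothetical congestion model
# with 0%, 25%, 50%, and 75% more road capacity and comparing the outcomes.

def run_model(capacity_multiplier, years=23):
    population = 1_000_000
    growth_rate = 0.04
    vehicle_miles_per_person = 20
    road_capacity = 25_000_000 * capacity_multiplier
    congestion = 0.0
    for _ in range(years):
        congestion = population * vehicle_miles_per_person / road_capacity
        population += population * growth_rate
    return congestion  # congestion index in the final simulated year

for added_capacity in (0.00, 0.25, 0.50, 0.75):
    final = run_model(1 + added_capacity)
    print(f"+{added_capacity:.0%} capacity -> final congestion index {final:.2f}")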

The steps in the system dynamics-based facilitation process adhere more closely

to the rational decision making approaches described in the review of classical decision

making theory, than do standard facilitation processes. The standard system dynamics

process steps involve a very rational, structured, and methodical approach to helping

participants to “organize, clarify, and unify knowledge” about the problem (Forrester,

1987).

Analysis of System Dynamics-Based Facilitation Adherence to Ideal Steps

In analyzing specific process steps identified in a review of the system dynamics

literature, I found two interesting characteristics. First, I found that system dynamics

practitioners tend to follow a very consistent set of process steps (Richardson & Pugh, 1981; Roberts, Andersen, Deal, Grant, & Shaffer, 1983; Vennix, 1996; Sterman, 2000; Stave, 2003; Zagonel, 2004). Second, I found that the specific steps system dynamics practitioners take in facilitating group decision making adhere to each of the ideal process steps.

These findings are consistent with the general overview of system dynamics

methodology I provided earlier in this chapter. System dynamics modeling involves a

thorough effort to define the problem and analyze its causes, collect data, and establish criteria for analyzing the relative levels of effectiveness of each alternative


solution before making a decision. Both by design and default, the development of a

system dynamics simulation model requires a rational and thorough analysis of the

problem and potential solutions. The use of the model helps participants to rationally

analyze the relative consequences and tradeoffs among the alternative solutions.

In comparing the degree to which the system dynamics-based facilitation process

steps adhere to the ideal, with the level of adherence of the standard process steps, system

dynamics has a relatively higher level of adherence to the ideal. Table 3 compares the

level of adherence of standard and system dynamics-based facilitation methods to ideal

group decision making facilitation process steps.

Table 3. Comparative Analysis of Level of Adherence

Ideal Group Decision Making Process Steps        Standard Group Decision Making    System Dynamics Group
                                                 Facilitation Process Steps        Decision Making Process Steps
1. Identify, analyze problem and goals                                             X
2. Define problem                                X                                 X
3. Identify problem causes                                                         X
4. Generate alternative solutions                X                                 X
5. Collect data                                                                    X
6. Establish criteria for effective solutions                                      X
7. Analyze alternative solutions                                                   X
8. Identify consequences                                                           X
9. Evaluation, discussion                                                          X
10. Make decision                                X                                 X


This comparative analysis of the level of adherence to the ideal group decision making steps sets the stage for the next step in this research study: comparing standard and system dynamics-based facilitation in an experimental setting to determine whether there was also a difference in the effectiveness of the two approaches in helping decision makers to identify more effective solutions to a given problem.

Comparison of this Study to Related Research

One of the unique characteristics of my study is that I have chosen to study the effectiveness of system dynamics-based facilitation with public stakeholders instead of subject-matter experts. With the exception of two research studies, I have not been able to find any other studies in the system dynamics literature that focus on studying the effectiveness of the system dynamics approach to facilitation with lay public stakeholders, as opposed to subject-matter experts who may do work or research in a related field. Conversely, there are a number of studies that have been conducted to evaluate the effectiveness of system dynamics modeling with experts within an industry or organization.

The following is a sampling of research that focuses on studying the use of system dynamics in an organizational setting. Ford (1996) studied the importance of the use of system dynamics in aiding planning efforts in the electric power industry. Vennix (1996) focused on evaluating how system dynamics group model building techniques were used to improve the strategic thinking abilities of Dutch merchant fleet managers. Cavaleri and Sterman (1997) conducted an analysis of the use of system dynamics and systems thinking in organizations. Zagonel (2004) worked with the State of New York Department of Social Services to determine whether system dynamics techniques could improve welfare managers' thinking about how best to reform the system.

Of the research reviewed, only one researcher analyzed the use of system dynamics as a facilitation tool in a general public, rather than organizational, setting.

Since system dynamics computer simulation modeling is a complex way of solving

problems, some may assume that the general public would be unable to successfully use

such a sophisticated approach. In addition, since the general public is composed of lay

stakeholders who may not be experts in the issue of focus, some may think that this lack

of familiarity with the issue would make it even more difficult for such stakeholders to

use the system dynamics computer simulation model effectively. However, Stave (2002)

was able to demonstrate that system dynamics could be successfully employed in a public

stakeholder setting.

Stave (2002) conducted a case study assessment to determine the potential

effectiveness of system dynamics in improving public involvement in environmental

decisions. Stave found that system dynamics would be a useful tool for managing general

public group decision making because it focuses on striving to understand the problem,

problem causes within a system structure, policy levers, feedback tools for learning and

policy design, and process documentation. The results of this case study showed that the

model building process helped the group create a common definition of the problem,

identify criteria, organize and link information, monitor the process, and set boundaries

for the types of policy levers that were reasonable to consider. The process also served as


a valuable tool for documenting the group’s discussions and to identify and evaluate

potential policy recommendations.

Dwyer (2007) conducted a case study analysis of a public group decision making

effort that was facilitated with standard methods and an organizational group decision

making effort that used system dynamics tools to facilitate the effort. One of the most

striking findings of his study was that the standard process group spent almost no time discussing the causes of the problem, yet the system dynamics process devoted a significant amount of time to discussing causes. Dwyer concluded that the traditional

group focused on anecdotal evidence while the system dynamics group spent more time

gathering and evaluating information in making their decisions.

Another distinguishing characteristic of my study is that I conducted an

experiment rather than a case study. The vast majority of research projects studied employed a case-study methodology. In a meta-analysis of 107 group model building reports, Rouwette, Vennix, and Mullekom (2002) found that 88 of the 107 reports analyzed followed the case study methodology, whereas only 19 reports involved a quantitative study, of which only five involved a pre-intervention and post-intervention

analysis.

Case studies are a very common method in the system dynamics literature (Rouwette and Vennix, 2006; Akkermans and Vennix, 1997; Cavaleri and Sterman, 1997), as well as in other fields (Yin, 2003a; Yin, 2003b; Gillham, 2000). While such analyses yield extremely fruitful results, I chose to use an experimental approach to test my hypotheses. By conducting an experiment, I am able to take a prospective, rather than retrospective, view of the research question. According to Bordens and Abbott (1991),


an experimental design enables the researcher to have more control over the variables

they wish to test, and the methods by which the variables will be measured. While

experimental design often involves daunting logistical challenges, I was fortunate to have

an opportunity to conduct a field experiment within a real-world stakeholder involvement

process, which helped to reduce the logistical difficulties of designing and executing my

experiment.


CHAPTER 3

METHOD

Experimental Procedures

I conducted my experiment on February 2, 2008, during a city-wide

conference held in Los Angeles (LA) to solicit input from LA stakeholders. This

conference was part of LA's city-wide Solid Waste Integrated Resource Planning (SWIRP) process, which was designed to identify ways to reduce the amount of solid waste sent to the city's local landfills annually. The experiment took place during a 90-minute morning work session in which attendees were asked to participate in small-group discussions to review and prioritize a set of eight alternative waste management policy options and provide LA officials with feedback on where the City should direct its efforts in developing its solid waste reduction plans.

This experiment followed a quantitative design, using a between-subjects,

single-factor, random assignment, two-group experimental design (Bordens &

Abbott, 1991). Approximately 200 individuals took part in the experiment and were

assigned to either a control group or experimental group. The control group was

facilitated with standard methods, and the experimental group was facilitated with

system dynamics methods. Pre- and post-intervention questionnaires were

administered to measure the differences between group participants’ responses to


questions designed to identify the degree to which the facilitation method contributed

to promoting greater effectiveness, focus, and procedural satisfaction.

The goal of this experiment was to test the assumption that a higher degree of

adherence to a more classical, rational, ideal group decision-making facilitation

approach would yield better, more effective decision outcomes. The objective of the

experiment was to test the following three research hypotheses;

• Hypothesis 1: Participants in group decision making facilitation processes that

adhere more closely to the ideal group decision making facilitation process steps

will identify more effective solutions to resolve the stated problem, than will

participants in groups using standard facilitation methods.

• Hypothesis 2: Participants in group decision making facilitation processes that

adhere more closely to the ideal group decision making facilitation process steps

will stay more focused on relevant information related to the stated problem, than

will participants in groups using standard facilitation methods.

• Hypothesis 3: Participants in group decision making facilitation processes that

adhere more closely to the ideal group decision making facilitation process steps

will be more satisfied with the interpersonal dynamics, process, and outcome of

the group decision making experience, than will participants in groups using

standard facilitation methods.

I used pre- and post-intervention survey instruments to gather comparative

data both before and after the morning work session during the conference. A unique

reference identification number was used to match each participant's pre- and post-intervention responses. The identification number was composed of a table number, a control or experimental group identification, and a self-selected four-digit number to ensure fidelity when comparing pre- and post-intervention responses.

All facilitators who were going to work in either the control or experimental

group were required to participate in a facilitator training session. In this training

session, the facilitators were made aware of the work session task. They were given special instructions regarding the steps they needed to take to distribute and collect the experiment documents.

I developed a strategy for randomly assigning conference attendees to the control and experimental groups prior to the conference. This strategy was based on a randomization technique that developed random lists of non-unique sets, with numbers per set ranging from 1 to 2 (Urbaniak & Plous, 2008). This list was used to guide the placement of green and yellow dots on the back of the attendee name tags that were to be used on the day of the conference. The morning of the conference, I asked the representative from the City's planning team to flip a coin to determine which color would be assigned to which group. The participants with a yellow dot on the back of their name tag were assigned to the control group, and those with a green dot were assigned to the experimental group. Of the 197 attendees who volunteered to participate in the experiment, 101 were in the control group and 96 took part in the experimental group.
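In principle, this randomization amounts to drawing a 1 or a 2 for each name tag, as the following sketch illustrates. The actual lists were generated with the technique cited above (Urbaniak & Plous, 2008); the attendee count and seed shown here are placeholders.

import random

# Sketch of the two-group random assignment in principle. The attendee count
# and seed are hypothetical placeholders; the study used the cited randomizer.

random.seed(2008)                  # fixed seed only so the sketch is repeatable
n_attendees = 200
assignments = [random.choice((1, 2)) for _ in range(n_attendees)]  # one draw per name tag

# A coin flip on the day of the conference mapped dot colors (and thus groups)
# to the numbers, e.g., 1 -> yellow dot -> control, 2 -> green dot -> experimental.
control = assignments.count(1)
experimental = assignments.count(2)
print(f"control: {control}, experimental: {experimental}")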

Once the participants had assembled into the control and experimental groups,

a member of our research team and I provided the two groups with an overview of my

experiment and invited attendees to volunteer to participate in the experiment. I

explained that participation required that individuals complete a consent form and a


survey before and after the work session. After the overview presentation, the facilitators administered and collected the consent forms from those who chose to participate. Next, the facilitators administered and collected the pre-intervention survey. After the surveys were collected, the facilitators helped the groups begin their work session task. The facilitators administered and collected the post-intervention surveys at the end of the work session. They then submitted the completed consent forms and both surveys from their group to a representative of the research team.

Experimental Controls

In designing this experiment I took steps to promote internal validity to ensure

that the experiment tested what it was intended to test. I implemented measures to

reduce error variance by holding extraneous variables constant. For instance, both

groups were given the same general overview presentation from a representative from

the City of Los Angeles (LA) planning team prior to being assigned to their work

session groups. Both groups were given the same work session task and were asked to complete the same survey forms. The facilitators of both groups were given

consistent instructions and training. Other logistics, such as room setup, refreshments, and the time in which to complete the task, were held constant. The use of a pre- and post-intervention survey design also helped to ensure the internal validity of the results. The instruments enabled me to measure participants' responses to the same questions before and after the intervention. If there was no significant difference in the pre-intervention responses, but there was in the post-intervention responses, I could then attribute the difference after the intervention to the intervention and not some other cause.
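The matching and comparison logic can be sketched as follows, using hypothetical response values. The identification numbers shown are placeholders in the format described above (table number, group code, self-selected digits); the sketch simply shows how matched pre- and post-intervention responses can be compared across the two groups.

from statistics import mean

# Sketch of the pre/post comparison logic: responses are matched by the unique
# reference ID and the mean change from pre to post is compared across groups.
# All identifiers and response values below are hypothetical placeholders.

pre  = {"05-C-1234": 3, "05-C-8812": 4, "12-E-0042": 3, "12-E-9001": 4}
post = {"05-C-1234": 3, "05-C-8812": 4, "12-E-0042": 5, "12-E-9001": 5}

def mean_change(group_code):
    changes = [post[pid] - pre[pid] for pid in pre if f"-{group_code}-" in pid]
    return mean(changes)

print("control mean change:     ", mean_change("C"))
print("experimental mean change:", mean_change("E"))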

By conducting this experiment during a real-world public participation

meeting, instead of a simulated event using students posing as stakeholders, I was also able to promote the external validity of the experiment. The field setting promoted what Aronson and Carlsmith (1968) call "mundane realism." Mundane realism exists when an experiment closely mirrors the real world. This helped to ensure that the participants were focused on the meeting task rather than the experimental dynamics. Because this experiment measured the responses of real stakeholders who took part in a real public participation effort about a real environmental problem, it is easier to generalize the results to other such efforts. This field experiment also enabled me to gather data from a much larger sample size than I would likely have been able to gather in a simulated experimental setting.

In addition to promoting internal and external validity, I also took steps to

minimize bias. I was able to avoid any problems associated with participant selection bias because the participants of the experiment were recruited from a pool of attendees responding to an invitation sent to all LA residents, and were randomly assigned to the control or experimental group prior to being invited to volunteer for the experiment. In addition, I deliberately chose not to directly participate in the experiment as a facilitator for either group, to prevent experimenter bias, in which the experimenter subconsciously influences the participants to act in a certain way. I observed both groups while they were working, I entered data into a database, and I coded responses to questions as required. I took special steps to prevent observer bias


when coding the responses. First, I hid the participant identifier number so I could not

tell if the response was from a participant of the control or experimental group when

coding. I also randomly sorted the responses so that I could not subconsciously guess

which group the respondent was from based on the grouping of responses. These

steps prevented me from subconsciously projecting my assumptions about the groups

when coding their responses.

Experimental Setting

In May 2007, the City of LA initiated its city-wide SWIRP process to identify ways in which to reduce the amount of solid waste sent to its local landfills annually. The City of LA currently diverts 62% of its solid waste from the landfills annually. The City's goal with this "Zero Waste" initiative was to increase the solid waste diversion rate to 70% by 2015 and to 90% by 2025, with the ultimate goal of sending "zero waste" to the landfill by 2030.

In seeking to develop a 20-year master plan, the City of LA’s Department of

Public Works, Bureau of Sanitation, initiated a three-phase planning process that began with a year-long stakeholder input and participation process. The objective of the first phase was to involve stakeholders in the development of a set of principles to guide the development of the master planning and implementation process in the

years to come.



Figure 2. Solid Waste Integrated Resource Planning

(City of LA, 2008a)

This first phase of the SWIRP process began in May 2007 and concluded in

May 2008. During this first phase, six public workshops were held in each o f the six

waste collection regions in the city, for a total of 36 workshops. In addition to the workshops, the City conducted three city-wide conferences to give stakeholders from the six regions the opportunity to interact with one another. I conducted my experiment at the second city-wide conference.

Figure 2 shows the graphic from the City's presentation used to illustrate the SWIRP process and the organization of the various workshops (WS) and city-


wide conferences. The arrows represent the six waste collection or “wasteshed”

regions.

Conference Schedule and Agenda

I conducted my experiment during the City’s second city-wide conference,

which was designed to solicit input from LA stakeholders regarding alternative

“leverage points” or policy options to help LA waste managers in prioritizing their

efforts during the SWIRP process. The meeting began with presentations to all

participants, and then transitioned into two work sessions in which participants were

asked to gather in small groups to enable discussion about the policy options under

consideration. My experiment took place during the 90-minute Work Session #1.

General City of LA “Zero Waste” SWIRP City-Wide Conference Schedule

7:30 - 8:30 AM: Zero Waste Film Festival, Stakeholder Registration and Continental Breakfast
• Provide attendees with a name tag
• Provide each attendee with an agenda of the day's activities and schedule
• Have short clips of zero waste videos from other cities or entities playing in the room while attendees have breakfast
8:30 - 10:00 AM: Welcome by City Officials
• Welcome remarks and presentations from City of Los Angeles officials
10:00 - 10:15 AM: Welcome by City and HDR (the City's SWIRP consulting firm)
• Introduction of day and activities by City staff
• Explain break-up groups and simulation objectives by representative of HDR consulting firm
10:30 - 12:00 PM: Work Session #1
• Purpose: To give participants a chance to discuss and provide feedback on leverage points under consideration
12:00 - 1:30 PM: Lunch with Panel Discussion
1:30 - 3:00 PM: Work Session #2
• Purpose: Continue discussion from work session #1
3:00 - 3:30 PM: Wrap up and Conclusion


Small-Group Work Session Assignment

A representative from the City’s waste management agency and a

representative o f HDR (the City’s SWIRP consulting firm) provided overview

presentations about the day’s activities and the objective of the work sessions prior to

the beginning of the first small-group work session. The overview presentation

reminded participants of the purpose and need for the Zero Waste initiative and

provided a summary of the public participation efforts and feedback to date.

During this presentation, the HDR representative also provided an overview

of the recycling loop to help ensure that participants understood the structure of the recycling system. She also introduced the core policy areas in which leverage could be applied to change the amount of solid waste sent to the landfills. For instance, a mandated collection service or disposal fee surcharge could be implemented to encourage people to reuse and recycle more. Figure 3 is similar to the graphic used by the HDR representative to illustrate the "extraction, processing, manufacturing, consumption, collection, disposal" sectors of the recycling loop (Stave, 2008). This

illustration also introduced examples of alternative policy leverage points, or areas in

the system in which a policy change could be made to reduce the amount o f waste

that ultimately ends up in the landfill. Figure 4 is a copy of the handout that was given

to all participants to provide additional information on these policy leverage points.

In addition to the overview presentation, all attendees o f the conference were

given a copy of the following handout to provide them with instructions for the work

session, and additional information on the alternative leverage points under

consideration. While the assignment for the work session is well articulated in the


first paragraph of this handout, the general task was for the groups to evaluate,

discuss and prioritize the leverage points listed on this handout. Both groups received

this same handout and were given the same assignment. The only planned difference

between groups was the method by which they were facilitated.


Figure 3. Recycling Loop

In reviewing this handout, it is important to note that each o f the eight

questions listed on this handout represents one of the leverage points or policy options

for the City of LA to employ to help promote zero waste. For instance, the first

question on the handout states: “What if we could increase the useful lifetime of

consumer products?” In this case, increasing the useful lifetime of consumer products

is the leverage point under consideration. Both groups were instructed to use this


handout as the basis for their discussion, evaluation and prioritization of these eight

leverage points, each a basis for a policy LA could use to promote zero waste. The

following is a copy o f the handout all participants of both groups were given at the

beginning of the work session:

ZERO WASTE PLAN
Citywide Conference 2: Policies, Programs and Facilities
Solid Waste Integrated Resources Plan

Listed below are the leverage points in the recycling loop and example strategies identified by LA stakeholders for reaching zero waste. The idea at these leverage points is to say: if we could change something by a certain amount, what impact would it have? Today, we will discuss these leverage points, describe their individual strengths and weaknesses and the opportunities and constraints that come with each leverage point. We will then rate their potential impact (high, medium, low) with respect to waste reduction, environmental benefit, cost effectiveness, and ease of implementation. Finally, we will come up with a recommendation for how aggressively the City should pursue each leverage point.

UPSTREAM
Production Sector
1. What if we could increase the average useful lifetime of consumer products?
Examples:
■ Increase product durability
■ Educate consumers on the consequences of excess consumption
■ Encourage repair and reuse

2. What if we could reduce the amount of waste in products and packaging?
Examples:
■ Implement product and packaging bans or take backs focused on waste reduction
■ Require manufacturers to reduce the weight of packaging

3. What if we could increase the recycled content of products and packaging?
Examples:
■ Promote "buy recycled" campaign
■ Require manufacturers to increase the use of recycled content in products and packaging

4. What if we could make products and packaging more recyclable?
Examples:
■ Implement product and packaging bans or take backs focused on recycled content
■ Require manufacturers to change the content of their products and packaging to make them more recyclable

DOWNSTREAM
Consumption Sector
5. What if we could change the average amount of material consumed by each consumer?
Examples:
■ Massive and sustained public outreach and education campaign focused on waste prevention (also called "source reduction")

Collection Sector
6. What if we could increase consumer diversion rates?
Examples:
■ Massive and sustained public outreach and education campaign focused on recycling
■ Mandatory participation in recycling and organics programs (single-family, multi-family, commercial) - no trash in the recycling and no recycling in the trash
■ Roll-out recycling and organics containers to all multi-family buildings
■ Roll-out recycling and organics containers to all commercial generators
■ Roll-out recycling and organics containers to all schools in Los Angeles Unified School District

Processing Sector
7. What if we could increase the processing capacity for diverted materials?
Examples:
■ Increase the presence of neighborhood scale facilities such as reuse centers and fix-it shops through technical assistance, grants, and incentives
■ Increase the processing capacity of existing recycling and composting facilities through facility expansion or by adding more shifts
■ MRF first (process residual waste prior to disposal to remove recyclables and compostables)
■ Site new mulching and composting facilities
■ Site new SAFE centers for collection of household hazardous waste and electronics
■ Site new resource recovery parks for self-hauled materials

RESIDUAL WASTE MANAGEMENT
Disposal Sector
8. What if we could increase the capacity for alternative technologies?
Examples:
■ Biological treatment of residual waste through anaerobic digestion
■ Thermal treatment of residual waste through waste-to-energy
■ Conversion of residual waste to biofuels

(City of LA, 2008b)

Leverage Point Evaluation Criteria

As the participants discussed, evaluated and prioritized each of these leverage

points, they were directed to compare them in terms of the following criteria: the

amount o f waste sent to the landfill, the relative costs, the relative greenhouse gas

emissions, and the relative level of effort to implement. For instance, a leverage point could rank high in reducing the amount of waste sent to the landfill and produce low greenhouse gas emissions, but it could be very costly and hard to implement. Participants

of both groups were asked to evaluate each of the eight leverage points against these

criteria and provide feedback to the City of LA on which of the eight leverage points

it should devote its time to pursuing.


Group Facilitation Intervention

Both the control and experimental groups were given the same list of leverage

points and the same four criteria upon which to evaluate the leverage points. The

difference between the two groups was the method by which they were facilitated.

The control group was facilitated with standard methods and the experimental group

was facilitated with system dynamics-based methods.

The purpose of this experiment was to compare the relative differences in the

responses o f groups facilitated with standard and system dynamics-based facilitation

methods. The goal was to measure whether groups facilitated with a method that

adhered more closely with the ideal process steps, system dynamics-based

facilitation, would yield a higher level o f effectiveness, focus, and procedural

satisfaction than standard facilitation methods.

The control group was facilitated with standard facilitation methods. The

standard methods used to facilitate the control group were consistent with those

outlined in Chapter 1. The facilitators were instructed to focus on generating

discussion about the issue through soliciting input on the participants’ opinions o f the

strengths, weaknesses, opportunities, and constraints about the various leverage

points under consideration. They also encouraged participants to discuss how to

prioritize the leverage points and decide which ones the City should focus its efforts

on. The primary tools used in the facilitation of the control group's small groups were a flip chart on an easel and colored pens. These tools were used to summarize and

record the groups’ feedback and help them to focus on developing a set of

recommendations for the City.


The experimental group was facilitated with system dynamics-based

facilitation methods. The facilitators of the experimental group used a system

dynamics simulation model to help participants to better understand the nature of the

problem, and the relative effectiveness of the alternative leverage points in helping

the City of LA to achieve zero waste.

Because there was not sufficient time during this conference to involve

participants in the development of the model, the model that was used in this

experiment was designed in advance by an expert system-dynamics modeler who

worked in collaboration with representatives from LA and HDR to develop a system

dynamics model which accurately represented the solid waste system in the LA

region. The model is described in Stave (2008).

Figure 4 illustrates the components and relationships of the model developed

for use at this conference. It is consistent with the recycling-loop graphic and the

work session handout in that it identifies the same primary “sectors” and illustrates

how these sectors are interconnected. This illustration served as the conceptual basis

for the development of the formal system dynamics computer-simulation model used

for this conference. The computer model was used to simulate what would happen if

the City implemented any of the leverage points under consideration. This helped the

participants better understand the differences in the relative levels of effectiveness

among the alternative leverage points. It also kept participants focused on the fact that

the solid waste management system is a system of interconnected parts rather than

isolated elements.


Figure 4. SWIRP Model

(Stave, 2008)
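To make this mechanism concrete, the following is a minimal, illustrative stock-and-flow sketch written in Python. It is not the Stave (2008) SWIRP model: the structure is reduced to a single stock, and the function name, parameters, and numbers are hypothetical placeholders. It simply shows how a simulation can compare one leverage point (here, a gradual increase in the diversion rate) against business as usual in terms of cumulative waste sent to the landfill.

    # Minimal illustrative sketch (not the SWIRP model); all values are placeholders.
    def simulate_landfilled_waste(years=23, annual_generation=10.0,
                                  diversion_rate=0.62, diversion_growth=0.0):
        """Accumulate the waste sent to landfill over a planning horizon.

        diversion_growth stands in for a leverage point that raises the
        diversion rate by a fixed amount each year (capped at 100%).
        """
        landfilled = 0.0                 # stock: cumulative waste in the landfill
        rate = diversion_rate
        for _ in range(years):
            landfilled += annual_generation * (1.0 - rate)   # inflow to the stock
            rate = min(1.0, rate + diversion_growth)         # the policy lever
        return landfilled

    # Compare business as usual with a hypothetical leverage point that adds
    # 1.5 percentage points of diversion per year.
    baseline = simulate_landfilled_waste()
    with_lever = simulate_landfilled_waste(diversion_growth=0.015)
    print(f"Baseline: {baseline:.1f}; with leverage point: {with_lever:.1f}")

Running a sketch like this shows a smaller cumulative total under the leverage point, which is the kind of side-by-side comparison the simulation model made available to the experimental group for each of the eight leverage points.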

Measurement Instrument

I designed the pre- and post-intervention survey as a means of collecting data

to compare the relative differences between two groups’ responses. The pre­

intervention survey instrument established a baseline for comparing respondents’

attitudes before the intervention (Dillman, 1978). This determines the degree to which

the intervention affects the responses (Moser & Kalton, 1972).

The format of the questions in the pre-intervention survey included restricted

questions, that is, closed-ended questions with ordered alternatives (Dillman, 1978). It also


included partially open ended questions, and Likert-scale questions (Bordens &

Abbott, 1991). Six of the questions that were asked in the pre-intervention survey

instrument were also asked in the post-intervention survey instrument.

The post-intervention survey instrument included a variety o f different types

of questions. In addition to the six pre-intervention questions, it also included Likert-scale questions measuring the extent to which participants agreed (from strongly disagree to strongly agree) with a series

of 20 statements. These questions were modeled after various process assessment

survey instruments and related research developed by Wilson (2005), Gottlieb (2003),

and Brilhart (1968).

Demographic and Descriptive Questions

The pre-intervention survey instrument included 12 demographic and

descriptive questions designed to gain a better understanding o f the characteristics of

the participants. The primary goal o f these questions was to gather data to determine

whether there was a significant difference in the composition o f participants of the

experimental and control groups in terms of their past experience with the SWIRP

process, their general recycling behavior, and other questions related to general

demographics.

The general demographic questions included in this survey instrument (e.g.,

sex, age, household income) are very common in survey instruments; however, the

wording o f these questions was mostly modeled after Dillman (2000). The recycling

behavioral questions were modeled after similar questions geared towards measuring

behavior developed by Nardi (2003), and the interest in participation questions were

modeled after Brilhart’s (1968) “work in group process assessment.” Dillman and


Nardi are experts in the field of survey research and Brilhart is an expert in the

research o f group performance.

The first two questions I included were intended to collect some descriptive

information. The first question asked participants to identify how many SWIRP

meetings they had attended in the past. I assumed that those who had participated in

past SWIRP meetings would be more knowledgeable about the subject and process

than those who had not attended past meetings. This question identified if there was

an even distribution between the two groups of participants who had and had not

attended past SWIRP meetings.

Next I asked participants to indicate their recycling behavior by selecting one

of these statements, “no, not at all; a little; some but not everything I can recycle;

most o f what I can recycle; everything I can recycle.” This also helped me measure if

there was a significant difference in the two groups' recycling behavior. If

one group had been made up o f those who did not recycle and the other group was

composed of those who recycled everything they could, this difference between the

two groups could have skewed the results o f the other questions. Therefore, it was

important to establish whether there was a significant difference in recycling behavior between the participants in the two groups.

The other questions collected demographic data for both groups. I gathered

data on how long participants had lived in the area and their zip code (to identify the regions in which they resided), to ensure there were no significant differences between the groups' knowledge of the area and that there was an even mix of regional representation between groups. I also asked questions to identify sex, education


level, age, dwelling type, whether they owned or rented, the number of people living

in their household, and their annual income level. The validity of the results of the

other questions would have been called into question if there had been a significant

difference between groups in any of these areas. The structure o f these questions

was modeled after samples provided by Dillman (2000).

Table 4 lists the demographic and descriptive questions asked in the

pre-intervention survey.

Table 4. Demographic and Descriptive Survey Questions

Questions:

How many SWIRP meetings have you attended before this one?

Do you recycle?

How many years have you lived in Los Angeles?

Current Zip Code? (coded by zip codes within the six regional collection "wastesheds")

Sex?

Highest level o f education?

Age?

What kind o f housing do you live in?

Do you own or rent?

How many people in your household?

Annual household income?


Pre- and Post-Intervention Survey Questions Design

The following is a list o f questions posed to address each o f the three

hypotheses o f this study. The questions that were asked in the pre-intervention and

post-intervention survey instruments are indicated as such in the lists below and the

rest o f the questions were asked only in the post-intervention instrument. The

research references upon which the individual questions are modeled are listed for

each individual question. I selected Huz (1999) and Rouwette (2003) from my review

of the system dynamics literature for relevant qualitative survey instruments to serve

as the central sources after which I modeled many of the questions in my survey

instrument.

I modeled many of my survey questions after the instruments used in research

conducted by Huz (1999) and Rouwette (2003). Both o f these studies involved a

between-groups experimental design and administered surveys to measure the level of

effectiveness o f system dynamics modeling. While each had a different research

focus, both research projects sought to measure participants’ responses to questions

regarding their experience. The focus o f these questions was very close to what I was

seeking to measure in my study. While I was tempted to replicate a blending of the

specific questions asked in these two studies, the focus of my study was different

enough that I chose not to replicate these prior survey instruments exactly. In the

listing o f my survey questions below I referenced these authors next to the questions

that were influenced by their work. I also was able to get input from Rouwette as I

was developing my survey instrument. His guidance helped me to refine the focus of

my questions.


The other primary research areas I drew upon in developing my survey

instruments were the areas of group process and group performance research. The

primary sources from these areas I referenced in crafting my survey questions were

Brilhart (1968), Gottlieb (2003), Wilson (2005), Rees (2005), and Zakay (1984).

Brilhart (1968) is one of the leading researchers on group performance and Gottlieb

(2003) has also done extensive research in the area. Their research focuses on

understanding how group process affects group performance. Wilson (2005) and Rees

(2005) focus on understanding how facilitation affects group process and

performance. Zakay’s (1984) research focuses on studying group performance in a

business setting. I modeled many o f my procedural satisfaction questions after the

work of these researchers. Some of the demographic and descriptive questions, and

some of the questions designed to measure the participants' self-reported knowledge

and ability about the issue were also modeled after these researchers’ work. In the

listing of the questions I have identified which of my questions were influenced by

these researchers.

Questions testing the first hypothesis were intended to measure whether there

was a difference in the degree to which the facilitation method helped participants to

process information and anticipate consequences of alternative solutions. The goal

was to identify which group was able to identify more solutions that have a relatively

greater potential o f resolving the problem o f focus upon implementation.

Specifically, I tested whether or not the group facilitated with system dynamics

modeling was better able to identify the leverage points with a higher level of


potential in effecting positive change in helping LA achieve zero waste, than the

group facilitated with standard methods.

Other questions related to this first hypothesis measured which group had

higher level of confidence in their ability to understand and select the solutions with

the highest level of potential effectiveness in solving the problem of focus and their

confidence in their overall knowledge of the issue. The intent with these questions

was to measure if there was a correlation between actual and perceived ability to

select the most effective solutions to the problem at hand. The coding methodology

for this question is described in detail in the next chapter.

The question asking participants to identify “ ... the best things for the City to

focus on in order to move towards Zero Waste;” and the question asking participants

how much they " ... know about the solid waste challenges in LA,” were asked both

in the pre- and post-intervention surveys. I asked these questions before the work

session to establish a baseline understanding of whether or not the groups

demonstrated a significant difference in their responses to these two questions. In

both cases, there was not a significant difference in the pre-intervention responses.

Table 5 lists the first hypothesis and the specific questions that were asked in

an effort to measure the differences between groups with respect to this hypothesis.


Table 5. Hypothesis 1 and Related Survey Questions

Hypothesis 1 : Participants in group decision making facilitation processes that adhere more closely to the ideal group decision making facilitation process steps will identify more effective solutions to resolve the stated problem, than will participants in groups using standard facilitation methods.

Goal o f Comments & Questions

Comments & Questions Posed Source after which comments & questions were modeled

Goal: Identify which group actually identified solutions that were objectively more effective in helping achieve zero waste.

What do you think is the best thing for the City to focus on in order to move towards Zero Waste? (Pre­intervention, as coded for systemic value)

(Huz, 1999; Rouwette, 2003; Brilhart, 1968; Zakay, 1984)

After this morning's workshop, what do you think would be the best things for the City to focus on in order to move towards Zero Waste in LA? (Post-intervention, as coded for systemic value)

(Huz, 1999; Rouwette, 2003; Brilhart, 1968)

Goal: Identify which group had higher level o f confidence in their ability to select the best solutions.

We are helping the City o f Los Angeles discover the best options for achieving Zero Waste.

(Rouwette, 2003; Wilson, 2005; Zakay, 1984; Brilhart, 1968)

I feel confident that my group's suggestions represent the best approach to Zero Waste planning.

(Huz, 1999; Rouwette, 2003; Brilhart, 1968; Zakay, 1984)

How much do you know about the solid waste challenges in LA? (Pre-intervention)

(Wilson, 2005)

After this morning's workshop, how much do you know about the solid waste issue in LA? (Post-intervention)

(Wilson, 2005)


Questions related to the second hypothesis measured the degree to which

participants focused on relevant information. The logic behind this hypothesis was

that the facilitation process that helps its participants stay more focused on relevant

information will be better able to help its participants to improve their understanding

of the problem and solutions. This improved focus and understanding should then

help participants to make more fully informed and better decisions in

selecting the best solution to a problem.

The first goal in designing questions to address this hypothesis was to identify

a way to measure the degree to which participants are focused on the relevant

information provided to the group. To measure focus, responses to the question about "the best things for the City to focus on in order to move towards Zero Waste" were coded to identify the degree to which the participants of both groups specifically referenced the materials presented.

Because the same materials were presented to both groups, and because these

materials listed the relevant information to help participants understand the nature of

the eight leverage points, this question was designed to help measure which group

was more focused on these materials.

The second goal o f questions testing this second hypothesis was to identify

which group was more influenced by what they learned during the process. One

question measured the degree to which participants of both groups were aware that

they had learned something new, and another question was designed to test if they

had consciously changed their mind about the subject based on what they had learned

during the experience. These two questions measured whether there was a difference


between the two facilitation methods in helping participants improve their understanding of the issue.

Table 6 lists the second hypothesis and the specific questions that were asked

in an effort to measure the differences between groups with respect to this hypothesis.

Table 6. Hypothesis 2 and Related Survey Questions

Hypothesis 2: Participants in group decision making facilitation processes that adhere more closely to the ideal group decision making facilitation process steps will stay more focused on relevant information related to the stated problem, than will participants in groups using standard facilitation methods.

Goal o f Comments & Questions

Comments & Questions Posed Source after which comments & questions were modeled

Goal: Identify which group was more focused on relevant information.

What do you think is the best thing for the City to focus on in order to move towards Zero Waste? (Coded for degree of influence of materials presented)

(Gottlieb, 2005; Brilhart, 1968)

Goal: Identify which group was more influenced by what they learned during the process.

I learned something new about Zero Waste management.

(Rouwette, 2003; Huz, 1999; Wilson, 2005)

I changed my ideas about Zero Waste management during this workshop.

(Rouwette, 2003; Huz, 1999; Wilson, 2005)

Questions testing the third and final hypothesis in this study sought to identify

which facilitation method was better at garnering a higher level o f procedural

satisfaction among its members. Procedural satisfaction is a way of referring to a

general level o f satisfaction that participants of a group process feel about the overall


interpersonal dynamics, the process structure and the ensuing outcomes of a group

effort (Creighton, 1980).

My first goal in measuring the relative levels of procedural satisfaction

between groups was to identify how participants felt about the interpersonal dynamics

of their group experience. Participants were asked to respond to statements intended

to identify how they felt about: the level of inclusion; the degree to which they felt

they could share and explain their ideas; the degree to which they felt respected by

other participants and that all participants had an equal opportunity to participate; the

degree to which the group interacted and dealt with disagreement; the degree to

which participants agreed on their recommendations, and the likelihood that they

would attend another SWIRP meeting in the future. Each of these statements sought

to measure how much the participants felt the process sincerely included them and

valued their input.

The second goal in measuring procedural satisfaction among participants was

to measure the degree to which participants felt satisfied with the structure and level

of rigor of the process. If people give up a Saturday to participate in an event like

this, they want their time to be spent productively. The statements included in the

post-intervention survey to measure process structure and rigor sought to identify

how much participants felt that the group worked hard, and worked well together.

They also were intended to measure participants’ feelings about how well the

discussion was structured and if the tools used to facilitate the discussion were

helpful.


The third goal in measuring procedural satisfaction was to identify whether there was a difference between the two groups in the level of confidence they felt for their final recommendations. In addition to directly measuring the participants' confidence that their input will help, and their support for the group's recommendations, I also sought to measure if they were enthusiastic about the goal of reaching zero waste and if they felt that LA valued their input. I also asked questions to determine how possible they felt it would be to achieve zero waste. This series of statements and questions was intended to measure the degree to which participants felt proud of their accomplishments, and if they felt enthusiastic and optimistic about

the City’s goal of achieving zero waste.

In all three of these areas, the idea was that the higher a group’s response

would be with regard to these issues, the greater their overall level of procedural satisfaction would be. My hypothesis was that the group facilitated with the system dynamics methods would have a higher level of procedural satisfaction than the group facilitated with standard means.

Table 7 lists the third hypothesis and the specific questions that were asked in

an effort to measure the differences between groups with respect to this hypothesis.


Table 7. Hypothesis 3 and Related Research Questions

Hypothesis 3: Participants in group decision making facilitation processes that adhere more closely to the ideal group decision making facilitation process steps will be more satisfied with the interpersonal dynamics, process, and outcome of the group decision making experience, than will participants in groups using standard facilitation methods.

Goal of Comments & Questions / Comments & Questions Posed / Source after which comments & questions were modeled

Goal: Identify which group was more satisfied with the interpersonal dynamics.
• I felt included in the discussion. (Rees, 2005; Brilhart, 1968)
• I had opportunities to share my ideas during the discussion. (Rouwette, 2003; Brilhart, 1968; Rees, 2005)
• I had opportunities to explain my ideas during the discussion. (Rouwette, 2003; Brilhart, 1968; Rees, 2005)
• I felt other participants respected my views. (Rouwette, 2003; Brilhart, 1968; Rees, 2005)
• Suggestions by all group members were considered equally. (Rouwette, 2003; Brilhart, 1968; Rees, 2005)
• There was a lot of interaction among group members. (Brilhart, 1968)
• We dealt constructively with disagreements among members. (Rouwette, 2003; Brilhart, 1968; Wilson, 2005)
• All members of my group agreed on our group's recommendations. (Huz, 1999; Rouwette, 2003; Gottlieb, 2003; Rees, 2005)
• Are you likely to attend another SWIRP meeting after this one? (Pre-intervention) (Brilhart, 1968)
• After this morning's workshop, are you likely to attend another SWIRP meeting? (Post-intervention) (Brilhart, 1968)

Goal: Identify which group was more satisfied with the general meeting structure and process rigor.
• We discussed all options presented. (Huz, 1999; Rouwette, 2003)
• Our group worked hard to develop recommendations. (Brilhart, 1968; Gottlieb, 2003; Rees, 2005)
• My group worked well together to develop its recommendations. (Brilhart, 1968; Gottlieb, 2003; Rees, 2005)
• The discussion was well structured. (Huz, 1999; Wilson, 2005; Brilhart, 1968; Rees, 2005; Zakay, 1984; Gottlieb, 2003)
• The tools we used in the discussion were helpful. (Huz, 1999; Wilson, 2005; Brilhart, 1968; Rees, 2005; Zakay, 1984; Gottlieb, 2003)

Goal: Identify which group demonstrated a higher level of support for process/outcome.
• I feel confident that my group's input will help to achieve Zero Waste in Los Angeles. (Huz, 1999; Rouwette, 2003; Brilhart, 1968; Zakay, 1984; Wilson, 2005)
• I fully support my group's recommendation. (Huz, 1999; Rouwette, 2003; Brilhart, 1968; Zakay, 1984; Wilson, 2005)
• I am enthusiastic about the idea of working towards Zero Waste in LA. (Rouwette, 2003; Brilhart, 1968; Zakay, 1984)
• I believe the City of Los Angeles values my input. (Brilhart, 1968; Zakay, 1984)
• How possible do you think it is to achieve Zero Waste? (Pre-intervention) (Huz, 1999; Rouwette, 2003; Brilhart, 1968; Zakay, 1984)
• How possible do you think it is to achieve Zero Waste? (Post-intervention) (Huz, 1999; Rouwette, 2003; Brilhart, 1968; Zakay, 1984)
• How possible do you think it is to achieve Zero Waste by 2030? (Pre-intervention) (Huz, 1999; Rouwette, 2003; Brilhart, 1968; Zakay, 1984)
• How possible do you think it is to achieve Zero Waste by 2030? (Post-intervention) (Huz, 1999; Rouwette, 2003; Brilhart, 1968; Zakay, 1984)


CHAPTER 4

RESULTS

Results Overview

I analyzed pre-and post-intervention surveys from 197 participants (101 surveys

from the control group and 96 surveys from the experimental group). Only participants

who completed both the pre- and post-intervention questionnaires were included in the

analysis. Data from the questionnaires were entered into spreadsheets and verified to

correct data entry error. Responses to open-ended questions were entered verbatim and

later coded. Special efforts were made to hide the unique participant identification

numbers and sort participant responses prior to coding the responses, so that the coders

would not know whether the respondent was in the control or experimental group.

I used the Statistical Package for Social Sciences (SPSS), version 16.0, to conduct

statistical analyses o f the results after all the results were recorded. The normality

distribution of each variable was tested using the Kolmogorov-Smirnov test, and all

variables proved to be non-normally distributed. The control and experimental groups

were compared for pre-and post-intervention values using the Kruskal-Wallis Test. A

level of p < 0.05 was used to determine statistical significance. I

developed summary tables based on the results o f the Kruskal-Wallis test, which provide

information for each question regarding the total number of responses (n) and the mean


scores for both groups. I also included details on the level of significance o f the

difference between the groups’ responses, as well as the chi-square for each question to

provide additional detail on the strength of the significance. Finally, I included a column

to indicate, for questions in which a significant difference between groups of p < 0.05 was

observed, whether or not the results support the research hypothesis.
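As a brief illustration of this analysis sequence, the sketch below performs the same steps (a normality check, a Kruskal-Wallis comparison, and the p < 0.05 decision rule) in Python with SciPy rather than SPSS 16.0. The response scores shown are hypothetical placeholders, not data from this study.

    import numpy as np
    from scipy import stats

    control = np.array([4, 5, 3, 4, 2, 5, 4, 3, 4, 5])        # placeholder scores
    experimental = np.array([5, 5, 4, 3, 5, 4, 5, 5, 4, 4])   # placeholder scores

    # Normality check, analogous to the Kolmogorov-Smirnov test used above.
    ks_stat, ks_p = stats.kstest(control, 'norm',
                                 args=(control.mean(), control.std()))

    # Non-parametric between-groups comparison (Kruskal-Wallis test).
    h_stat, p_value = stats.kruskal(control, experimental)

    # Decision rule used throughout this chapter: p < 0.05 is significant.
    print(f"KS p = {ks_p:.3f}; Kruskal-Wallis H = {h_stat:.3f}, "
          f"p = {p_value:.3f}, significant: {p_value < 0.05}")

With the study's actual data, the analogous statistics correspond to the chi-square and significance values reported in the summary tables that follow.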

Demographics and Descriptions

The first step in analyzing the data from this experiment involved a frequency

analysis o f the demographic and descriptive responses of all participants who completed

both pre- and post-intervention questionnaires in both the control and experimental

groups. I conducted the Kolmogorov-Smirnov test, which revealed the groups were not

normally distributed. Therefore, I conducted a Kruskal-Wallis Test to determine whether

there was a significant difference between groups in their responses to the demographic

and descriptive questions. This analysis revealed no significant differences between

groups with respect to these demographic and descriptive questions.

Table 8 summarizes the results of this statistical analysis of the demographic and

descriptive responses. I also included frequency tables (Tables 9-19) to demonstrate the

results of each individual question to provide additional background, including the

response scale used for each of these questions.


Table 8. Statistical Analysis of Demographic and Descriptive Responses

Question / Control Group (n, Mean) / Experimental Group (n, Mean) / Kruskal-Wallis Chi-Square / Asymp. Sig. (p < .05)

How many SWIRP meetings have you attended before this one?

100 1.76 92 1.62 0.196 0.658

Do you recycle? 101 4.24 96 4.17 0.497 0.481

How many years have you lived in Los Angeles?

94 3.52 91 3.56 0.225 0.635

Current Zip code: 94 2.77 95 2.39 2.058 0.151

Sex: 99 1.58 94 1.49 1.439 0.23

Highest level o f education

97 3.52 94 3.48 0.029 0.864

Age: 97 2.64 95 2.74 0.506 0.477

What kind o f housing do you live in?

98 1.43 95 1.38 0.809 0.368

Do you own or rent? 96 1.42 93 1.31 2.228 1.36

How many people live in your household?

98 2.76 94 2.76 0.008 0.93

Annual household income:

90 2.71 89 2.7 0.046 0.83


Table 9. Number of Past SWIRP Meetings Attended

Group   # of past SWIRP mtgs. attended   Frequency   Percent

Control Group 0 38 37.6

1 15 14.9

2 18 17.8

3 9 8.9

4 10 9.9

5 3 3

6 6 5.9

7 1 1

Total 100 99

Experimental Group 0 35 36.5
1 20 20.8

2 13 13.5

3 7 7.3

4 8 8.3

5 4 4.2

6 5 5.2

Total 92 95.8

Table 10. Recycling Behavior

Group   Recycling Behavior   Frequency   Percent

Control Group Not at all 0 0

A little 4 4

Some 16 15.8

Most 33 32.7

Everything 48 47.5

Total 101 100

Experimental Group Not at all 1 1
A little 2 2.1

Some 16 16.7

Most 38 39.6

Everything 39 40.6

Total 96 100


Table 11. Years Living in LA

Group   Years Living in LA   Frequency   Percent

Control Group 0 6 5.9

Less than 2 years 10 9.9

3 to 5 years 5 5

6 to 15 years 13 12.9

16 to 30 years 28 27.7

Over 30 years 32 31.7

Total 94 93.1

Experimental Group 0 9 9.4
Less than 2 years 5 5.2

3 to 5 years 6 6.2

6 to 15 years 16 16.7

16 to 30 years 16 16.7

Over 30 years 39 40.6

Total 91 94.8

Table 12. Zip Code/Regional "Wasteshed"

Group   Zip Code   Frequency   Percent

Control Group Other 19 18.8

West Valley 8 7.9

Western 12 11.9

East Valley 11 10.9

North Central 27 26.7

South LA 15 14.9

Harbor 2 2

Total 94 93.1

Experimental Group Other 26 27.1
West Valley 9 9.4

Western 12 12.5

East Valley 11 11.5

North Central 26 27.1

South LA 9 9.4

Harbor 2 2.1

Total 95 99


Table 13. Sex

Group   Sex   Frequency   Percent

Control Group Male 42 41.6

Female 57 56.4
Total 99 98

Experimental Group Male 48 50
Female 46 47.9

Total 94 97.9

Table 14. Education Level

Group   Education Level   Frequency   Percent

Control Group High school 4 4

Some college or vocational training 18 17.8

College degree 32 31.7

Some graduate work 11 10.9

Graduate degree 31 30.7

Other 1 1

Total 97 96

Experimental Group High school 6 6.2
Some college or vocational training 14 14.6

College degree 36 37.5

Some graduate work 9 9.4

Graduate degree 25 26

Other 4 4.2

Total 94 97.9


Table 15. Age

Group   Age   Frequency   Percent

Control Group 18-25 11 10.9

26-45 30 29.7

45-65 39 38.6

Over 65 17 16.8

Total 97 96

Experimental Group 18-25 8 8.3
26-45 29 30.2

45-65 38 39.6

Over 65 20 20.8

Total 95 99

Table 16. Housing Type

Group   Housing Type   Frequency   Percent

Control Group Single-family home 59 58.4

Apartment, condo, townhome, duplex 36 35.6

Other 3 3

Total 98 97

Experimental Group Single-family home 63 65.6
Apartment, condo, townhome, duplex 30 31.2

Other 2 2.1

Total 95 99

Table 17. Own or Rent

Group   Own or Rent   Frequency   Percent

Control Group Own 56 55.4

Rent 40 39.6

Total 96 95

Experimental Group Own 64 66.7
Rent 29 30.2

Total 93 96.9


Table 18. Number in Household

Group   Number in household   Frequency   Percent

Control Group 1 18 17.8

2 37 36.6

3 15 14.9

4 16 15.8

5 7 6.9

6 3 3

8 2 2

Total 98 97

Experimental Group 1 18 18.8
2 37 38.5

3 11 11.5

4 14 14.6

5 9 9.4
6 4 4.2
9 1 1

Total 94 97.9

Table 19. Income

Group   Income   Frequency   Percent

Control Group Less than $25K 18 17.8

$26K to $50K 17 16.8

$51K to $100K 28 27.7

More than $100K 27 26.7

Total 90 89.1

Experimental Group Less than $25K 14 14.6
$26K to $50K 21 21.9
$51K to $100K 32 33.3
More than $100K 22 22.9

Total 89 92.7


The following figure (Figure 5) is an analysis of the frequency data for the

responses to the demographic and descriptive questions in bar chart format. This analysis

shows that the groups were very similar in these demographic and descriptive

characteristics. It also helps to provide a general description of the participants of this

experiment. For instance, most participants of both groups had not attended a SWIRP

meeting before. Most participants in both groups recycle most or all of what they can,

and most have lived in LA for over 16 years. There was a balance of women and men

participating and most participants were over the age o f 26. More participants owned than

rented, and most had two or fewer people in their household. The annual income was

fairly evenly distributed among each o f the income categories from which they could

choose on the survey. Figure 5 illustrates demographic and descriptive responses of this

study. In each of these graphs below, the black bars indicate the responses of the control

group and the grey bars indicate the responses of the experimental group. The

demographic question subject is listed below each of the graphs. These graphs help to

illustrate that there is not a significant difference in the makeup of the participants of the

two groups in any of the demographic or descriptive areas. This means that it is

reasonable to compare the differences in the groups' responses as a measure of the effect

of the intervention rather than such changes being due to differences in the pool of

participants in both groups.


These graphs summarize the responses of both groups to the demographic and descriptive questions. The control group's responses are indicated by the black bars, and the experimental group's responses are represented by the gray bars. These bar graphs help to illustrate that there was not a significant difference between the groups' participants.

(Bar charts compare the control and experimental groups on recycling behavior, years living in LA, zip code, housing type, education level, and annual household income.)

Figure 5. Demographic and Descriptive Responses

Questions Related to Research Hypotheses

I prepared a table for the set o f questions associated with each of the three

hypotheses that summarized the question, the total number o f responses and the mean for

both groups, along with the chi-square and statistical significance determination for each

question. I included a column on the summary tables when a significant difference (p <

0.05) was identified between the groups’ responses to indicate whether or not the

difference supported the research hypothesis.


Summary o f Results Related to Hypothesis 1

The first hypothesis of this study states: Participants in group decision making

facilitation processes that adhere more closely to the ideal group decision making

facilitation process steps will identify more effective solutions to resolve the stated

problem, than will participants in groups using standard facilitation methods. The

following question was asked before and after the intervention, “What do you think is the

best thing for the City to focus on in order to move towards Zero Waste?” The first step

in analyzing the responses to these questions was to review each participant’s pre- and

post-intervention responses to determine if their suggestions changed from before to

after the intervention. If the participant listed the exact same suggestions in their pre- and

post-intervention response, I would not be able to determine if the post-intervention

responses were affected by the intervention. I eliminated these participants from the

analysis o f the responses to this question in order to prevent ambiguity in my results. In

total, responses from 35 participants (16 from the control group and 18 from the

experimental group) were eliminated for this reason. These 35 participants were also

excluded from the analysis of the remaining questions so that consistency was maintained

throughout this analysis and to enable me to be able to directly compare the results of

different questions.

The next step in the process involved coding the responses from the remaining

162 participants: 82 in the control group and 80 in the experimental group. The responses

were coded to determine the “systemic value” of comments. The process used to code

these responses is described below. The term “systemic value” refers to the level of

potential effectiveness o f a given leverage point or solution identified by participants


relative to the other leverage points. For instance, increasing the capacity to process

diverted materials will do more to reduce the amount of waste sent to the landfills than

increasing consumer diversion rate. The relative effectiveness o f the eight leverage points

was determined by solid waste management experts from the City o f Los Angeles and the

HDR consulting firm. The system dynamics model developed to simulate the LA waste

management system for this experiment was used to further validate the effectiveness

ranking for each of the eight leverage points.

The pre- and post-intervention suggestions the participants listed for the best

things LA should focus on to achieve zero waste were coded for systemic value using the

ranking listed in Table 20. To help minimize coding bias, I worked with another

researcher to code these responses. This provided a “check-and-balance” in the coding of

responses to ensure that the analysis of the responses was objective and consistent. We

used the scale listed in Table 20 to code the participants’ responses. This scale ranks the

leverage points identified at the workshop based on their relative level o f systemic value

in terms of their relative potential effectiveness of achieving “zero waste.” The

ranking for each leverage point was determined by "running" each

leverage point through the system dynamics simulation model designed for this SWIRP

process. Since the model was developed in consultation with the solid waste

management experts from the City o f LA and its consulting firm, the model was tested to

ensure that it accurately represented the LA solid waste system. Each of the eight leverage

points was “run” through the model to determine the degree to which it affected the four

evaluation criteria: amount of waste sent to the landfill, the relative cost, the relative

greenhouse gas emissions, and the relative level of effort to implement. The resulting


ranking of these leverage points is listed in Table 20, and was used in the coding of the

systemic value of participant responses.

Table 20. Systemic Value Coding Key

Rating Scale: 0-10

Leverage Points

0 No response, no reference to leverage point or systemic comment

1-2 Non-specific or general mention of leverage point or systemic comment

3 Reference to:

Increase o f consumer diversion rate

4 Reference to:

Reduced waste in products and packaging

5 Reference to:

Increase recycled content o f products and packaging

Increase recyclability o f products and packaging

Increase capacity for alternative technologies

6 Medium level o f specificity or frequency o f reference to leverage points or systemic comments

7 Reference to:

Increase processing capacity for diverted materials

8 High level o f specificity or frequency o f reference to leverage points or systemic comments

9 Reference to:

Increase useful lifetime o f consumer products

Reduce consumption

10 Very high level o f specificity or frequency o f reference to leverage points or systemic comments

We coded these responses on a scale from "0" to "10." If no reference to a systemic

comment or leverage point was made, the response was coded a 0. If two or more high


leverage points or sophisticated, well-articulated systemic comments were included, the

response was scored a “ 10.” If a leverage point was specifically listed, it received the

ranking corresponding to the leverage point. If the leverage point was not specifically

listed, but is adequately articulated in other words, it received the ranking for the

corresponding leverage point. The degree or frequency of reference to leverage points or

systemic comments affected the ranking. As for degree, a weak or strong reference was reflected in a slightly lower or higher ranking.

As for frequency, if more than one leverage point was mentioned, the response for

the highest value leverage point was identified, and an additional point was added for

each additional leverage point mentioned. If a leverage point was not specifically listed

and not adequately articulated, but there was some indication o f awareness o f leverage

points or systemic value, the response was ranked a “1” or “2.” If no reference to

leverage point or systemic concept was mentioned, the response was ranked a “0.”
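These rules can be summarized in a short sketch. The coding in this study was done by hand by two researchers, so the following Python fragment only illustrates the Table 20 logic; the keyword matching is a hypothetical simplification rather than the instrument that was actually used.

    # Hypothetical keyword-to-score mapping based on the Table 20 coding key.
    LEVERAGE_POINT_SCORES = {
        "consumer diversion": 3,
        "waste in products and packaging": 4,
        "recycled content": 5,
        "recyclability": 5,
        "alternative technolog": 5,   # matches "technology"/"technologies"
        "processing capacity": 7,
        "useful lifetime": 9,
        "reduce consumption": 9,
    }

    def systemic_value(response_text):
        """Score a response 0-10: the highest-valued leverage point mentioned,
        plus one point for each additional leverage point, capped at 10."""
        text = response_text.lower()
        hits = [score for phrase, score in LEVERAGE_POINT_SCORES.items()
                if phrase in text]
        if not hits:
            return 0   # no leverage point or systemic comment referenced
        return min(10, max(hits) + (len(hits) - 1))

    print(systemic_value("Reduce consumption, increase recyclability, "
                         "increase consumer diversion"))   # prints 10

Applied to the second example listed below, this logic yields the maximum score of 10: a 9-value leverage point plus one additional point for each of the two other leverage points mentioned, capped at 10.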

The following are a few examples of actual suggestions offered by participants of

this experiment as to what the best things LA could do to achieve zero waste, as coded

for systemic value. The coding score is in parenthesis to demonstrate the range of

rankings. The coding values ranged from 0 to 10, with 10 as the highest.

• Reduce amount of packaging, increase diversion rates, increase capacity for

alternative technology (9);

• Reduce consumption, increase recyclability, increase consumer diversion (10);

• Provide recycling bins everywhere, support less packaging, make products more

recyclable (5);

• Recycling in public venues, reduction in packaging, educate the public (4)


• Educate citizens on how to recycle (1);

• Educate people, advertise, teamwork (1).

The results of the statistical analysis of the responses as coded for systemic value

of comments revealed that there was no significant difference between groups in the pre-intervention responses (p = 0.567). However, there was a significant difference in the systemic value results in the post-intervention responses (p = 0.028) to this

question. This means that I was able to reject the null hypothesis that the observed

difference was due to chance. Because the mean score was higher for the experimental

group in the post-intervention responses, these results support my hypothesis that the

group facilitated with the system dynamics methods would do better at identifying

solutions that are objectively more effective in helping to achieve zero waste.

The goal of the next set of questions related to this first hypothesis was to identify

which group had higher level o f confidence in their ability to select the best solutions.

The results of the statistical analysis of the responses to these questions are as follows.

There was a significant difference in the responses to these two questions: “We are

helping the City o f Los Angeles discover the best options for achieving Zero Waste”

(p = 0.017), and "I feel confident that my group's suggestions represent the best approach

to Zero Waste planning” (p = 0.001). The mean score was higher for the control group in

both cases. This means that I was able to reject the null hypothesis for these questions. However, because the control group had a higher mean score in their

responses to these questions, I was unable to support the research hypothesis with these

results.


The responses to the pre-intervention question, "How much do you know about the solid

waste challenges in LA,” indicate that there was no significant difference between the

groups' responses before the work session (p = 0.14). The post-intervention responses to

this same general question did show a significant difference: "After this morning's workshop, how much do you know about the solid waste issue in LA?" (p = 0.012). The

results of this analysis show I was able to reject the null hypothesis on the post­

intervention responses; however, because the control group had a higher mean score in

their responses to this question I was not able to support the research hypothesis.

Table 21 summarizes the results of the statistical analysis of the responses to the

questions and statements related to hypothesis 1.

Summary o f Results Related to Hypothesis 2

The second hypothesis of this study states: Participants in group decision making

facilitation processes that adhere more closely to the ideal group decision making

facilitation process steps will stay more focused on relevant information related to the

stated problem, than will participants in groups using standard facilitation methods.

The first goal of questions related to the second hypothesis was to identify which

group was more focused on relevant information. To measure the differences between the

two groups in this area, I coded the responses to the question, “What are the best

things that LA should focus on to achieve Zero Waste by 2030?” for the degree to which

the responses appeared to have been influenced by the materials presented to both groups

prior to the work session. In this second coding, the post-intervention responses were

analyzed based on the following rating scale. This scale included a ranking from 0 to 10.

Responses that ranked higher in this coding directly referenced the


Table 21. Statistical Analysis of Responses Related to Hypothesis 1

100

presented materials or referenced them indirectly. In some categories, such as medium-level reference to the materials, there is a range of rankings that could be given to the responses. Since participants were asked to list three suggestions, this range enabled me to take the number of suggestions into consideration when ranking the responses. For instance, if a participant listed only one suggestion but it was a medium-level suggestion, they would be ranked a 4, yet if they listed three medium-level suggestions, they would be ranked a 6. Table 22

summarizes the ranking scale used for coding these responses.

Table 22. Ranking Scale for Hypothesis 2

Ranking Scale: Degree of Influence of Presented Materials (0-10)
0      No response, no reference to leverage point or other materials presented
1      Low level of reference to the materials
2-3    Range of minimal-level reference to the materials
4-6    Range of medium-level references to the materials
7-8    Range of maximum-level references to the materials
9-10   Range of extremely high or exact references to the materials

As Table 22 indicates, the scale by which responses were coded runs from 0 to 10, with zero representing no influence of the materials demonstrated. A score of 1 was given for a low level of reference to the materials. A score of 2 or 3 was given based on a degree of minimal reference to the materials. A score of 4, 5, or 6 was given based on a degree of medium-level reference to the materials. A score of 7 or 8 was given based on a degree of maximum-level reference to the materials. And a score of 9 or 10 was given based on a degree of extremely high or exact reference to the materials. The results of the influence-of-materials coding are discussed in the summary of the results of the second hypothesis.
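As a concrete illustration of how this ranking rule could be applied, the following short Python sketch assigns a 0-10 score from a coder's judgment of each suggestion's level of reference to the presented materials. The band boundaries follow the scale above, but the helper name and the rule for moving within a band are my own illustrative assumptions, not the actual coding instrument used in this study.

    # Base score and width of each band in the 0-10 ranking scale.
    BANDS = {
        "none":    (0, 0),   # no response / no reference to the materials
        "low":     (1, 0),   # low-level reference
        "minimal": (2, 1),   # scores 2-3
        "medium":  (4, 2),   # scores 4-6
        "maximum": (7, 1),   # scores 7-8
        "exact":   (9, 1),   # scores 9-10
    }

    def influence_score(suggestion_levels):
        """Assign a 0-10 score to a participant's (up to three) suggestions.

        `suggestion_levels` is a list of band names, one per suggestion,
        reflecting the coder's judgment of how strongly each suggestion
        references the presented materials. The strongest band reached sets
        the base score, and additional suggestions in that band move the
        score toward the top of the band (e.g., one medium suggestion -> 4,
        three medium suggestions -> 6).
        """
        if not suggestion_levels:
            return 0
        order = list(BANDS)  # band names listed from weakest to strongest
        top = max(suggestion_levels, key=order.index)
        base, width = BANDS[top]
        count_in_top = suggestion_levels.count(top)
        return base + min(width, count_in_top - 1)

    # Examples mirroring the rule described in the text.
    print(influence_score(["medium"]))                      # 4
    print(influence_score(["medium", "medium", "medium"]))  # 6
    print(influence_score(["low"]))                         # 1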

The following are a few examples of actual suggestions offered by participants in this experiment as to the best things LA could do to achieve zero waste, as coded for the degree to which the suggestions match the presented materials or concepts. The coding score is shown in parentheses to demonstrate the range of scores. A ranking of 10 is the highest possible score, meaning the response most closely adheres to the presented materials.

• Encourage recycling, encourage use of durable products, decrease consumption

(9);

• Reduce consumption, reduce packaging (9);

• Packaging reduction, increase diversion from landfills, encourage acquisition of

more durable goods (10);

• Processing capacity, consumer behavior, alternative technology (7);

• Recycling in public venues, reduction in packaging, education of the public (4);

• More places to recycle, businesses reducing packaging (4);

• Educate the general population, offering some kind of incentives (2);

• Focus on educating the public (1);

• Mandatory recycling for all residents/businesses in City (1)

The results of the statistical analysis of the responses as coded for the influence of the presented materials revealed that there was a significant difference in post-intervention responses (p = 0.005) to this question. This means that I was able to reject the null hypothesis that the observed difference was due to chance. Since the mean score was higher for the experimental group in the post-intervention responses, these results support my hypothesis that the group facilitated with the system dynamics methods maintained a greater level of focus on the presented materials than the group facilitated with standard methods.

The second goal of questions designed to test this second hypothesis was to identify which group had a higher level of confidence in what they learned during the process. I asked the following two questions in an attempt to determine if there was a difference between the groups’ responses, but in both cases, no significant difference was detected. The statement, “I learned something new about Zero Waste management,” had a significance level of p = 0.664, and the statement, “I changed my ideas about Zero Waste management during this workshop,” had a significance level of p = 0.382. Since no significant difference was detected, it was not possible to support the research hypothesis with these results.

Table 23 summarizes the results of the statistical analysis of the responses to the

questions and statements related to hypothesis 2.

Summary of Results Related to Hypothesis 3

The third research hypothesis of this study states: Participants in group decision making facilitation processes that adhere more closely to the ideal group decision making facilitation process steps will be more satisfied with the interpersonal dynamics, process, and outcome of the group decision making experience, than will participants in groups using standard facilitation methods. Table 24 summarizes the results of the statistical analysis of the responses to questions related to the third hypothesis.

Table 23. Summary of Results Related to Hypothesis 2

The first goal in designing the questions to test this hypothesis was to identify which group was more satisfied with the interpersonal dynamics. According to Creighton (1980), interpersonal dynamics play an important role in the development of procedural satisfaction in groups. I designed a set of questions to measure the degree to which the participants felt the interpersonal dynamics supported their involvement. In the question concerning whether they felt included, there was not a significant difference between the groups’ responses (p = 0.147); therefore, I could not reject the null hypothesis for the responses to this question.

Other questions measured whether the participants felt they had an opportunity to contribute to the discussion. In response to the question measuring whether they felt they could share their ideas, there was a significant difference between groups (p = 0.022); there was also a significant difference between groups’ responses to the question asking whether they felt they could explain their ideas (p = 0.022). In both cases I could reject the null hypothesis, but I could not support the research hypothesis because the mean score of the control group was higher than the mean score of the experimental group for both questions.

As another way to measure the participants’ satisfaction with the interpersonal dynamics, I asked if they felt that other participants respected their views. The analysis of the responses to this question showed a significant difference between group responses (p = 0.005). While I was able to reject the null hypothesis, I was not able to support the hypothesis because the control group had a higher mean than the experimental group.

There was no significant difference between the groups’ responses (p = 0.061) to the question of whether participants felt the discussion was equitable. Therefore, I could not reject the null hypothesis with regard to the responses to this question.


However, I could reject the null hypothesis in the difference observed between the groups’ responses to the question seeking to measure if the participants felt that the discussion was interactive. In this case, the significant difference (p = 0.006) did not support the research hypothesis because the mean of the control group was higher.

The analysis of the responses to the next three questions that were asked to measure the participants’ satisfaction regarding interpersonal dynamics did not indicate a significant difference: “We dealt constructively with disagreements among members” (p = 0.0298); “All members of my group agreed on our group's recommendations” (p = 0.691); “After this morning’s workshop, are you likely to attend another SWIRP meeting” (p = 0.959). I could not reject the null hypothesis in each of these cases.

The second goal in designing questions to measure procedural satisfaction of participants was to identify which group was more satisfied with the general meeting structure and process rigor. In response to the question regarding whether participants felt they had discussed all options, there was a significance level of p = 0.0 between the groups’ responses. In the analysis of the responses to the question measuring if participants felt they had worked hard to develop recommendations, there was a significance level of p = 0.014 between the groups’ responses. In both cases I could reject the null hypothesis. However, in both cases the control group’s mean score was higher than the experimental group’s, so I was unable to support my research hypothesis with these results.

In the analysis of the question asking if the participants felt their group had worked well together to develop its recommendations, there was not a significant difference between the groups’ responses (p = 0.055). However, there was a significant difference between the groups’ responses to the question asking if the discussion was well


structured (p = 0.001). Again, while I could reject the null hypothesis, I could not support the research hypothesis because the control group’s mean score was higher than the experimental group’s.

For the final question in this set designed to measure the differences in the groups’ levels of satisfaction with the group process, the results for the question asking if the tools used in the discussion were helpful did not indicate a significant difference between groups’ responses (p = 0.102). Therefore, I could not reject the null hypothesis for the responses to this question.

The final goal in designing questions to test this hypothesis was to identify which group demonstrated a higher level of support for the process and outcome. The analysis of the responses to the first three questions asked in this set, designed to measure participants’ overall support of the outcome and the zero waste initiative, demonstrated a significant difference between the two groups’ responses. In response to the question asking if the group felt confident that their input would help to achieve Zero Waste in Los Angeles, there was a significance of p = 0.003 between groups. In response to the question asking participants if they fully supported their group's recommendation, there was a significance of p = 0.018 between groups. And in response to a question asking them if they felt enthusiastic about the idea of working towards Zero Waste in LA, there was a significance of p = 0.028 between groups. In all three cases I was able to reject the null hypothesis, but was unable to support my research hypothesis because the control group’s mean score was higher than the experimental group’s.

Table 24. Summary of Results Related to Hypothesis 3

The results of the analysis of the final questions in this section did not reveal a significant difference between groups: “I believe the City of Los Angeles values my input” (p = 0.144), “How possible do you think it is to achieve Zero Waste?” (pre-intervention, p = 0.412; post-intervention, p = 0.361), and “How possible do you think it is to achieve Zero Waste by 2030?” (pre-intervention, p = 0.909; post-intervention, p = 0.487). Since no significance was demonstrated in the pre- and post-intervention responses to these questions, I could not reject the null hypothesis.


CHAPTER 5

DISCUSSION

General Summary and Implications of Results

The overarching question of this study was how to improve group decision making facilitation methods to better help participants to select the most effective decision outcome to solve a given problem. Because standard facilitation processes do not sufficiently adhere to classical decision making procedures, they enable and at times reinforce behavioral decision making tendencies which limit the scope of decision analysis and inhibit the participants’ abilities to identify the most effective solutions. This study showed that a non-standard group decision making facilitation process that adhered more closely to the ideal classical decision making methodologies yielded the identification of more effective solutions.

I hypothesized that the facilitation method that adhered more closely to the classical methodology, system dynamics, would yield a higher degree of effectiveness, focus, and procedural satisfaction than the standard facilitation methods, which do not adhere closely to the classical decision making methodologies. The results of my experiment supported the first two hypotheses, that the system dynamics method would yield a higher degree of effectiveness and focus; the results did not support the final hypothesis, that system dynamics would yield a higher degree of procedural satisfaction.


The overarching research question of this study asked how stakeholder involvement facilitation methods could be improved to facilitate better, more effective outcomes. I believe the results of this analysis demonstrate that a facilitation process that adhered more closely to more thorough and rigorous methods was able to help its participants identify more optimal outcomes.

Specific details on the results related to each of the three research hypotheses of this study are provided in the following section. While there were some surprising findings related to participants’ procedural satisfaction and level of self confidence, the findings of this experiment support the general hypothesis that a facilitation method that follows more closely the classical, rational decision making steps, like system dynamics, will do more to help participants identify solutions that are more effective in solving a given problem once implemented, than will standard facilitation methods.

Discussion of Results Related to Hypothesis 1

The goal of the first two questions asked in relation to the first hypothesis was to determine which group was better able to identify the more effective solutions for solving the solid waste problem in LA. Both groups were given the same background materials for their deliberations. Participants were asked both before and after the work session to identify the best things LA could do to achieve zero waste. The results showed that while there was no significant difference between groups in their pre-intervention responses (p = 0.567), there was a significant difference in the post-intervention responses (p = 0.028). In the post-intervention responses, the experimental group’s mean score was higher than the control group’s, which shows that the experimental group participants


were better able to identify more of the more effective leverage points than were the

control group members after the intervention. These combined pre- and post-intervention

results strengthen the confidence that the post-intervention difference is attributable to the intervention rather than to chance.

While these results are based on a coding of subjective comments, I made special efforts to ensure that the coding procedures were consistent, objective, and unbiased. The responses were consistently coded based on an objective ranking of the relative level of effectiveness of the eight leverage points under analysis, which was developed based on information from the City and HDR solid waste management experts. I also made special efforts to reduce coding bias by hiding the participants’ identification information and randomly sorting the responses so that I could not determine if the responses came from the control or experimental group.
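The de-identification and shuffling step described above can be illustrated with a short sketch; the file layout and column names here are hypothetical placeholders rather than the actual survey files used in this study.

    import csv
    import random

    def blind_responses(in_path, out_path, seed=None):
        """Strip identifying fields and shuffle the order of responses so the
        coder cannot tell whether a response came from the control or the
        experimental group. Column names ("response", etc.) are illustrative."""
        with open(in_path, newline="") as f:
            rows = list(csv.DictReader(f))

        # Keep only the free-text response, keyed by a neutral blind ID;
        # the mapping from blind ID back to participant and group would be
        # stored separately and consulted only after coding is complete.
        blinded = [{"blind_id": i, "response": row["response"]}
                   for i, row in enumerate(rows)]

        rng = random.Random(seed)
        rng.shuffle(blinded)  # randomize order before coding

        with open(out_path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=["blind_id", "response"])
            writer.writeheader()
            writer.writerows(blinded)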

Because of the design and coding methods I used to determine which group was better able to identify the more effective solutions, I am confident in the unbiased nature of this analysis. While there may have been other ways in which to word the questions or measure participants’ ability to identify the relative effectiveness of alternative solutions, I believe that the method I used in this analysis was sufficient to accurately capture genuine responses from both groups for the sake of group comparison. These results show that a facilitation method that adheres more closely to the ideal group decision making facilitation steps was indeed better able to help its participants to identify more effective solutions.

In addition to the first two questions asked to test this hypothesis, I asked three

supplemental questions to measure which group’s participants felt more confident in their


abilities to select more effective solutions. Since I thought the group facilitated with the system dynamics processes would be better able to identify the best solutions, I also assumed that they would have a higher degree of self confidence in their abilities and knowledge. However, the control group demonstrated a higher level of confidence in their knowledge of the issue, although they demonstrated a lower level of understanding of which solutions will be more effective in achieving zero waste.

This inversion of self confidence and ability could be related to the idea that when one learns new things, it often challenges their previous understanding of how things work and causes them to doubt themselves. The lower level of confidence in the experimental group could also mean that the participants do not recognize that they have improved their understanding. Research by Ajzen (1991) shows that people are often unaware that they have learned something, and they are also frequently unable to identify the provenance of the new knowledge. Since this experiment involved a computer model with which participants did not have time to become fully familiar, this lack of familiarity could have caused participants to have less trust in the output of the model. And even though on some level the participants absorbed the model output enough to identify better solutions, it is possible that there was not enough time for the information to truly sink in and transcend from information to a genuine understanding.

I was surprised to find that the results of the three questions related to confidence

in abilities and knowledge showed that the control group had a higher mean score than the experimental group, meaning that the control group felt more confident than the experimental group did in these areas. The level of significance of the differences in responses between groups to these questions is as follows: “We are helping the City of


Los Angeles discover the best options for achieving Zero Waste” (p = 0.017); “I feel

confident that my group's suggestions represent the best approach to Zero Waste

planning” (p = 0.001); and “After this morning’s workshop, how much do you know about the solid waste issue in LA” (p = 0.012). The control group’s mean score was higher than the experimental group’s in each of these areas.

When analyzing the results of all questions related to Hypothesis 1, the results

show that system dynamics-based facilitation methods were better at helping participants

identify the more effective solutions, but the standard facilitation methods were better at

helping the participants feel confident about their abilities. The lesson to be learned from

these data are two fold: (1) Just because the system dynamics-based facilitation process

helps participants identify more effective solutions does not automatically mean that they

are confident in their findings, and (2) Just because the standard facilitation process helps

participants feel self confident in their findings does not mean that they have identified

more effective solutions.

One potential explanation for these results is that the control group’s higher level

of procedural satisfaction could have created a positive image o f the process and a false

sense of confidence in the outcome. Conversely, the experimental group’s lower level of

procedural satisfaction could be artificially reducing their self confidence in the outcome.

Given the available data, I cannot determine with certainty the cause of this inverse

relationship between ability and confidence. If I were to conduct this analysis again, I

would design additional questions to more specifically address this issue.

Figure 6 illustrates the findings related to the analysis of the questions designed to test the first hypothesis. As you will see, the graphs in 6.1 show that the experimental group had a higher mean score than the control group, which indicates that the experimental group was better able to identify more effective solutions than the control group. The graphs in 6.2 through 6.4 show that the control group had a higher level of confidence in their ability to identify the best options and the best approach, and that they felt they knew more about the solid waste issue than did the experimental group. These graphs help to illustrate the inversion in actual ability and self confidence between groups.

6.1 The experimental group had a significantly higher mean score (p = 0.028) than the control group in the coding of the systemic value of participants’ post-intervention suggestions for the best things LA should do to achieve zero waste. Control group mean: 3.49; experimental group mean: 4.7.

6.2 The control group had a significantly higher mean score than the experimental group (p = 0.017) relating to their confidence that they had identified the best options for LA to implement to achieve zero waste. Control group mean: 4.2; experimental group mean: 3.92.

6.3 The control group had a significantly higher mean score than the experimental group (p = 0.001) relating to their confidence that they had identified the best approach for LA to take when striving for zero waste. Control group mean: 3.79; experimental group mean: 3.39.

6.4 The control group had a significantly higher mean score than the experimental group (p = 0.012) relating to their confidence in their knowledge of the solid waste management challenges after the work session. Control group mean: 3.88; experimental group mean: 3.55.

Figure 6. Findings of Significant Difference Associated with Hypothesis 1
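For readers who wish to reproduce paired distribution plots of the kind summarized in Figure 6, the following sketch draws side-by-side histograms of coded scores for two groups and marks each group's mean. The data values and labels are placeholders, not the study's actual SPSS output.

    import matplotlib.pyplot as plt

    # Placeholder coded scores (0-10) for each group; illustrative only.
    control = [2, 3, 5, 1, 4, 3, 2, 6, 3, 4, 5, 2]
    experimental = [5, 7, 4, 6, 8, 3, 6, 5, 7, 4, 9, 6]

    fig, axes = plt.subplots(1, 2, figsize=(8, 3), sharey=True)
    for ax, scores, title in zip(axes, [control, experimental],
                                 ["Control group", "Experimental group"]):
        ax.hist(scores, bins=range(0, 12), edgecolor="black")
        ax.axvline(sum(scores) / len(scores), linestyle="--",
                   label=f"mean = {sum(scores) / len(scores):.2f}")
        ax.set_title(title)
        ax.set_xlabel("Coded score (0-10)")
        ax.legend()
    axes[0].set_ylabel("Number of participants")
    plt.tight_layout()
    plt.show()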

Discussion of Results Related to Hypothesis 2

The goal of the questions designed to test the second hypothesis was to identify

which group was more focused on relevant information. The idea behind this hypothesis

is that the facilitation method which is more aligned with the ideal classical decision

making practices should be better able to keep its participants focused on the relevant

information presented so that they would be better able to make more fully informed

decisions. I found it was relatively easy to code the post-intervention responses to the

question asking participants to list the best things LA should do to achieve zero waste.

Responses that exactly matched the presented materials, meaning they quoted or used the

same words and/or phrases as the presented materials, or responses that demonstrated a

clear understanding of the content of those materials were coded higher than those that


did not. The results of the coding for level of reference to presented materials showed that

there was a significant difference between groups ip = 0.005) and that the experimental

group scored higher than the control group in making more references to the materials.

Again, because I consistently coded these responses after hiding the identifying

information and sorting them so that I could not tell which group the participant came

from, I was able to reduce coding bias. As a result, I am confident that these results

indicate the true difference in the amount of focus both facilitation methods placed on the

presented materials.

Two additional questions were asked in relation to this hypothesis in an attempt to identify which group was more influenced by what they learned during the process. I asked questions to identify whether participants felt they had learned something new and had changed their views about the issue, but in both cases no significant difference was observed between the two groups’ responses.

While I was unable to determine if one group learned more or changed its views

more than the other group, I was able to determine that the group facilitated with the

system dynamics method was more focused on the presented materials than the group

facilitated with standard methods. These results are important because the more a group

of lay stakeholders is focused on relevant information, the less likely they will be to go

off on tangents that will distract participants’ attention away from the core issues. By

focusing on the relevant information, it is also more likely that the participants will be

able to improve their general level of understanding of the issues and be better able to improve incomplete or incorrect mental models. By keeping a group of diverse participants focused on a common set of relevant facts, it also helps the facilitator to be


able to productively address and resolve any conflicts that may exist among participants. Finally, the more focused participants are on relevant information on the causes and

effects of the problem, the better they will be at making more fully-informed decisions on

the best solutions to the problem.

Figure 7 illustrates the findings related to the results of the coding of responses to determine the level of focus on the presented materials. The graphs shown in this figure show that the experimental group was significantly more focused on the presented materials than was the control group. As you can see in these graphics, the experimental group had a higher mean score than the control group, as illustrated with the higher level of bars on the right side of the graphs. This means that the experimental group participants’ suggestions for the best things that LA should do to achieve zero waste were more reflective of the presented materials than the suggestions offered by the control group.

7.1 The experimental group had a significantly higher mean score (p = 0.005) than the control group on the coding for the influence of the presented materials on participants’ suggestions for the best things LA could do to achieve zero waste (post-intervention only). Control group mean: 3.04; experimental group mean: 4.51.

Figure 7. Findings of Significant Difference Associated with Hypothesis 2


Discussion of Results Related to Hypothesis 3

The analysis of responses to questions designed to measure the level of procedural satisfaction of participants in both groups showed that the group facilitated with standard methods had a higher level of procedural satisfaction, that is, they were more satisfied with the overall experience, than did the participants of the group facilitated with the system dynamics method. This result does not support the third hypothesis of the research study, which proposed that the system dynamics-based facilitation method would yield a higher degree of procedural satisfaction.

The questions designed to test procedural satisfaction were divided into three

areas. The first measured satisfaction with interpersonal dynamics, the second set of

questions measured satisfaction with process, and the final set measured the level of

support for the outcome and the zero waste initiative. In each of these areas a significant

difference was observed, and in each case of significance the control group had a higher

mean score than the experimental group. The specific levels of significance are listed on

the following bar charts.

I was surprised to find that the experimental group did not have a significantly

higher mean score than the control group in response to any of the questions designed to

measure procedural satisfaction. Since I measured procedural satisfaction in three

different ways, through a number of different questions, and the results consistently showed that the control group’s mean score was significantly higher than the experimental group’s, I am confident that this finding accurately measured the procedural satisfaction of participants in this experiment.


In my experience with standard and system dynamics-based facilitation processes prior to this study, I have observed that the use of the simulation model does more to draw participants into the substance of the decision analysis than I have seen in standard practices. Therefore, I hypothesized that system dynamics-based facilitation would yield a higher degree of procedural satisfaction than would standard methods. However, this assumption was based on my observation of system dynamics-based facilitation that involves a group model-building exercise, in which participants help to determine the assumptions upon which the model is built and help to test and validate the accuracy of the model prior to using the model to test alternative solutions. In the experiment conducted for this study, there was not enough time during the conference to involve the participants in a group model building exercise. In addition, there also was not enough time allotted during this work session to provide a thorough introduction and orientation to participants. Participants did not have sufficient time to understand and trust the assumptions of the model, nor did they have time to become proficient with running the model. The time constraints, coupled with the necessity to focus on the computer model, inhibited participants’ ability to interact with one another. While the computer model provides a neutral platform that can help prevent interpersonal conflicts, in this case, the participants were so focused on the model that they did not have enough time to interact with one another and discuss the output with other participants. This model-centric focus may have negatively affected the experimental group’s procedural satisfaction with the interpersonal dynamics of their experience. The use of the computer model could also have made some participants who were not computer savvy feel intimidated and uncomfortable with the experience.


Among the anecdotal feedback from participants of the experimental group that may shed some light on their lower level of procedural satisfaction is that some wished that the model had been explained better, that there was not enough information about how the figures were calculated, and that they did not have enough time to get comfortable with the model. Many of these challenges were a byproduct of insufficient time. In each of these cases, such comments illustrate that they may have felt less satisfied with their experience.

The relative difference between the control and experimental group could be interpreted to mean that the control group was better at promoting procedural satisfaction, or that the experimental group’s dynamics, due to time constraints, inhibited the promotion of procedural satisfaction. In either case, the results show that the group facilitated with standard methods yielded a significantly higher level of procedural satisfaction than did the group facilitated with system dynamics methods. In addition to increasing the amount of time participants have to work with a fully developed model, another thing that may have helped improve the procedural satisfaction level of the experimental group would have been sufficient time to involve participants in a group model-building exercise. Such group model building exercises are more common in system dynamics-based facilitation, but with just 90 minutes in which to conduct the experiment, I could not involve participants in building a model. If I had had time to conduct a group model building exercise, I suspect that the procedural satisfaction level would have been higher than the experimental group’s levels measured in this study. In my experience in observing group model building exercises, the interactive and shared learning experience builds camaraderie and


confidence among participants, which can lead to a higher sense of satisfaction with the

process.

As a result of limitations associated with the time constraints, I am less confident

in my ability to correctly interpret these procedural satisfaction-related findings than I am

in my interpretation of the other findings of this study. However, these results should not

be ignored. If it is true that the system dynamics-based facilitation method yields better

results but less satisfied participants, it may be difficult to implement the solutions.

Likewise, if the standard facilitation method yields happy participants but less effective

solutions, the usefulness of the implementation of these solutions could be limited. I think

it is fair to say that the ultimate goal of involving stakeholders in such decision making

efforts is to promote the development of effective solutions through a process the

participants are satisfied with and will support. These results demonstrate that the

coupling of effective outcomes and procedural satisfaction should not be taken for

granted. These results also identify an area that requires further analysis.

Figure 8 illustrates the findings related to the analysis of the questions designed to test the third hypothesis. The graphs in Figure 8 help to illustrate the differences in the mean scores between groups. In each of the pairs of graphs listed in this figure, the control group had a significantly higher mean score than the experimental group. For instance, the graphs in 8.1 show that there were more participants in the control group who scored a 5, or the highest possible level, than did participants in the experimental group. In summary, these nine pairs of graphs illustrate that the control group participants were more satisfied with their experience than were participants of the experimental group.


8.1 The control group had a significantly higher mean score than the experimental group (p = 0.022) relating to their satisfaction with their ability to share their ideas during the work session. Control group mean: 4.39; experimental group mean: 4.16.

8.2 The control group had a significantly higher mean score than the experimental group (p = 0.022) relating to their satisfaction with their ability to explain their ideas during the work session. Control group mean: 4.33; experimental group mean: 4.1.

8.3 The control group had a significantly higher mean score than the experimental group (p = 0.022) relating to their satisfaction with others respecting their views during the work session. Control group mean: 4.34; experimental group mean: 4.03.

8.4 The control group had a significantly higher mean score than the experimental group (p = 0.006) relating to their satisfaction with the interactive nature of the session. Control group mean: 4.26; experimental group mean: 3.91.

8.5 The control group had a significantly higher mean score than the experimental group (p = 0.0) relating to their satisfaction that all options for achieving zero waste were discussed during the work session. Control group mean: 3.74; experimental group mean: 3.13.

8.6 The control group had a significantly higher mean score than the experimental group (p = 0.014) in their satisfaction that their group worked hard during the work session. Control group mean: 4.05; experimental group mean: 3.66.

8.7 The control group had a significantly higher mean score than the experimental group (p = 0.001) relating to their satisfaction that the discussion was well structured during the work session. Control group mean: 3.79; experimental group mean: 3.18.

8.8 The control group had a significantly higher mean score than the experimental group (p = 0.003) relating to their satisfaction that their input will help LA in its planning efforts to achieve zero waste. Control group mean: 3.94; experimental group mean: 3.58.

8.9 The control group had a significantly higher mean score than the experimental group (p = 0.018) relating to their level of support for their group’s recommendations. Control group mean: 3.95; experimental group mean: 3.64.

Figure 8. Findings of significance related to Hypothesis 3

Strengths and Limitations

Strengths

One of the things that helped to strengthen the validity of the findings of this

experiment was the fact that it took place in a real-world setting. Because the experiment

took place during an actual stakeholder group decision-making event regarding a real

public policy issue instead of a simulated exercise, the setting was more realistic and the discussion was more genuine than if I had assembled a group of students to role-play in a

simulated public participation exercise. I was able to bolster the external validity and

applicability of the results beyond this setting and sample population by conducting a

field experiment without having to simulate the problem-solving effort or the stakeholder

participation.


The recruitment of participants for my experiment was much easier because of the

fact that my experiment took place during an actual public participation conference. The

extent of my recruitment efforts included inviting all those who attended the SWIPR

conference to volunteer to participate in my experiment. I did not have to send out

invitations to get people to the meeting. The City o f LA sent invitations to all residents to

encourage them to attend this city-wide SWIRP conference. Because the invitation list

was so vast, and the attendees came on their own volition, the pool of people who came

to the conference provided a random and representative sample o f City residents. This

city-wide invitation to encourage residents to attend this conference yielded a far larger

sample size than I could have otherwise generated if I had conducted the recruitment of

experiment participants on my own.

Because this experiment was part of an actual public participation event, it also

made it easier to promote mundane realism. As Aronson and Carlsmith (1968) explain, “mundane realism” is an effort to take the focus off the experiment and make the setting as normal as possible. The general meeting logistics, including invitation method, location, parking, food, agenda, etc., were coordinated by the City of LA, and were consistent with the format they have used for past SWIRP meetings. For instance, the morning agenda included presentations by a number of City officials prior to dividing the group into small groups for discussion, as past SWIRP meetings had been structured.

Another attribute that helped to strengthen the results of this experiment was that

the control and experimental groups were assigned to two different rooms. This was done

so that the experimental and control group could not see the activities of the other group.

This separation also helped to prevent the participants of both groups from noticing that


one group had computers and the other group did not. It also helped to keep the

participants focused on their tasks and to increase the likelihood that their responses were

reflective of their particular group experience, not distracted by curiosity about how

their group differed from the other group’s experiences.

Perhaps the most important strength of this experiment was the system dynamics simulation model that was developed in advance of the SWIRP conference. A great deal of time and effort went into the development of the simulation model in advance of the meeting, to ensure that it accurately reflected the relationships among elements of the solid waste management system in Los Angeles. In

addition, the model had a very “user-friendly” interface.

Limitations

There were also some limitations to the study. Table 25 lists a sampling of

anecdotal comments about what participants felt did not go well in the experiment. In total, 71 participants from the control group and 67 participants from the experimental group responded to this question. The list below provides a representation of the types of comments offered, and it sheds some light on what the participants of both groups thought could have been better in the facilitation of their decision making effort. This list also identified areas in which I could have improved the testing of my hypotheses. For instance, it is possible that time constraints limited the effectiveness of my ability to test my hypotheses.


Table 25. Sample participant feedback regarding what did not go well

Control Group
Too many issues, not enough time
Too many divergent ideas
One person tended to dominate the discussion.
Negative: “Worksheet focused the substance and emphasis of the discussion.”
We had trouble sticking to the format and kept going off on tangents or side discussions. Too loosely structured.
Not enough time for discussion.
Too many suggestions and conflicting views.
Goals of the discussion were unclear.
The facilitator did not keep to the outline and keep the discussion moving.

Experimental Group
The time given to complete the workshop with the computer. There are too many variables that need to be readjusted, and that was kind of challenging and time consuming.
I wish we had the model explained better to us.
Technology approach required significant learning for given time and setting.
The computer model was a little weird and vague.
The computer program was cumbersome and wasted our time! It would have been better to be given information that the computer program could generate, and make a decision based on facts.
The computer model should have been able to record inputs. Parameters should have been more obvious.
Not enough information about how the figures were computed.
I don’t trust the way the program was written and have questions about the variables.

Time constraints were one of the primary limitations of this experiment. Unfortunately, the time allotted for the experiment was only approximately two hours. As is evidenced in the participant feedback in Table 25, both groups felt that they did not have enough time to complete their task. While these comments are not representative of all 197 participants, they do help to illustrate the range of comments related to the things participants did not think went well during the experiment.


Because the experiment took place during a 90-minute workshop, not during a

standard six-month CAC setting like the case study cited by Stave (2002), it was not

possible to conduct a full group model-building exercise.

As a result of this limitation, the participants of this experiment did not have the opportunity to develop shared ownership of and trust in the model, which researchers such as Akkermans and Vennix (1997) and Rouwette and Vennix (2006) have found to be a standard byproduct of group model building efforts. In addition, there was not enough

time for the facilitators to sufficiently introduce the model and provide a robust tutorial to

help the participants to become completely familiar with and proficient in the use of the

model.

As such, the experimental group participants did not have enough time to become

completely comfortable with running the model or enough time to truly digest and

discuss the output of the model. This shortage of time in the orientation, use, and evaluation of the model output may have negatively affected the responses of participants

in the experimental group relating to procedural satisfaction and confidence in their

knowledge and abilities.

If I were to conduct an experiment to test these hypotheses again, I would follow one of two design strategies. First, if I were using a model that was developed by experts in advance of its use by public stakeholders, I would ensure that the engagement lasted a full day. This would enable participants to spend a significant amount of time learning about the model and how it worked, give them sufficient time to use the model to run different scenarios, and still leave time to discuss the output of the model to make


policy recommendations. While this would still constitute an abbreviated timeframe, I

believe it would be sufficient for testing the hypotheses.

The second strategy I would use would be to design a full group model-building

exercise over a series of individual meetings. This would enable participants to thoroughly define and develop a shared vision of the problem, understand its

causes, and identify the core assumptions that would be included in the formal computer

model. It would also give them more time to use the model to develop, test, and analyze

alternative scenarios prior to making a policy decision.

In both of these alternative experimental design strategies, a companion standard process would be implemented in the same timeframes to enable direct comparison and testing of the hypotheses. In both design strategies, the participants of the system

dynamics and standard facilitation groups would have more time to understand and

discuss the issues prior to making a policy decision.

In addition to time constraints, resource constraints were also a factor in this experiment. Because of the large sample size, it was difficult to find enough trained system dynamics facilitators to accommodate the individual small groups in the experimental group. This meant that some system dynamics facilitators had to facilitate more than one group at the same time. In the control group, the opposite situation existed, and in some cases the control group had more than one facilitator for an individual group. This limitation of facilitator availability in the experimental group reduced the amount of one-on-one time that the facilitators could spend with individual participants. This too could have negatively affected the participants’ satisfaction with their experience or their confidence in their abilities to run or interpret the model.


Equipment limitations also existed. Laptop computers were used for each

individual group to run the simulation model. Some participants found the smaller laptop

screen view difficult to see, especially with a cluster of people sharing one laptop. Other

participants expressed a desire to have a printer so that they could print the output of each

run to better track the various options for consideration.

Another potential limitation with this experiment relates to the pool of

participants. As seen in the demographic analysis, the makeup of the participants was

relatively homogeneous. In general, the participants were highly educated, long-time

residents o f LA, between the age of 45-65, and claimed to recycle all or most of what

they can, etc. In some ways, this high level o f horriogeneity contributed to the internal

validity o f the experiment. As Campbell and Stanley (1963) explain, internal validity can

control the confounding variables and ensure that the experiment measures what it is

intended to test.

However, this high degree of homogeneity of participants could also have negatively affected the external validity of the results. External validity is the extent to which findings can be extended outside a particular experimental setting and specific group of subjects (Fisher, 1935). The results of this experiment indicate that when working with stakeholders who are relatively homogeneous, and generally supportive of an issue, system dynamics is an effective facilitation tool. However, I must be careful not to overstate these results. These results could have been very different if the participants had come from a more diverse group, from a group of adversaries, or if some of the participants were opposed to the objective or the policy options under consideration (e.g., NIMBYs, NOPEs) instead of those generally supportive of the initiative.


A final potential limitation to the results of this analysis is that the experiment

took place at a particular moment in time, on the morning of February 2, 2008. As I write this analysis in October 2008, amid the recent financial crisis throughout the nation and the world, I cannot help but wonder if the results of this experiment would be different if I were conducting the experiment today instead of last February. For instance, if we conducted the experiment today, asking for input on how best to reduce the amount of waste sent to landfills, it is possible that some of the participants would have been more inclined to make suggestions related to reducing consumption due to the more frugal mindset caused by tight economic times, rather than out of a desire to reduce waste. As

such, it is important to recognize that every experiment is affected by its timing, and the

assessment of the ability to generalize the results should take that into consideration.

Suggestions for Future Research

It is my hope that the experiment conducted for this study provides some useful

insight for other system dynamics or traditional group decision making facilitation

researchers and practitioners. While this analysis has answered some questions, it has left

some unanswered and has raised new questions that I had not previously considered.

Confirm Effectiveness of System Dynamics-Based Facilitation with Public Stakeholders

This experiment yielded some anticipated and some surprising results. It would be interesting to replicate this experiment under the same basic conditions of a real-life public stakeholder engagement (large total sample size, small group workshop, the same experimental design, and the same pre- and post-intervention survey instruments) to confirm whether the results of this experiment can be replicated.


However, if I were to conduct this experiment again, I would design it to last a minimum of eight hours. It would also be beneficial to arrange in advance a follow-up interview with participants six months after the intervention to measure if their responses to the same post-intervention questions change over time.

With the exception of the present experiment and the research conducted by Stave

(2002, 2003, 2008), system dynamics research projects such as those conducted by

Vennix (1996), Huz (1999), Rouwette (2003), and others tend to focus on the analysis of the use of system dynamics with subject matter experts, rather than lay public stakeholders. As this experiment illustrates, system dynamics simulation modeling can have a positive effect on improving public stakeholder participants’ ability to identify and understand the relative differences between alternative solutions. However, these findings would be

stronger if this experiment could be replicated with another public stakeholder group

decision making effort.

In addition to replicating this exact study with another public stakeholder group, I would conduct the same experiment with subject matter stakeholders instead of lay public stakeholders. Such an experiment would provide another way to test the relative effectiveness of traditional and system dynamics-based facilitation methods. It could also help to measure whether participants' level of subject matter awareness plays a role in the relative effectiveness of the two facilitation methods.


Study the Effectiveness of System Dynamics at Different Points Along a Spectrum of Involvement Intensity

While this experiment focused on comparing traditional and system dynamics-based facilitation methods, it would also be helpful to conduct an experiment focusing solely on system dynamics-based facilitation. One way to approach such a study would be to identify the varying levels of participant interaction with the simulation model along a spectrum from low to high involvement. For instance, the experiment I conducted would be placed at the lower end of the interaction spectrum, since it lasted only 90 minutes and the participants were not involved in the development of the model. At the higher end of the spectrum would be interaction such as the transportation CAC in Nevada (Stave, 2002), in which participants were involved in a comprehensive group model-building exercise that took a year of regular monthly meetings to complete.

The first step in conducting such a study would be to identify levels of interaction beyond these two examples, so that the full range of the spectrum is represented. The next step would be to develop an appropriate methodology for understanding the similarities and differences among these levels of interaction. Identifying the pros and cons of each level would also be instructive.

The goal of this study would be to help system dynamics practitioners gauge the level of effort and the relative efficacy of each type of interaction along the spectrum. This information could help them prescribe the most appropriate and effective level of intervention for the problem at hand. For instance, for a relatively simple problem within a relatively simple system it may not be necessary to conduct a full group model-building effort. Research on this spectrum of participant involvement in system dynamics simulation modeling would also be helpful for training new system dynamics facilitators, as well as for managing the expectations of clients who engage them. The results of the present experiment could provide a data point on the lower-involvement end of the spectrum, but clearly more data are needed to fully understand this spectrum of participant involvement.

Study the Effectiveness of Traditional Facilitation Outcomes Independently, Not in Comparison with System Dynamics

While this study demonstrated that the group facilitated with the traditional method scored lower than the control group in its ability to identify effective decision outcomes, it did not specifically measure why the groups' scores differed in this area. The findings suggest that traditional facilitation may rely too heavily on behavioral decision making techniques, which tend to promote sub-optimal decision outcomes. However, it would be interesting to conduct another experiment focusing solely on the use of traditional facilitation, measuring the degree to which the facilitators use behavioral or classical decision-making strategies. For instance, it would be interesting to measure whether they use anchoring and adjusting (Tversky & Kahneman, 1974) in their discussions, or whether they appear to be satisficing (Simon, 1957) or being hypervigilant (Janis & Mann, 1977) when selecting the final solutions. Such a study would provide important findings for improving traditional facilitation methods.

One reason this type of study is necessary is to ensure that stakeholder group decision-making efforts are rigorously and sincerely administered, not merely conducted to placate participants and check a federal regulatory box. Public stakeholder engagement processes can help promote better decisions, especially if the public stakeholders are given all the information and assistance they need to make fully informed decisions. It is important to meet the spirit of the law, not just the letter of the law. The results of the present experiment suggest that the traditional facilitation methods did more to promote satisfaction and confidence than decision effectiveness. This suggests that more could be done in traditional facilitation to help public stakeholders make more informed decisions in complex environmental decision making efforts. A study such as the one proposed here could provide information about how to improve the effectiveness of traditional group decision making facilitation methods.

Conclusion

In my 20 years of work in the field of stakeholder participation in environmental and public policy decision making, I have learned a great deal about the importance of creating an effective and sincere process for soliciting and incorporating stakeholder input into the final decision. In addition to incorporating information on the technical feasibility and financial affordability of alternative solutions in a decision making process, it is also essential to consider public acceptability before making decisions to solve environmental problems. Stakeholders provide invaluable data that can greatly improve the quality and effectiveness of the ultimate solutions to the problem at hand. However, if stakeholders are not given the proper tools and assistance in accessing and processing the relevant facts about the causes of the problem and the relative effectiveness of the alternative solutions, they will not be able to make fully informed decisions. When this occurs, it is more likely that the stakeholder participation process will be insufficient and the outcomes will be ineffective.

The general question posed in this study was how stakeholder group decision making facilitation could be improved to enhance the effectiveness of the decision outcomes of such processes. This analysis confirmed that standard stakeholder group decision making facilitation methods allow participants to employ behavioral decision making strategies, which are more likely to avoid thorough decision analysis and result in ineffective outcomes. It also showed that facilitation methods that stress more classical decision making strategies, such as system dynamics-based facilitation, are more likely to promote thorough decision analysis and result in more effective outcomes. Finally, the results showed that just because a group is better able to identify more effective solutions does not guarantee that its members feel satisfied and self-confident as a result of their participation in the decision making effort.

While not every environmental problem is dynamically complex enough to justify the extra time and effort needed to use system dynamics-based facilitation methods, this study demonstrates that for complex, dynamic problem-solving efforts, system dynamics-based facilitation can help participants better identify objectively effective decision outcomes. The results also provide two cautionary notes. First, they remind system dynamics-based facilitators to ensure that the process promotes satisfaction and confidence in addition to effectiveness. Second, they remind those using standard facilitation methods to involve stakeholders in solving complex environmental problems to ensure that the process does not focus so much on promoting satisfaction and self-confidence that it neglects the identification of effective solutions to the problem at hand.

[Appendix table: Ideal Group Decision Making Process Steps. The original appendix presents a comparison matrix, largely illegible in the scanned copy, mapping sources from the group decision making literature against the following ideal process steps:

1. Identify, discuss problem and goals
2. Define problem
3. Identify problem causes
4. Generate alternative solutions
5. Collect data
6. Establish criteria for solution effectiveness
7. Analyze alternative solutions against criteria
8. Identify consequences
9. Evaluation, discussion
10. Make decision]


BIBLIOGRAPHY

Ackoff, R. (1978). The art of problem solving: Accompanied by Ackoff’s fables. New

York, NY: John Wiley and Sons.

Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50, 179-211.

Akkermans, H. S., & Vennix, J. A. M. (1997). Clients’ opinions on group model-

building: an exploratory study. System Dynamics Review, 13(1), 3-31.

ALCOA (1989). The Alcoa eight-step quality improvement process. Pittsburgh, PA:

ALCOA.

Allen, P. T. (1998). Public participation in resolving environmental disputes and the

problem of representation. Risk: Health, Safety and Environment, 29(4), 297-308.

Anderson, T., Howe, C., & Tolmie, A. (1996). Interaction and mental models of physics

phenomena: Evidence from dialogues between learners. In J. Oakhill & A.

Garnham (Eds.), Mental models in cognitive science: Essays in honour of Phil Johnson-Laird (pp. 247-275). Erlbaum, UK: Psychology Press.

Andersen, D. F., & Richardson, G. P. (1994). Scripts for group model building. System

Dynamics Review, 13(2), 107-129.

Andersen, D. F., Richardson, G. P., & Vennix, J. A. M. (1997). Group model building:

adding more science to the craft. System Dynamics Review, 13(2), 187-201.

Arnold, H. J., & Feldman, D. C. (1986). Organizational behavior. New York, NY: McGraw-Hill, Inc.

Arnstein, S. R. (1969). Ladder of citizen participation. Journal of the American Institute of

Planners, 35, 216-224.

Aronson, E., & Carlsmith, J. M. (1968). Experimentation in social psychology. In G.

Lindzey and Aronson (Eds.), Handbook of Social Psychology (pp. 1-79). Reading, MA: Addison-Wesley.

Axelrod, R. (Ed.) (1976). Structure of decision: The cognitive maps of political elites.

New Jersey: Princeton University Press.

Brandstatter, H., Davis, J., & Schuler, H. (1978). Dynamics of group decisions. Beverly

Hills, CA: Sage Publications.

Barnes, J. (1982). Cognitive biases and their impact on strategic planning. Strategic

Management Journal, 5, 129-137.

Barge, K. J. (1996). Leadership skills and the dialectics of leadership in group decision making. In R. Y. Hirokawa & M. S. Poole (Eds.), Communication and group decision making (2nd ed., pp. 301-342). Thousand Oaks, CA: Sage Publications.

Bayes, T. (1763). An essay towards solving a problem in the doctrine of chances.

Philosophical Transactions of the Royal Soc. of London, 53, 370-418.

Beach, L., Barnes, V., & Christensen-Szalanski, J. J. (1986). Beyond heuristics and

biases: A contingency model for judgmental forecasting. Journal o f Forecasting,

5, 143-157.

Beach, L. R. & Lipshitz, R. (1993). Why classical decision theory is an inappropriate

standard for evaluating and aiding most human decision making. In G.A. Klein, J.

Orasanu, R. Calderwood, & C.E. Zsambok (Eds.), Decision making in action:

Models and methods (pp. 21). Norwood, NJ: Ablex Publishing Corporation.

Bediker, K., Mitchell, T., Beach, L., & Beard, D. (1993). The effects of strong belief structures on information-processing evaluations and choice. Journal of

Behavioral Decision Making, 6, 113-132.

Beierle, T. C. (1998). Public participation in environmental decisions: An evaluation

framework using social goals. Washington, D.C.; Resources for the Future.

Retrieved November 5, 2006, from

http://www.rff.org/rff/About/RFFat50/Index.cfm

Beierle, T. C., & Cayford, J. (2002). Democracy in practice: Public participation in

environmental decisions. Washington, DC: Resources for the Future.

Bentham, J. (1970). An introduction to the principles of morals and legislation. London:

The Athlone Press. (Original work published 1789.)

Bernoulli, D. (1954). Exposition of a new theory on the measurement of risk.

Econometrica 22, 23-36. (Original work published 1738.)

Bleed, A., Nachtenebel, H. P., Bogardi, I., & Supalla, R. J. (1990). Decision-making

process on the Danube and the Platte. Water Resources Bulletin American Water

Resources Association, 26(3), 479-487.

Bormann, E. G. (1990). Small group communication: Theory and practice (3rd ed.). New York, NY: Harper and Row.

Bordens, K. S., & Abbott, B. B. (1991). Research design and methods: A process

approach (2nd ed.). Mountain View, CA: Mayfield Publishing Company.

Bordley, R. F. (2001). Naturalistic decision making and prescriptive decision theory.

Journal of Behavioral Decision Making, 14(3), 53-384.

Brehmer, B. (1992). Dynamic decision making: Human control of complex systems. Acta

Psychologica, 81, 211-241.

Brightman, H. J. (1988). Group problem solving: An improved managerial approach.

Atlanta, GA: Business Publishing Division College of Business Administration,

Georgia State University.

Brilhart, J. K. (1968). Effective group decisions. Dubuque, IA: Wm. C. Brown Company

Publishers.

Brilhart, J. K., & Galanes, G. J. (1982). Effective group discussion (4th ed.). New York:

McGraw Hill.

Bronfman, B. H. (1983). Involvement in social impact assessment: The community-based

technology assessment program. In G. A. Daneke, M. W. Garcia, and J. D.

Priscoli (Eds.), Public involvement and social impact assessment (pp. 215-226).

Boulder, CO: Westview Press.

Brown, V., & Paulus, P. (2002). Making group brainstorming more effective:

recommendations from an associative memory perspective. American

Psychological Society, 11(6), 208-212.

Byrne, R. M. J. (1996). Towards a model theory of imaginary thinking. In J. Oakhill and

A. Garnham (Eds.), Mental models in cognitive science: Essays in honour of Phil

Johnson-Laird (pp. 155-172). Erlbaum, UK: Psychology Press.

Cavaleri, S., & Sterman, J. D. (1997). Towards evaluation of systems thinking intervention: A case study. System Dynamics Review, 13(2), 171-186.

California Legislature (1997). Senate Bill 45. Retrieved on May 5, 2002, from http://www.leginfo.ca.gov/bilinfo.html

Campbell, D. T. (1989). Foreword. In R.K. Yin (Ed.), Case study research: Design and

methods (2nd ed.). Beverly Hills, CA: Sage.

Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs

for research. Chicago: Rand McNally.

Cannon-Bowers, J. A., & Salas, E. (2001). Reflections on shared cognition. Journal o f

Organizational Behavior, 22, 195-202.

Carley, K. M. (1977). Extracting team mental models through textual analysis. Journal of

Organizational Behavior, 18, 533-558.

Carmel-by-the-Sea v. U.S. Department of Transportation (1996). 95 F.3d 892, 9th Cir.

1996. Retrieved May 5, 2002, from

http://www.elawreview.org/summaries/environmental_quality/nepa/city_of_carm

elbythesea_v_unite.html

The Carter Center (2005). Access to information. Retrieved April 27, 2005, from

www.cartercenter.org/peaceprograms

Carver, S. (2003). The future of participatory approaches to using geographic

information: Developing a research agenda for the 21st Century. Urban and Regional Information System Association Journal, 15(1), 61-71.

Castellan, J. N., Jr. (1993). Individual and group decision making. Hillsdale, NJ:

Lawrence Erlbaum Associates.

Chen, D., Huang, T., & Hsiao, N. (2006). Reinventing government through on-line citizen involvement in the developing world: A case study of Taipei City Mayor's

box in Taiwan. Public Administration and Development, 26, 409-423.

City of Los Angeles (2007a). City of Los Angeles solid waste integrated resource plan, phase I overview document. Retrieved July 5, 2008, from www.zerowaste.lacity.org/home/index.html

City of Los Angeles (2007b). City of Los Angeles zero waste overview. Presented at the February 2, 2008, SWIRP City-Wide Workshop.

City of Los Angeles (2007c). Zero waste leverage point workshop handout. Produced by the City of Los Angeles for use at the February 2, 2008, SWIRP City-Wide Workshop. Retrieved July 5, 2008, from zerowaste.lacity.org/files/info/fact_sheet/swirpfaqs.pdf

Clark, T., & Justice, C. (1947). Attorney General's manual on the Administrative

Procedure Act. Retrieved on November 8, 2007, from http://www.law.fsu.edu/library/admin/1947intro.html

Club Managers Association of America (1991). Team decision making. Retrieved May 13, 2006, from www.cmaa.org/prodev/bmiteam/decision/page2.asp

Cobb, R. W., & Elder, C. D. (1972). Participation in American politics: The dynamics of agenda-building (2nd ed.). Baltimore, MD: The Johns Hopkins University Press.

Cohen, M. (1993). Three paradigms for viewing decision biases. In G.A. Klein, J.

Orasanu, R. Calderwood, and C.E. Zsambok (Eds.), Decision making in action:

models and methods (pp. 36). Norwood, NJ: Ablex Publishing Corporation.

Cohen, M., March, J., & Olsen, J. (1972). A garbage can model of organizational choice. Administrative Science Quarterly, 17, 1-25.

Community Action Partnership (2003). Economic Opportunity Act of 1964. Retrieved December 9, 2007, from www.fccaa.org/about_us.jsp?pageld_20901188125049973236163

Corder, J., & Thompson, M. (1995). Dispute resolution: Negotiating agreement on public

issues. American Water Works Association, Texas Section Annual Conference

Proceedings. Corpus Christi, TX.

Corder, J. & Thompson, M. (1996). Collaborative problem solving: using collaboration

processes in government decision making. Corpus Christi, TX: Corder/Thompson

and Associates.

Craik, K. (1943). The nature of explanation. Cambridge, MA: Cambridge University

Press.

Creighton, J. (1980). Public involvement manual: Involving the public in water and

power resource decisions. United States Department of the Interior Water and

Power Resources Services. Washington DC: U.S. Government Printing Office.

Creighton, J. L. (1983). The use of values: Public participation in the planning process. In G. A. Daneke, M. W. Garcia and J. D. Priscoli (Eds.), Public involvement and social impact assessment (pp. 143-160). Boulder, CO: Westview Press.

Creighton, J. (1999). Public participation in federal agencies’ decision making in the

1990s. National Civic Review, 88(3), 240-257.


Creighton, J. (2004). Using group processing techniques to improve meeting

effectiveness. Retrieved November 20, 2006, from

http://www.effectivemeetings.com/teams/teamwork/creighton.asp

Creighton, J. (2007). Case study 1: U.S. agency public participation policies. Retrieved

November 26, 2007, from http://www.creightonandcreighton.com/casestudy1.html


Culik, M. N. (1993). Watershed management through public policy education. In

American Water Resource Associations ’ Symposia on Water Resource Education:

A Lifetime o f Learning and Changing Roles in Water Resource Management and

Policy. Bellevue, WA: American Water Resource Association.

Cuthbertson, I. D. (1983). Evaluating public participation: an approach for government

practitioners. In G. A. Daneke, M. W. Garcia and J. D. Priscoli, (Eds.), Public

involvement and social impact assessment (pp. 101-109). Boulder, CO: Westview

Press.

Dale, V. H., & English, M. R. (Eds.) (1999). Tools to aid environmental decision making.

New York: Springer.

Dale, V. H., & O’Neill, R. V. (1999). Tools to characterize the environmental setting. In

V. H. Dale and M. R. English (Eds.), Tools to aid environmental decision making

(pp. 62-90). New York: Springer.

Daley, D. (2007). Citizen groups and scientific decision making: Does public participation influence environmental outcomes? Journal of Policy Analysis and Management, 2(2), 349-368.

Daneke, G. A. (1999). Systemic choices: Nonlinear dynamics and practical management.

Ann Arbor, MI: University of Michigan Press.

Daneke, G. A., Garcia, M. W., & Priscoli, J. D. (Eds.) (1983). Public involvement and

social impact assessment. Boulder, CO: Westview Press.

Davis, G. (August 1, 2001). California Governor Gray Davis Press Release regarding

Hatton Canyon, State of California Office of the Governor.

Davis, M. (1986). The art o f decision-making. New York: Springer-Verlag.

DeKleer, J., & Brown, J.S. (1983). Assumptions and ambiguities in mechanistic mental

models. In D. R. Gentner and A. L. Stevens (Eds.), Mental models (pp. 155-190).

Hillsdale, NJ: Erlbaum.

Delbecq, A. L., & Van de Ven, A. H. (1971). A group process model for problem identification and program planning. Journal of Applied Behavioral Science, 7,

466-491.

Department of the Environment, Transportation, and the Regions (2000). Public

participation in making local environmental decisions: The Aarhus Convention

Newcastle Workshop Good Practice Handbook. London: Eland House.

Dewey, J. (1910). How we think. Boston: D.C. Heath.

Dillman, D. A. (1978). Mail and telephone surveys: The total design method. New York:

Wiley.


Dillman, D. A. (2000). Mail and internet surveys: The tailored design method. John

Wiley & Sons, Inc.: New York.

diSessa, A. A. (1983). Phenomenology and the evolution of intuition. In D. Gentner and

A. L. Stevens (Eds.), Mental models (pp. 15-33). Hillsdale, NJ: Lawrence

Erlbaum, Inc.

Dörner, D. (1980). On the difficulties people have in dealing with complexity. Simulation and Games, 11(1), 87-106.

Doyle, J. K., & Ford, D. N. (1998). Mental models concepts for system dynamics

research. System Dynamics Review, 14(1), 3-29.

Drew, C. (2003). Transparency - considerations for PPGIS research and development. Urban and Regional Information System Association Journal, 15(1), 73-78.

Dwyer, M. F. (2007). Assessing the effect of group model building on stakeholder teams developing urban growth strategies. Unpublished doctoral dissertation, University

of Nevada, Las Vegas.

Eden, C. (1992). A framework for thinking about group decision support systems. Group

Decision and Negotiation, 7(13), 199-218.

Edwards, A. L. (1953). Experimental design in psychological research (5* ed). New

York: Harper and Row.

Eitington, J. (1989). The winning trainer (2nd ed.). Houston, TX: Gulf Publishing.

Ensley, M. D., & Pearce, C. L. (2001). Shared cognition in top management teams:

implications for new venture performance. Journal of Organizational Behavior,

22, 145-160.

2 0 2

Ephross, P. E,, & Vassil, T. V. (2005). Groups that work (2"** ed.). New York: Columbia

University Press.

Fischhoff, B. (1975). Hindsight does not equal foresight: The effects of outcome

knowledge on judgment under uncertainty. Journal of Experimental Psychology:

Human Perception and Human Performance, 13, 1-16.

Fischhoff, B., & MacGregor, D. (1981). Subjective confidence in forecasting. Journal of

Forecasting, 1, 155-172.

Fisher, R. A. (1935). The Design of Experiments. Edinburgh: Oliver and Boyd.

Flood, R. L. (1995). Solving problem solving: A potent force for effective management.

New York: John Wiley and Sons.

Ford, A. (1996). System dynamics and the electric power industry. System Dynamics

Review, 13(1), 57-85.

Ford, D. N., & Sterman, J. D. (1998). Expert knowledge elicitation to improve formal and

mental models. System Dynamics Review, 14(4), 309-340.

Ford, G. (1976). Statement on Signing the Government in the Sunshine Act, September

13, 1976. Retrieved on November 9, 2007, from

www.Presidency.ucsb.edu/ws/index.php?pid=6325

Forrester, J. (1961). Industrial dynamics. Portland, OR: Productivity Press.

Forrester, J. (1971). Counterintuitive behavior of social systems. In Collected papers of J. W. Forrester (pp. 211-244). Cambridge, MA: Wright-Allen Press.

Forrester, J. (1975). Collected papers of Jay W. Forrester. Cambridge, MA: Wright-Allen Press, Inc.

Forrester, J. (1987). Lessons from system dynamics modeling. System Dynamics Review, 3(2), 136-149.

Forrester, J. (1994). Policies, decisions, and information sources for modeling. In J. D.

W. Morecroft and J. D. Sterman (Eds.), Modeling for learning organizations (pp. 51-84). Portland, OR: Productivity Press.

Forrester, J. (1991). System dynamics and the lessons of 35 years. Retrieved March 5,

2004, from

http://pagesperso-orange.fr/patrice.salini/Textes%20/Forrester%20Bilan.pdf

Forsha, H. (1995). Show me: The complete guide to storyboarding and problem solving.

Milwaukee, WI: ASQC Quality Press.

Freudenburg, W. (1983). The promise and peril of public participation in social impact assessment. In G. A. Daneke, M. W. Garcia, and J. D. Priscoli (Eds.), Public

involvement and social impact assessment (pp. 227-234). Boulder, CO: Westview

Press.

Freudenburg, W. (1999). Tools for understanding the socioeconomic and political settings for environmental decision making. In V. H. Dale and M. R. English

(Eds.), Tools to aid environmental decision making (pp. 94-129). New York:

Springer.

Frey, L. (1996). Remembering and "re-membering": A history of theory and research on communication and group decision making. In R. Y. Hirokawa and M. S. Poole (Eds.), Communication and group decision making (2nd ed., pp. 19-51). Thousand Oaks, CA: Sage Publications.

Friend, J. (2001). The strategic choice approach. In J. Rosenhead and J. Mingers, (Eds.),

Rational analysis for a problematic world revisited: problem structuring methods for complexity, uncertainty, and conflict (2nd ed., pp. 115-149). New York: John

Wiley and Sons, Ltd.

Furnham, A. (2000). The brainstorming myth. Business Strategy Review, 11(4), 21-28.

Galanes, G., & Brilhart, J. (1997). Communicating in Groups: Applications and skills,

(3rd ed.). Madison, WI: Brown and Benchmark.

Gale, T. (2006). Who is the 'public' in public participation? Retrieved November 2, 2007, from www.pollutionissues.com/PI-Re/Public-Participation.html

Garcia, M. W., & Daneke, G. A. (1983). Introduction. In G. A. Daneke, M. W. Garcia,

and J. D. Priscoli (Eds.), Public involvement and social impact assessment (pp.

161-176). Boulder, CO: Westview Press.

Garnham, A. (1996). The other side of mental models: Theories of language comprehension. In J. Oakhill and A. Garnham (Eds.), Mental models in cognitive science: Essays in honour of Phil Johnson-Laird (pp. 175-194). Erlbaum, UK:

Psychology Press.

Garson, D. (1998). Administrative Procedures Act of 1946. Retrieved December 10, 2007, from http://ewx.prenhall.com/bookbind/pubbooks/dye4/medialib/docs/apal946.htm

Gentner, D., & Stevens, A. L. (Eds.) (1983). Mental models. Hillsdale, NJ: Lawrence

Erlbaum, Inc.

Gillham, B. (2000). Case study research methods. New York: Continuum.

Glass, D. C., Singer, J. E., & Friedman, L. N. (1969). Psychic cost of adaptation to an environmental stressor. Journal of Personality and Social Psychology, 12, 200-210.

Gottlieb, M.R. (2003). Managing group process. Westport, CT: Praeger.

Gouran, D.S., & Hirokawa, R.Y. (1983). The role of communication in decision-making

groups: A functional perspective. In M.S. Mander (Ed.), Communications in

transition, (pp. 168-185). New York: Praeger.

Gouran, D. S., & Hirokawa, R. Y. (1996). Functional theory and communication in

decision-making and problem-solving groups: An expanded view. In R. Y.

Hirokawa and M. S. Poole (Eds.), Communication and group decision making

(2nd ed., pp. 55-80). Thousand Oaks, CA: Sage Publications.

Gouran, D.S., Hirokawa, R.Y., Julian, K.M., & Leatham, G.B. (1993). The evolution and

current status of the functional perspective on communication in decision-making and problem-solving in groups. In S.A. Deetz (Ed.), Communication yearbook

(pp. 573-600). Newbury Park, CA: Sage.

Greeno, J. G. (1983). Conceptual entities. In D. R. Gentner and A. L. Stevens (Eds.),

Mental models (pp. 227-252). Hillsdale, NJ: Lawrence Erlbaum, Inc.

Gregory, R. (June, 2000). Using stakeholder values to make smarter environmental

decisions. Environment, 42(5), 34-44.

Grössler, A. (2004). A content and process review on bounded rationality in system

dynamics. Systems Research and Behavioral Science, 21, 319-330.

Grunig, R., & Kuhn, R. (2005). Successful decision-making: A systematic approach to

complex problems. New York: Springer.

Hale, E. (1993). Successful public involvement. Journal of Environmental Health, 55(4),

17-19.

Hart, P. (1990). Groupthink in government: A study of small groups and policy failure. Amsterdam: Swets and Zeitlinger.

Hausman, D. (2008). Philosophy of Economics. Retrieved July 10, 2008, from

http://plato.stanford.edu/entries/economics/#5

Higgins, J. M. (1991). The management challenge. New York, NY: Macmillan

Publishing Co.

Hirokawa, R.Y. (1983). Group communication and problem-solving effectiveness II: An

exploratory investigation of procedural functions. Western Journal of Speech

Communication, 47, 59-74.

Hirokawa, R. Y., Erbert, L., & Hurst, A. (1996). Communication and group decision-making effectiveness. In R. Y. Hirokawa and M. S. Poole (Eds.), Communication and group decision making (2nd ed., pp. 269-300). Thousand Oaks, CA: Sage

Publications.

Hirokawa, R. Y., & Poole, M. S. (Eds.) (1996). Communication and group decision

making (2nd ed.). Thousand Oaks, CA: Sage Publications.

Hoenig, E. A. (1993). Public involvement and education: A framework for local

government in urbanizing communities, Olympia, Washington. Proceedings from

the American Water Works Association Symposia on Water Resources Education:

A Lifetime o f Learning and Changing Roles in Water Resource Management and

Policy (pp. 273-280). Bellevue, WA.

Hogarth, R. (1987). Judgment and choice (2nd ed.). Chichester, Eng: Wiley and Sons, Inc.

Holmberg, M. L. (1997). Public participation program development: An analysis of public participation in the water industry. Unpublished master's thesis, University of Nevada, Las Vegas.

Holmberg, M. L. (2002). "They Lost": An analysis of the consequences of publicly-sponsored legal challenges aimed at stopping public works projects. Paper submitted in fulfillment of Political Sciences 733, University of Nevada, Las

Vegas.

Holmberg, M., & Michaelson, L. (1998). Solicit, consider, ignore, decide: Manipulation

of public participation. Proceedings of the 1998 IAP2 Annual Conference, Toronto, Canada.

Hutchins, E. (1990). The technology of team navigation. In J. Galegher, R. E. Kraut, and

C. Egido (Eds.), Intellectual teamwork: Social and technological foundations of

cooperative work. Hillsdale, NJ: Lawrence Erlbaum Associates Inc.

Huz, S., Andersen, D., Richardson, G., & Boothroyd, R. (1997). A framework for

evaluating systems thinking interventions: an experimental approach to mental

health system change. System Dynamics Review, 13(2), 149-169.

Huz, S. (1999). Alignment from group model building for systems thinking measurement and evaluation from a public policy setting. Unpublished doctoral dissertation, University at Albany, State University of New York.

Iacofano, D. S. (1990). Public involvement as an organizational development process: A proactive theory for environmental planning project management. New York:

Garland Publishing, Inc.

Ingle, S. (1985). In search of perfection. Englewood, CA: Prentice Hall.

International Association of Public Participation (1996). Principles of Participation.

Interact: The Journal of Public Participation, 2(1), 5.

International Association of Public Participation (2002). Public participation toolbox:

Passive public information techniques, active public information techniques,

small group public input techniques, large group public input techniques.

Retrieved November 20, 2006, from www.iap2.org

International Association of Public Participation (2003). IAP2 public participation

spectrum. Retrieved November 6, 2005, from www.iap2.org

Jarboe, S. (1996). Procedures for enhancing group decision making. In R. Y. Hirokawa and M. S. Poole (Eds.), Communication and group decision making (2nd ed., pp.

345-383). Thousand Oaks, CA: Sage Publications.

Janis, I. (1972). Victims of groupthink. Boston, MA: Houghton Mifflin.

Janis, I., & Mann, L. (1977). A psychological analysis of conflict, choice, and

commitment. New York: Free Press.

Jasanoff, S. (1996). The dilemma of environmental democracy. Issues in Science and

Technology, 13(1), 63-68.

Johnson-Laird, P. N. (1983). Towards a cognitive science of language, inference, and

consciousness. Cambridge, MA: Harvard University Press.

Johnson-Laird, P. N., Girotto, V., and Legrenzi, P. (1998). Mental models: A gentle guide

for outsiders. Retrieved April 9, 2006, from http://www.si.umich.edu/ICOS/gentleintro.html

Johnson, P. T. (1993). How I turned a critical public into a useful consultant. Harvard

Business Review, 93103, 56-66.

Jones, D., & Seville, D. (2002). Supporting effective participation in the climate change debate: The role of system dynamics simulation modeling. Retrieved December 3,

2007, from http://www.sustainer.org/pubs/siclimate.PDF

Joyce, J. (2003). Bayes' theorem. Retrieved July 10, 2008, from

http://plato.stanford.edu/entries/bayes-theorem

Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under

risk. Econometrica, 47(2), 262-292.

Kaner, S., Lind, L., Toldi, C., Fisk, S., & Berger, D. (1996). Facilitator’s guide to

participatory decision-making. Philadelphia, PA: New Society Publishers.

Kathlene, L., & Martin, J. (1991). Enhancing citizen participation: Panel designs,

perspectives, and policy formation. Journal of Policy Analysis and Management, 10(1), 46-63.

Kelly, M. (1992). Everyone’s problem-solving handbook. White Plains, NY: Quality

Resources.

Kiefer, M. (2008). The social functions of NIMBYism. Retrieved August 29, 2008, from

http://planetizen.com/node/34506

Kim, D. H. (1999). Introduction to systems thinking. Williston, VT: Pegasus

Communications, Inc.

Kish, L. (1965). Survey sampling. New York: Signet.

Kleindorfer, P. R. (1999). Understanding individuals’ environmental decisions: A

decision sciences approach. In K. Sexton, A. A. Marcus, K. W. Easter, and T. D.

Burkhardt (Eds.), Better environmental decisions: Strategies for governments,

businesses, and communities. Washington, DC: Island Press.

Kleinmuntz, D. N. (1993). Information processing and misperceptions of the implications

of feedback in dynamic decision-making. System Dynamics Review, 9(3),

223-237.

Korsmo, F. L. (1990). Problem definition and the Alaska natives: Ethnic identity and

policy formation. Policy Studies Review, 9(2), 294-306.

Kuhn, T. S. (1996). The structure of scientific revolutions (3rd ed.). Chicago: University

of Chicago Press.

Lane, D. C. (1999). Friendly amendment: A commentary on Doyle and Ford’s proposed

re-definition of ‘mental models.’ System Dynamics Review, 15(2), 185-194.

Langan-Fox, J., & Anglim, J. (2004). Mental models, team mental models, and

performance: Process, development and future directions. Human Factors and

Ergonomics in Manufacturing, 14, 331-352.

Larsen, K., McInerney, C., Nyquist, C., Santos, A., & Silsbee, D. (1996). Learning organizations. Retrieved November 5, 2003, from http://home.nycap.rr.com/Klarsen/learning.org

LeCompte, M.D., & Goetz, J. P. (1982). Problems of reliability and validity in

ethnographic research. Review of Educational Research, 52, 31-60.

Lee, K. M. (2004). Presence, explicated. Communication Theory, 14(1), 27-50.

Legrenzi, P. & Girotto, V. (1996). Mental Models in Reasoning and Decision-Making

Processes. In J. Oakhill and A. Garnham (Eds.), Mental models in cognitive science: Essays in honour of Phil Johnson-Laird (pp. 95-116). Erlbaum, UK:

Psychology Press.

Levesque, L. L., & Wilson, J. M. (2001). Cognitive divergence and shared mental models in software development project teams. Journal of Organizational Behavior, 22,

135-144.

Lewis, J. W. (1999). Decision-maker response. In V. H. Dale and M. R. English (Eds.),

Tools to aid environmental decision making (pp. 59-62). New York: Springer.

Lichtenstein, S., Fischhoff, B., & Phillips, L. (1982). Calibration of probabilities: The state of the art to 1980. In D. Kahneman, P. Slovic, and A. Tversky (Eds.),

Judgment under uncertainty: Heuristics and biases. New York: Cambridge

University Press.

Light rail option is derailed: Valley transit officials pursue the cheaper, more flexible

‘rapid bus’ plan (2007, March 3). Las Vegas Sun, p. 10.

Littlejohn, S.W. (2002). Theories of human communication. Belmont, CA: Wadsworth.

Lipshitz, R. (1993). Decision making as argument-driven action. In G.A. Klein, J.

Orasanu, R. Calderwood, and C.E. Zsambok (Eds.), Decision making in Action:

Models and methods (pp. 172). Norwood, NJ: Ablex Publishing Corporation.

Lipshitz, R., Klein, G., Orasanu, J. & Salas, E. (2001). Focus article: Taking stock of

naturalistic decision making. Journal of Behavioral Decision Making, 14, 331-

352.

Luna-Reyes, L., & Andersen, D. (2003). Collecting and analyzing qualitative data for

system dynamics: methods and models. System Dynamics Review, 19(4),

271-296.

Lyndon, M. L. (1999). Characterizing the regulatory and judicial setting. In V. H. Dale and M. R. English (Eds.), Tools to aid environmental decision making (pp. 130-160). New York: Springer.

Maruska, D. (2004). How great decisions get made: 10 easy steps for reaching

agreement on even the toughest issue. New York: Amacom.

Mathieu, J. E., Heffner, T.S., Goodwin, G. F., Cannon-Bowers, J. A., & Salas, E. (2005).

Scaling the quality of teammates’ mental models: equifinality and normative comparisons. Journal of Organizational Behavior, 26, 37-56.

Maxwell, L. C. (1999). Decision-Maker Response. In V. H. Dale and M. R. English

(Eds.), Tools to aid environmental decision making (pp. 282-284). New York:

Springer.

McDaniel, S. (2003). What’s your idea of a mental model? Retrieved November 29, 2005, from http://www.boxesandarrows.com/archives/whats_your_idea_of_a_mental_model.php

McGovern, J., & Samson, D. (1993). Influence diagrams for decision analysis. In S. S.

Nagel (Ed.), Computer-aided decision analysis: Theory and applications (pp.

108-126). Boulder, Westport, CT: Quorum Books.

McGrath, J. (1984). Groups: interaction and performance. Englewood Cliffs, NJ:

Prentice-Hall, Inc.

McKenna, C. K. (1980). Quantitative methods for public decision making. San Francisco,

CA: McGraw-Hill Book Company.

McNamara, C. (1999). Basic Guidelines to problem solving and decision making.

Retrieved June 20, 2002, from www.managementhelp.org/prsn_prd/prb_bsc.htm


Meadows, D.H. (1991). System dynamics meets the press: The global citizen. Washington

DC: Island Press.

Meadows, D. (1999). Leverage points: Places to intervene in a system. Sustainability

Institute. Retrieved December 3, 2007, from

Sustainer.org/pubs/Leverage_Points.pdf

Meadows, D. H., Behrens, W. W. III, Meadows, D. H., Naill, R. F., & Zahn, E. K. O.

(1974). Global collapse: Envisioning a sustainable future. Cambridge, MA:

Wright-Allen Press.

Merkhofer, M. W. (1999). Assessment, refinement, and narrowing of options. In V. H.

Dale and M. R. English (Eds.), Tools to aid environmental decision making (pp.

231-281). New York: Springer.

Michaelson, L. (1996). Facilitation training workshop. Conducted for the Las Vegas

Valley Water District, March 20, 1996, Las Vegas, NV.

Milbrath, L. (1983). Citizen surveys as citizen participation mechanisms. In G. A.

Daneke, M. W. Garcia, and J. D. Priscoli (Eds.), Public involvement and social

impact assessment (pp. 89-99). Boulder, CO: Westview Press.

Mill, J. (1863). Utilitarianism. London: Parker, Sons, and Bourn.

Miller, G. A. (1951). Language and communication. New York: McGraw-Hill.

Miller, G. A. (1956). The magical number seven, plus or minus two: some limits on our

capacity for processing information. Psychological Review, 63, 81-97.

Miller, L., & Howard, J. (1992). Managing quality through teams. Atlanta: The Miller

Consulting Group.

Mingers, J. (2001). Multimethodology - mixing and matching methods. In J. Rosenhead and J. Mingers (Eds.), Rational analysis for a problematic world revisited: Problem structuring methods for complexity, uncertainty and conflict (2nd ed.,

pp. 290-309). New York: John Wiley and Sons, Ltd.

Mintz, A. (1951). Nonadaptive group behavior. Journal of Abnormal and Social

Psychology, 46, 150-159.

Mirochna, J. (1993). Involving the neighborhood before concerns arise: You and your

citizens, partners for success. In Proceedings from the American Water Works

Association Annual Conference, Management and Relations (pp. 497-499). San

Antonio, TX: American Water Works Association.

Mohammed, S., & Dumville, B. C. (2001). Team mental models in a team knowledge

framework: expanding theory and measurement across disciplinary boundaries.

Journal of Organizational Behavior, 22, 89-106.

Moser, C., & Kalton, G. (1972). Survey methods in social investigation. New York:

Basic Books.

Mowitz, R. J. (1980). The design of public decision systems. Baltimore: University Park

Press.

Mowrey, M., & Redmond, T. (1993). Not in our backyard: The people and events that

shaped America’s modern environmental movement. New York: William

Morrow.

Nagel, S. S. (1990). Improving public policy toward and within developing countries. In

S. S. Nagel (Ed.), Public administration and decision-aiding software: Improving

procedure and substance (pp. 185-214). New York: Greenwood Press.


Nagel, S. S. (Ed.) (1993). Computer-aided decision analysis: Theory and applications.

Westport, CT: Quorum Books.

Nardi, P. (2003). Doing survey research. Boston: Allyn and Bacon.

National Archives (2007). Federal law, policy, and regulation. Retrieved December 12,

2007, from http://www.archives.gov/records-mgmt/laws/index.html

National Archives (2007). Records of the Federal Water Pollution Control Administration. Retrieved November 22, 2007, from http://www.archives.gov/research/guide-fed-records/groups/382.html#382.1

National Archives (2007). Records of the Community Service Organization. Retrieved

November 22, 2007, from

http://www.archives.gov/research/guide-fed-records/groups/381.html

National Archives (2007). National archives and records administration Freedom of Information Act (FOIA) reference guide. Retrieved November 22, 2007, from http://www.archives.gov/foia/foia-guide.html

Newell, A., & Simon, H. A. (1972). Human problem solving. Englewood Cliffs, NJ:

Prentice-Hall Inc.

Norman, D.A. (1983). Some observations on mental models. In D. Gentner and A. L.

Stevens (Eds.), Mental models (pp. 7-14). Hillsdale, NJ: Lawrence Erlbaum, Inc.

Oakhill, J. (1996). Mental models in children’s text comprehension. In J. Oakhill and A.

Garnham (Eds.), Mental models in cognitive science: Essays in honour of Phil Johnson-Laird (pp. 77-93). Erlbaum, UK: Psychology Press.

Oatley, K. G. (1996). Emotions, Rationality, and Informal Reasoning. In J. Oakhill and A. Garnham (Eds.), Mental models in cognitive science: Essays in honour of Phil

Johnson-Laird (pp. 175-194). Erlbaum, UK: Psychology Press.

Orasanu, J., & Connolly, T. (1993). The reinvention of decision making. In G.A. Klein, J.

Orasanu, R. Calderwood, and C.E. Zsambok (Eds.), Decision making in action:

Models and methods (pp. 3). Norwood, NJ: Ablex Publishing Corporation.

O’Rourke, D., & Macey, G. (2002). Community environment policing: Assessing new

strategies of public participation in environmental regulation. Journal of Policy

Analysis and Management, 22(3), 383-414.

Osborn, A. F. (1963). Applied imagination: principles and procedures of creative problem-solving (3rd ed.). New York: Charles Scribner's Sons.

Quade, E. S. (1982). Analysis for public decisions (2nd ed.). New York: North-Holland.

Panagiotou, G. (2003). Upfront best practice: Bringing SWOT into focus. Business

Strategy Review, 14(2), 8-10.

Park, W. (2000). A comprehensive empirical investigation of relationships among variables of the groupthink model. Journal of Organizational Behavior, 21, 873-

887.

Patton, B. R., & Downs, T. M. (2003). Decision-making group interactions: Achieving

quality (4*'’ ed.). Boston: A and B Publishers.

Pierce, J. C., & Doerksen, H. R. (1976). Water politics and public involvement. Ann

Arbor, MI: Ann Arbor Science Publishers, Inc.

Phelps, B. (1997). Resources for Leadership. Retrieved July 30, 2006, from

www.whitestag.org/resources/sb215.htm

Phillips, L. D. (1989). Requisite decision modeling for technological projects. In C. Vlek and G. Cvetkovich (Eds.), Social decision methodology for technological projects

(pp. 95-110). New York: Dordrecht Kluwer.

Pickton, D., & Wright, S. (1998). What’s SWOT in strategic analysis? Strategic

Change, 7, 101-110.

Poole, M. S. (1991). Procedures for managing meetings: Social and technological

innovations. In R. A. Swanson and B.C. Knapp (Eds.), Innovative meeting

management (pp. 53-110). Austin, TX: 3M Meeting Management Institute.

PQ Systems (1992). Total quality transformation: Improvement guide. Cincinnati: PQ

Systems.

Priston Entertainment Ltd. (2007). The Economic Opportunity Act. Retrieved

December 6, 2007, from www.blackseek.com/bh/2Gl/33_EGA.htm

Raiffa, H. (1994). The prescriptive orientation of decision making: A synthesis of

decision analysis, behavioral decision making and game theory. In S. Rios (Ed.),

Decision theory and decision analysis: Trends and challenges (pp. 3-13).

Boston: Kluwer Academic Publishers.

Randers, J. (1980). Guidelines for model conceptualization. In Elements of the system dynamics method. Cambridge, MA: Massachusetts Institute of Technology Press.

Rees, F. (2001). How to lead work teams: Facilitation skills (2nd ed.). San Francisco:

John Wiley and Sons, Inc.

Rees, F. (2005). The facilitator excellence handbook (2nd ed.). San Francisco: John Wiley

and Sons.


Renn, O., Webler, T., Rakel, H., Daniel, P., & Johnson, B. (1993). Public participation in

decision making: A three step procedure. Policy Sciences, 26, 189-214.

Rentsch, J. R., & Klimoski, R. J. (2001). Why do ‘great minds’ think alike?: Antecedents

of team member schema agreement. Journal of Organizational Behavior, 22,

107-120.

Richardson, G. P., & Andersen, D. F. (1995). Teamwork in group model building. System Dynamics Review, 11(2), 113-137.

Richardson, G. P., Andersen, D. F., Maxwell, T. T., & Stewart, T. R. (1994). Foundations of mental model research. Proceedings of the 12th International System Dynamics Conference (July 11-15). Stirling, Scotland.

Richardson, G. P., & Pugh, A. (1981). Introduction to system dynamics modeling with DYNAMO. Cambridge, MA: Massachusetts Institute of Technology Press.

Richardson, G. P., & Pugh, A. (1989). Introduction to system dynamics modeling. Williston, VT: Pegasus Communications.

Rios, S. (Ed.) (1994). Decision theory and decision analysis: Trends and challenges.

Boston, MA: Kluwer Academic Publishers.

Roberts, N., Andersen, D., Deal, R., Grant, M., & Shaffer, W. (1983). Introduction to

computer simulation: The system dynamics modeling approach. Reading, MA:

Addison-Wesley.

Robson, C. (1993). Real world research: A resource for social scientists and practitioner-researchers. Oxford, UK: Blackwell.

Robson, M. (2002). Problem-solving in groups (3rd ed.). Burlington, VT: Gower.

Rosander, A. (1989). The quest for quality in services. Milwaukee: ASQC Quality Press.


Rosenhead, J., & Mingers, J. (Eds.) (2001). Rational analysis for a problematic world revisited: Problem structuring methods for complexity, uncertainty and conflict (2nd ed.). New York: John Wiley and Sons, Ltd.

Rosenthal, R. (1976). Experimenter effects in behavioral research. Applied Social

Research Methods. Beverly Hills, CA: Sage.

Rouwette, E. (2003). Group model building as mutual persuasion. Unpublished doctoral

dissertation, Nijmegen University, Nijmegen, the Netherlands.

Rouwette, E., & Vennix, J. A. M. (2006). System dynamics and organizational interventions. Systems Research and Behavioral Science, 23, 451-466.

Rouwette, E., Vennix, J. A. M., & Mullekom, T. V. (2002). Group model building effectiveness: A review of assessment studies. System Dynamics Review, 18(1), 5-45.

Rubinstein, A. (1998). Modeling bounded rationality. Cambridge, MA: Massachusetts Institute of Technology Press.

Scheidel, T. C., & Crowell, L. (1964). Idea development in small discussion groups. Quarterly Journal of Speech, 50, 140-145.

Schaible, K., Davis, M., & Harris, C. (2007). Public health grand rounds facilitator’s guide. Retrieved September 19, 2007, from http://www.publichealthgrandrounds.unc.edu/sitereg/fac_guide.pdf

Scholtes, P. (1988). The team handbook. Madison, WI: Joiner Associates.

Schumacher, R., & Czerwinski, M. (1992). Mental models and the acquisition of expert knowledge. In R. Hoffman (Ed.), The psychology of expertise. New York: Springer-Verlag.


Schwartz, A. E. (1994). Group decision-making. The CPA Journal. Retrieved November 14, 2005, from www.nysscpa.org/cpajournal/old/15703015.htm

Senge, P. (1990). The fifth discipline: The art and practice of the learning organization. New York: Doubleday.

Shafer, G. (1996). The art of causal conjecture. Cambridge, MA: Massachusetts Institute of Technology Press.

Shapiro, M. (2006). A Golden anniversary?: The Administrative Procedures Act of 1946.

Cato Review of Business and Government Regulation. Retrieved November 2,

2007, from www.cato.org/pubs/regulation/reg19n3i.html

Sheridan, C. E. (1979). Methods of experimental psychology. New York: Holt, Rinehart,

and Winston.

Simon, H. A. (1945). Administrative behavior. New York: Macmillan.

Simon, H. A. (1955). A behavioral model of rational choice. Quarterly Journal of Economics, 69, 99-118.

Simon, H. A. (1956). Theories of decision making in economics and behavioral science. American Economic Review, 49, 253-283.

Simon, H. A. (1957). Administrative behavior: A study of decision-making process in administrative organizations (2nd ed.). New York: Macmillan.

Simon, H. A. (1959). Theories of decision making in economics and behavioral science. American Economic Review, 49, 253-283.

Simon, H. A. (1976). From substantive to procedural rationality. In S. J. Latsis (Ed.), Method and appraisal in economics (pp. 129-148). New York: Cambridge University Press.


Sink, D. (1983). Using the nominal group technique effectively: The nominal group technique helps groups generate ideas and reach consensus through a five-stage structured process. National Productivity Review, 2(2), 173-184.

Smith-Jentsch, K. A., Campbell, G. E., Milanovich, D. M., & Reynolds, A. M. (2001).

Measuring teamwork mental models to support training needs assessment,

development, and evaluation: Two empirical studies. Journal of Organizational

Behavior, 22, 179-194.

Stave, K. (2001). Dynamics of wetland development and resource management in the Las

Vegas Wash, Nevada. Journal of the American Water Resources Association, 37(5), 1369-1379.

Stave, K. (2002). Using system dynamics to improve public participation in

environmental decisions. System Dynamics Review, 18(2), 139-167.

Stave, K. (2003). A system dynamics model to facilitate public understanding of water

management options in Las Vegas, NV. Journal of Environmental Management,

67, 303-313.

Stave, K. (2008). City of Los Angeles zero waste system dynamics simulation model. Proceedings of the 26th International Conference of the System Dynamics Society, Athens, Greece, July 20-24, 2008. Retrieved on October 1, 2008 from http://www.systemdynamics.org/conferences/2008/proceed/index.htm

Steed, C. (2001). Theory of functional group decision-making. Retrieved July 25, 2006,

from www.colostate.edu/Depts/Speech/rccs/theory59.htm

Sterman, J. D. (1994). Learning in and about complex systems. System Dynamics Review, 10(2-3), 291-330.


Sterman, J. D. (2000). Business dynamics: Systems thinking and modeling for a complex world. Boston: Irwin McGraw-Hill.

Sterman, J. D. (2001). System dynamics modeling: tools for learning in a complex world.

California Management Review, 43(4), 8-25.

Sterman, J. D. (2002). All models are wrong: reflections on becoming a systems scientist.

System Dynamics Review, 18(4), 501-531.

Stewart, R. (1975). The reformation of American administrative law. Harvard Law Review, 88(8), 1667-1813.

Stiftel, B. (1983). Dialogue: Does it increase participants' knowledgeability and attitude

congruence? In G. A. Daneke, M. W. Garcia, and J. D. Priscoli (Eds.), Public

involvement and social impact assessment (pp. 61- 89). Boulder, CO: Westview

Press.

Straubel, R., Holznagel, A., Wittmuss, A., & Barmann, U. (1994). Heuristic solving of

NP-Complete job-shop scheduling problems by multicriteria optimization. In S.

Rios (Ed.), Decision theory and decision analysis: Trends and challenges (pp.

243-258). Boston: Kluwer Academic Publishers.

Straus, D. A. (1999). Designing a consensus building process using a graphic road map.

In L. Susskind, S. McKearnan, and J. Thomas-Larmer (Eds.), The consensus

building handbook. Thousand Oaks, CA: Sage.

Tabossi, P. (1996). The interpretation of words. In J. Oakhill & A. Garnham (Eds.), Mental models in cognitive science: Essays in honour of Phil Johnson-Laird (pp. 19-32). Erlbaum, UK: Psychology Press.


Thomas, J. C. (1995). Public participation in public decisions: New skills and strategies for public managers. San Francisco: Jossey-Bass Publishers.

Thompson, M. S. (1982). Decision analysis for program evaluation. Cambridge, MA:

Ballinger Publishing Company.

Tversky, A., & Kahneman, D. (1971). The belief in the 'law of small numbers.' Psychological Bulletin, 76, 105-110.

Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 4, 207-232.

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, New Series, 185(4157), 1124-1131.

Urbaniak, G. C., & Plous, S. (2008). Research Randomizer. Retrieved January 20, 2008,

from www.randomizer.org

US Title 40, Code o f Federal Regulations (2007). Public Participation Guidelines. Title

40 Code of Federal Regulations, Section 25.4., pp. 284-293. Retrieved on

November 2, 2007 from US Government Printing Office

http://ecfr.gpoaccess.gov/cgi/t/text/text-idx?c=ecfr&tpl=/ecfrbrowse/Title40/40cfr

25_main_02.tpl

U.S. Department of Energy: Environment, Safety and Health Office of NEPA Policy and Assistance (1998). Effective public participation under the National Environmental Policy Act (2nd ed.). Retrieved November 26, 2007, from

http://www.eh.doe.gov/nepa/tools/guidance/pubpart2.html


U.S. Environmental Protection Agency, Chemical Emergency Preparedness and Prevention

Office (2008). The Emergency Planning and Community Right-to-Know Act.

Retrieved December 10, 2007, from www.epa.gov/ceppo

U.S. Environmental Protection Agency (1973). EPA issues public participation

regulations. Retrieved December 6, 2007, from

www.epa.gov/history/topics/fhwa/04.htm

U.S. Environmental Protection Agency (1979). Press release: EPA increases public role

in environmental programs. Retrieved December 6, 2007, from

www.epa.gov/history/topics/public/01.htm

U.S. Environmental Protection Agency (1988). Common Sense Initiative Council: Report

of the Common Sense Initiative Council's stakeholder involvement work group. Washington, DC: Government Printing Office.

U.S. Environmental Protection Agency (2002). National Environmental Justice Advisory Council: Model plan for public participation. Retrieved November 20, 2006,

from www.epa.gov/ProjectXL/nejac.htm

U.S. Environmental Protection Agency (1996). Washington DC Office of Solid Waste,

Permits Branch, RCRA Public Participation Manual. Retrieved April 29, 2007,

from http://www.epa.gov/epaoswer/hazwaste/permit/pubpart/chp_2.pdf

U.S. Environmental Protection Agency (2005). Public involvement. Retrieved April 29,

2007, from www.epa.gov/publicinvolvement

U.S. Department of Fish and Wildlife Services (2008). Digest of the Federal Resource Law of Interest to the US Fish and Wildlife Services. Retrieved January 5, 2008,

from http://www.fws.gov/laws/lawsdigest/FWATRPG.HTML


U.S. Department of Transportation (2007). Urban transportation planning in the United

States: An historical overview: fifth edition: Chapter 6, The environment and

citizen involvement. Retrieved December 6, 2007, from

http://tmip.fhwa.dot.gov/clearinghouse/docs/utp/ch6.stm

U.S. Department of Transportation, Federal Highway Administration (2005). Public involvement techniques for transportation decision-making. Retrieved November

20, 2006, from www.fhwa.dot.gov/reports/pittd/bridge2b.htm

van den Belt, M. J. (2000). Mediated modeling. Unpublished doctoral dissertation, University of Maryland.

Vari, A., & Joanne, C. (Eds.) (1999). Public participation in environmental decisions:

Recent developments in Hungary. Budapest: Akademiai Kiado.

Vennix, J. A. M. (1996). Group model building: Facilitating team learning using system

dynamics. New York: John Wiley and Sons.

Vennix, J. A. M. (1999). Group model building: Tackling messy problems. System Dynamics Review, 15(4), 379-401.

Videira, N., Antunes, P., Santos, R., & Lobo, G. (2005). Public stakeholder participation

in European water policy: a critical review of project evaluation processes.

European Environment, 16, 19-31.

Vlek, C., Timmermans, D., & Otten, W. (1993). The idea of decision support. In S. S. Nagel (Ed.), Computer-aided decision analysis: Theory and applications (pp. 33-68). Westport, CT: Quorum Books.

von Neumann, J., & Morgenstern, O. (1947). The theory of games and economic behavior. Princeton, NJ: Princeton University Press.


Walker, P. (2001). Citizen advisory boards and public involvement: The role of citizens

in public decision-making, post-cold war demilitarization, and environmental

cleanup. Federal Facilities Environmental Journal, 117-133.

Wargo, B. (2006, June 12). Not in my back yard. Las Vegas Sun, p. 3.

Weiss, J. A. (1989). The powers of problem definition: The case of government

paperwork. Policy Sciences, 22, 97-122.

Weldon, C. (1993). Enhancing public participation through a consensus process: Recent experience of the Los Angeles Department of Water and Power. Proceedings of the American Water Works Association 1992 Annual Conference (June 6, 1993), San Antonio, TX.

Wilcox, D. (1994). The guide to effective participation. Retrieved April 29, 2007, from

http://www.partnerships.org.uk/guide/mainl.html

Williams, B., & Matheny, A. (1995). Democracy, dialogue, and environmental disputes: The contested languages of social regulation. New Haven, CT: Yale University Press.

Williams-Cloud, S. (1998). Testing the potential of system dynamics models for improving public participation in resource management. Unpublished master’s thesis, University of Nevada, Las Vegas.

Williamson, A., & Fong, A. (2004). Public deliberation: Where are we and where can we

go? National Civic Review, 3-15.

Wilson, G. L. (2005). Groups in context: Leadership and participation in small groups (7th ed.). New York: McGraw-Hill.


Wittgenstein, L. (1922). Tractatus logico-philosophicus. London: Routledge and Kegan

Paul.

Yaremko, R. M., Harari, H., Harrison, R. C., & Lynn, E. (1982). Reference handbook of

research and statistical methods. New York: Harper and Row.

Yin, R. (2003a). Applications of case study research. Applied Social Research Methods

Series, 43. Thousand Oaks, CA: Sage Publications.

Yin, R. (2003b). Case study research design and methods. Applied Social Research

Methods Series, 5. Thousand Oaks, CA: Sage Publications.

Young, R. M. (1983). Surrogates and mapping: Two kinds of conceptual models for interactive devices. In D. Gentner and A. L. Stevens (Eds.), Mental models (pp. 35-53). Hillsdale, NJ: Lawrence Erlbaum, Inc.

Zagonel, A. (2004). Reflecting on Group Model Building Used to Support Welfare

Reform in New York State. Unpublished doctoral dissertation, University at Albany.

Zakay, D. (1984). The evaluation of managerial decision quality by managers. Acta

Psychologica, 56, 49-57.


VITA

Graduate College
University of Nevada, Las Vegas

Marcia Lynne Turner

Home Address:
    1301 Birch Street
    Las Vegas, Nevada 89102

Degrees:
    Bachelor of Arts, Philosophy and Speech Communication (double major), 1988
    University of San Diego

    Master of Arts, Speech Communication, 1997
    University of Nevada, Las Vegas

Special Honors and Awards:
    Phi Kappa Phi National Honor Society, 1998
    Public Relations/Advertising Manager of the Year 1996: Las Vegas Women in Communication
    Bronze Quill Award, Integrated Marketing, International Assn. Business Communicators
    National Health Policy Fellow, National Assn. Public Hospitals

Dissertation Title: Evaluating the Use of System Dynamics for Improving Stakeholder Decision Making

Dissertation Examination Committee:
    Chairperson, Dr. Krystyna Stave, Ph.D.
    Committee Member, Dr. Anthony Ferri, Ph.D.
    Committee Member, Dr. Timothy Farnham, Ph.D.
    Graduate Faculty Representative, Dr. Jerry Simich, Ph.D.


