
Solving Multistage Stochastic Networks: An Application of Scenario Aggregation

John M. Mulvey and Hercules Vladimirou

Department of Civil Engineering and Operations Research, School of Engineering and Applied Science, Princeton University, Princeton, New Jersey 08544

The scenario aggregation algorithm is specialized for stochastic networks. The algorithm determines a solution that does not depend on hindsight and accounts for the uncertain environment depicted by a number of appropriately weighted scenarios. The solution procedure decomposes the stochastic program to its constituent scenario subproblems, thus preserving the network structure. Computational results are reported demonstrating the algorithm's convergence behavior. Acceleration schemes are discussed along with termination criteria. The algorithm's potential for execution on parallel multiprocessors is discussed.

1. INTRODUCTION

Numerous planning problems can be represented as linear or nonlinear networks with stochastic elements. Examples include financial modeling [18, 12], facility location and planning [14, 38], and dynamic vehicle allocation [29], to name just a few. For the most part, real-world applications have been restricted to deterministic models despite the fact that deterministic analysis based on point forecasts or "worst-case" values of the uncertain quantities often yields solutions that differ substantially from the optimum of the underlying stochastic problem (e.g., see Birge [3] and Kallberg et al. [15]).

A primary obstacle to the development of effective stochastic optimization software has been the size and computational complexity of stochastic programs; as a result, computational work has largely focused on programs with special structures. Consideration of uncertainty inevitably leads to programs whose size grows multiplicatively with the number of decision stages and the number of realizations of the uncertain quantities. Moreover, network structures in the underlying stochastic problem are destroyed in the explicit representation of the deterministic equivalent.

This research was supported in part by National Science Foundation Grant #DCR-8614057 and IBM Grant #5785. This study is Technical Report SOR-88-1, Dept. of Civil Engineering and Operations Research, Princeton University.

NETWORKS, Vol. 21 (1991) 619-643 © 1991 by John Wiley & Sons, Inc. CCC 0028-3045/91/060619-25$04.00

One class of stochastic programs that has been extensively studied is that of two-stage stochastic linear programs (LPs) with recourse. These programs provide for fixed recourse action, at some cost, to attain feasibility once the values of the uncertain quantities are revealed. Uncertainty is typically restricted to the right-hand-side vector of the constraints. Van Slyke and Wets [35] introduced the L-shaped decomposition method for this problem class. This algorithm is based on cutting-plane procedures (also known as outer linearization or dual decomposition).
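For reference, the two-stage recourse LP discussed here has the following standard form; the symbols c, A, b, q, W, T, and h(ξ) are the usual generic data of the standard statement, not quantities taken from this paper:

```latex
\begin{aligned}
\min_{x \ge 0}\quad & c^{\top}x \;+\; \mathbb{E}_{\xi}\!\left[\, Q(x,\xi) \,\right]
\qquad \text{s.t.}\quad Ax = b,\\[4pt]
\text{where}\quad & Q(x,\xi) \;=\; \min_{y \ge 0}\;\bigl\{\, q^{\top}y \;:\; W y = h(\xi) - T x \,\bigr\}.
\end{aligned}
```

With fixed recourse, the matrix W does not depend on ξ; with uncertainty restricted to the right-hand side, only h(ξ) is random.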

The most widely known case of stochastic LPs with recourse involves the form referred to as simple recourse, whose special structure allows the use of rather efficient solution algorithms [39, 41]. Simple recourse formulations for short-term cash management problems have been used by Kallberg et al. [15] and Kusy and Ziemba [16]. Simple recourse programs can accommodate continuously distributed, independent stochastic parameters in the right-hand-side vector of the constraints or nonlinear objective functions. Such programs have been used for transportation problems with stochastic demands. Solution procedures for these problems include variants of the Frank-Wolfe method [8], piecewise linearization [14], and the forest iteration method [30]. Even for these simplified problems, relaxing the assumption of independence in continuously distributed right-hand-side parameters requires the onerous computation of multiple integrals or knowledge of the marginal distributions.

Variants of the L-shaped decomposition have been applied to stochastic LPs with general fixed recourse. Wallace [37] used basis partitioning techniques to exploit the structure of programs with network recourse; Birge and Louveaux [5] examined a multicut version of the method, while Ruszczynski [34] combined dual decomposition with a penalty approach. Multistage extensions to dual decomposition have been developed by Birge [4] and Olsen [28] for stochastic LPs and by Louveaux [17] for quadratic programs.

The stochastic quasi-gradient method of Ermoliev [11, 12] and the nested decomposition of Noel and Smeers [27] can be applied to problems involving nonlinearities or uncertainties in the constraint coefficients. The Lagrangian dual decomposition methods proposed by Dempster [10] could lead to an alternative solution approach. The computational efficiency of these algorithms has not yet been demonstrated.

An alternative approach is scenario analysis. Herein, a few realizations of the stochastic parameters are used to model uncertainty. A multiscenario representation of uncertainty is adequate and, perhaps, most appropriate when no probabilistic law can be derived for the stochastic elements or when uncertainty affects the constraint coefficients. Even if stochastic models can be formulated, the mathematical programs that they give rise to can be extremely hard to solve. "Scenario analysis," as it was practiced until now, involved an ad-hoc examination of the solutions of the scenario subproblems in an attempt to identify similarities and trends on which a satisfactory solution to the overall problem could be based. Rockafellar and Wets [33] (see also Wets [40]) introduced the scenario aggregation algorithm (SA) to systematically combine the scenario solutions into a robust decision policy.

We adapt the scenario aggregation algorithm for network programs under uncertainty. A weighting scheme imposed on the scenarios reflects the uncertain environment. The primary aims of our study are (1) an empirical investigation of the algorithm's convergence properties and potential as a stochastic programming tool, and (2) a formalization of the practical aspects of implementation in the context of stochastic networks.

The remainder of the paper is organized as follows: In Section 2, we describe the two-stage generalized network under uncertainty and cite practical applications for this model form. The scenario aggregation algorithm as it applies to stochastic network programs is the subject of Section 3. In Section 4, we describe the algorithmic implementation and present computational results. Finally, conclusions and directions for further research are discussed in the last section.

2. MULTISCENARIO NETWORK PROGRAMS AND APPLICATIONS

We are concerned with stochastic generalized networks possessing a fixed topology and deterministic arc flow bounds. Hence, in their most general form, the stochastic networks admit uncertainty on the arc multipliers, on the node supply/demand values, and on the coefficients of convex flow cost functions. The uncertain quantities in the program take a finite number of discrete realizations. The assumption of discrete distributions for the uncertain elements is common in most stochastic programming approaches.
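Since the uncertain quantities take finitely many discrete realizations, a scenario is simply a bundle of parameter values together with a weight. A minimal data layout might look like the following sketch; the field names, arc labels, and numeric values are illustrative assumptions, not data from the paper:

```python
from dataclasses import dataclass
from typing import Dict, Tuple

Arc = Tuple[str, str]

@dataclass
class Scenario:
    """One discrete realization of the uncertain quantities; field
    names, arc labels, and numbers are illustrative assumptions."""
    probability: float               # scenario weight pi_s
    multipliers: Dict[Arc, float]    # stochastic arc multipliers t_ij(s)
    supply_demand: Dict[str, float]  # stochastic node values d_i(s)

scenarios = [
    Scenario(0.6, {("i1", "j1"): 1.08}, {"j1": 0.0}),
    Scenario(0.4, {("i1", "j1"): 0.95}, {"j1": 0.0}),
]
```

The weights must sum to one so that expectations over scenarios are well defined.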

For brevity and simplicity in notation and exposition, we restrict our attention to two-stage stochastic networks. We emphasize, however, that the scenario aggregation procedure, as it was introduced by Rockafellar and Wets [33], and our implementation of the algorithm can address multistage stochastic programs [20].

Multistage stochastic programs fully capture dynamic aspects inherent in many hierarchical decision problems under uncertainty. Yet, two-stage formulations of multiperiod problems can partially capture dynamic effects and have often been used in practical settings; see, for example, Kusy and Ziemba [16] and Mulvey and Vladimirou [18]. To clarify the distinction between multiple periods and multiple stages in a stochastic program, we note that stages represent time points at which information about stochastic parameters becomes known, that is, times at which the values of some uncertain quantities are revealed.

Two-stage stochastic models for multiperiod problems involve decisions over several planning periods, but assume that all uncertainties are resolved at the same time, following the selection of first-stage decisions. The assumption of complete information becoming available at the end of the first decision stage simplifies the stochastic program and reduces its size. Yet, it considers long-term planning goals and dynamic effects through the use of multiple periods. In two-stage stochastic networks, the flows on a set of first-stage arcs must first be decided; stochastic arc multipliers may apply on some of these arcs. Once the first-stage decisions are made, the values of all uncertain quantities are revealed and the flows on the second-stage arcs must be set. The second-stage (recourse) variables depend not only on the first-stage decisions but also on the realization of the uncertain parameters. Still, all realizations yield generalized network subproblems with a fixed topology.

Let G = {V, A} denote the underlying network graph, where V is the set of nodes and A is the set of directed arcs. For every node i ∈ V, the sets A_i^+ = {(i, j) ∈ A} and A_i^− = {(j, i) ∈ A} represent the outgoing and incoming arcs at node i, respectively, while A_i = A_i^+ ∪ A_i^− is the set of all arcs incident at node i. The set of arcs A is partitioned into disjoint subsets as follows:

A_1, the subset of arcs representing first-stage decisions for which all associated parameters are deterministic;

A_1′, the subset of arcs representing first-stage decisions that have associated stochastic parameters (particularly stochastic arc multipliers);

A_2, the subset of arcs corresponding to second-stage decisions.

We have A = A_1 ∪ A_1′ ∪ A_2. The set of nodes is similarly partitioned into subsets V_1 = {i ∈ V: A_i ⊆ {A_1 ∪ A_1′}} and V_2 = V \ V_1. V_1 represents the nodes with deterministic flow balance equations (first-stage constraints), while V_2 covers the second-stage constraints. Uncertain supply/demand is allowed on nodes in V_2.

All realizations of the uncertain quantities collectively specify the set of scenarios S with cardinality L. A deterministic scenario subproblem is generated by fixing the uncertain parameters to the corresponding values for a particular realization. Thus, each realization yields a scenario s ∈ S. Each scenario subproblem is a generalized network with the fixed topology of graph G. The subproblem for scenario s ∈ S is stated as

[P_s]:   min  f_s(x(s), y(s))    (1)

subject to

∑_{(i,j)∈A_i^+} x_ij(s) − ∑_{(j,i)∈A_i^−} r_ji x_ji(s) = b_i,   ∀ i ∈ V_1,    (2)

∑_{(i,j)∈A_i^+} y_ij(s) − ∑_{(j,i)∈A_i^−∩A_2} t_ji(s) y_ji(s) − ∑_{(j,i)∈A_i^−∩A_1′} t_ji(s) x_ji(s) − ∑_{(j,i)∈A_i^−∩A_1} r_ji x_ji(s) = d_i(s),   ∀ i ∈ V_2,    (3)

l_ij ≤ x_ij(s) ≤ u_ij,   ∀ (i, j) ∈ {A_1 ∪ A_1′},    (4)

l_ij ≤ y_ij(s) ≤ u_ij,   ∀ (i, j) ∈ A_2.    (5)


A clarification of notation is in order. We define by

f_s(·), a convex, separable objective function of the decision variables under scenario s ∈ S;

x_ij(s), the flow on first-stage arc (i, j) ∈ {A_1 ∪ A_1′} under scenario s ∈ S;

y_ij(s), the flow on second-stage arc (i, j) ∈ A_2 under scenario s ∈ S;

r_ij, the deterministic multiplier on arc (i, j) ∈ A_1;

t_ij(s), the value of the stochastic multiplier on arc (i, j) ∈ {A_1′ ∪ A_2} under scenario s ∈ S;

b_i, the deterministic external supply/demand at node i ∈ V_1;

d_i(s), the value of the stochastic supply/demand at node i ∈ V_2 under scenario s ∈ S;

l_ij (u_ij), the deterministic lower (upper) bound on first-stage arc (i, j) ∈ {A_1 ∪ A_1′};

l_ij (u_ij), the deterministic lower (upper) bound on second-stage arc (i, j) ∈ A_2.

x(s) = {x_ij(s): (i, j) ∈ {A_1 ∪ A_1′}} and y(s) = {y_ij(s): (i, j) ∈ A_2} are the first- and second-stage decision vectors under scenario s ∈ S, respectively.
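Under the usual generalized-network convention, raw flow leaves the tail of an arc while multiplier-scaled flow arrives at its head. That bookkeeping can be sketched as follows; the three-node graph, multiplier values, and function name are illustrative assumptions, not data from the paper:

```python
def balance_residuals(nodes, arcs, flows, supply):
    """Generalized-network conservation: outflow minus multiplied
    inflow minus external supply must vanish at every node."""
    res = {n: -supply.get(n, 0.0) for n in nodes}
    for (tail, head), mult in arcs.items():
        f = flows[(tail, head)]
        res[tail] += f           # raw flow leaves the tail node
        res[head] -= mult * f    # multiplier-scaled flow enters the head
    return res

# Tiny illustrative scenario: cash node c1 buys an asset i (transaction
# multiplier 0.99), whose holding earns a stochastic return of 1.08.
nodes = ["c1", "i", "h"]
arcs = {("c1", "i"): 0.99, ("i", "h"): 1.08}
flows = {("c1", "i"): 100.0, ("i", "h"): 99.0}
supply = {"c1": 100.0, "h": -106.92}
res = balance_residuals(nodes, arcs, flows, supply)  # all residuals near 0
```

A feasible scenario subproblem solution drives every residual to zero while respecting the arc flow bounds.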

An example that demonstrates our notation and illustrates the structure of a two-stage stochastic network is depicted in Figure 1. Herein the basic form of a network model for dynamic portfolio management is depicted. The problem concerns the allocation of funds to alternative investment instruments. The initial portfolio composition is represented by the supplies b_{i_m} at the nodes {i_m: m = 1, . . . , n} ⊂ V_1; each of these nodes corresponds to an asset category. The first-period asset sales and purchases are represented by arc flows x_{i_m c_1} and x_{c_1 i_m}, respectively; c_1 ∈ V_1 corresponds to a cash node. Arc multipliers r_{i_m c_1} ≤ 1 and r_{c_1 i_m} ≤ 1 are associated with these arcs to account for transaction costs, which are assumed deterministic. The basic accounting constraint for asset class m is expressed in terms of flow conservation at the corresponding node i_m ∈ V_1.

Arc flows x_{i_m j_m} (m = 1, . . . , n) represent asset holdings in the revised portfolio, which are carried forward to the next period. The returns on these investments are uncertain and are modeled as scenario-dependent arc multipliers t_{i_m j_m}(s) (m = 1, . . . , n; s ∈ S). The same decision process is repeated in each of the subsequent periods in the planning horizon. This particular example involves portfolio revisions in three periods.

Deposits/withdrawals are included as supplies/demands at the appropriate cash nodes. The objective is to maximize the expected utility of net wealth (y_{hh′}) at the end of the planning horizon. Utility is expressed as a concave, nondecreasing von Neumann-Morgenstern function [36].

FIG. 1. Structure of a scenario subproblem for a two-stage stochastic network. (Legend: arcs in A_1; arcs in A_1′; arcs in A_2; nodes in V_1; nodes in V_2.)

The multiperiod investment planning problem is formulated as a two-stage stochastic generalized network. A complete realization of all uncertain quantities is assumed to be revealed following the portfolio revision in the first period. Formulations of stochastic network models for financial planning can be found in Mulvey and Vladimirou [18, 22].
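To make the wealth dynamics of the portfolio example concrete, the sketch below compounds an initial cash position through a deterministic transaction-cost multiplier and scenario-dependent return multipliers over three periods. The single-asset simplification and all numbers and scenario names are illustrative assumptions, not data from the paper:

```python
r_cost = 0.995                       # deterministic multiplier r (< 1)
returns = {                          # stochastic multipliers t(s), one per period
    "up":   [1.10, 1.05, 1.08],
    "down": [0.97, 1.01, 0.99],
}
probs = {"up": 0.5, "down": 0.5}     # scenario weights pi_s

def wealth(s, w0=100.0):
    w = w0 * r_cost                  # first-period purchase through the cash node
    for t in returns[s]:
        w *= t                       # return multiplier realized under scenario s
    return w

expected_wealth = sum(probs[s] * wealth(s) for s in probs)
```

In the actual model the objective applies a concave utility to the terminal wealth rather than taking a plain expectation.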

Denote by

C_s = {(x(s), y(s)): satisfying (2)-(5)}    (6)

the feasible region of the generalized network subproblem generated by scenario s ∈ S. The scenario subproblem is restated compactly as

[P_s]:   min { f_s(x(s), y(s)) : (x(s), y(s)) ∈ C_s }.

The parameters in the subproblem (1)-(5) have been indexed by s to indicate their specific values under scenario s ∈ S. A separate set of decision variables has been defined for each scenario subproblem. Subproblem [P_s] considers only a single scenario; unfortunately, we cannot know a priori which scenario will materialize. Consequently, all scenarios must be jointly considered in the solution procedure. Furthermore, the values of the first-stage decisions must be scenario invariant. Thus, we must have


x(s) = x(s′),   ∀ s, s′ ∈ S; s ≠ s′.    (7)

This logical requirement of nonanticipativity is necessary to avoid the hindsight that goes into the selection of a particular scenario.

We associate with each scenario s ∈ S a weight π_s that represents its probability of occurrence (π_s > 0, s = 1, . . . , L; ∑_{s∈S} π_s = 1). An overall program that accounts for all postulated scenarios is now formulated. By defining a single set of first-stage decision variables z = {z_ij: (i, j) ∈ {A_1 ∪ A_1′}}, we state the overall program as follows:

[DEP]:   min  ∑_{s∈S} π_s f_s(z, y(s))    (8)

subject to

∑_{(i,j)∈A_i^+} z_ij − ∑_{(j,i)∈A_i^−} r_ji z_ji = b_i,   ∀ i ∈ V_1,    (9)

∑_{(i,j)∈A_i^+} y_ij(s) − ∑_{(j,i)∈A_i^−∩A_2} t_ji(s) y_ji(s) − ∑_{(j,i)∈A_i^−∩A_1′} t_ji(s) z_ji − ∑_{(j,i)∈A_i^−∩A_1} r_ji z_ji = d_i(s),   ∀ i ∈ V_2, ∀ s ∈ S,    (10)

l_ij ≤ z_ij ≤ u_ij,   ∀ (i, j) ∈ {A_1 ∪ A_1′},    (11)

l_ij ≤ y_ij(s) ≤ u_ij,   ∀ (i, j) ∈ A_2, ∀ s ∈ S.    (12)

The optimal solution Z* = {z*, y*(s): s ∈ S} to this problem hedges against all eventualities as defined by the set of scenarios S. Nonanticipativity is implicitly enforced in the deterministic equivalent [DEP] through the definition of a single set of first-stage variables. As a result, program [DEP] contains fewer variables than does the collection of all scenario subproblems [P_s]. However, the network structure of the problem is destroyed with the replication of the second-stage constraints in (10) for each scenario.

The size of program [DEP] increases significantly with the number of scenarios, requiring the use of large-scale optimization techniques. For two-stage programs, the constraint matrix of [DEP] has a dual block diagonal structure, which in the case of linear programs can be exploited by decomposition, basis factorization, or partitioning. For multistage problems, the deterministic equivalent has an arborescent structure (a sparse block triangular pattern), which cannot be easily exploited. Moreover, program [DEP] does not have fixed recourse, a fact that hinders the efficient use of dual decomposition methods.

We close this section by pointing out that stochastic generalized networks encompass a wide range of practical problems; relevant applications have already been mentioned. Network formulations have been proposed for investment planning [18] and currency exchange problems [7]. Important elements of uncertainty in these problems are the rates of return on investments, or currency exchange rates, affecting the arc multipliers in the network models. Planning problems for water reservoir releases and power generation scheduling can also be formulated as networks. These models must account for uncertainty in climatic conditions (which affect system inputs and capacities) and in spatial and temporal electric consumption demands. Similarly, network models have been used for integrated production, inventory, and distribution problems [13]. Stochastic elements in these cases include throughput capacities at system components and market demands.

A survey of nonlinear network models can be found in Dembo et al. [9]. Most of these programs, even in their deterministic form, become large when used for long- or even medium-term planning. Consideration of multiple periods significantly increases the problem size; inclusion of stochastic elements adds another order of complexity.

3. THE SCENARIO AGGREGATION ALGORITHM

In this section we summarize the algorithmic derivation in Rockafellar and Wets [33], adapted for stochastic networks. Again, the restriction of the discussion to two-stage problems does not represent a limitation of the algorithm or our implementation; it is, rather, followed for simplicity in exposition.

3.1. General Framework

Given the collection of all scenario subproblems [P_s], ∀ s ∈ S, our aim is to determine a "well-hedged" solution to the underlying problem, that is, a decision policy that should perform well in the uncertain environment represented by the appropriately weighted scenarios. The solution must also satisfy the logical requirement of nonanticipativity stated in (7). Decision policies that satisfy nonanticipativity belong to the subspace

N = {(x(s), y(s)): x(s) = x(s′); s, s′ ∈ S; s ≠ s′}

and are called implementable policies. Conversely, policies belonging to the set

C = C_1 × C_2 × · · · × C_L,

each component of which satisfies the constraints of the corresponding scenario subproblem, are termed admissible policies. The set C is the Cartesian product of the sets C_s defined in (6) and covers the collection of feasible solutions to the scenario subproblems. Decision policies that satisfy the constraints of the subproblems (i.e., admissible) and also have scenario-invariant first-stage decisions (i.e., implementable) belong in C ∩ N. These policies correspond to the feasible set of program (9)-(12).

Admissible policies can be readily obtained by solving appropriate versions of the scenario subproblems. The subproblems relax the nonanticipativity restriction in (7). Let (x(s), y(s)) ∈ C_s be the solution of a version (to be made clear later) of subproblem [P_s]. The collection of these solutions X = {(x(1), y(1)), . . . , (x(L), y(L))} ∈ C constitutes an admissible policy; L = |S| is the cardinality of the scenario set S. A simple projection produces scenario-invariant values for the first-stage decisions that are used to construct an implementable solution as follows:

x̂ = ∑_{s∈S} π_s x(s),    (15)

X̂ = {(x̂, y(s)): s ∈ S}.    (16)

X̂ satisfies the nonanticipativity condition. The simple "weighted averaging" of the first-stage scenario responses x(s) in (15) produces a linear projection from C to the subspace N. The aggregation operator

X̂ = J X,    (17)

described by (15) and (16), depends only on the scenario probabilities and combines the first-stage decision recommendations from the collection of subproblems into a single compromise policy. Still, the result of the aggregation operation is not guaranteed to be admissible, as the components (x̂, y(s)) of X̂ will not generally be feasible in all subproblems.
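The aggregation step itself is just a probability-weighted average of the scenario first-stage vectors. A minimal sketch, with hypothetical probabilities and scenario solutions:

```python
def aggregate(pi, x):
    """Aggregation operator J of (15): probability-weighted average of
    the first-stage scenario solutions x[s]; inputs are assumed data."""
    n = len(x[0])
    return [sum(pi[s] * x[s][j] for s in range(len(pi))) for j in range(n)]

pi = [0.5, 0.3, 0.2]                       # scenario probabilities (made up)
x = [[10.0, 4.0], [6.0, 4.0], [2.0, 9.0]]  # first-stage vectors x(s) (made up)
x_hat = aggregate(pi, x)                   # approximately [7.2, 5.0]
```

Note that the result averages first-stage decisions only; each scenario keeps its own second-stage vector y(s).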

We seek a decision policy (admissible and implementable) that best responds to the relative importance of the scenarios as expressed by the weights π_s. The optimal policy X* = {(x*(s), y*(s)): (x*(s), y*(s)) ∈ C_s, s ∈ S} ∈ N solves the problem

[P]:   min { ∑_{s∈S} π_s f_s(x(s), y(s)) : (x(s), y(s)) ∈ C_s, ∀ s ∈ S;  x(s) = x(s′), ∀ s, s′ ∈ S }.

This problem is equivalent to [DEP] and, thus, significantly larger than the individual scenario subproblems [P_s]. The only difference from [DEP] is the use of a separate set of first-stage variables for each scenario, with nonanticipativity enforced through an additional set of equality constraints. Thus, X* relates to the optimal solution Z* of [DEP] as follows:

X* = {(z*, y*(s)): s ∈ S}.


Because of its large size, program [P] is not addressed directly. The scenario aggregation method decomposes the program to its constituent scenario subproblems and requires only the capability to solve the individual subproblems. For networks, highly efficient algorithms capitalize on the graph structure of the subproblems. A sequence of admissible solutions {X^ν} ⊂ C, ν = 0, 1, . . . is generated whose compliance with the nonanticipativity condition is progressively demanded.

The operator J in (17) yields an implementable policy from an admissible solution with an easily computable transformation. A systematic procedure that generates a converging sequence {X^ν} → X* to the optimal solution of the overall stochastic program is constructed based on the theory of the proximal point algorithm [31, 32]. The solution procedure relies on an augmented Lagrangian approach to iteratively enforce the nonanticipativity requirement; see Bertsekas [2] for a coverage of Lagrangian methods.

Consider the following augmented Lagrangian:

L_r(X, W) = ∑_{s∈S} π_s [ f_s(x(s), y(s)) + w(s) · x(s) + (r/2) |x(s) − x̂|² ],    (18)

where r > 0 is a penalty parameter; x̂ is an estimate of the optimal values for the first-stage variables; W = {w(s): s ∈ S} is a price vector [w(s) has conformable dimension with x(s)]; and |·| denotes the ordinary Euclidean norm. The quadratic term in (18) penalizes deviations in the first-stage decision variables from the solution estimate x̂. The augmented Lagrangian problem

min { L_r(X, W) : X ∈ C }    (19)

is iteratively solved with the prices w(s) and solution estimates x̂ appropriately updated at each iteration. With a good choice of r > 0 and prices w(s), s ∈ S, the solution of (19) yields a good approximation to the optimum X*. It might seem that the iterative solution of the augmented Lagrangian offers no advantage, since the problem in (19) has a comparable size with program [P] and is further complicated with the introduction of multipliers and penalties. However, (19) decomposes across scenarios, yielding generalized network subproblems.
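Each scenario subproblem minimizes its original objective plus a linear price term and a quadratic penalty tying x(s) to the estimate x̂. A sketch of that per-scenario objective; the sample cost f, prices, and evaluation point are illustrative assumptions:

```python
def scenario_al_objective(f_s, w_s, x_hat, r):
    """Per-scenario augmented-Lagrangian objective, the bracketed term
    in (18): f_s(x, y) + w_s . x + (r/2)|x - x_hat|^2."""
    def obj(x, y):
        price = sum(wi * xi for wi, xi in zip(w_s, x))
        penalty = 0.5 * r * sum((xi - xh) ** 2 for xi, xh in zip(x, x_hat))
        return f_s(x, y) + price + penalty
    return obj

# Illustrative scenario cost and evaluation point (all values made up).
f = lambda x, y: (x[0] - 3.0) ** 2 + y[0]
obj = scenario_al_objective(f, w_s=[0.1], x_hat=[2.0], r=1.0)
value = obj([2.0], [0.0])   # (2-3)^2 + 0.1*2 + 0 = 1.2
```

Because the price and penalty terms involve only that scenario's own variables, the subproblems remain independent generalized networks (with a quadratic objective perturbation).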

Iterative subproblem solutions progressively enforce the implementability restriction with the use of multipliers and penalties in the augmented Lagrangian function. A series of simpler subproblems are solved instead of the large-scale program in [P]. The independence of the subproblems allows for their concurrent solution via parallel processors.

3.2. Description of the Algorithm

We assume that the feasible regions C_s for the scenario subproblems are nonempty, compact polyhedral sets generated by generalized network constraints as indicated in (6). The scenario aggregation algorithm is then described as follows:


Step 0: Initialize the iteration counter ν ← 0. Generate an admissible policy X^0 = {(x^0(s), y^0(s)): s ∈ S} ∈ C, where (x^0(s), y^0(s)) is the solution to subproblem [P_s]. Obtain the corresponding implementable policy X̂^0 = {(x̂^0, y^0(s)): s ∈ S} ∈ N using the projection X̂^0 = J X^0 described in (15); if X^0 = X̂^0, the optimal solution is found. Initialize the penalty parameter r^0 > 0 and the price vectors w^0(s) = 0, ∀ s ∈ S (or to user-specified values satisfying ∑_{s∈S} π_s w^0(s) = 0).

Step 1: If the termination criteria (discussed later) are satisfied, exit. Generate a new admissible policy X^{ν+1} = {(x^{ν+1}(s), y^{ν+1}(s)): s ∈ S} from the collection of solutions to the subproblems

min { f_s(x(s), y(s)) + w^ν(s) · x(s) + (r^ν/2) |x(s) − x̂^ν|² : (x(s), y(s)) ∈ C_s },   s ∈ S.

X^{ν+1} ∈ C is admissible but not necessarily implementable.

Step 2: Obtain an implementable policy X̂^{ν+1} = {(x̂^{ν+1}, y^{ν+1}(s)): s ∈ S} with the aggregation operator X̂^{ν+1} = J X^{ν+1} (X̂^{ν+1} ∈ N is not necessarily admissible). Update the multiplier vectors according to

w^{ν+1}(s) = w^ν(s) + r^ν (x^{ν+1}(s) − x̂^{ν+1}),   ∀ s ∈ S.    (20)

Increment the iteration counter ν ← ν + 1; adjust the penalty parameter r^ν (if desired) and go to Step 1. Note that ∑_{s∈S} π_s w^ν(s) = 0 (i.e., W^ν ∈ N^⊥).
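The loop in Steps 0-2 can be exercised on a toy instance small enough to solve each augmented-Lagrangian subproblem in closed form: two scenarios, one scalar first-stage decision, and quadratic scenario costs f_s(x) = (x − a_s)². The probabilities, targets, penalty value, and iteration count below are illustrative assumptions; on a convex instance like this the iterates approach the weighted-mean optimum of the overall program.

```python
# The augmented-Lagrangian subproblem
#     min f_s(x) + w_s * x + (r / 2) * (x - x_hat)^2
# has the closed-form minimizer x = (2 * a_s - w_s + r * x_hat) / (2 + r).

pi = [0.7, 0.3]        # scenario weights (probabilities, assumed)
a = [1.0, 5.0]         # scenario-dependent cost targets (assumed)
r = 1.0                # penalty parameter

# Step 0: admissible policy from the plain subproblems, zero prices.
x = a[:]
w = [0.0, 0.0]
x_hat = pi[0] * x[0] + pi[1] * x[1]

for _ in range(200):
    # Step 1: solve the augmented-Lagrangian subproblems scenario by scenario.
    x = [(2.0 * a[s] - w[s] + r * x_hat) / (2.0 + r) for s in range(2)]
    # Step 2: aggregate to an implementable estimate, then update prices (20).
    x_hat = pi[0] * x[0] + pi[1] * x[1]
    w = [w[s] + r * (x[s] - x_hat) for s in range(2)]

# For this toy, the optimum of sum_s pi_s (x - a_s)^2 over nonanticipative
# policies is the probability-weighted mean of the targets.
target = pi[0] * a[0] + pi[1] * a[1]
```

The price update keeps the weighted sum of the multipliers at zero, matching the note at the end of Step 2.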

The scenario aggregation algorithm generates at every iteration both an admissible policy X^ν and an implementable policy X̂^ν. The norm

θ^ν = [ ∑_{s∈S} π_s |x^ν(s) − x̂^ν|² ]^{1/2}

measures at every step the violation of the nonanticipativity constraints. Necessary condition θ^ν = 0 (i.e., X^ν ∈ C ∩ N) indicates that a feasible (admissible and implementable) solution has been reached. Examples can be constructed for which convergence of {X^ν} to the subspace N (i.e., θ^ν → 0) may be achieved short of optimality.
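The nonanticipativity violation at an iteration can be monitored as a probability-weighted distance between the scenario solutions and the aggregated estimate. The exact normalization of the paper's norm is not fully legible in this copy, so the form below is an assumption:

```python
def nonanticipativity_violation(pi, x, x_hat):
    """Probability-weighted Euclidean distance of the scenario
    first-stage solutions x[s] from the aggregated estimate x_hat;
    zero exactly when the policy is implementable."""
    return sum(p * sum((xi - xh) ** 2 for xi, xh in zip(xs, x_hat))
               for p, xs in zip(pi, x)) ** 0.5

theta = nonanticipativity_violation([0.5, 0.5], [[2.0], [4.0]], [3.0])  # 1.0
zero = nonanticipativity_violation([0.5, 0.5], [[3.0], [3.0]], [3.0])   # 0.0
```

Driving this quantity to zero is necessary but, as noted above, not sufficient for optimality.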

The price updating in (20) is a steepest ascent step in maximizing the dual Lagrangian functional. At optimality, the multipliers W* = {w*(s): s ∈ S} represent the solution to the Lagrangian dual of [P]. For details on the role of the prices W and for convergence proofs (under the condition C ∩ N ≠ ∅), the reader is referred to Rockafellar and Wets [33] (also Wets [40]). If C ∩ N = ∅, then X^ν converges to the point in C closest to N; conversely, X̂^ν converges to the point in N closest to C.

Left open are the issues of appropriate termination criteria and convergence monitoring measures. θ^ν = 0 is a necessary but not a sufficient condition for optimality. Optimality entails convergence both in the space of the primal variables X, as well as the space of the dual prices W; convergence cannot be observed in one of these spaces in isolation of the other. Convergence is monitored by means of the norm

The monotonic decrease in the value of this norm is shown by Rockafellar and Wets [33] in their convergence proof, which requires only mild regularity conditions. Similarly,

can also be used to establish a termination condition. Note that θ^ν = 0 (i.e., X^ν = X̂^ν, so that x^ν(s) = x̂^ν, ∀ s ∈ S) implies ∑_{s∈S} π_s w^ν(s) · x^ν(s) = 0, since ∑_{s∈S} π_s w^ν(s) = 0. In such a case, w^ν(s) = w^{ν−1}(s), ∀ s ∈ S. The algorithm is terminated if either δ^ν or γ^ν falls within user-specified tolerances. Though the two termination criteria are similar, in practice, the second condition truncates the tail in the convergence of the algorithm.

3.3. Multistage Programs

The scenario aggregation procedure can be extended to multistage stochastic programs. Here, a scenario tree represents the information structure. Each node of the scenario tree is associated with a conditional realization of the stochastic quantities at a certain stage of the program. A different scenario is formed for each distinct root-to-leaf path through the tree. A corresponding subproblem is obtained by fixing the stochastic parameters to the values of the specific realizations observed along such a path.

Nonanticipativity dictates that scenarios that share common realizations up to a certain stage must yield the same decisions at that stage. The scenario tree readily identifies the subsets of scenarios that share common realizations up to a certain stage. These aggregations of scenarios at each stage lead to additional multiplier and penalty terms, used in the augmented Lagrangian to impose the nonanticipativity restriction. The projection operator J in (17) and the price updates in (20) are also appropriately modified to conform to the aggregations of scenarios at each stage, as those are dictated by the structure of the scenario tree [20].
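The bundles of scenarios sharing a common history up to a given stage can be read off the scenario tree by grouping root-to-leaf paths on their prefixes. A sketch; the scenario names and realization labels are made up:

```python
from collections import defaultdict

def bundles(paths, stage):
    """Group scenarios sharing the same realizations up to `stage`;
    nonanticipativity forces one decision per bundle at that stage."""
    groups = defaultdict(list)
    for name, path in paths.items():
        groups[tuple(path[:stage])].append(name)
    return dict(groups)

# Root-to-leaf realization paths of a tiny scenario tree (made-up labels).
paths = {
    "s1": ("up", "up"),
    "s2": ("up", "down"),
    "s3": ("down", "up"),
}
stage1 = bundles(paths, 1)   # {("up",): ["s1", "s2"], ("down",): ["s3"]}
```

The conditional averaging in the multistage projection is then taken within each bundle rather than over the whole scenario set.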

Overall, the basic form of the algorithm remains unchanged [33]. Separability and independence among the scenario subproblems is maintained. The network structure is also preserved in the subproblems. These properties of the SA algorithm are critical in multistage problems since the program size grows exponentially with the number of stages and the number of discrete realizations of the uncertain quantities. The potential for execution on parallel processors and exploitation of special structures offer some promise for coping with the computational requirements of multistage stochastic programs in practical settings.

3.4. Acceleration Schemes

We introduce a modification of the algorithm. The basic steps remain unchanged, but we modify the computation of implementable solutions at intermediate (spacer) steps. Note that the implementable policy X̂ of (16) is constructed from first-stage subproblem solutions and is obtained from the unconstrained quadratic program

x̂ = arg min_x ∑_{s∈S} π_s |x − x(s)|².    (24)

This orthogonal projection 2.I' = JX'' onto subspace .,A' is not always the best alternative. Special knowledge about a particular program often allows the identification of other implementable solution(s) preferable to j l ' .

The scheme we propose is constructed based on specific knowledge about the structure of stochastic networks for portfolio management problems used as test cases in this study. These models were described in Section 1. We will use the notation indicated on the network of Figure I .

The network allocates funds to a number of asset categories. Asset sales and purchases in the first period are represented by arc flows x_{i,c1} ≥ 0 and x_{c1,i} ≥ 0, i ∈ I, through the cash node c1. In the presence of transaction costs, which are deterministic, the corresponding arcs have multipliers r_{i,c1} < 1, r_{c1,i} < 1. This results in a net flow loss whenever there is flow on the circuits (c1, i, c1) [i.e., for x_{i,c1} > 0, x_{c1,i} > 0].

At the optimal solution there can be no flow on these circuits as long as, for every scenario, there exist uncapacitated paths from the corresponding node i ∈ I or c1 to the sink node h that receives the final net wealth y_h. Sufficiently high upper bounds on the arc flows ensure the existence of such uncapacitated paths. Elimination of the flow around such a circuit [by setting x_{i,c1} = 0 or x_{c1,i} = 0] generates an inflow surplus either at node i or c1 that must be dissipated through an uncapacitated path to the collection arc (h, h). With a nondecreasing utility function on the arc flow y_h, an improvement in the objective value is obtained. Therefore, solutions containing flows around absorbing cycles are suboptimal. Hence, the conditions

x_{i,c1} · x_{c1,i} = 0,  for all i ∈ I,

must hold at the optimum. While the above conditions are satisfied by the values of the first-stage variables x(s) at the solution of the individual scenario subproblems, they are often violated by the corresponding "average" solution x̄^ν of (24). At an arbitrary iteration ν, the violation of the required conditions by the first-stage solution estimate x̄^ν can be measured by means of the quantity

φ(x̄^ν) = 0 constitutes, along with the nonanticipativity requirement, a set of necessary conditions for optimality. An implementable policy with first-stage decisions that minimize the total violation of these necessary conditions can be obtained from the solution of the problem in (26).

The objective function in the above problem is convex and bounded below by zero in the nonnegative orthant. Given the assumptions stated earlier, the optimal value of the convex program in (26) is identically zero at the optimum of the stochastic program [i.e., when x^ν(s) = x̃^ν = x̄^ν for all s ∈ S].

Again, we compute at every iteration the first-stage solution estimate x̄^ν according to (24) and construct the corresponding implementable policy X̂^ν = {(x̄^ν, y(s)); s ∈ S} as specified in (16). The price updates are also performed according to (20). Periodically, the program in (26) is solved to obtain an alternative solution estimate x̃^ν for the first-stage variables, providing a different implementable policy. Note that x̃^ν = x̄^ν if φ(x̄^ν) = 0, leaving the resulting implementable policy unaltered. The algorithm is then restarted with x̃^ν used in the quadratic penalty term of the augmented Lagrangian in the next iteration. Convergence is maintained since this additional projection is applied only at spacer steps, and the algorithm is known to be convergent for any starting point (X^0, W^0) [33]. Acceleration is achieved because {(x̃^ν, y^ν(s)); s ∈ S} is the projection of the admissible solution X^ν = {(x^ν(s), y^ν(s)); s ∈ S} at iteration ν onto the subspace of implementable policies N with the least violation of conditions necessary at the optimum.
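The effect described above can be illustrated with a small numerical sketch: every scenario solution satisfies the no-circuit-flow conditions, yet their probability-weighted average does not. The elementwise form chosen below for the violation measure is an assumption patterned on those conditions, not the paper's exact expression (25).

```python
import numpy as np

def average(first_stage, probs):
    # Probability-weighted average over scenarios, in the spirit of (24).
    return sum(p * x for x, p in zip(first_stage, probs))

def phi(sell, buy):
    # One plausible violation measure: total simultaneous flow on the
    # sale/purchase arc pair of each circuit.
    return float(np.minimum(sell, buy).sum())

# Per-scenario (sell, buy) first-stage flows, each with phi = 0:
sells = [np.array([5.0, 0.0]), np.array([0.0, 4.0])]
buys  = [np.array([0.0, 2.0]), np.array([3.0, 0.0])]
probs = [0.5, 0.5]

sell_bar, buy_bar = average(sells, probs), average(buys, probs)
print(phi(sells[0], buys[0]))   # 0.0 for each individual scenario
print(phi(sell_bar, buy_bar))   # 2.5: the averaged solution violates the conditions
```

This is precisely the situation in which the spacer-step projection replaces x̄^ν with an alternative estimate whose violation is zero.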

4. COMPUTATIONAL EXPERIMENTS

We have implemented the (SA) algorithm, specialized for multistage stochastic networks. The computer code accommodates generalized networks


TABLE 1. Test problems.

                       Scenario subproblems {P_s}    Deterministic equivalent [DEP]
Problem   Scenarios     # Nodes      # Arcs            # Constraints    # Variables
A             21           45          116                   725             1996
B             21           45          116                   725             1996
C             21           45          118                   725             1998
D             25           45          116                   861             2372
E             25           45          116                   861             2372
F             29           45          118                   997             2750
G             19           67          178                  1075             2986
H             33           45          116                  1133             3124
I             37           45          116                  1269             3500
J             23           67          178                  1299             3610
K             41           45          116                  1405             3876
L             27           67          178                  1523             4234
M             35           67          178                  1971             5482
N             39           67          178                  2195             6106
P             53           61          163                  2453             6922
Q             51           91          248                  3891            11048
R             80           91          248                  6095            17312
S            100           91          248                  7615            21632
T            120           91          248                  9135            25952

involving uncertainty in the arc multipliers, the node supply/demand values, and the coefficients of separable, convex arc cost functions (see Mulvey and Vladimirou [20]). The information structure is specified in the form of a scenario tree that is interpreted internally to determine the required scenario aggregations. The nonlinear generalized network optimizer GENOS [24] solves the resulting scenario subproblems. Network versions of the primal truncated Newton [1] and the simplicial decomposition [25] algorithms are employed. Hence, the network structure of the subproblems is exploited. Since each iteration of the (SA) algorithm involves only perturbations of the subproblem objectives, the subproblems are restarted from their respective solutions in the previous iteration; this tactic yields significant computational savings.

The test cases used for empirical testing represent portfolio management problems [18]. A list of test problems is given in Table 1. A linear and a nonlinear instance of each problem was generated. Note the rapid growth in the size of the deterministic equivalent with the number of scenarios.

An objective of this study is to examine the convergence performance of the algorithm under various internal tactics. In this section, we present a summary of our findings. As already mentioned, the norm δ^ν defined in (21) measures a distance between the admissible X^ν and implementable X̂^ν policies generated in iteration ν. The variation in the value of this norm between successive iterations measures the "progress" made in the primal space to meet the nonanticipativity condition, but ignores the corresponding step in the space of the dual prices W. δ^ν = 0 indicates feasibility but not necessarily optimality. Because of the primal-dual character of the scenario aggregation algorithm, the convergence of this norm to zero is not necessarily monotonic (see Fig. 2 for an example).

FIG. 2. (Plot: norm δ^ν, log scale, versus iteration number ν.)

The (SA) algorithm generally exhibits fast initial convergence. Computational results indicate that the overall rate of convergence is sensitive to the choice of the penalty parameter r. A low value of r (such as r < 0.02) results in small price updates in (20) and a small curvature of the penalty term in the subproblem objectives. This reflects a weak enforcement of the nonanticipativity restrictions. In these cases, convergence to the set of nonanticipative policies N typically requires many iterations and is achieved close to optimality. Conversely, high values of r (such as r > 3.0) produce large updates in the prices w^ν(s) (at least initially) and impose steep penalties for deviations of the first-stage variables from the solution estimate x̄^ν in each iteration. As a result, faster convergence to the subspace N is observed, but often at suboptimal solutions. Subsequent iterates X^ν typically remain on the subspace N, but the steep quadratic penalties then allow only small steps in the first-stage variables x(s). Hence, the apparent benefit of faster initial convergence for higher values of r is subsequently negated by the long convergence tails observed with such settings of the penalty parameter. This behavior is evident in the typical examples shown in Figures 3 and 4.

FIG. 3. Variation of the objective value in problem Q.NLP with different settings of the penalty parameter r. (Plot: objective value versus execution time in seconds.)

FIG. 4. Convergence behavior of the scenario aggregation algorithm in problem H.NLP for different values of the penalty parameter r. (Plot: norm, log scale, versus execution time in seconds.)

Theoretical analyses on the impact of the penalty parameter values on the numerical behavior of proximal point and augmented Lagrangian algorithms are incomplete. Empirical investigations are important to gain insights on algorithmic performance. For the scenario aggregation algorithm, the choice of very low or very high values for the parameter r proved grossly inefficient; values of r in the range 0.05-0.5 were the most effective in our test cases. Within this range, lower values of r initially favor progress in the primal sequence {X^ν} at the expense of slower progress in the sequence of the prices {W^ν}; the converse is true for higher values of r. This tradeoff in the convergence of the sequences {X^ν} and {W^ν} suggests the prospect of using dynamic strategies for adjusting the penalty parameter in order to accelerate convergence.

Next, various schemes for dynamic adjustment of the penalty parameter were studied. Strategies that initially reduce the value of r before stabilizing it did not prove efficient. All empirical evidence indicates that considerably high values of r are not preferred during the early stages of execution. Increasing sequences {r^ν} of the form (27) proved to be the most effective acceleration schemes, reducing total execution time by 20-30% on average and substantially more in some cases. Rapid and continuous increase of the penalty parameter is unnecessary in augmented Lagrangian methods, as also verified by our results. The rate of increase in {r^ν} can be controlled with appropriate selection of λ and η in (27) (particularly η < 1), to prevent the ill-conditioning that may arise with very large values of r. Even more successful were strategies that followed the dynamic increase of r^ν with a reduction in the value of the penalty parameter when the subspace of nonanticipative policies was approached (i.e., δ^ν ≈ 0). This reduces the curvature of the quadratic penalty term in the subproblem objectives, in an attempt to allow larger steps in the primal variables x(s). Though dynamic adjustments of the penalty parameter might at times disturb the monotonic convergence of the norm δ^ν, these strategies exhibit, in general, fast convergence to the optimum. Convergence to the subspace N is first achieved at a near-optimal solution. The effect of dynamic strategies for the penalty parameter r on convergence performance for a typical case is exhibited in Figure 5. Note in particular the rapid convergence at the later stages of execution.

FIG. 5. Convergence behavior of the scenario aggregation algorithm in problem H.LP with various dynamic adjustment strategies for the penalty parameter r. (Plot: norm, log scale, versus execution time in seconds.)

A heuristic procedure that proved effective in accelerating convergence involves a periodic modification of the projection operator J defined in (17). This modification involves the elimination of flows around circuits, as discussed in Section 3.4. At specific iterations, the quantity φ(x̄^ν) defined in (25) is computed; if φ(x̄^ν) > 0, then a new solution estimate x̃^ν for the first-stage variables is heuristically determined to satisfy φ(x̃^ν) = 0. By requiring that the new estimate satisfy these conditions, we ensure that x̃^ν is closer than x̄^ν to satisfying necessary conditions for optimality of the primal variables, as discussed in Section 3.4. The new solution estimate x̃^ν for the first-stage variables is then used in the quadratic penalty term of the subproblem objectives in the next iteration. Problem-specific knowledge is, therefore, employed in heuristically updating the implementable policies at spacer steps. Figure 6 shows the impact of the heuristic projections on algorithmic performance for one of the sample problems. The observed reductions in the number of iterations and execution time for this example are typical of the savings realized with the heuristic acceleration scheme. Savings in excess of 30% were commonly achieved with periodic cycle removals.
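A minimal sketch of the cycle-removal idea follows, under the assumption that the heuristic simply cancels the common part of the opposing flows on each sale/purchase circuit; the rebalancing of the resulting inflow surplus through uncapacitated paths in the full network is omitted here.

```python
import numpy as np

def remove_circuit_flow(sell, buy):
    """Cancel the smaller of the two opposing flows on each circuit,
    so no circuit carries flow in both directions afterwards."""
    common = np.minimum(sell, buy)
    return sell - common, buy - common

# Averaged first-stage flows with simultaneous circuit flow:
sell_bar = np.array([2.5, 2.0])
buy_bar  = np.array([1.5, 1.0])

sell_t, buy_t = remove_circuit_flow(sell_bar, buy_bar)
print(sell_t, buy_t)  # [1. 1.] [0. 0.]: the violation measure drops to zero
assert float(np.minimum(sell_t, buy_t).sum()) == 0.0
```

The adjusted estimate plays the role of x̃^ν in the spacer step: it satisfies the no-circuit-flow conditions while staying close to the averaged solution.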

The viability of the (SA) algorithm as an effective procedure can only be established by comparison with competing methods. Cutting plane approaches are not especially attractive options for the stochastic nonlinear generalized network programs considered here. Uncertainty in the constraint coefficients indicates that the recourse matrix is not fixed. The absence of a fixed recourse matrix and the nonlinearity of the objective function deprive dual decomposition methods of the benefits of parametric programming techniques, termed bunching and sifting, on which they rely for computational efficiency.

FIG. 6. Convergence behavior of the scenario aggregation algorithm in problem N.NLP with periodic heuristic projections for circuit removal. (Plot: norm, log scale, versus execution time in seconds; curves: no acceleration; I. r = 0.250, heuristic projection every 4 iterations; II. r = 0.500, every 6 iterations; III. r = 0.100, every 8 iterations; IV. r = 0.075, every 15 iterations.)

We compare the performance of the (SA) algorithm with direct solution of the linear and nonlinear deterministic equivalent programs [DEP]. The deterministic equivalents were solved with MINOS [26]; partial pricing was used in MINOS. All tests were performed on an IRIS4D/70 workstation running Unix V3.1. The reported solution times exclude I/O operations. Minimal compiler optimization was used with both codes, which are written in FORTRAN77.

The results of the comparison for linear and nonlinear programs in Figures 7 and 8 demonstrate the competitiveness of the scenario aggregation algorithm for solving stochastic networks. In particular, the decomposition procedure becomes more attractive as the size of the stochastic program increases. Direct solution of large-scale deterministic equivalents with general-purpose optimizers becomes a formidable task, especially for nonlinear programs. However, in network programs, the scenario subproblems can be solved efficiently. The scenario aggregation method systematically combines the scenario solutions to solve the overall problem. Decomposition of large stochastic programs to manageable-size subproblems makes possible the use of the algorithm in practical applications. Special knowledge about specific program forms can potentially be exploited, but even without any acceleration schemes the (SA) algorithm outperforms the direct solution of the deterministic equivalents as the program size grows. In another comparison [21], the (SA) algorithm also proved competitive, particularly in nonlinear test cases, against interior point methods that address alternative formulations of the deterministic equivalent.

FIG. 7. Comparison of the scenario aggregation algorithm with MINOS on stochastic linear network programs. (Bar chart: solution times in seconds, scenario aggregation versus MINOS, over the LP test problems.)

FIG. 8. Comparison of the scenario aggregation algorithm with MINOS on stochastic nonlinear network programs. (Bar chart: solution times in seconds, scenario aggregation versus MINOS, over the NLP test problems.)

FIG. 9. Observed speedup of the parallel hedging algorithm executed on a shared memory multiprocessor (IRIS4D/220 GTX). (Bar chart: speedup over the test problems.)

The scenario aggregation procedure becomes especially attractive when considering its degree of parallelism. The bulk of the computational effort is spent in solving the scenario subproblems (Step 1), while the projection operations and price updates generally account for less than 1% of total execution time. The fraction of time for serial computations reduces with increasing program size. Separability and independence of the scenario subproblems can be exploited for concurrent execution on multiprocessor computers.

We had previously analyzed the parallel potential of the (SA) algorithm through simulations of multiprocessing environments and estimated that near linear speedup could be achieved by increasing the number of processors [19]. An implementation on a shared memory, 2-processor computer (IRIS4D/220 GTX) substantiates these expectations. In our parallel implementation, each processor is statically assigned a subset of the subproblems to solve in each iteration. Synchronization is enforced only in the computation of the projections X̂^ν = JX^ν and the price updates at the end of each iteration. The results summarized in Figure 9 demonstrate the superior parallel performance of the scenario aggregation algorithm.
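The static-assignment scheme just described can be sketched as follows; `solve` is a stand-in for a GENOS-style subproblem solve, and a thread pool replaces the shared-memory processors of the actual implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def solve(subproblem):
    # Placeholder for a network subproblem solve (in the real algorithm,
    # restarted from the previous iteration's solution).
    return subproblem * 2.0

def parallel_iteration(subproblems, workers=2):
    # Static assignment: worker k receives subproblems k, k+workers, ...
    chunks = [subproblems[k::workers] for k in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(lambda c: [solve(p) for p in c], chunks))
    # Serial synchronization point: interleave the results back in order;
    # the full algorithm would next compute the projection X^ = JX and the
    # price updates here, before launching the next iteration.
    merged = [None] * len(subproblems)
    for k, chunk in enumerate(results):
        merged[k::workers] = chunk
    return merged

print(parallel_iteration([1.0, 2.0, 3.0, 4.0, 5.0]))  # [2.0, 4.0, 6.0, 8.0, 10.0]
```

Because the subproblems are independent, the only serial work per iteration is the cheap aggregation step, which is consistent with the near linear speedups reported.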


5. CONCLUSIONS

This study shows the effectiveness of the (SA) algorithm for solving stochastic network programs. By decomposing the stochastic program, the algorithm addresses large-scale stochastic problems. Computational burden inevitably increases with the number of subproblems, but no increasing trend in the number of iterations was observed with increasing number of scenarios. The required iterations depend more on the variance of the individual scenario solutions than on the number of scenarios. Furthermore, no significant increase in computational effort was noted with the introduction of nonlinear objective functions; the quadratic penalties of the subproblem objectives require the use of nonlinear programming methods in any case.

The relative efficiency of the decomposition method improves with increasing program size. Exploitation of the network structure of the subproblems is critical for computational efficiency. Application of the (SA) algorithm to other problem classes will hinge on the ability to solve the scenario subproblems efficiently. Still, the coarse level of parallelism remains a significant advantage. This fact alone makes scenario aggregation an important stochastic programming alternative as parallel computer architectures become widespread. The results of our parallel implementation signify the algorithm's parallel potential. This potential has been realized with implementations on various shared-memory and distributed multiprocessor systems [23].

An important component of this study has been the design of heuristic acceleration schemes and strategies for dynamic adjustment of the penalty parameter. Other internal tactics can also be explored. Inexact subproblem solution at the initial stages of execution can reduce the computational effort [21]. Benefits can be derived from effective procedures for generating initial estimates (X^0, W^0) based on special knowledge about the structure of particular problems. Heuristic schemes can exploit observed trends in the sequences {X̂^ν} and {W^ν}. Steady paths can be periodically extrapolated to obtain improved approximations of the optimal values for the implementable policies and the corresponding prices. Oscillatory patterns in {X̂^ν} or {W^ν} can also be exploited by setting the "center" of oscillation as an estimate of the optimal solution.

The use of effective acceleration schemes is particularly critical when fast identification of approximate solutions is the primary concern. In such cases, the algorithm may be terminated prematurely. Note that in premature termination the collection of scenario solutions (admissible policy) might not satisfy the nonanticipativity condition, while the corresponding implementable policy may not satisfy each subproblem's constraints. Hence, appropriate conditions must be developed for early termination, including an effective method for ensuring satisfaction of the feasibility conditions. One option is to use the approximate solution for an advanced start in a procedure that directly addresses the deterministic equivalent, such as an interior point method [6], when explicit consideration of the deterministic equivalent is computationally feasible.

Other variations from the class of augmented Lagrangian algorithms can be explored. Improved schemes for price updating, based possibly on second-order information, could be examined [2]. Alternative penalty forms can also be investigated in the augmented Lagrangian functionals. Still, decomposition into independent subproblems should be maintained to allow the use of parallel computing in large-scale problems. Preservation of the underlying network structure is also important to allow the use of specialized algorithms for computational efficiency.

REFERENCES

[1] D. P. Ahlfeld, R. S. Dembo, J. M. Mulvey, and S. A. Zenios, Nonlinear programming on generalized networks. ACM Trans. Math. Software 13 (1987) 350-367.
[2] D. P. Bertsekas, Constrained Optimization and Lagrange Multiplier Methods. Academic Press, New York (1982).
[3] J. R. Birge, The value of the stochastic solution in stochastic linear programs with fixed recourse. Math. Program. 24 (1982) 314-325.
[4] J. R. Birge, Decomposition and partitioning methods for multistage stochastic linear programs. Operations Res. 33 (1985) 989-1007.
[5] J. R. Birge and F. V. Louveaux, A multicut algorithm for two-stage stochastic linear programs. Eur. J. Operations Res. 34 (1988) 384-392.
[6] T. C. Carpenter, I. J. Lustig, J. M. Mulvey, and D. F. Shanno, A primal-dual interior point method for convex separable nonlinear programs. Report SOR-90-2, Dept. of Civil Engineering and Operations Research, Princeton University (1990).
[7] N. Christofides, R. D. Hewins, and G. R. Salkin, Graph theoretic approaches to foreign exchange operations. Combinatorial Optimization (N. Christofides, A. Mingozzi, P. Toth, and C. Sandi, Eds.) Wiley, New York (1979).
[8] L. Cooper and L. J. LeBlanc, Stochastic transportation problems and other network related convex problems. Naval Res. Log. Q. 24 (1977) 327-337.
[9] R. S. Dembo, J. M. Mulvey, and S. A. Zenios, Large-scale nonlinear network models and their application. Operations Res. 37 (1989) 353-372.
[10] M. A. H. Dempster, On stochastic programming: II, Dynamic problems under risk. Stochastics 25 (1988) 15-42.
[11] Y. Ermoliev, Stochastic quasigradient methods and their application in systems optimization. Stochastics 9 (1983) 1-36.
[12] Y. Ermoliev and A. Gaivoronski, Stochastic quasigradient methods and their implementation. Working Paper WP-84-55, IIASA, Laxenburg, Austria (1984).
[13] F. Glover, J. Hultz, D. Klingman, and J. Stutz, Generalized networks: A fundamental computer-based planning tool. Management Sci. 24 (1978) 1209-1220.
[14] S. R. Gregg, J. M. Mulvey, and J. Wolpert, A stochastic planning system for siting and closing public service facilities. Environ. Plan. A20 (1988) 83-98.
[15] J. G. Kallberg, R. W. White, and W. T. Ziemba, Short term financial planning under uncertainty. Management Sci. 28 (1982) 670-682.
[16] M. I. Kusy and W. T. Ziemba, A bank asset and liability management model. Operations Res. 34 (1986) 356-376.
[17] F. V. Louveaux, A solution method for multistage stochastic programs with application to an energy investment problem. Operations Res. 28 (1980) 889-902.
[18] J. M. Mulvey and H. Vladimirou, Stochastic network optimization models for investment planning. Ann. Operations Res. 20 (1989) 187-217.
[19] J. M. Mulvey and H. Vladimirou, Evaluation of a parallel hedging algorithm for stochastic network programming. Impacts of Recent Computer Advances on Operations Research (R. Sharda, B. L. Golden, E. Wasil, O. Balci, and W. Stewart, Eds.) North-Holland, New York (1989) 106-119.
[20] J. M. Mulvey and H. Vladimirou, Documentation for multistage stochastic generalized network optimization codes. Report SOR-89-15, Dept. of Civil Engineering and Operations Research, Princeton University (1989).
[21] J. M. Mulvey and H. Vladimirou, Applying the progressive hedging algorithm to stochastic generalized networks. Ann. Operations Res., to appear.
[22] J. M. Mulvey and H. Vladimirou, Stochastic network programming for financial planning problems. Report SOR-89-7, Dept. of Civil Engineering and Operations Research, Princeton University (1990).
[23] J. M. Mulvey and H. Vladimirou, Parallel and distributed computing for stochastic network programming. Report SOR-90-11, Dept. of Civil Engineering and Operations Research, Princeton University (1990).
[24] J. M. Mulvey and S. A. Zenios, GENOS 1.0 user's guide: A generalized network optimization system. Working Paper 87-12-03, Dept. of Decision Sciences, The Wharton School, University of Pennsylvania (1987).
[25] J. M. Mulvey, S. A. Zenios, and D. P. Ahlfeld, Simplicial decomposition for convex generalized networks. J. Infor. Optim. Sci. 11 (1990).
[26] B. A. Murtagh and M. A. Saunders, MINOS 5.1 user's guide. Report SOL-83-20R, Systems Optimization Laboratory, Stanford University (1987).
[27] M. C. Noel and Y. Smeers, Nested decomposition of multistage nonlinear programs with recourse. Math. Program. 37 (1987) 131-152.
[28] P. L. Olsen, Multi-stage stochastic programming with recourse. Ph.D. thesis, Dept. of Operations Research, Cornell University (1974).
[29] W. B. Powell, An operational planning model for the dynamic vehicle allocation problem with uncertain demands. Transport. Res. 21B (1986) 217-232.
[30] L. Qi, Forest iteration method for stochastic transportation problem. Math. Program. Study 25 (1985) 142-163.
[31] R. T. Rockafellar, Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14 (1976) 877-898.
[32] R. T. Rockafellar, Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Math. Operations Res. 1 (1976) 97-116.
[33] R. T. Rockafellar and R. J.-B. Wets, Scenarios and policy aggregation in optimization under uncertainty. Math. Operations Res. 16 (1991) 1-29.
[34] A. Ruszczynski, A regularized decomposition method for minimizing a sum of polyhedral functions. Math. Program. 35 (1986) 309-333.
[35] R. Van Slyke and R. J.-B. Wets, L-shaped linear programs with applications to optimal control and stochastic programming. SIAM J. Appl. Math. 17 (1969) 638-663.
[36] J. von Neumann and O. Morgenstern, Theory of Games and Economic Behavior. Princeton University Press, Princeton, NJ (1947).
[37] S. W. Wallace, Solving stochastic programs with network recourse. Networks 16 (1986) 295-317.
[38] S. W. Wallace, A two-stage stochastic facility location problem with time-dependent supply. Numerical Techniques for Stochastic Optimization (Y. Ermoliev and R. J.-B. Wets, Eds.) Springer-Verlag, Berlin (1988).
[39] R. J.-B. Wets, Solving stochastic programs with simple recourse. Stochastics 10 (1983) 219-242.
[40] R. J.-B. Wets, The aggregation principle in scenario analysis and stochastic optimization. Algorithms and Model Formulations in Mathematical Programming (S. W. Wallace, Ed.) Springer-Verlag, Berlin (1989) 91-113.
[41] W. T. Ziemba, Computational algorithms for convex stochastic programs with simple recourse. Operations Res. 18 (1970) 414-431.

Received April 1988 Accepted February 1991

