A GROUP THEORETIC PARTIAL ENUMERATION ALGORITHM
FOR ALL-INTEGER PROGRAMMING
by
ABBAS T. ZAATARI, B.S., M.B.A.
A DISSERTATION
IN
BUSINESS ADMINISTRATION
Submitted to the Graduate Faculty of Texas Tech University in
Partial Fulfillment of the Requirements for
the Degree of
DOCTOR OF BUSINESS ADMINISTRATION
Approved
August, 1985
©1986
ABBAS TAHA ZAATARI
All Rights Reserved
ACKNOWLEDGEMENTS
I would like to express my deep appreciation to Professor Larry M.
Austin, who served as my dissertation advisor and committee chairman.
In spite of his busy schedule, he always had time to offer help and
suggestions and never failed to provide support and encouragement when
most needed. His contributions to this research and my professional
development are also appreciated.
Deep appreciation and gratitude are extended to Professor James
Burns as a committee member and as one of my best teachers. Sincere
appreciation is also extended to Professor Paul Randolph for his
valuable discussion and comments. Special thanks go to Associate
Professor Roy Howell and Associate Professor Timothy Koch for serving
as readers of this dissertation.
I also would like to express my warmest appreciation to my mother,
wife, sons, and two sisters for their constant support, inspiration,
and understanding.
Finally, I would like to dedicate this dissertation to those who
have always directly or indirectly influenced and inspired my whole
life--to my mother and father.
TABLE OF CONTENTS
ACKNOWLEDGEMENTS ii
LIST OF TABLES v
CHAPTER
I. INTRODUCTION 1
Integer Programming in Perspective 2
II. LITERATURE REVIEW 5
Branch-and-Bound 6
Implicit Enumeration 7
Cutting Plane Algorithms 8
Fractional Algorithms 8
All-Integer Algorithms 10
Group Theoretic Methods 13
The Asymptotic Theory 13
The Group Minimization Problem (GMP) 16
Solving the GMP or the Group Knapsack Problem 19
III. THE GROUP THEORETIC PARTIAL ENUMERATION ALGORITHM 25
The Transformation of the Group Knapsack Problem 27
The Algorithm 27
Discussion of the Algorithmic Approach 28
Example Problem 32
The Steps of the GTPEA 33
IV. COMPUTATIONAL TESTING 38
General Aspects 39
Difficulty of Comparison 39
Computer Code 40
Definition of Symbols 40
Standard Sets of Test Problems 42
Austin's Problem Set 42
Trauth and Woolsey's Problem Set 42
Randomly Generated Test Problems 46
Analysis of Computational Results 48
The Fixed-Charge Problems 56
The 0-1 Allocation Problems 57
The General Problems 58
The 0-1 Problems 59
The Knapsack Problems 60
V. SUMMARY, CONTRIBUTIONS, LIMITATIONS, AND RECOMMENDATIONS FOR FURTHER RESEARCH 63
Summary 63
Contributions 64
Limitations 66
Recommendations for Further Research 67
REFERENCES 69
APPENDICES 77
A. THE SPECIAL KNAPSACK PROBLEM 78
B. THE PARTIAL DYNAMIC ENUMERATION ALGORITHM (PDEA) 80
C. FLOW-CHART OF THE GTPEA 84
LIST OF TABLES
TABLES
1 Results of Computational Testing for Trauth and Woolsey's Test Problems (The Fixed-Charge Problems) 43
2 Results of Computational Testing for Trauth and Woolsey's Test Problems (The 0-1 Allocation Problems with Explicit Upper Bound Constraints) 44
3 Results of Computational Testing for Trauth and Woolsey's Test Problems (The 0-1 Allocation Problems without Explicit Upper Bound Constraints) 45
4 Classification of the Randomly Generated Test Problems . . . . 47
5 Results of Computational Testing for Randomly Generated Problems of Category 1 49
6 Results of Computational Testing for Randomly Generated Problems of Category 2 50
7 Results of Computational Testing for Randomly Generated Problems of Category 3 51
8 Results of Computational Testing for Randomly Generated Problems of Category 4 52
9 Results of Computational Testing for Randomly Generated Problems of Category 5 53
10 Results of Computational Testing for Randomly Generated Problems of Category 6 54
11 Results of Computational Testing for Randomly Generated Problems of Category 7 55
CHAPTER I
INTRODUCTION
Operations research/management science (OR/MS) is concerned with
optimal decision making in deterministic and/or probabilistic systems
that are extracted from real-life decision environments. The need to
allocate limited resources to competing activities so that cost is
minimized or profit is maximized often arises in business, government,
engineering, economics, and the natural and social sciences. In these
situations, the scientific analyses that characterize OR/MS techniques
provide considerable insight and, frequently, alternative courses of
action for the decision maker.
Mathematical programming is the most developed and important group
of available OR/MS techniques. It includes a group of mathematical
techniques such as linear, integer (otherwise linear), nonlinear, and
network programming. The algorithmic approach of these techniques is
either heuristic or optimizing. Unlike optimization approaches,
heuristic algorithms do not necessarily yield optimal solutions.
This research contemplates the all-integer integer (otherwise
linear) programming model (ILP), which is a sub-class of integer
programming. The parameters and the decision variables of this model
are restricted to be integers and non-negative integers, respectively.
Integer Programming in Perspective
Linear programming (LP) is a mathematical model designed to optimize
a linear function (the objective function) of non-negative continuous
variables (the decision variables), while satisfying a system of linear
equations or inequalities (the constraints). A linear programming model
that restricts some of the variables to assume only non-negative integer
values is known as a mixed integer (otherwise linear) programming model
(MIP). When all the variables are integer constrained, we have an
all-integer integer (otherwise linear) programming model (ILP).
Many practical situations yield linear programming formulations with
decision variables that must take on integer values. Such applications
are categorized as "direct" integer models, because the variables are
naturally quantifiable but fractional values are inadmissible.
Allocation of men or machines to production activities and allocation
of buses to provide service on a given route may be cited as
illustrative examples. It should also be mentioned that the continuous
solutions of a class of direct integer models are automatically
integer. These models possess what is called the "unimodular property."
The transportation and assignment problems are well-known examples of
this class. In other situations a binary variable may be used to code
a qualitative relationship that implies "yes-no" or "go/no-go" types of
decisions, or to transform an important class of problems with
nonlinear properties into ILP models. Cited applications of coded
models include capital budgeting, sequencing, scheduling, knapsack,
traveling salesman, and set-covering problems. The fixed-charge,
dichotomies, and separable programming problems are well-known special
types of reformulated or transformed ILP models. In most of the above
cases, integer programming provides a powerful means of adding realism
to the application of linear programming. It allows managers to include
additional considerations that cannot be incorporated in the simple LP
model.
In cases where the solution variables take on relatively large
values, it is common practice to approach the ILP problem by solving
the relaxed (associated) LP problem and then rounding the solution
values to integers. This is a cost-justified practice, because the
SIMPLEX method is computationally very efficient, and it may produce an
optimal or near-optimal solution. However, in cases where the integer
solution variables are binary and/or small in value, this approach is
not appropriate. In fact, it is likely to produce a non-optimal and
perhaps infeasible solution. Many examples that demonstrate this
phenomenon are found in the literature [e.g., Taha, 1975, p. 5].
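The pitfall is easy to demonstrate on a tiny hypothetical instance (the coefficients below are illustrative, not taken from the text): maximize 21x1 + 11x2 subject to 7x1 + 4x2 <= 13 with x non-negative integer. The LP optimum is x1 = 13/7, x2 = 0 with z = 39; rounding x1 up is infeasible and rounding down gives only z = 21, while exhaustive enumeration finds the true integer optimum z = 33 at (0, 3).

```python
# Sketch: rounding the LP optimum of a small ILP versus the true integer
# optimum. Instance (illustrative data): max 21*x1 + 11*x2
#                                        s.t. 7*x1 + 4*x2 <= 13, x >= 0 integer.
from fractions import Fraction

# LP optimum lies at a vertex; here x1 = 13/7, x2 = 0 (found analytically).
lp_x1, lp_x2 = Fraction(13, 7), Fraction(0)
lp_value = 21 * lp_x1 + 11 * lp_x2          # = 39

# Rounding: x1 = 2 violates 7*2 = 14 > 13, so round down to x1 = 1.
rounded_value = 21 * 1 + 11 * 0             # = 21

# Brute-force enumeration of all feasible integer points.
best = max(
    (21 * x1 + 11 * x2, x1, x2)
    for x1 in range(3) for x2 in range(4)
    if 7 * x1 + 4 * x2 <= 13
)
print(lp_value, rounded_value, best)        # LP 39, rounded 21, optimum (33, 0, 3)
```

The rounded point is not merely suboptimal; the gap (21 versus 33) grows with the objective coefficients, which is exactly the small-valued-variable situation described above.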
In the past two decades, a great deal of research in mathematical
programming has concentrated on the development of efficient algorithms
for solving all types of ILP problems. Several algorithms that use
different approaches were developed; unfortunately, however, none of
them exhibits the generality and/or the computational efficiency of the
SIMPLEX method. The remarkable features of the SIMPLEX method, as
explained by Gomory [1965], stem from the theoretical properties of the
linear programming formulation and the method's systematic evaluation
of basic feasible solutions. In particular, the algorithm moves along
the edges of the convex set, from a corner-point (vertex) feasible
solution to a better adjacent corner-point, until it reaches the
optimal feasible solution. Thus, it examines only a relatively small
number of basic solutions. On the other hand, the integer linear
formulation does not provide insight into the form or other properties
of the solution, because the faces of the convex hull of the feasible
integer points are not known and cannot conveniently be determined.
Devising an effective ILP algorithm analogous to the SIMPLEX method
is still the objective of many researchers. This research effort may be
described as one further step toward the achievement of that goal. The
developed ILP algorithm is a group theoretic partial enumeration
approach. It first identifies, asymptotically, the convex hull of the
integer points, and then partially enumerates a manageable number of
integer solutions.
CHAPTER II
LITERATURE REVIEW
Over 30 years ago, Dantzig, Fulkerson, and Johnson [1954] realized
the importance of solving integer models and worked on solving the
traveling salesman problem by constraint generation. However, it was
Gomory [1958, 1963a] who developed the first finite integer programming
technique. Since then, many algorithms have been and are still being
developed. The basic approaches of the optimizing algorithms that
attempt to solve ILP may be categorized as branch-and-bound, implicit
enumeration, cutting planes, and group theoretic. The first two
approaches are considered to be search methods. They consider
explicitly a small number of points while implicitly discarding the
remaining non-promising points. Cutting plane methods seek to generate
additional constraints known as "cuts," and append them to the original
set of constraints to eventually produce a linear programming problem
whose optimal solution is integer. The group theoretic approach is
inspired by the cutting plane techniques. It derives, from the optimal
basic solution of the relaxed LP problem, a set of congruence equations
that is intended to identify the convex hull of the feasible integer
points. The following is a brief discussion of each approach and its
principal contributors.
Branch-and-Bound
The branch-and-bound technique is currently the most widely used
method for solving ILP and MIP problems. The method consists of four
basic steps: initialization, branching, bounding, and fathoming. In
the initialization step, the associated LP problem is solved. If the
solution obtained is integer, then the ILP problem has been solved;
otherwise the solution forms what is called the "starting node." The
branching step partitions the set of solutions into several subsets.
Partitioning is a recursive process. The bounding step obtains a bound
for each new subset; this bound is actually the value of the objective
function of the LP problem of a subset. In the fathoming step, a
decision is made at the end of each branch (node) either to exclude
some subsets from further consideration (fathom them) or to continue
partitioning. The tree enumeration procedure ends when all possible
branches or subsets are fathomed.
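The four steps can be sketched for the simplest one-constraint case. The following is a generic illustration, not any particular author's method; the bounding step here uses the fractional (LP) relaxation of a 0-1 knapsack, an assumption made for compactness.

```python
# Minimal branch-and-bound sketch for a 0-1 knapsack ILP:
#   max c.x  s.t.  a.x <= b,  x_j in {0, 1}.
# Bounding uses the LP (fractional) relaxation of the remaining items.

def lp_bound(c, a, b, j, value):
    """Bounding: LP-relaxation value for items j..n-1 with capacity b.
    Items are assumed pre-sorted by c/a ratio, so the greedy fill is optimal."""
    for k in range(j, len(c)):
        if a[k] <= b:
            b -= a[k]; value += c[k]
        else:
            return value + c[k] * b / a[k]   # fractional part of one item
    return value

def branch_and_bound(c, a, b):
    order = sorted(range(len(c)), key=lambda k: c[k] / a[k], reverse=True)
    c = [c[k] for k in order]; a = [a[k] for k in order]
    best = 0
    stack = [(0, b, 0)]                      # initialization: the starting node
    while stack:                             # tree enumeration
        j, cap, val = stack.pop()
        if lp_bound(c, a, cap, j, val) <= best:
            continue                         # fathoming: bound cannot beat incumbent
        if j == len(c):
            best = max(best, val)            # complete solution; update incumbent
            continue
        stack.append((j + 1, cap, val))      # branching: x_j = 0
        if a[j] <= cap:
            stack.append((j + 1, cap - a[j], val + c[j]))  # branching: x_j = 1
    return best

print(branch_and_bound([10, 13, 7], [3, 4, 2], 6))   # optimum value 20
```

Each popped node is either fathomed by its LP bound or split into two sub-problems, mirroring the partition-and-discard scheme described above.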
The branch-and-bound concept was introduced by Land and Doig
[1960]. Later, Dakin [1965] modified the Land and Doig algorithm by
using inequalities rather than equalities for the branching step at
each unfathomed node. Mitten [1970] gives a general description of
branch-and-bound approaches.
High computer storage requirements, especially for large-scale
problems, is a major drawback of branch-and-bound methods. These
techniques are also prone to computer round-off error.
Implicit Enumeration
Implicit enumeration algorithms ara motivated by tha fact that the
integer solution space of combinatorial problems consists of a finite
number of points. Accordingly, tha approach seeks to examine explicitly
only some solutions and implicitly rule out tha rest of tha solutions
as being infeasible or sub-optimal. It also uses a tree enumeration
procedure and implements tha basic concept of branching, bounding, and
fathoming of branch-and-bound methods. The most useful type of
implicit enumeration technique is applied to tha special class of 0-1
(binary) integer problems.
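The flavor of such a scheme can be sketched as follows. This is a generic illustration of implicit enumeration with two simple fathoming tests, not a reproduction of any specific published algorithm; the instance data are made up.

```python
# Sketch of implicit enumeration for a 0-1 minimization problem
#   min c.x  s.t.  sum_j A[i][j]*x_j >= b[i] for all i,  x_j in {0, 1},  c >= 0.
# Fathoming tests: (1) cost bound (c >= 0, so extending a partial solution
# never lowers cost); (2) infeasibility even with all free variables at 1.

def implicit_enumeration(c, A, b):
    n, best, best_x = len(c), float("inf"), None

    def search(j, x, value):
        nonlocal best, best_x
        if value >= best:
            return                              # fathom: cannot improve incumbent
        # optimistic row sums: fixed part plus all free variables set to 1
        for row, rhs in zip(A, b):
            fixed = sum(row[k] * x[k] for k in range(j))
            free = sum(max(row[k], 0) for k in range(j, n))
            if fixed + free < rhs:
                return                          # fathom: no completion is feasible
        if j == n:
            if all(sum(row[k] * x[k] for k in range(n)) >= rhs
                   for row, rhs in zip(A, b)):
                best, best_x = value, x[:]      # new incumbent
            return
        for bit in (0, 1):                      # branch on x_j
            x[j] = bit
            search(j + 1, x, value + c[j] * bit)
        x[j] = 0

    search(0, [0] * n, 0)
    return best, best_x

# Small set-covering-like instance (illustrative data).
print(implicit_enumeration([3, 2, 4], [[1, 0, 1], [0, 1, 1]], [1, 1]))
```

Only a few of the 2^n binary points are ever constructed explicitly; the rest are ruled out implicitly by the two tests, which is the defining idea of the approach.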
Balas [1965] was the first to propose the use of implicit
enumeration to solve binary problems. The arithmetic operations of his
"additive algorithm" involve only addition and subtraction. Glover
[1965, 1968b] developed a "surrogate constraint" that combines all or
some of the original constraints of the problem without eliminating any
of the original feasible integer points. Thus, it enforces
restrictions upon the optimal solution that cannot be determined from
any of the individual constraints. Geoffrion [1969] used an imbedded
linear program to compute surrogate constraints slightly different from
those developed by Glover. His approach synthesizes Balasian implicit
enumeration with the approach typified by Land and Doig [1960]. Zionts
[1972] showed that using bounds on the variables together with Balas'
structure of implicit enumeration might produce a simpler and more
powerful algorithm.
Computational experience indicates that the solution time of
binary implicit enumeration varies exponentially with the number of
variables. Moreover, the prospect of generalizing the 0-1 additive
approach to all-integer problems is not promising. As stated by Zionts
[1973, p. 444], "Computational results using all-integer implicit
enumeration algorithms have been meager to date, and the results do not
appear very promising." Usually, enumeration algorithms that are used
in solving general all-integer models employ a dynamic programming
(recursive) procedure [e.g., Greenberg, 1969a, 1969b].
Cutting Plane Algorithms
Cutting plane algorithms may be classified into two distinctive
classes: fractional and all-integer.
Fractional Algorithms
The first dual fractional cutting plane algorithm was developed by
Gomory [1958, 1963a]. The algorithm relaxes integer requirements and
solves the associated LP problem, using the SIMPLEX method. If the
solution values of all variables are integral, the original ILP problem
has been solved. Otherwise, a secondary inequality constraint called a
"cut" is generated from the optimal basic solution and added to the
bottom of the tableau, making the tableau primal infeasible. The dual
SIMPLEX algorithm is then used to restore primal feasibility. This
procedure is repeated until the ILP problem is solved.
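The cut-generation step can be sketched concretely. Given a row of the optimal SIMPLEX tableau whose basic variable takes a fractional value, the Gomory fractional cut keeps only the fractional parts of the row's coefficients. The tableau row below is made-up data for illustration.

```python
# Sketch: generating a Gomory fractional cut from one optimal-tableau row.
# A tableau row reads  x_B + sum_j a_j * x_j = b0  with b0 fractional;
# the cut is  sum_j frac(a_j) * x_j >= frac(b0).
from fractions import Fraction
from math import floor

def gomory_cut(coeffs, rhs):
    """Return (cut coefficients, cut rhs) from a tableau row with fractional rhs."""
    frac = lambda q: q - floor(q)            # fractional part, also valid for q < 0
    return [frac(a) for a in coeffs], frac(rhs)

# Made-up source row: x_B + (7/2)x_3 + (-4/3)x_4 = 9/4
coeffs = [Fraction(7, 2), Fraction(-4, 3)]
rhs = Fraction(9, 4)
cut, cut_rhs = gomory_cut(coeffs, rhs)
print(cut, cut_rhs)    # cut: (1/2)x_3 + (2/3)x_4 >= 1/4
```

Appending this inequality (with a slack variable) to the bottom of the tableau makes the current solution primal infeasible; a dual SIMPLEX step then restores feasibility, as described above. Exact rational arithmetic is used here because, as noted later in this chapter, round-off error is a known weakness of fractional methods.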
The fractional cut originally proposed by Gomory could come from
any of the rows corresponding to non-integer variables or from any
linear combination of these rows. The function of the cut is to
eliminate or cut off some non-integer solutions of the convex set, and
consequently force the solution to be integral. The strength of the
inequality defining a cut depends directly on the source row(s) from
which it is generated. Other types of inequalities, sometimes referred
to as convexity or intersection cuts, were proposed for ILP and MIP
problems by Balas [1971], Glover [1972], Burdet [1972, 1973], and many
others. These cuts are obtained from a geometrical analysis for the
purpose of deriving stronger cuts.
Martin [1963] proposed an interesting variation of Gomory's basic
approach, called the Accelerated Euclidean Algorithm. The technique
consists of solving the relaxed LP problem, deriving a composite Gomory
cut to return the optimal SIMPLEX tableau to one with an all-integer
(not necessarily feasible) primal solution, re-solving the LP problem,
and so on. The process terminates when re-optimization is not possible
or when the optimal integer solution is found.
Gomory [1958] provided a theoretical proof of the finiteness of
his dual fractional algorithm. However, practical experience later
revealed the following two major disadvantages. First, the round-off
errors that evolve in computer calculations may yield an incorrect
integer solution. Avoiding decimal calculations (by separately
storing the numerators and the denominators of all fractions) may
exceed the available capacity of the computer. Second, no feasible
integer solution is obtained until the optimal integer solution is
reached. This means that there is no feasible integer solution if the
computations are stopped prematurely.
All-Integer Algorithms
Gomory [1963b] developed a dual all-integer algorithm to rectify
the round-off error difficulty of the fractional cutting plane method.
The basic approach of the algorithm is different from the fractional
technique. It starts with an all-integer and lexicographically dual
feasible tableau. At each iteration, an inequality constraint (cut)
is constructed with integral coefficients and a "-1" pivot element, in
order to maintain all-integer tableaux throughout. The cut is then
appended, and iterations are continued until an optimal solution is
obtained.
In dual all-integer algorithms, primal feasible solutions are
never obtained until the optimal integer solution is found. This means
that the disadvantage of having no primal feasible integer solution, if
the computations are terminated prematurely, still exists.
Accordingly, Young [1968] developed the first finite primal all-
integer algorithm. The algorithm is called the Simplified Primal
Algorithm (SPA), and it is a modification of previous primal algorithms
proposed by Young [1965] and others. The approach of the SPA is analo
gous to the primal SIMPLEX method, except that the coefficients of
every tableau remain all-integer throughout. At each iteration the
algorithm constructs a secondary constraint (cut) that is adjoined to
the bottom of the tableau. The generated cut has integral coefficients
also and "+1" pivot.
Simultaneously, another primal all-integer algorithm was developed
by Glover [1968a]. Although there are some differences between the two
methods used by Glover and Young, the approaches are built on the same
foundations.
The major problem with primal all-integer algorithms is the
existence of what are called "stationary cycles"--long series of
iterations in which the basis remains unchanged.
Recently, substantial research has evolved at Texas Tech
University in the area of all-integer cutting plane methods. Austin
[1979] and Austin and Hanna [1983] developed a dual all-integer
algorithm called the Bounded Descent Algorithm (BDA), in which upper
bounds on the variables are computed and used to generate a
dual-feasible initial tableau. A version of the dual all-integer
algorithm of Gomory is used as the computational vehicle for the BDA.
In addition, certain modifications are used to avoid dual degeneracy
and to speed convergence by using cuts derived from the objective
function. Computational testing of the BDA yielded very good results.
Ghandforoush [1980] and Ghandforoush and Austin [1981] developed a
hybrid algorithm called the Constructive Primal-Dual Algorithm (CPDA),
which is a modification of Glover's [1967] primal-dual algorithm. The
CPDA produces feasible integer solutions if the computations are
terminated or stopped prematurely. Computational results of the CPDA,
especially for fixed-charge problems, were also very encouraging.
Hanna [1981] and Hanna and Austin [1984] exploited the BDA
approach of Austin and devised an Advanced Start Algorithm (ASA) that
generates upper bounds on only those variables that are basic in the
relaxed LP solution, in an attempt to start at a "good" initial
infeasible solution. Austin and Ghandforoush [1983] extended Hanna's
approach by developing the Reduced Advanced Start Algorithm (RASA).
The technique employs an advanced infeasible start and initial
constraint reduction based on the optimal solution to the LP
relaxation. It uses either dual or primal all-integer cutting planes
as the basic computational approach. Austin and Ghandforoush [1985]
further extended their RASA by developing what is called the Surrogate
Cutting Plane Algorithm (SCPA), which also incorporates the primal-dual
concept of the CPDA discussed above. In the SCPA algorithm a single
surrogate constraint is constructed from the binding constraint set of
the solution to the LP relaxation, in the manner of Glover [1968b] and
others, to produce a simplified surrogate model. Then, the "objective
cut" of Austin and Hanna [1983] and Ghandforoush and Austin [1985] is
generated and appended.
The computational testing of the RASA and SCPA algorithms on very
difficult test problems indicates that the two algorithms have
promising potential.
Currently, Austin and Ruparel [1985] are investigating the
computational efficiency of the Mixed Cutting Plane Algorithm (MCPA),
which employs both fractional and all-integer cutting planes. The MCPA
exploits the All-Integer Simplex Algorithm developed by Crown [1982a],
to avoid round-off error in the fractional cutting plane phase.
Group Theoretic Methods
Group theoretic integer programming is an algebraic approach
developed initially by Gomory [1965, 1967, 1969]. Unlike the
aforementioned approaches, which reveal little or nothing about the
structure of the integer program, this approach shows that the relation
between integer and non-integer solutions can be characterized by a
well-defined Abelian (commutative) group.¹ The order of this group is
"D," which is the absolute value of the determinant of the optimal
linear programming basis (B). Furthermore, the group is called a
"cyclic group" if it can be generated by at least one element in the
group; otherwise, it is called a "cyclic sub-group." In practice, ILP
models of each type are encountered. Having defined the necessary group
terminology, we shall now explain briefly Gomory's "asymptotic theory."
The matrix notation used is similar to that of Hu [1969].
The Asymptotic Theory
Consider the integer program

    Maximize    C'X'
    Subject to  A'X' <= b                                    (1)
                X' >= 0 and integer,

where C' is an n-integer vector, X' is an n-variable vector, A' is an
mxn-integer matrix, and b is an m-integer vector.

¹In this context, the group is defined as a finite set that is
closed under the operation of addition when the arithmetic operations
are taken modulo 1. Hence it is sometimes called a finite additive
group.

After adding a
non-negative slack variable s for each inequality, an equivalent
formulation of problem (1) is given by

    Maximize    CX
    Subject to  AX = b                                       (2a)
                X >= 0 and integer,

where C = (C', 0) is an (n+m)-integer vector, 0 is an m-zero vector,
X = (X', S) is an (n+m)-variable vector, A = (A', I) is an
mx(n+m)-integer matrix, and I is an mxm-identity matrix. Partitioning
A as [B, N], problem (2a) can be written as

    Maximize    C_B X_B + C_N X_N
    Subject to  B X_B + N X_N = b                            (2b)
                X_B, X_N >= 0 and integer,

where B is an mxm-nonsingular matrix. Expressing the vector X_B in
terms of X_N (i.e., X_B = B^-1 b - B^-1 N X_N), we can write problem
(2b) as

    Maximize    C_B B^-1 b - (C_B B^-1 N - C_N) X_N
    Subject to  X_B + B^-1 N X_N = B^-1 b                    (2c)
                X_B, X_N >= 0 and integer.
Relaxing the integer restrictions on X_B and X_N, problem (2c)
becomes an ordinary LP problem. If B is considered to be the optimum
basis of the LP problem, then the optimum solution to the LP problem is
given by the vectors

    X_B = B^-1 b,    X_N = 0,

where C_B B^-1 N - C_N >= 0. If B^-1 b happens to be an integer vector,
then the same solution is also the optimum solution to the integer
problem (2c). When B^-1 b is not an integer vector, then X_N must
increase from zero to some non-negative integer vector such that

    X_B = B^-1 b - B^-1 N X_N >= 0, and integer.

Mathematically, the necessary and sufficient conditions for obtaining
a feasible solution to the ILP problem (2c) are ensured by the
following two relations, given in Hu [1970]:

    B^-1 N X_N = B^-1 b (mod 1)   and
    B^-1 b - B^-1 N X_N >= 0.

The congruence relation ensures that X_N, and hence X_B, will be
integer; and the second relation ensures that X_B will be non-negative.
The asymptotic approach of Gomory [1965] suggests that we neglect
the non-negativity condition on X_B and solve the following problem:

    Minimize    C̄_N X_N
    Subject to  B^-1 N X_N = B^-1 b (mod 1)                  (3)
                X_N >= 0 and integer,

where C̄_N = C_B B^-1 N - C_N >= 0. If the optimum solution of problem
(3) satisfies the feasibility condition on X_B, then the optimum X_N
together with X_B = B^-1(b - N X_N) produces the optimum solution of
the integer programming problem. It should be noted that the ILP
problem (3) is a relaxation of the corresponding ILP problem (2c), in
which the non-negativity restriction on X_B and the constant term
C_B B^-1 b of
problem (2c) are omitted. The convex hull of the feasible solutions of
problem (3) is an unbounded polyhedron, called the corner polyhedron.
Relaxing the integer restrictions of problem (3) results in an
unbounded polyhedral set of feasible solutions for the corresponding LP
problem, called a "cone." Figure 1 illustrates these two sets in
addition to the convex hull of the original ILP problem and the convex
set of its relaxed LP problem for the following numerical example given
by Taha [1975, p. 231].

    Maximize    Z = X1 + 2X2
    Subject to  X1 + 2X2 <= 8
                2X1 <= 7
                -2X1 + 4X2 <= 9
                X1, X2 >= 0 and integer.

The continuous solution of the relaxed LP problem is given by the basic
variables X1 = 7/2, X2 = 9/4, and S3 = 7.
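Because the feasible region of this example is small, its integer optimum can be checked by direct enumeration (a verification sketch, not part of the original text):

```python
# Brute-force check of Taha's example:
#   max Z = x1 + 2*x2
#   s.t. x1 + 2*x2 <= 8,  2*x1 <= 7,  -2*x1 + 4*x2 <= 9,  x >= 0 integer.

feasible = [
    (x1, x2)
    for x1 in range(4) for x2 in range(5)        # 2*x1 <= 7 already bounds x1 <= 3
    if x1 + 2 * x2 <= 8 and 2 * x1 <= 7 and -2 * x1 + 4 * x2 <= 9
]
z, x1, x2 = max((x1 + 2 * x2, x1, x2) for x1, x2 in feasible)
print((x1, x2), z)   # integer optimum (2, 3) with Z = 8
```

Note that the continuous optimum also has Z = 7/2 + 2(9/4) = 8, so in this particular example the integer optimum loses nothing in objective value, only in location.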
The Group Minimization Problem (GMP)
In problem (3) above, B^-1 N X_N is equivalent to the sum of
α_j X_j over j = 1, ..., n, where α_j is the jth column of B^-1 N and
X_j is the jth nonbasic variable. If B^-1 b is denoted by the column
β_0, then the congruence relationship of problem (3) becomes

    Σ(j=1 to n) α_j X_j = β_0 (mod 1).

There are actually m congruence equations. Another fact is that
Figure 1: Graphical Solution of Taha's Example
[Figure omitted: a graph showing the continuous solution space of the
relaxed LP problem, the feasible-integer convex hull, the linear
programming cone, and the corner polyhedron, with the continuous
optimum, the true integer optimum, and the relaxed integer optimum
marked. Feasible integer points are drawn as circles, infeasible
integer points as squares.]
integer multiples of X_j, which are congruent to 0 (mod 1) since the
X_j are non-negative integers, may be added to or subtracted from each
equation above without destroying the congruence relationship, because
we are interested only in the fractional parts of the components of the
vectors α_j and β_0, denoted by ᾱ_j and β̄_0, respectively. As a
result, problem (3) is equivalent to

    Minimize    C̄_N X_N
    Subject to  Σ(j=1 to n) ᾱ_j X_j = β̄_0 (mod 1)           (4)
                X_j >= 0 and integer (j = 1, ..., n).
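For Taha's example these quantities are easy to compute exactly. Taking the basis columns of X1, X2, and S3 (an assumption about the basis ordering), |det B| = D = 4, and the fractional parts of B^-1 b are (1/2, 1/4, 0), matching the continuous solution X1 = 7/2, X2 = 9/4, S3 = 7. A sketch using exact rational arithmetic:

```python
# Sketch: the group order D and the congruence right-hand side for Taha's
# example. B holds the basis columns (x1, x2, s3) of the constraint matrix;
# Fractions give exact arithmetic, free of round-off.
from fractions import Fraction
from math import floor

B = [[1, 2, 0],
     [2, 0, 0],
     [-2, 4, 1]]
b = [8, 7, 9]

def solve(B, b):
    """Solve B y = b by Gaussian elimination over the rationals; also return det B."""
    M = [[Fraction(v) for v in row] + [Fraction(rhs)] for row, rhs in zip(B, b)]
    n, det = len(M), Fraction(1)
    for i in range(n):
        p = next(r for r in range(i, n) if M[r][i] != 0)   # pivot search
        if p != i:
            M[i], M[p] = M[p], M[i]; det = -det            # row swap flips sign
        det *= M[i][i]
        M[i] = [v / M[i][i] for v in M[i]]                 # normalize pivot row
        for r in range(n):
            if r != i and M[r][i] != 0:
                M[r] = [a - M[r][i] * c for a, c in zip(M[r], M[i])]
    return [row[-1] for row in M], det

y, det = solve(B, b)                   # y = B^-1 b
D = abs(det)                           # order of the group
fracs = [v - floor(v) for v in y]      # fractional parts of B^-1 b (mod 1)
print(D, y, fracs)
```

Here the group has order D = 4 and the congruence right-hand side is (1/2, 1/4, 0), so only group elements summing to this vector (mod 1) can restore integrality.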
Problem (4) is called the group minimization problem (GMP) and is
sometimes referred to as the "correction model." The coefficients in
each congruence equation are exactly those of Gomory fractional cuts.
Moreover, the coefficient vectors of the cut constraints form a finite
additive Abelian group of order "D" or less, but if less, the order
exactly divides "D." Thus, the group is finite. When the order of the
group is equal to "D," the group is "cyclic"; when the order is not
equal to but exactly divides "D," it is a "cyclic sub-group."
Another form for writing problem (4), in terms of group elements
G(α), may be obtained by using a function that maps a non-basic column
vector α_j in problem (2b) to a group element g. More details about
this can be found in the original work of Gomory or in Hu [1969, 1970].
Using the concept of mapping, the constraints of the GMP may
be transformed into an equivalent form, which is more suitable for
analysis. This too was suggested by Gomory [1965], and exploited by
many others. Smith's normal form of the basis (a diagonal matrix) is
used for the transformation. The resulting equivalent GMP is called
the factor group minimization problem (FGMP) or the group knapsack
problem. Both the GMP and the FGMP can be formulated as a shortest
route network problem, in which each element in the group G(α)
corresponds to a distinct node (explained by Gomory [1967] and Shapiro
[1968a]).
Solving the GMP or the Group Knapsack Problem
As mentioned above, if the GMP or its equivalent, the group
knapsack problem, is solved and its solution yields non-negative values
for the variables of the original problem, then the ILP problem has
been solved. Gomory [1965] has provided the necessary condition that
guarantees that this will always occur. Gomory [1969] has also shown
that, unless the optimal basis (B) is degenerate, the optimum X_N of
problem (3) will always yield non-negative X_B provided that b in
problem (2) is sufficiently large. Such an ILP problem is a member of a
class of problems called "asymptotic" or "steady-state," in which the
GMP and the ILP problem are "cost-equivalent" in the sense that at
least one optimal solution is common between them. An extension of
this algebraic characterization to all ILP problems was discussed by
Shapiro [1970]. In his "turnpike theorems," he explains that each ILP
problem is "cost-equivalent" to a group optimization problem, where
the relevant group represents a refinement of the group originally
identified by Gomory.
Gomory [1965] proposed group recursion (in the usual manner of
dynamic programming) to solve the group knapsack problem. White [1966]
extended the dynamic programming method proposed by Gomory, and
developed an algorithm that will search for a sub-optimal solution for
the group problem that solves the ILP problem, when the optimal
solution fails. In such cases, of course, the ILP problem under
consideration is non-asymptotic, and further restriction is applied to
produce an optimal feasible solution.
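The group-recursion idea can be sketched for a cyclic group of order D. Writing the congruence over Z_D (clearing denominators so each column maps to a group element g_j and the right-hand side to g0), the recursion computes the cheapest way to reach each element. This is a generic dynamic-programming sketch in the spirit of the approach, with made-up data, not a reproduction of Gomory's or White's exact procedure.

```python
# Sketch: dynamic-programming recursion for a group knapsack problem over the
# cyclic group Z_D:  min sum_j c_j x_j  s.t.  sum_j g_j x_j = g0 (mod D),
# x_j >= 0 integer.  cost[h] = cheapest cost of reaching group element h.

def group_knapsack(D, g, c, g0):
    INF = float("inf")
    cost = [INF] * D
    cost[0] = 0
    # Bellman-Ford style relaxation on the shortest-route formulation:
    # each element h has an arc h -> (h + g_j) mod D of length c_j >= 0.
    for _ in range(D):                       # at most D sweeps are needed
        for h in range(D):
            if cost[h] < INF:
                for gj, cj in zip(g, c):
                    t = (h + gj) % D
                    if cost[h] + cj < cost[t]:
                        cost[t] = cost[h] + cj
    return cost[g0]

# Made-up instance: D = 4, columns mapping to elements 1 and 3, costs 5 and 3.
print(group_knapsack(4, [1, 3], [5, 3], 2))   # cheapest: use element 3 twice, cost 6
```

This is exactly the shortest-route view mentioned earlier: group elements are nodes, nonbasic columns are arcs, and the recursion finds the cheapest walk from the zero element to g0.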
Glover [1966] developed an algorithm in a general framework that
also permits a direct application to solving the general integer
problem and certain non-linear integer problems. The algorithm can
solve the group knapsack problem, and is capable of accommodating a
variety of other constraints in addition to its group constraints. A
truncated enumeration method, characterized by an abbreviated search
over a tree of all possible solutions, was used. The method derives
its efficiency from the ability to exclude certain branches of the tree
from consideration. Glover argues that such an enumerative method may
be more effective (in terms of the amount of computation and memory
requirements) than dynamic programming methods whenever more
restrictions (additional constraints) are required.
Shapiro [1968a] introduced a dynamic programming recursion to
solve the group knapsack problem, which is formulated as a shortest
route problem. The algorithm is similar to Gomory's algorithm in the
sense that it finds only the optimum of the group knapsack problem.
However, in a subsequent paper, Shapiro [1968b] employed a tree search
algorithm to impose more restrictions and to find a sub-optimal
solution for the group knapsack problem, which in turn produces an
optimal solution to the ILP problem, when the first algorithm fails.
The reported computational results of White [1966] and Shapiro
[1968b] on a standard set of ILP problems imply that the efficiency of
their algorithms decreases when the size of the Abelian group
increases. In fact, both algorithms failed to solve certain
fixed-charge problems.
Hu [1970] developed two alternative algorithms to be used for the
transformation of the matrix into Smith's normal form and for solving
the group minimization problem. He showed that the numbers of
operations of these two algorithms are fewer than those originally used
in the Gomory asymptotic algorithm.
Gorry and Shapiro [1971] proposed exploiting group theory to
integrate a variety of ILP methods into one common computational
procedure, and suggested that an adaptive integer programming algorithm
should be controlled by a supervisor that performs four main functions:
set-up, directed search, sub-problem analysis, and prognosis. The
set-up function seeks to structure a given problem during the early
stages of computation so that the methods to be applied will be more
effective. The methods include group optimization, cutting planes,
surrogate constraints, Lagrangian, and search methods. If an
enumeration is required, then the directed search function guides the
search, and at each computational stage selects the most promising
sub-problem to be analyzed. The sub-problem analysis function chooses
a sequence of
analytic methods to be applied to the selected sub-problem. Finally, the prognosis function maintains upper and lower bounds on the cost of an optimal solution, and recommends termination when the predicted change in the objective function is marginal. The supervisor makes decisions primarily on the basis of structural insights derived from the group theoretic approach. The reported computational experience is encouraging.
Wolsey [1971] attempted to extend the group theoretic approach and showed that by relaxing the non-negativity constraints on any set of variables, a structurally simpler problem is obtained. This can be regarded as a shortest route problem over an Abelian (not necessarily finite) group. In particular, if the non-negativity constraints are relaxed on all but one of a set of the basic variables, a knapsack (or close to knapsack) problem is obtained. Its solution either gives the solution of the ILP problem, or, when solved by dynamic programming, provides bounds at least as strong as those provided by the group minimization problem. In the same framework, he also considered the difficulty that arises when the order of the groups encountered is very large. By further relaxation, Wolsey showed how the groups can be reduced to manageable size.
Gorry, Shapiro, and Wolsey [1972] suggested several procedures for
controlling the size of the group, since they consider this a major
drawback in solving the ILP problem using a group theoretic approach.
They applied some simple number theoretic procedures to transform
(relax) the original ILP problem. The relaxation is done by dividing
all the coefficients of an inequality by a rational divisor greater than 1, and then taking integer parts. They also discussed heuristic procedures for choosing the divisor so that the relaxed problem will retain all the feasible solutions of the original ILP problem. Computational experience on a very limited number of problems indicates that the size of the group can be reduced; however, such relaxation of the original data is difficult for a particular class of ILP problems.
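The divide-and-truncate relaxation can be sketched in a few lines. For non-negative x, floor(a_j/d) is at most a_j/d, so dividing a constraint sum(a_j x_j) <= b by a divisor d > 1 and taking integer parts of the coefficients and of b keeps every feasible solution of the original constraint feasible. The function name and sample data below are illustrative only.

```python
from fractions import Fraction
from math import floor

def relax_constraint(a, b, d):
    """Divide sum(a[j]*x[j]) <= b by a rational divisor d > 1 and take
    integer parts; every non-negative integer x feasible for the original
    inequality remains feasible for the relaxed one."""
    assert d > 1
    a_rel = [floor(Fraction(aj) / Fraction(d)) for aj in a]
    b_rel = floor(Fraction(b) / Fraction(d))
    return a_rel, b_rel

print(relax_constraint([7, 12], 40, 5))  # ([1, 2], 8)
```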
Chen [1970] and Chen and Zionts [1976] exploited the special properties of the shortest route problem constructed from the group problem. They presented five simplified algorithms, and compared the algorithms' mean computation times on randomly generated "cyclic" problems with the mean computation times of Gomory's, Hu's, and Shapiro's algorithms. This comparison is found in Chen and Zionts [1976].
Denardo and Fox [1979] presented fast algorithms to solve two classes of "cyclic" group knapsack networks: those having only unbounded variables and those having bounded variables. An implicit enumeration scheme is used to solve the latter.
Ruparel [1983] and Ruparel, Austin, and Kuzdrall [1985] attempted
to solve the ILP problem by using a hybrid technique called the Bounded
Enumeration Algorithm (BEA). It synthesizes the cutting plane
approach, group theoretic methods, and a dynamic enumeration scheme.
Fractional cutting planes (especially monotonic cuts) are used mainly
to reduce the size of the Abelian group. However, computational
experience indicates that the existence of some c_k = 1 (where c_k is the numerator of the kth non-basic variable's coefficient in the objective function equation of the LP optimal tableau) leads immediately to an all-integer solution when Austin's [1979] objective cut is appended.
After incorporating a small number of cuts, the dynamic enumeration scheme is invoked to find all feasible solutions to an equation derived from the objective function. If any of these solutions solves the original ILP problem, the search is terminated; otherwise the right-hand side of the equation is increased by an amount equal to the size of the Abelian group (D). The dynamic enumeration scheme is invoked again, and this iterative search is continued until the optimal solution to the ILP problem is found.
The computational testing of the BEA on the sets of standard "hard" problems given by Trauth and Woolsey [1969] and Austin [1978], and on randomly generated ILP problems of increasing size and complexity, indicates that the algorithm solves the "asymptotic" or "steady-state" ILP problems efficiently. However, when more restriction is required to solve some of the "non-asymptotic" ILP problems, especially those that have large "D" (e.g., fixed-charge problems), the efficiency of the algorithm declines. In addition, the attempts to reduce the size of the Abelian group (D) by using fractional cuts were limited because of the difficulties caused by round-off error and/or dual degeneracy.
CHAPTER III
THE GROUP THEORETIC PARTIAL ENUMERATION ALGORITHM
This research effort has developed a group theoretic partial enumeration algorithm (GTPEA) to solve the following all-integer linear programming model (ILP). In matrix notation, the model is of the form

Maximize CX
Subject to AX = b (2a)
X ≥ 0 and integer,

where C = (C', 0) is an (n+m)-vector, 0 is an m-zero vector, X = (X', S) is an (n+m)-variable vector, A = (A', I) is an mx(n+m)-integer matrix, and I is an mxm-identity matrix. If the m non-negative slack variables are removed, an alternative representation of the above form is given by

Maximize C'X'
Subject to A'X' ≤ b (1)
X' ≥ 0 and integer,

where C' and X' are n-vectors, and A' is an mxn-integer matrix.
The GTPEA synthesizes the approach of group theory and a partial dynamic enumeration. Its theoretical concept is based on the necessary condition of Gomory's asymptotic theory [1965] and the sufficient condition imposed by the restrictions of the Partial Dynamic Enumeration Algorithm¹ (PDEA), developed by Austin and Zaatari [1984].
The necessary condition of the asymptotic theory guarantees that if the GMP (or its equivalent, the group knapsack problem) is solved and its optimal solution yields non-negative values for the basic variables (XB) of the original problem, then the ILP problem is solved. Unfortunately, however, an optimal solution to the group problem may in some cases yield an infeasible integer solution. Thus, further generation of sub-optimal solutions to the group problem is required. The PDEA has been developed especially to achieve this purpose and always ensures that all the values of XB are non-negative. It solves the group knapsack problem by generating an optimal and/or sub-optimal subset of feasible solutions, and then checks the non-negativity of XB for each solution. If the vector XB is infeasible for all the solution points generated, another subset is generated and checked. Such partial enumeration is continued until an optimal and feasible solution for the ILP problem is obtained.
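The feasibility check at the heart of this loop can be sketched with exact rational arithmetic: for a candidate XN from the group problem, solve B·XB = b − N·XN and accept the candidate only if XB is non-negative and integral. The sketch uses a generic Gauss-Jordan elimination over fractions and is illustrative; it is not the PDEA itself.

```python
from fractions import Fraction

def basic_solution(B, N, b, xn):
    """Solve B*xb = b - N*xn exactly; return xb if it is a non-negative
    integer vector (i.e., the candidate xn solves the ILP), else None."""
    m = len(B)
    rhs = [Fraction(b[i]) - sum(Fraction(N[i][j]) * xn[j] for j in range(len(xn)))
           for i in range(m)]
    # Gauss-Jordan elimination over the rationals.
    M = [[Fraction(v) for v in B[i]] + [rhs[i]] for i in range(m)]
    for col in range(m):
        piv = next(r for r in range(col, m) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(m):
            if r != col and M[r][col]:
                M[r] = [vr - M[r][col] * vc for vr, vc in zip(M[r], M[col])]
    xb = [M[i][m] for i in range(m)]
    if all(v >= 0 and v.denominator == 1 for v in xb):
        return [int(v) for v in xb]
    return None

# Toy 2x2 illustration: feasible candidate accepted, fractional one rejected.
print(basic_solution([[2, 0], [0, 3]], [[1], [1]], [4, 6], [0]))  # [2, 2]
print(basic_solution([[2, 0], [0, 3]], [[1], [1]], [4, 6], [1]))  # None
```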
¹The Partial Dynamic Enumeration Algorithm (PDEA) is a modified numerical technique developed to solve the group knapsack problem and, if required, to generate some or all of its feasible solutions. The PDEA consists of two phases. Phase I exploits important basic results of the classical theory of numbers. Phase II extends and further improves the Dynamic Enumeration Scheme, originally developed by Kuzdrall and Ruparel [1982]. It is intended to enumerate only the desired non-negative integral solutions of any bounded-variables linear equation. The upper bounds are used to enhance the speed of the enumeration, because fewer solutions are generated. Moreover, only subsets of the solutions are generated as needed. This in turn reduces storage requirements substantially.
The Transformation of the Group Knapsack Problem

The core of this research approach is the derivation of a special knapsack problem from the group knapsack problem of Gomory. This is accomplished by the transformation of the mth modular constraint² into a linear equation using a non-negative integer variable "I," where 0 ≤ I < D (see Appendix A for a detailed explanation). The transformation is in fact a simple implementation of number theory, and is executed in Phase I of the PDEA. The bounded solutions of the special knapsack problem are generated in Phase II of the PDEA. The importance of this transformation will be stressed in the discussion of the algorithmic approach.
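The transformation itself is elementary: with non-negative group coefficients g_j and 0 ≤ r < D, the congruence sum(g_j x_j) ≡ r (mod D) holds exactly when sum(g_j x_j) − D·I = r for some integer I ≥ 0, and when the x_j are bounded, I is bounded as well. The brute-force check below (on invented data) confirms that the two formulations have the same solution set.

```python
from itertools import product

def congruence_solutions(g, r, D, ub):
    """All x in the box 0 <= x[j] <= ub[j] with sum(g[j]*x[j]) = r (mod D)."""
    return [x for x in product(*(range(u + 1) for u in ub))
            if sum(gj * xj for gj, xj in zip(g, x)) % D == r % D]

def equation_solutions(g, r, D, ub):
    """The same set via the linear equation sum(g[j]*x[j]) - D*I = r;
    for bounded non-negative x (and g[j] >= 0, 0 <= r < D), the new
    variable I satisfies 0 <= I <= (sum(g[j]*ub[j]) - r) // D."""
    I_max = (sum(gj * u for gj, u in zip(g, ub)) - r) // D
    return [x for x in product(*(range(u + 1) for u in ub))
            if any(sum(gj * xj for gj, xj in zip(g, x)) - D * I == r
                   for I in range(I_max + 1))]

g, r, D, ub = [3, 5, 2], 4, 7, [3, 2, 2]
assert congruence_solutions(g, r, D, ub) == equation_solutions(g, r, D, ub)
```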
The Algorithm
The major steps of the GTPEA are:
I - Solve the relaxed LP problem. If the solution is all-integer, stop; the ILP problem is solved. Otherwise, go to Step II.
II - Transform the optimal basis of the LP problem (B) into the Smith normal form (B̄). Go to Step III.
III - Construct the group knapsack problem of Gomory [1965]. Go to Step IV.
IV - Obtain, either structurally or heuristically, upper bounds for the non-basic variables in the optimal LP solution. Go to Step V.
V - Invoke the Partial Dynamic Enumeration Algorithm (PDEA) and iterate until an optimal solution is found.

²Without loss of generality, it is assumed that the group is "cyclic."
Discussion of the Algorithmic Approach

Steps I and II of the GTPEA are analogous to Steps I and II of Gomory's asymptotic integer algorithm [1965]. As explained in Chapter II, all algorithms that adopt the group theoretic approach start with Step I. The SIMPLEX method or any of its modified versions can be used to solve the relaxed LP problem. In this research, the generalized Primal SIMPLEX Algorithm of Crown [1982b] is used to solve the LP problem. This algorithm avoids the necessity of using artificial variables in "≥" and "=" constraints by employing pivot selection rules that are generalizations of those in the standard SIMPLEX. Obviously, if the relaxed LP problem has no feasible solution, then the same is true for the ILP problem. In this step the basic variables and the defining equations of the optimal basis are identified.
Step II, however, is not common to all techniques, because of the computational effort required to diagonalize the LP optimal basis (B), and hence transform it into the Smith normal form. Hu [1970] provides an algorithm that can be used for such transformations. He also shows that his algorithm requires fewer operations than Gomory's algorithm. The integer programming algorithms developed by Shapiro [1968a] and their extensions [1968b] exploit this transformation, because the resultant group knapsack problems are easier to solve. Bradley [1971], and Bradley and Wahi [1973], discuss in more detail the advantages of transforming any integer programming problem to an equivalent canonical problem.
29
As reported by Kannan and Bacham [1979], none of tha wall-known
algorithms that transform an integer matrix into tha Smith normal form
is known to ba polynomially bounded in its running time. Realizing
this fact, tha GTPEA computes this step by using an efficient procedure
developed by Worm, Fiskaaux, and Wludyka [1981]. Tha Smith normal form
of tha basis matrix has non-zaro elements only on the diagonal, each
element being lass than or equal to its successor. Since the product
of tha diagonal elements in such a matrix is its determinant D, it is
also tha order of tha associated Abelian group.
In matrix notation, the relationshio of the two matrices, B and B
is given by
RBC = B,
where R and C are two mxm-unimodular integer matrices.
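A Smith-normal-form routine of the classical elementary-operations kind can be sketched as follows (this is the textbook method, not the Worm, Fiskeaux, and Wludyka procedure used by the GTPEA); it tracks the row and column operations so that the unimodular matrices R and C are produced alongside the diagonal form.

```python
def smith_normal_form(A):
    """Diagonalize a square integer matrix by elementary operations,
    returning (R, D, C) with R*A*C = D, R and C unimodular, and each
    diagonal element of D dividing the next."""
    n = len(A)
    D = [row[:] for row in A]
    R = [[int(i == j) for j in range(n)] for i in range(n)]
    C = [[int(i == j) for j in range(n)] for i in range(n)]

    def row_op(i, j, k):   # row i += k * row j (applied to D and R)
        D[i] = [a + k * b for a, b in zip(D[i], D[j])]
        R[i] = [a + k * b for a, b in zip(R[i], R[j])]

    def col_op(i, j, k):   # col i += k * col j (applied to D and C)
        for r in D: r[i] += k * r[j]
        for r in C: r[i] += k * r[j]

    def swap_rows(i, j):
        D[i], D[j] = D[j], D[i]; R[i], R[j] = R[j], R[i]

    def swap_cols(i, j):
        for r in D: r[i], r[j] = r[j], r[i]
        for r in C: r[i], r[j] = r[j], r[i]

    for t in range(n):
        while True:
            # move a nonzero entry of least magnitude to the pivot (t, t)
            entries = [(abs(D[i][j]), i, j) for i in range(t, n)
                       for j in range(t, n) if D[i][j] != 0]
            if not entries:
                break
            _, i, j = min(entries)
            if i != t: swap_rows(t, i)
            if j != t: swap_cols(t, j)
            # reduce the rest of row t and column t by the pivot
            for i in range(t + 1, n):
                if D[i][t]: row_op(i, t, -(D[i][t] // D[t][t]))
            for j in range(t + 1, n):
                if D[t][j]: col_op(j, t, -(D[t][j] // D[t][t]))
            if any(D[i][t] for i in range(t + 1, n)) or \
               any(D[t][j] for j in range(t + 1, n)):
                continue  # nonzero remainders left; repeat with a smaller pivot
            # enforce divisibility: the pivot must divide every later entry
            bad = next(((i, j) for i in range(t + 1, n)
                        for j in range(t + 1, n) if D[i][j] % D[t][t]), None)
            if bad is None:
                break
            row_op(t, bad[0], 1)  # fold the offending row in and restart
        if D[t][t] < 0:           # normalize the sign of the pivot
            D[t] = [-v for v in D[t]]
            R[t] = [-v for v in R[t]]
    return R, D, C
```

Applied to the 6x6 optimal basis of the example problem solved later in this chapter, the routine yields a diagonal whose product is 19,400, the order of the associated Abelian group.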
Step III is a direct implementation of matrix algebra and number theory. The diagonalized matrix B̄ is substituted for B in the set of solutions of the constraints

BXB + NXN = b (2b)
XB, XN ≥ 0 and integer.

This is done by relaxing the non-negativity requirements on XB and multiplying the equality constraint by R, which yields

RBXB + RNXN = Rb
XB integer, XN ≥ 0 and integer.

Using RB = B̄C⁻¹ and YB = C⁻¹XB, the following is obtained:

B̄YB + RNXN = Rb
YB integer, XN ≥ 0 and integer.

In this solution set, YB may take on integer values if and only if

RNXN ≡ Rb (mod d)
XN ≥ 0 and integer,

where d = (d1, ..., dm) is the diagonal of B̄, the i-th congruence is taken modulo di, and d1·d2···dm = D. This is an equivalent set of constraints for (2b), with the non-negativity requirements on XB relaxed. Garfinkel and Nemhauser [1972] provide simple examples to demonstrate numerically the construction of the group knapsack problem.
Step IV exploits an important conclusion reached by Zionts [1969]: "It appears therefore, that there may be possible fruitful gains in research with upper bounds." Austin [1979] and Austin and Hanna [1983] have also demonstrated the usefulness of upper bounds to create an advanced starting point for their dual all-integer algorithm.

Whenever explicit upper bounds are unavailable and/or the size of the Abelian group is relatively large, the techniques suggested by Austin [1979] or Austin and Hanna [1983] are used to generate implicit bounds. In the few cases in which structural techniques fail, upper bounds can be obtained judgmentally from experienced decision makers. This is a very important step of the GTPEA, since having tight upper bounds for the non-basic variables dramatically enhances the speed of the partial dynamic enumeration in the following step.
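One simple structural device of this kind can be sketched as follows; it is an illustration of the idea, not the specific techniques of Austin [1979] or Austin and Hanna [1983]. A constraint row whose coefficients are all non-negative bounds each variable appearing in it with a positive coefficient:

```python
def implicit_upper_bounds(A, b):
    """For constraints A x <= b with x >= 0: a row i whose coefficients are
    all non-negative gives x[j] <= b[i] // A[i][j] whenever A[i][j] > 0.
    Take the tightest such row per variable (None if no row applies)."""
    n = len(A[0])
    bounds = []
    for j in range(n):
        cand = [b[i] // A[i][j] for i in range(len(A))
                if A[i][j] > 0 and all(a >= 0 for a in A[i])]
        bounds.append(min(cand) if cand else None)
    return bounds

# Constraint rows of Haldi's fixed-charge problem #5, solved later in this
# chapter; the bound of 75 on X5 agrees with the Step IV computation there.
A = [[20, 30, 1, 2, 2], [30, 20, 2, 1, 2], [-60, 0, 1, 0, 0],
     [0, -75, 0, 1, 0], [1, 0, 0, 0, 0], [0, 1, 0, 0, 0]]
print(implicit_upper_bounds(A, [180, 150, 0, 0, 1, 1]))  # [1, 1, 75, 90, 75]
```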
In Step V, the Partial Dynamic Enumeration Algorithm (PDEA) is invoked to generate the optimal or the sub-optimal solution of the group knapsack problem, which solves the ILP problem. Phase I of the PDEA is composed of four basic steps, which provide the link between the group theoretic approach and the partial dynamic enumeration scheme (Phase II) of the PDEA. Step 1 arranges the non-basic variables of the mth modular constraint of the group knapsack problem in descending order, according to the values of their coefficients in the objective equation. This is done in order to accelerate the generation of optimal and/or "good" sub-optimal solutions. Step 2 transforms the mth modular constraint into a linear equation, using a non-negative integer variable (I) that is less than "D." Despite its simplicity, it is the key step. Step 3 seeks to obtain an upper bound less than "D" for the variable "I," which is then used to change the sign of "I." Step 4 is an initialization of parameters and variables.
The partial dynamic enumeration scheme used in Phase II is a modification of the Dynamic Enumeration Scheme developed by Kuzdrall and Ruparel [1982]. Upper bounds on the non-basic variables are used to reduce the number of solutions generated. Some steps are also added to allow for checking the feasibility of XB after the generation of a number of solutions. Enumeration terminates when the optimal solution for the ILP problem is found. For technical details of the operation of the PDEA, see Appendix B.

Within the context of group theoretic methods in integer programming, the GTPEA would be considered an enumerative algorithm. In this perspective, it comes under the same classification as the enumerative algorithms developed by Shapiro [1968b], Gorry and Shapiro [1971], Glover [1966], and Ruparel [1983]. However, this algorithmic approach has not been explored before. Some of its steps have either been developed or modified in order to be exploited in the described methodology.
Example Problem

We solve the fixed-charge problem #5 of Haldi, given in Trauth and Woolsey [1969], to illustrate the operation and the effectiveness of the GTPEA. The order of the Abelian group "D" of the problem is equal to 19,400. The solution of this problem, among several other standard small problems that have large-order Abelian groups, has not been reported in the computational testing of either White [1966] or Shapiro [1968b].

Max. X3 + X4 + X5
S.t.
20X1 + 30X2 + X3 + 2X4 + 2X5 ≤ 180
30X1 + 20X2 + 2X3 + X4 + 2X5 ≤ 150
-60X1 + X3 ≤ 0
-75X2 + X4 ≤ 0
X1 ≤ 1
X2 ≤ 1
Xj ≥ 0 and integer for j = 1, ..., 5
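The instance is small enough that its integer optimum can be confirmed by a direct enumeration (a verification sketch only; it plays no part in the GTPEA): for fixed X1, X2, X3, X4, the best X5 is the largest value the first two constraints allow.

```python
def haldi_5_optimum():
    """Brute-force the example: maximize X3 + X4 + X5 subject to the six
    constraints above, all variables non-negative integers."""
    best = -1
    for x1 in (0, 1):
        for x2 in (0, 1):
            for x3 in range(60 * x1 + 1):       # -60*X1 + X3 <= 0
                for x4 in range(75 * x2 + 1):   # -75*X2 + X4 <= 0
                    r1 = 180 - 20 * x1 - 30 * x2 - x3 - 2 * x4
                    r2 = 150 - 30 * x1 - 20 * x2 - 2 * x3 - x4
                    if r1 >= 0 and r2 >= 0:
                        x5 = min(r1, r2) // 2   # X5 has coefficient 2 in both rows
                        best = max(best, x3 + x4 + x5)
    return best

print(haldi_5_optimum())  # 76
```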
The Steps of the GTPEA

Step I: The optimal solution of the relaxed LP problem is given in the following condensed tableau.

B.V \ NB.V      X5       X6       X7       X8       X9     Solution
X2            .007     .008    -.004     .001    -.011       0.773
X1            .009    -.005     .009    -.014     .001       0.510
X3            .526    -.294     .557     .180     .031      30.619
X4            .541     .580    -.309     .004     .149      57.990
X10          -.009     .005    -.009     .014    -.001       0.490
X11          -.007    -.008     .004    -.001     .011       0.227
Xo            .067     .286     .247     .219     .180      88.608

(Note that X6 to X11 are slack variables.)
Step II: The transformation of the optimal basis B into B̄ resulted in the following (the columns of B correspond to the basic variables X1, X2, X3, X4, X10, X11):

B =
[  20   30    1    2    0    0 ]
[  30   20    2    1    0    0 ]
[ -60    0    1    0    0    0 ]
[   0  -75    0    1    0    0 ]
[   1    0    0    0    1    0 ]
[   0    1    0    0    0    1 ]

B̄ = diag(1, 1, 1, 1, 5, 3880)

R is the 6x6 unimodular matrix produced by the procedure; matrix C is not written, because it is not needed for constructing the group knapsack problem. The group is a "cyclic sub-group," and D = 5 × 3880 = 19,400.
Step III: From the original problem, matrix N, vector XN, and vector b are obtained for the construction of the group knapsack problem (the columns of N correspond to X5, X6, X7, X8, X9):

N =
[ 2  1  0  0  0 ]
[ 2  0  1  0  0 ]
[ 0  0  0  1  0 ]
[ 0  0  0  0  1 ]
[ 0  0  0  0  0 ]
[ 0  0  0  0  0 ]

XN^T = (X5, X6, X7, X8, X9)
b^T = (180, 150, 0, 0, 1, 1)
The multiplication of RNXN and Rb, reduced modulo the diagonal elements of B̄, yields the set of congruences

RNXN ≡ Rb (mod d), d = (1, 1, 1, 1, 5, 3880).

The fifth and sixth rows give

-2X5 - 2X6 + X7 - 2X9 ≡ 65 (mod 5)
-762X5 - 761X6 + 380X7 + X8 + 1142X9 ≡ 24780 (mod 3880).

Thus, the group knapsack problem is:

Min. .067X5 + .286X6 + .247X7 + .219X8 + .180X9
S.t. 3X5 + 3X6 + X7 + 3X9 ≡ 0 (mod 5)
3118X5 + 3119X6 + 380X7 + X8 + 1142X9 ≡ 1500 (mod 3880)
X5, X6, X7, X8, X9 ≥ 0 and integer.

The first four rows of the constraints above are eliminated: they are taken modulo 1, so every coefficient reduces to zero.
Step IV: Since the order of the Abelian group is very large (19,400), the upper bounds of the non-basic variables X5, X6, X7, X8, and X9 are obtained from the original set of constraints, according to the following computations:

X5 = min [180/2, 150/2] = 75
X6 = S1 = 180
X7 = S2 = 150
X8 = S3 = 60
X9 = S4 = 75

If the order of the Abelian group were 100 instead of 19,400, then 100 would be used as an upper bound for the slack variables X6 and X7, since it is smaller than 180 and 150.
Step V: The two phases of the PDEA are invoked in this step.

Phase I. Steps 1 and 2 yield this special knapsack problem.

Min. .286X6 + .247X7 + .219X8 + .180X9 + .067X5
S.t. 3119X6 + 380X7 + X8 + 1142X9 + 3118X5 - 3880I = 1500
I < 3880
X6, X7, X8, X9, X5, & I ≥ 0 and integer.

From the upper bounds of the non-basic variables (XN) and the mth congruence equation of the group knapsack problem, the upper bound of I is

I = Min [240, 3880] = 240.

Thus (substituting I' = 240 - I) the special knapsack problem can be written as

Min. .286X6 + .247X7 + .219X8 + .180X9 + .067X5
S.t. 3119X6 + 380X7 + X8 + 1142X9 + 3118X5 + 3880I' = 932700
X6, X7, X8, X9, X5, & I' ≥ 0 and integer.
Phase II. The partial dynamic enumeration yields the following solution by investigating 55 integer solutions:

X6 = 0, X7 = 0, X8 = 38, X9 = 23, X5 = 2, and I' = 232.

Ignoring the solution of I' and substituting the solution of XN in the relation

XB = B⁻¹b - B⁻¹NXN,

which is available in equation form in the condensed tableau, the following optimal all-integer solution for the ILP problem is obtained (in 1.534 seconds of VAX 11/780 CPU time):

X1 = 1, X2 = 1, X3 = 22, X4 = 52, X5 = 2.

It should be noted that if tight upper bounds such as 10, 10, 50, 50, 10 were available for the non-basic variables X6, X7, X8, X9, and X5, respectively, the upper bound of I would be 30. Accordingly, the constraint of the special knapsack problem would be

3119X6 + 380X7 + X8 + 1142X9 + 3118X5 + 3880I' = 117900.

The partial dynamic enumeration yields the same solution above for XN by investigating only four integer solutions (requiring only 0.181 seconds of VAX 11/780 CPU time).
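The reported solution can be checked mechanically against the original constraints and against the two non-trivial congruences of the group knapsack problem (a verification sketch, not part of the algorithm):

```python
# Optimal solution reported above.
x1, x2, x3, x4, x5 = 1, 1, 22, 52, 2

# Original constraints of Haldi's problem #5, and the objective value.
assert 20*x1 + 30*x2 + x3 + 2*x4 + 2*x5 <= 180
assert 30*x1 + 20*x2 + 2*x3 + x4 + 2*x5 <= 150
assert -60*x1 + x3 <= 0 and -75*x2 + x4 <= 0
assert x1 <= 1 and x2 <= 1
assert x3 + x4 + x5 == 76

# Slack values implied by the solution match the enumerated XN.
x6 = 180 - (20*x1 + 30*x2 + x3 + 2*x4 + 2*x5)
x7 = 150 - (30*x1 + 20*x2 + 2*x3 + x4 + 2*x5)
x8 = 60*x1 - x3
x9 = 75*x2 - x4
assert (x6, x7, x8, x9) == (0, 0, 38, 23)

# The two congruences of the group knapsack problem.
assert (3*x5 + 3*x6 + x7 + 3*x9) % 5 == 0
assert (3118*x5 + 3119*x6 + 380*x7 + x8 + 1142*x9) % 3880 == 1500

print("solution verified")
```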
CHAPTER IV
COMPUTATIONAL TESTING

Several algorithms have been developed and proven theoretically to solve the ILP model in a finite number of steps. However, in practice these algorithms have experienced limited success due to the following factors: (a) round-off errors that evolve during computations; (b) high computer storage requirements; and/or (c) the amount of CPU time required to obtain the solution. Thus, it is a standard practice in this area of research to test the efficiency and/or the effectiveness of any theoretical approach by implementing it on a computer for actual computational testing. On the basis of the computational results, a good judgment can be made of the performance of the algorithm in solving different classes of ILP models. Past experience indicates that although some techniques are very efficient in solving certain classes of ILP formulations, they exhibit limited or poor efficiency with other ILP formulations. For example, all-integer cutting plane algorithms are known to be efficient in solving scheduling problems, but they encounter extreme difficulties in solving highly degenerate models. Many other examples that expose the limitations of other ILP algorithms are available in the literature.
General Aspects

Difficulty of Comparison

Unfortunately, there is no absolute basis for evaluating the efficiency of ILP algorithms. Also, establishing a relative basis of comparison between the efficiency of a certain algorithm and other existing algorithms is not appropriate in most cases, because such a comparison depends on many factors. This is particularly true when the algorithm under consideration is hybrid and/or does not have distinct "SIMPLEX-like" iterations. Nemhauser and Pierskalla [1979] have suggested CPU time and storage requirements as possible bases of comparison. However, CPU time depends on several factors, such as the computer type used, the state of the system, the skill of the programmer, and the type of language used for coding.
In order to overcome this difficulty and also to avoid the effect of the above factors, we report the CPU time of the iterative steps (Steps I, II, and V) of the GTPEA. These steps will be referred to as LP, TR, and PDEA, respectively. The CPU time of Step I (solving the relaxed linear programming problem) can be used as a basis for comparing the efficiency of Step II, Step V, and the overall efficiency of the GTPEA. But it should be understood that the theoretical structures of LP and ILP problems are completely different. As explained before, this makes ILP problems much more difficult to solve (NP-complete: Garey and Johnson [1979]). Therefore, we strongly believe that the focus should be on the ability of the GTPEA to solve general ILP problems in "reasonable" CPU times, and also to alleviate, to the extent possible, the major inherent shortcomings of the aforementioned group theoretic algorithms.
Computer Code

For the purpose of computational testing, the GTPEA is coded in FORTRAN 77, on the time-sharing system of the VAX 11/780 at Texas Tech University. The program is "user-friendly" and permits interactive experimentation with Step IV of the algorithm. Explicit upper bound values for the non-basic slack and/or decision variables can be entered by the user. More detail about this code is given in the flow-chart of the algorithm in Appendix C.
Definition of Symbols

The symbols used in the tables' headings are defined here for the reader's convenience.

n = number of decision variables
m = number of constraints
|det.| = absolute value of the determinant of the optimal linear programming basis (order of the associated Abelian group)
Sol. = the optimal integer solution obtained¹ for the problem using implicit upper bounds and/or explicit tight upper bounds
Gap = the difference between the integer part of the optimal LP relaxation and the integer optimum
I-LP = time for Step I of the algorithm to solve the LP problem relaxation
II-TR = time for Step II of the algorithm to transform the optimal basis into the Smith normal form
V-PDEA = time for the PDEA to obtain the optimal solution; in some tables this is further sub-divided into:
1st Sol. = time to obtain the first sub-optimal solution and
Opt. Sol. = time to obtain the optimal solution

The following sections contain descriptions of the sets of ILP problems that have been solved, the tables, and the analysis of computational results.

¹We double-check this by re-solving the same problem several times using different values of tight upper bounds for the non-basic slack variables. This is done easily, because Step IV of the algorithm is interactive.
Standard Sets of Test Problems

Austin's [1979] Problem Set

The problems in this set are rather small, but they constitute a wide variety of problems that were designed to present difficulties for cutting plane algorithms. Some of them are highly degenerate. This set was used only to debug the computer program and to ensure that the GTPEA generates the correct optimal solution(s) for each problem. The algorithm solved all problems correctly and generated alternative optimal solutions for many of them.
Trauth and Woolsey's [1969] Problem Set

This set is well known in the literature. Several ILP algorithms have been tested on the problems of this set. Despite their relatively small size, these problems are characterized as "hard" by Trauth and Woolsey. Ten of the problems are fixed-charge problems, developed by Dr. John Haldi at Stanford University in 1965, and nine are 0-1 allocation problems. Table 1 contains the computational results for the fixed-charge problems. Tables 2 and 3 report the computational results for the allocation problems. The allocation problems were solved twice: first, the upper bound constraints on the decision variables are considered explicitly; then the same set is solved without explicitly including these upper bound constraints in the model. We found that we could solve this type of problem (one constraint) easily by modifying the stopping criteria in Step V of the GTPEA.
Table 1
Results of Computational Testing for Trauth and Woolsey's Test Problems (The Fixed-Charge Problems)

                 Properties of the ILP              CPU time† of the iterative steps
P#    n    m       |det.|     Sol.   Gap    I-LP   II-TR   1st Sol.   Opt. Sol.
1     5    4          183       7     0      14      2                     20
2     5    4          258       8     0      16      2                     22
3     5    4          320      10     0      15      2                     25
4     5    4          205       8     0      14      2                     17
5     5    6        19400°     76    12      18      6        185        1538
6     5    6       320000     106    12      17      6        112        2632
7     5    4       194000      76    12      15      3        182        1526
8     5    4       320000     106    12      15      2        114        2633
9     5    6        20000       9     3      19      5                     18
10   12   10    731808000      17     1      40     46                 251053*

† CPU time is given in milliseconds.
° indicates that the group is a cyclic sub-group.
* a common upper bound of 15 is used for the non-basic slack variables.
Table 2
Results of Computational Testing for Trauth and Woolsey's Test Problems (The 0-1 Allocation Problems with Explicit Upper Bound Constraints)

         Properties of the ILP        CPU time† of the iterative steps
P#    n    m   |det.|   Sol.   Gap    I-LP   II-TR   V-PDEA   Total
1    10   11     20      50     0      36      18       17       71
2    10   11     18      52     2      36      23       19       78
3    10   11     18      57     1      37      23       18       78
4    10   11     18      62     0      36      23       14       73
5    10   11     18      67     0      35       -        -       35
6    10   11     25      68     2      37      19       16       72
7    10   11     25      70     4      35      20       19       74
8    10   11     25      75     2      37      20       20       77
9    10   11     25      85     0      37       -        -       37

† CPU time is given in milliseconds.
- indicates that the LP solution is all-integer.
Table 3
Results of Computational Testing for Trauth and Woolsey's Test Problems (The 0-1 Allocation Problems without Explicit Upper Bound Constraints)

         Properties of the ILP     CPU time† of the iterative steps
P#    n    m   |det.|   Sol.   Gap    I-LP   V-PDEA   Total
1    10    1     20      50     0      13       17       30
2    10    1     18      52     2      14       22       36
3    10    1     18      57     1      13       20       33
4    10    1     18      62     0      14       25       39
5    10    1     18      67     0      15       24       39
6    10    1     25      68     2      13       25       38
7    10    1     25      70     4      15       34       49
8    10    1     25      75     2      15       31       46
9    10    1     25      85     0      13       36       49

† CPU time is given in milliseconds.
Randomly Generated Test Problems

The VAX 11/780 random number generator function was used to generate random numbers within the specified intervals of the parameters from a uniform distribution. The density of the generated matrix of the technological coefficients is always set equal to 100% (unless zero is an admissible coefficient), thus increasing the difficulty of the problem. However, a simple statistical technique is incorporated in the code to allow the user to set the density at any desired level (100%-0%).
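The generator just described can be sketched as follows; Python's random module stands in for the VAX routine, the parameter intervals follow category 1 of Table 4, and the density mechanism shown is an assumption about how the dissertation's code works.

```python
import random

def generate_problem(n, m, c_range=(0, 30), b_range=(25, 125),
                     a_range=(0, 15), density=1.0, seed=None):
    """Draw a random ILP instance max cx s.t. Ax <= b, x >= 0 and integer,
    with all parameters uniform on the given closed intervals; entries of A
    are zeroed with probability 1 - density."""
    rng = random.Random(seed)
    c = [rng.randint(*c_range) for _ in range(n)]
    b = [rng.randint(*b_range) for _ in range(m)]
    A = [[rng.randint(*a_range) if rng.random() < density else 0
          for _ in range(n)] for _ in range(m)]
    return c, A, b

c, A, b = generate_problem(n=10, m=5, seed=1)
```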
As shown in Table 4, the first four types of problems generated are general; that is, the variables can assume any non-negative integer value. They are also arranged in increasing degree of difficulty. Category 4 is considered to be the most difficult because the signs of the cost coefficients (cj) and the technological coefficients (aij) are both allowed to be negative. The right-hand-side (r.h.s.) values of all four categories are chosen within a small range in order to increase the difficulty of the generated problems. As shown in the example given in Chapter II, tightening the r.h.s. values of the constraints might yield non-asymptotic problems that are hard to solve. The order and the range of these values are kept the same in all four categories in order to minimize the effect that might result from variations.

The types of problems generated in categories 5 and 6 are binary; that is, the variables can only assume values of 0 or 1. Category 6 is considered harder to solve because it allows some aij to be negative.
Table 4
Classification of the Randomly Generated Test Problems

Category #   Problem Type     cj          bi             aij        Density of Matrix A
1            General          (0, 30)     (25, 125)      (0, 15)    100%
2            General          (-5, 30)    (25, 125)      (0, 15)    100%
3            General          (0, 30)     (25, 125)      (-5, 20)   100%
4            General          (-5, 30)    (25, 125)      (-5, 20)   100%
5            0-1              (1, 10)     (5n/2, 5n)     (1, 10)    100%
6            0-1              (1, 10)     (6n/2, 6n)     (-4, 16)   100%
7            Knapsack         (10, 60)    (100, 120)     (10, 60)   100%
The specified interval of the r.h.s. of these two categories is a function of the average of aij and the number of decision variables (n). The general enumerative algorithm of Hillier [1969] considers 0-1 problems that resemble those in category 5.

In category 7 we generated and solved one-constraint knapsack problems in order to analyze the effect of increasing the number of decision variables on the CPU time of the GTPEA. For each n, ten problems were solved and the average figures are reported. A similar analysis has been done on an algorithm developed by Cabot [1970] to solve this kind of knapsack problem, and many other algorithms in the literature have been developed and devoted especially to this purpose. Incidentally, the maximum number of variables (n) considered in Cabot's study is 80. His results show that the average solution time appears to grow as n^1.75.

Tables 5 to 11 contain the computational results for all seven categories.

Analysis of Computational Results

The analysis of Trauth and Woolsey's problem set revealed the following:
Table 5
Results of Computational Testing for Randomly Generated Test Problems of Category 1

          Properties of the ILP           CPU time† of the iterative steps
P#     n    m   |det.|    Sol.   Gap    I-LP   II-TR   1st Sol.   Opt. Sol.
1     10    5     1898     136     7      22       5        20         846
2     10   10     5463      85     6      32      40                   376
3     10   25   130760      63    11      70    1406                  1052
4     10   50    21608      78    12     127   24479      1236        2199
5     25    5       20     112     3      41       4                    23
6     25   10       67      83    11      65      30                    26
7     25   25       80      70     2     134    1089                    27
8     25   50    11271      13    13     262   21428       854        1488
9     50    5     7451     216     3      74       4        71        183*
10    50   10      978     167     6     120      41                    44
11    50   25     9396     124     9     250    1109       241        6462
12    75    5    39320     381    11     111       4        69        3416
13    75   10    66792     306     8     183      52                   641
14   100    5     3130     510    10     136       3       156       1488*

† CPU time is given in milliseconds.
° indicates that the group is a cyclic sub-group.
* a common upper bound value of 5 is used for the non-basic slack variables.
Table 6
Results of Computational Testing for Randomly Generated Test Problems of Category 2

          Properties of the ILP            CPU time† of the iterative steps
P#     n    m    |det.|    Sol.   Gap    I-LP   II-TR   1st Sol.   Opt. Sol.
1     10    5        60     436     9      22       3                    17
2     10   10      1027      92     9      34      53                    65
3     10   25      1073      91     8      75    1157       110         136
4     25    5      6924     219    19      45       5       119        1460
5     25   10      1127     133    10      66      35                    32
6     25   25      5772     138     6     150    1405        49         177
7     50    5      1465     500    14      73       4       198        289*
8     50   10      3788     328    12     121      31      1144        3945
9     50   25   1126620     175    12     238    1568                  2391
10    75    5     10020     515     7     112       4                   139
11    75   10    462340     285    21     165      39     27083      98103*
12    75   25   1725120     126    16     340    1196      1101       41924
13   100    5     32700     373    16     145       3       281        1262
14   100   10     49680     239    13     214      30                  1168
15   100   25      8931     157    31     463    1319      8476       33293

† CPU time is given in milliseconds.
° indicates that the group is a cyclic sub-group.
* a common upper bound value of 10 is used for the non-basic slack variables.
Table 7
Results of Computational Testing for Randomly Generated Test Problems of Category 3

        Properties of the ILP            CPU time† of the iterative steps
P#    n    m   |det.|   Sol.   Gap     I-LP   II-TR
 1   10    5     1620    264    15       22       4
 2   10   10     3118     87    13       43      44
 3   10   15    52388     92    18       45     213
 4   10   25    66420     48    14       68    1382
 5   10   50   501637      0    15      129   13783
 6   25    5     6591    419    19       50       3
 7   25   10    48059    173    37       64      39
 8   25   15    42273    107    20       90     229
 9   25   25    43598     85    11      144    1497
10   50    5    22836    132    13       43       4
11   50   10    36376    139    15      117      61
12   75    5     3675    347    19      113       4
13  100    5    30000   1940    26      147       4

V-PDEA (1st Sol. and Opt. Sol.) times: 20, 38, 89, 115, 168, 1445,
2268, 930, 5327, 558, 5563, 516, -, 12519, 933*, 2077, 7537, 419*,
1854, 221, 386.

† CPU time is given in milliseconds. 0 indicates that the group is a cyclic sub-group.
* A common upper bound value of 10 is used for the non-basic slack
variables.
- No solution is found.
Table 8
Results of Computational Testing for Randomly Generated Test Problems of Category 4

        Properties of the ILP            CPU time† of the iterative steps
P#    n    m   |det.|   Sol.   Gap     I-LP   II-TR   V-PDEA (1st Sol. / Opt. Sol.)
 1   10    5    11400    306    20       25       5    1414
 2   10   10    30780     89     3       36      30    163
 3   10   15        9     52     6        4     127    18
 4   10   25    59545     37     5       72     547    751
 5   10   50     6776     56     6      131   18069    57
 6   25    5    14225    326    13       44       4    460 / 11461
 7   25   10    34232    233    22       67      46    4855 / 21364
 8   25   25    79090    104    11      139    1298    2700 / 14879
 9   50    5      552   1390    36       79       5    48 / 7740
10   75    5    56640    781    28      115       4    273*
11  100    5    12009    674    17      159       6    256931

† CPU time is given in milliseconds. 0 indicates that the group is a cyclic sub-group.
* A common upper bound value of 10 is used for the non-basic slack
variables.
Table 9
Results of Computational Testing for Randomly Generated Test Problems of Category 5

        Properties of the ILP            CPU time† of the iterative steps
P#    n     m   |det.|   Sol.   Gap     I-LP    II-TR   V-PDEA
 1   10    11       6      29     0       34       56       11
 2   10    13      10      16     1       38      111       15
 3   10    15       9      17     2       43      201       54
 4   25    26       7     149     0      135      644       25
 5   25    28      29      66     0      177      903       26
 6   25    30     426     107     0      151     2983       30
 7   50    51       8     267     0      499     3027       43
 8   50    53      10     196     1      449    25770       57
 9   50    55       4     203     0      483    31593       46
10   75    76      10     389     1     1160    83190       74
11   75    78       6     345     1     1217    45844      179
12   75    80      64     308     1     1207    70176      101
13  100   101       8     462     0     2325    69204       93
14  100   103      45     454     1     2415   194293      230
15  100   105     460     441     0     2251   244554      192

† CPU time is given in milliseconds. 0 indicates that the group is a cyclic sub-group.
Table 10
Results of Computational Testing for Randomly Generated Test Problems of Category 6

        Properties of the ILP            CPU time† of the iterative steps
P#    n     m   |det.|   Sol.   Gap     I-LP    II-TR   V-PDEA
 1   10    11      12      17     1       32       62       18
 2   10    13      15      15     1       39      109       16
 3   10    15      12      13     4       41      187       58
 4   25    26      13      87     1      138     1192       32
 5   25    28     150     119     3      149     1835      185
 6   25    30   75258     104     7      162     3549     2071
 7   50    51      16     258     0      500     2393       44
 8   50    53     144     211     2      482    28537      227
 9   50    55     142     196     1      562    35549       63
10   75    76      16     408     0      500     2393       44
11   75    78   10160     378     2     1221   166354     1485
12   75    80   28032     361     4     1327   188678     9557
13  100   101      14     503     0     2683    84877       93
14  100   103     176     480     0     2612   250550      480
15  100   105       9     488     0     2722   105439      164

† CPU time is given in milliseconds. 0 indicates that the group is a cyclic sub-group.
Table 11
Results of Computational Testing for Randomly Generated Test Problems of Category 7
(m = 1, and for each size of n, ten ILP problems are solved)

     Average Properties of Problems     Average CPU time† of Steps I & V
  n    |det.|   Sol.   Gap              I-LP   V-PDEA   Total (I+V)
 10        13    397    14                14       11            25
 20        12    402    18                21       20            41
 50        12    470    18                48       38            84
100        12    502    18                82       84           166
150        11    529    19               124      105           229
200        11    548    22               161      187           348
250        10    532    19               194      205           399
300        11    546    19               234      276           510
350        10    585    22               268      365           633
400        10    609    27               303      496           799
450        10    578    31               349      512           861
500        10    614    20               383      605           988

† CPU time is given in milliseconds.
The Fixed-Charge Problems (Table 1)
1. The results given in Table 1 demonstrate the ability of the
   algorithm to solve fixed-charge problems. As noted before, the
   solutions for some of these problems (P# 5, 6, 7, 8, and 10) were
   not reported by White [1966] or Shapiro [1968b]. The Bounded
   Enumeration Algorithm (BEA) of Ruparel [1983] performed poorly on
   fixed-charge models.
2. The solution times reported for the problems with relatively small
   "D" are very encouraging. However, this efficiency tends to decline
   when the size of "D" grows large.
3. Problem #10 is considered to be very difficult; as reported in
   Trauth and Woolsey [1969], four out of the five commercial codes
   tested did not yield the solution after 15,000 iterations. Of
   course this difficulty arises mainly from the very large size of
   "D." As shown in the table, the GTPEA solved this problem using an
   upper bound value of 15. This feature of the algorithm will always
   enhance its ability to overcome similar difficulties.
4. The algorithm yielded good sub-optimal solutions for four of the
   hard problems in a very short time.
5. The results shown give the solution time of the PDEA in obtaining
   the first optimal solution only; in fact, the algorithm is capable
   of generating all alternate optimum solutions if the run is not
   terminated.
The 0-1 Allocation Problems (Tables 2 and 3)
1. The results shown in Table 2 indicate that the algorithm is
   relatively insensitive to changes in the right-hand-side values.
2. The results obtained in Table 3 exhibit the versatility of the
   algorithm. By changing some of the stopping criteria, the PDEA
   was able to continue the search for the desired optimum. Thus we
   have avoided the need for explicitly using upper bound constraints
   and saved the CPU time required for transforming the basis
   (Step II), in addition to the saving that results from the
   reduction of the CPU time in solving the LP problem (Step I). The
   total saving of time for some problems exceeds 50%.
3. Despite the fact that these problems are small, the total CPU time
   of the algorithm given in both tables is very encouraging.
4. The algorithm again showed the ability to generate all alternative
   optimum solutions.
The following important points are drawn from the analysis of the
randomly generated test problems.
The General Problems (Tables 5, 6, 7, and 8)
1. The algorithm showed good potential to solve general-variable
   problems in all four categories. It solved problems with very
   large sizes of "D" and different degrees of difficulty throughout
   all four categories.
2. It is apparent that the CPU time of Step I is affected most by the
   number of decision variables (n), and to a lesser extent by the
   number of constraints (m). CPU time in Step II is driven by m and
   the size of "D." CPU time in Step V is affected by the size of "D"
   and the number of non-basic variables (n); it is also affected, to
   a certain degree, by the magnitude of the gap and the type of the
   group.
3. The CPU time of the PDEA seems to be very close to that of Step I
   when the size of "D" is relatively small. But when "D" becomes
   very large, the PDEA requires much more time to find the solution;
   in fact, we had to use tight upper bound values to obtain the
   solution of some problems within reasonable time.
4. In transforming the basis for such 100% dense randomly generated
   problems (which are not common in practice), the limitations of
   the computer in handling integers larger than 2,147 million (the
   32-bit limit of 2,147,483,647) restricted the experimentation on
   problems of larger size. This overflow usually occurs in matrix R
   when the group is "cyclic" and/or the size of "D" is more than six
   integer digits.
5. The algorithm has been able to yield sub-optimal solutions for some
   problems in much less time than that shown for obtaining the
   optimal solution. Also, when the run is continued, alternative
   solutions for some problems are generated.
The 0-1 Problems (Tables 9 and 10)
1. The CPU time shown for the iterative steps of the algorithm in both
   tables verifies the observation made in point #2 above for general
   problems.
2. For this class of problem, the size of "D," with few exceptions, is
   relatively small. This factor, in addition to the tight upper
   bound values (which are equal to 1 for the non-basic variables),
   makes the efficiency of the PDEA outperform the efficiency of the
   LP algorithm used in Step I.
3. The CPU time of Step II seems to grow considerably when the size of
   m increases; this in turn reduces the overall efficiency of the
   GTPEA. Because most of the constraints used are explicit upper
   bounds for the decision variables, we are currently considering the
   possibility of using a bounded LP algorithm in Step I of the GTPEA,
   to solve 0-1 problems in particular.
4. As is the case with other types of problems, the algorithm
   generated several alternate solutions for most of the problems
   solved. However, the cost of obtaining any alternate solution for
   this class of problems is very low, because of the relatively high
   efficiency of the PDEA.
The Knapsack Problems (Table 11)
1. The data and the graphical analysis of Figure 2 indicate that the
   average CPU time of Step I (LP) is a linear function of n for
   10 ≤ n ≤ 500, as would be expected. The relationship between the
   average CPU time of Step V (PDEA) and n is given by a curvilinear
   function that may be approximated by the following quadratic
   regression equation:

      T_PE = 0.6045n + 0.0013n²,

   where T_PE is the average CPU time of the PDEA. The multiple
   coefficient of determination (R²) of this statistical relation is
   equal to 0.996, and the model F-value and the significance
   probability (PR > F) are equal to 1230 and 0.0001, respectively.
   It should be noted that for n < 50, the average CPU time of the
   PDEA is slightly lower than that of the LP.
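The quadratic fit above can be reproduced from the Table 11 averages by ordinary least squares with no intercept term. A sketch in Python (variable names are illustrative; the 2x2 normal equations for the regressors n and n² are solved directly):

```python
# Average V-PDEA CPU times (ms) from Table 11, one value per problem size n
n = [10, 20, 50, 100, 150, 200, 250, 300, 350, 400, 450, 500]
t = [11, 20, 38, 84, 105, 187, 205, 276, 365, 496, 512, 605]

# Least-squares fit of T = a*n + b*n^2 (no intercept): solve the 2x2
# normal equations  [S22 S23; S23 S24] [a; b] = [S1t; S2t]
s22 = sum(x * x for x in n)
s23 = sum(x ** 3 for x in n)
s24 = sum(x ** 4 for x in n)
s1t = sum(x * y for x, y in zip(n, t))
s2t = sum(x * x * y for x, y in zip(n, t))
det = s22 * s24 - s23 * s23
a = (s1t * s24 - s23 * s2t) / det
b = (s22 * s2t - s23 * s1t) / det
print(f"T_PE = {a:.4f} n + {b:.5f} n^2")  # close to the reported 0.6045 and 0.0013
```

The recovered coefficients agree with the reported regression to within rounding of the published averages.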
2. The total curve of the average CPU time of the two steps is also
   non-linear, and is approximated by the following quadratic
   regression relation:

      T_GT = 1.4207n + 0.00115n²,

   where T_GT is the average CPU time of the GTPEA. The model's R²,
   F-value, and PR > F are equal to 0.998, 3725, and 0.0001,
   respectively.
[Figure 2 plots the GTPEA, PDEA, and LP curves fitted to the Table 11
data: T_GT = 1.4207n + 0.00115n² (also approximated by T_GT = n^1.11),
T_LP = 8.2033 + 0.7491n with R² = 0.9998, and T_PE = 0.6045n + 0.0013n².]

Figure 2: Plot of Results Obtained in Table 11
   In order to have a direct comparison with the result obtained
   by Cabot [1970] (in which the CPU time of the solution appears to
   grow as n^1.75), the following polynomial equation is used for
   approximating the same relationship:

      T_GT = n^1.11.

   This result compares very favorably with Cabot's n^1.75.
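The exponent can be checked against the Table 11 totals by least squares in log space: fitting T = n^p with the coefficient fixed at 1 reduces to the through-the-origin regression p = Σ(ln n · ln T) / Σ(ln n)². A sketch:

```python
from math import log

# Average total (Step I + Step V) CPU times (ms) from Table 11
n = [10, 20, 50, 100, 150, 200, 250, 300, 350, 400, 450, 500]
t = [25, 41, 84, 166, 229, 348, 399, 510, 633, 799, 861, 988]

# Fit T = n^p, i.e. ln T = p * ln n, by least squares through the origin
p = sum(log(x) * log(y) for x, y in zip(n, t)) / sum(log(x) ** 2 for x in n)
print(f"T_GT = n^{p:.2f}")  # close to the reported n^1.11
```

The fitted exponent lands near 1.11, well below Cabot's empirical growth rate of n^1.75.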
3. For almost all of the 120 problems solved, the algorithm generated
   several alternate solutions in very little marginal CPU time.
CHAPTER V
SUMMARY, CONTRIBUTIONS, LIMITATIONS, AND RECOMMENDATIONS FOR FURTHER RESEARCH

This chapter discusses the advantages and limitations of this
research effort and recommends areas for further investigation.
Moreover, the following section is included to summarize briefly the
basic concepts and the outcomes of the computational testing given in
the previous four chapters.

Summary

This dissertation presents a Group Theoretic Partial Enumeration
Algorithm (GTPEA) for solving all-integer programming problems. The
GTPEA synthesizes the group theoretic approach developed in the
asymptotic algorithm of Gomory and the Partial Dynamic Enumeration
Algorithm (PDEA) of Austin and Zaatari. The PDEA is an efficient
technique developed to solve the group knapsack problem and, if
required, to generate all of its feasible solutions. A brief
explanation of the group theoretic approach is given in order to
introduce the notation and the basic concept. A small but difficult
problem that illustrates the operation and the effectiveness of the
algorithm is solved as an example. The results of computational
testing on a standard set of test problems and randomly generated
problems are reported.

The analysis of the results indicates that the algorithm has the
ability to solve some standard all-integer programming problems, for
which group theoretic solutions have not been reported. In addition,
the algorithm has shown very promising potential in solving certain
special classes of all-integer programs.
Contributions

Abelian groups of high order increase the difficulty of solving
ILP problems. This is particularly true for group theoretic approaches.
The theoretical explanation of this issue has been addressed by Gomory
[1965] and discussed thoroughly by other researchers and authors in
this area. Actually, it is one of the factors that affect the
sufficient condition considered in the asymptotic theory of Gomory
[1965] (the other factors are the degree of degeneracy and the
magnitude of the right-hand side). That is to say, for a certain ILP
problem, if the size of the Abelian group is large, the degree of
degeneracy is high, and/or the magnitude of the right-hand side is
relatively small, then the problem is more likely non-asymptotic. This
implies that the optimal solution of the group problem will yield
infeasible solutions for the basic variables (X_B). Of course the
degree of difficulty is associated with the impact of the level of the
above factor(s). On these foundations the group theoretic approaches
are developed.

Some of the group theoretic algorithms discussed in Chapter II
strive to mitigate the above difficulties and to provide the sub-
optimal solution that yields a feasible X_B. None of them, however,
has reported the solution of some standard fixed-charge problems or
other similar problems. The GTPEA demonstrates its ability to solve such
problems, which is its major contribution to this area of research.
The specific contributions of the algorithm are as follows:
1. As a group theoretic approach, the GTPEA can solve moderately large
   non-asymptotic problems and highly degenerate problems. It also
   mitigates the difficulty that arises from the high order of Abelian
   groups by considering the mth modular constraint of the group
   knapsack problem (this helps only when the group is a "cyclic
   sub-group") and by allowing the use of explicit tight upper bounds
   for the non-basic variables to enhance its efficiency.
2. The algorithm exploits the group theoretic approach by using the LP
   relaxed solution as an advanced start toward the optimal ILP
   solution. This is a very advantageous step if the LP solution
   happens to be in the vicinity of the ILP optimum.
3. Unlike other dual-based algorithms, the GTPEA would yield, in some
   cases, good sub-optimal solutions if computations stopped
   prematurely. Moreover, it has the ability to generate other
   alternate solutions that might aid the decision maker in better
   planning.
4. The GTPEA avoids the problem of round-off errors usually
   encountered by fractional cutting plane and branch-and-bound
   methods. Also, it does not require large computer memory resources,
   as do branch-and-bound approaches.
5. With some modification, the algorithm can be geared to solve very
   large 0-1 knapsack problems (this is currently under investigation).
6. The GTPEA is a very efficient algorithm for solving ordinary
   knapsack problems. Because the algorithm is insensitive to changes
   in the magnitude of the right-hand-side (RHS) value, it would be
   more effective than employing dynamic programming and/or network
   programming algorithms.
7. Use of the algorithm reveals important information about the
   properties of the integer programming problem under consideration.
   This could be very helpful in constructing other similar models.
8. The interactive capability of the GTPEA allows direct user
   interventions to enhance the speed of the algorithm, and to search
   for another optimal and/or sub-optimal solution by changing the
   upper bounds and re-solving the same problem. This is actually a
   unique and powerful feature of the algorithm that gives it
   versatility.
Limitations

The GTPEA also has the following shortcomings.
1. In its present form, the algorithm can solve only maximization
   models. However, it can be adapted to solve minimization models
   as well.
2. The GTPEA solves pure integer programming (all-integer) models.
   The other important class of mixed integer programming (MIP) models
   cannot be solved by this algorithm.
3. Despite the efficiency of the algorithm used for transforming the
   basis into Smith's normal form, it appears that the accumulation
   of the numbers in the last rows (or columns) of matrix R, on large
   problems, results in integer overflow on the VAX 11/780. This is
   especially true when the group under consideration is "cyclic," the
   number of constraints is greater than or equal to 50, and the basic
   matrix is dense. Normally in such cases the order of the Abelian
   group is larger than six integer digits.
Recommendations for Further Research

1. As noted above, problems with Abelian groups of high order curtail
   the efficiency of group theoretic-based algorithms. The GTPEA has
   partially alleviated this difficulty by using explicit tight upper
   bounds for the non-basic variables. However, if other approaches
   (such as branch-and-bound or cutting plane methods) are
   incorporated after Step I to reduce the size of the determinant of
   the LP optimal basis, two advantages might be gained: 1) the
   problem of integer overflow will be mitigated; and 2) the
   efficiency of the PDEA will improve considerably.
2. In Step II of the GTPEA, a very efficient technique is used for
   the transformation. As explained above, the technique uses and
   updates one R matrix only, and this causes an integer overflow
   in some cases. In order to overcome this problem (on the
   VAX 11/780 and similar machines), other less efficient techniques
   that use several successive R matrices (e.g., Hu's [1970] technique)
   can be employed for the transformation. Another approach is to
   attempt to transform the optimal basis inverse (B⁻¹) instead of B.
3. For mixed-integer programming (MIP) models, this algorithm could be
   integrated into the Geoffrion [1972] decomposition technique and
   utilized to solve the generated all-integer program.

These seem to be potentially promising areas for further research
in the field of integer programming.
REFERENCES

References Cited

Austin, L. M., "IP Test Problem Set," Unpublished listing, College of Business Administration, Texas Tech University, 1978.

Austin, L. M., "The Bounded Descent Algorithm for Integer Programming," Unpublished Monograph, College of Business Administration, Texas Tech University, 1979.

Austin, L. M., and A. T. Zaatari, "A Partial Dynamic Enumeration Algorithm for Generating the Solutions of the Group Knapsack Problem," Working Paper, College of Business Administration, Texas Tech University, 1984.

Austin, L. M., and B. C. Ruparel, "The Mixed Cutting Plane Algorithm for All-Integer Programming," forthcoming in Computers and Operations Research (1985).

Austin, L. M., and M. E. Hanna, "A Bounded Dual (All-Integer) Algorithm for All-Integer Programming," Naval Research Logistics Quarterly, Vol. 30(2), pp. 271-281 (1983).

Austin, L. M., and P. Ghandforoush, "An Advanced Dual Algorithm With Constraint Relaxation for All-Integer Programming," Naval Research Logistics Quarterly, Vol. 30(1), pp. 133-143 (1983).

Austin, L. M., and P. Ghandforoush, "A Surrogate Cutting Plane Algorithm with Constraint Relaxation for All-Integer Programming," forthcoming in Computers and Operations Research (1985).

Balas, E., "An Additive Algorithm for Solving Linear Programs With Zero-One Variables," Operations Research, Vol. 13(4), pp. 517-546 (1965).

Balas, E., "Intersection Cuts--A New Type of Cutting Planes for Integer Programming," Operations Research, Vol. 19(1), pp. 19-39 (1971).

Bradley, G. H., "Equivalent Integer Programs and Canonical Problems," Management Science, Vol. 17(5), pp. 354-366 (1971).

Bradley, G. H., and P. N. Wahi, "An Algorithm for Integer Linear Programming: A Combined Algebraic and Enumeration Approach," Operations Research, Vol. 21(1), pp. 45-60 (1973).
Burdet, C., "Enumerative Inequalities in Integer Programming," Mathematical Programming, Vol. 2(1), pp. 32-64 (1972).

Burdet, C., "Enumerative Cuts: I," Operations Research, Vol. 21(1), pp. 61-89 (1973).

Cabot, A. V., "An Enumeration Algorithm for Knapsack Problems," Operations Research, Vol. 18(2), pp. 306-311 (1970).

Chen, D. S., "A Group Theoretic Algorithm for Solving Integer Linear Programming Problems," Ph.D. Thesis, Department of Industrial Engineering, State University of New York at Buffalo, 1970.

Chen, D. S., and S. Zionts, "Comparison of Some Algorithms for Solving the Group Theoretic Integer Programming Problem," Operations Research, Vol. 24(6), pp. 1120-1128 (1976).

Crown, J. C., "Solution of Linear Algebraic Systems Using All-Integer Tableaux," Presented at the American Mathematical Society Meeting, Cincinnati, Ohio, 1982a.

Crown, J. C., "Generalized Simplex Algorithms," Presented at the ORSA/TIMS Meeting, San Diego, Calif., 1982b.

Dakin, R. J., "A Tree-Search Algorithm for Mixed Integer Programming Problems," Computer Journal, Vol. 8(3), pp. 250-255 (1965).

Dantzig, G. B., D. Fulkerson, and S. Johnson, "Solution of a Large Scale Traveling Salesman Problem," Operations Research, Vol. 2(4), pp. 393-410 (1954).

Denardo, E. V., and B. L. Fox, "Shortest-Route Methods: 2. Group Knapsacks, Expanded Networks, and Branch-and-Bound," Operations Research, Vol. 27(3), pp. 548-566 (1979).

Garey, M. R., and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman, San Francisco, 1979.

Garfinkel, R. S., and G. L. Nemhauser, Integer Programming, John Wiley, New York, 1972.

Geoffrion, A. M., "An Improved Implicit Enumeration Approach for Integer Programming," Operations Research, Vol. 17(3), pp. 437-454 (1969).

Geoffrion, A., "Generalized Benders Decomposition," Journal of Optimization Theory and its Applications, Vol. 10(4), pp. 237-260 (1972).
Ghandforoush, P., "A Constructive Primal-Dual Cutting Plane Algorithm for Integer Programming," DBA Dissertation, College of Business Administration, Texas Tech University, 1980.

Ghandforoush, P., and L. M. Austin, "A Primal-Dual Cutting Plane Algorithm for All-Integer Programming," Naval Research Logistics Quarterly, Vol. 28(4), pp. 559-567 (1981).

Ghandforoush, P., and L. M. Austin, "Solution Approaches for Highly Primal and Dual-Degenerate All-Integer Programming Problems," Computers and Operations Research, Vol. 12(1), pp. 91-95 (1985).

Glover, F., "A Multiphase-Dual Algorithm for the Zero-One Integer Programming Problem," Operations Research, Vol. 13(6), pp. 879-919 (1965).

Glover, F., "An Algorithm for Solving the Linear Integer Programming Problem Over a Finite Additive Group, with Extensions to Solving General and Certain Nonlinear Integer Programs," Operations Research Center Report 66-29, University of California at Berkeley, 1966.

Glover, F., "A Pseudo Primal-Dual Integer-Programming Algorithm," Journal of Research of the National Bureau of Standards, Vol. 71B(4), pp. 187-195 (1967).

Glover, F., "A New Foundation for a Simplified Primal Integer Programming Algorithm," Operations Research, Vol. 16(4), pp. 727-740 (1968a).

Glover, F., "Surrogate Constraints," Operations Research, Vol. 16(4), pp. 741-749 (1968b).

Glover, F., "Convexity Cut and Cut Search," Operations Research, Vol. 21(1), pp. 123-134 (1973).

Gomory, R. E., "Outline of an Algorithm for Integer Solutions to Linear Programs," Bulletin of the American Mathematical Society, Vol. 64, pp. 275-278 (1958).

Gomory, R. E., "An Algorithm for Integer Solutions to Linear Programs," in Recent Advances in Mathematical Programming (R. L. Graves and P. Wolfe, eds.), McGraw-Hill, New York, pp. 269-302 (1963a).

Gomory, R. E., "All-Integer Integer Programming Algorithm," in Industrial Scheduling (J. F. Muth and G. L. Thompson, eds.), Prentice-Hall, Englewood Cliffs, New Jersey, pp. 193-206 (1963b).
Gomory, R. E., "On the Relation Between Integer and Non-Integer Solutions to Linear Programs," Proceedings of the National Academy of Sciences, Vol. 53(2), pp. 260-265 (1965).

Gomory, R. E., "Faces of an Integer Polyhedron," Proceedings of the National Academy of Sciences, Vol. 57(1), pp. 15-18 (1967).

Gomory, R. E., "Some Polyhedra Related to Combinatorial Problems," Linear Algebra and Its Applications, Vol. 2(4), pp. 451-558 (1969).

Gorry, A. G., and J. F. Shapiro, "An Adaptive Group Theoretic Algorithm for Integer Programming Problems," Management Science, Vol. 17(5), pp. 285-306 (1971).

Gorry, A. G., J. F. Shapiro, and L. A. Wolsey, "Relaxation Methods for Pure and Mixed Integer Programming Problems," Management Science, Vol. 18(5), pp. 229-239 (1972).

Greenberg, H., "An Algorithm for the Computation of Knapsack Functions," Journal of Mathematical Analysis and Applications, Vol. 26(1), pp. 159-162 (1969a).

Greenberg, H., "A Dynamic Programming Solution to Integer Linear Programs," Journal of Mathematical Analysis and Applications, Vol. 26(2), pp. 454-459 (1969b).

Hanna, M. E., "An All-Integer Cutting Plane Algorithm with An Advanced Start," DBA Dissertation, College of Business Administration, Texas Tech University, 1981.

Hanna, M. E., and L. M. Austin, "An Advanced Start Algorithm for All-Integer Programming," forthcoming in Computers and Operations Research (1985).

Hillier, F. S., "A Bound-and-Scan Algorithm for Pure Integer Linear Programming with General Variables," Operations Research, Vol. 17(4), pp. 638-679 (1969).

Hu, T. C., Integer Programming and Network Flows, Addison-Wesley, Reading, Massachusetts, 1969.

Hu, T. C., "On the Asymptotic Integer Algorithm," Journal of Linear Algebra and Its Applications, Vol. 3(2), pp. 279-294 (1970).

Kannan, R., and A. Bachem, "Polynomial Algorithms for Computing the Smith and Hermite Normal Forms of an Integer Matrix," Society for Industrial and Applied Mathematics (SIAM) Journal of Computing, Vol. 8(4), pp. 499-507 (1979).
Kuzdrall, P. J., and B. C. Ruparel, "An Efficient Dynamic Enumeration Schema for All-Integer Programming," Working Paper, College of Business Administration, Texas Tech University, 1982.

Land, A. H., and A. G. Doig, "An Automatic Method for Solving Discrete Programming Problems," Econometrica, Vol. 28(3), pp. 497-520 (1960).

Martin, G. T., "An Accelerated Euclidean Algorithm for Integer Linear Programming," in Recent Advances in Mathematical Programming (R. L. Graves and P. Wolfe, eds.), McGraw-Hill, New York, pp. 311-318 (1963).

Mitten, L. G., "Branch-and-Bound Methods: General Formulation and Properties," Operations Research, Vol. 18(1), pp. 24-34 (1970).

Nemhauser, G., and W. Pierskalla, "Reporting Computational Experience in Operations Research," Operations Research, Vol. 27, pp. vii-x (1979).

Ruparel, B. C., "The Bounded Enumeration Algorithm for All-Integer Programming," DBA Dissertation, College of Business Administration, Texas Tech University, 1983.

Ruparel, B. C., L. M. Austin, and P. J. Kuzdrall, "The Bounded Enumeration Algorithm for All-Integer Programming," Working Paper, College of Business Administration, Texas Tech University, 1983.

Shapiro, J. F., "Dynamic Programming Algorithms for the Integer Programming Problem-I: The Integer Programming Problem Viewed as a Knapsack Type Problem," Operations Research, Vol. 16(1), pp. 103-121 (1968a).

Shapiro, J. F., "Group Theoretic Algorithms for the Integer Programming Problem-II: Extension to a General Algorithm," Operations Research, Vol. 16(5), pp. 928-947 (1968b).

Shapiro, J. F., "Turnpike Theorems for Integer Programming Problems," Operations Research, Vol. 18(3), pp. 432-440 (1970).

Taha, H. A., Integer Programming: Theory, Applications, and Computations, Academic Press, New York, 1975.

Trauth, C. A., and R. E. D. Woolsey, "Integer Programming: A Study in Computational Efficiency," Management Science, Vol. 15(9), pp. 481-493 (1969).

White, W., "On a Group Theoretic Approach to Linear Integer Programming," Operations Research Center Report 66-27, University of California at Berkeley, 1966.
Wolsey, L. A., "Extensions of the Group Theoretic Approach in Integer Programming," Management Science, Vol. 18(1), pp. 74-83 (1971).

Worm, G. H., C. D. Fiskaaux, and P. S. Wludyka, "A Computational Form for Finding the Integral Solutions to a System of Linear Equations," Computers and Mathematics with Applications, Vol. 7(5), pp. 405-406 (1981).

Young, R. D., "A Primal (All-Integer) Integer Programming Algorithm," Journal of Research of the National Bureau of Standards, Vol. 69B(3), pp. 213-249 (1965).

Young, R. D., "A Simplified Primal (All-Integer) Integer Programming Algorithm," Operations Research, Vol. 16(4), pp. 750-782 (1968).

Zionts, S., "Toward a Unifying Theory of Integer Linear Programming," Operations Research, Vol. 17(2), pp. 359-367 (1969).

Zionts, S., "Generalized Implicit Enumeration Using Bounds on Variables for Solving Linear Programs with Zero-One Variables," Naval Research Logistics Quarterly, Vol. 19(1), pp. 165-181 (1972).

Zionts, S., Linear and Integer Programming, Prentice-Hall, Englewood Cliffs, New Jersey, 1973.
Additional References

Austin, L. M., and J. R. Burns, Management Science: An Aid for Managerial Decision Making, Macmillan Publishing Company, New York.

Balas, E., "A Note on the Group Theoretic Approach to Integer Programming and the 0-1 Case," Operations Research, Vol. 21(1), pp. 321-322 (1973).

Balinski, M. L., "Integer Programming: Methods, Uses, Computation," Management Science, Vol. 12(3), pp. 253-313 (1965).

Beale, E. M. L., "Survey of Integer Programming," Operations Research Quarterly, Vol. 16(2), pp. 219-228 (1965).

Bell, D. E., "A Simple Algorithm for Integer Programs Using Group Constraints," Operations Research Quarterly, Vol. 28(2), pp. 453-.ba (1977).

Bell, D. E., "Efficient Group Cuts for Integer Programs," Mathematical Programming, Vol. 17(2), pp. 176-183 (1979).
Cushing, B. E., "The Application Potential of Integer Programming," Journal of Business, Vol. 43(4), pp. 457-467 (1970).

Dantzig, G. B., "Discrete-Variable Extremum Problems," Operations Research, Vol. 5(2), pp. 266-277 (1957).

Dantzig, G. B., "Notes on Solving Linear Programs in Integers," Naval Research Logistics Quarterly, Vol. 6(1), pp. 75-76 (1959).

Dantzig, G. B., "On the Significance of Solving Linear Programming Problems with Some Integer Variables," Econometrica, Vol. 28(1), pp. 30-44 (1960).

Dantzig, G. B., Linear Programming and Extensions, Princeton University Press, Princeton, New Jersey, 1963.

Geoffrion, A. M., and R. E. Marsten, "Integer Programming: A Framework and State-of-the-Art Survey," Management Science, Vol. 18(9), pp. 465-491 (1972).

Graves, R. L., and P. Wolfe, Recent Advances in Mathematical Programming, McGraw-Hill, New York, 1963.

Greenberg, H., Integer Programming, Academic Press, New York, 1971.

Griffin, H., Elementary Theory of Numbers, McGraw-Hill, New York, 1954.

Hillier, F. S., "Efficient Heuristic Procedures for Integer Linear Programming with an Interior," Operations Research, Vol. 17(4), pp. 600-637 (1969).

Hillier, F. S., and G. J. Lieberman, Introduction to Operations Research (3rd ed.), Holden-Day, San Francisco, 1980.

Jambekar, A. B., and D. I. Steinberg, "An Implicit Enumeration Algorithm for the All Integer Programming Problem," Computers and Mathematics with Applications, Vol. 4(1), pp. 15-13, 1978.

Jeroslow, R. G., "Comments on Integer Hulls of Two Linear Constraints," Operations Research, Vol. 19(4), pp. 1061-1069 (1971).

Kendall, K. E., and S. Zionts, "Solving Integer Programming Problems by Aggregating Constraints," Operations Research, Vol. 25(2), pp. 346-351 (1977).

Kwak, N. K., Mathematical Programming with Business Applications, McGraw-Hill, New York, 1973.
Mathis, S. J., "A Counterexample to the Rudimentary Primal Integer Programming Algorithm," Operations Research, Vol. 19(6), pp. 1518-1522 (1971).
Muth, J. F., and G. L. Thompson, Industrial Scheduling, Prentice-Hall, Englewood Cliffs, New Jersey, 1963.
Niven, I., and H. S. Zuckerman, An Introduction to the Theory of Numbers, John Wiley, New York, 1960.
Papadimitriou, C. H., "On the Complexity of Integer Programming," Journal of the Association for Computing Machinery, Vol. 28(4), pp. 765-768 (1981).
Saaty, T. L., Optimization in Integers and Related Extremal Problems, McGraw-Hill, New York, 1970.
Salkin, H. M., Integer Programming, Addison-Wesley, Reading, Massachusetts, 1975.
Veinott, A. F., Jr., and G. B. Dantzig, "Integral Extreme Points," Society for Industrial and Applied Mathematics (SIAM) Review, Vol. 10(3), pp. 371-372 (1968).
Wagner, H. M., Principles of Operations Research (2nd ed.), Prentice-Hall, Englewood Cliffs, New Jersey, 1975.
APPENDICES
A. THE SPECIAL KNAPSACK PROBLEM
B. THE PARTIAL DYNAMIC ENUMERATION ALGORITHM (PDEA)
C. FLOW-CHART OF THE GTPEA
APPENDIX A
THE SPECIAL KNAPSACK PROBLEM
Consider the group minimization problem given in Chapter II, page 15:

    Minimize    C̄_N X_N
    Subject to  B^{-1} N X_N = B^{-1} b  (mod 1)                    (3)
                X_N >= 0 and integer,

where C̄_N = C_N - C_B B^{-1} N >= 0.
As shown in Chapter III, pages 29-30, this set of constraints can
be mapped into an equivalent set of constraints by transforming the LP
optimal basis into the Smith normal form. This will yield the
following group knapsack problem:

    Minimize    C̄_N X_N
    Subject to  R N X_N = R b  (mod (d_1, ..., d_m))                (3a)
                X_N >= 0 and integer,

where d_1 d_2 ... d_m = D and R is the unimodular matrix used for the
transformation of the basis.
In the above problem, if we consider only the mth modular equation
of the constraints, the following problem is obtained:
    (3b)  Minimize    C̄_N X_N
          Subject to  R_m N_m X_Nm = R_m b_m  (mod d_m)
                      X_N >= 0 and integer,

where R_m is the mth row of matrix R, N_m is the mth row of matrix N,
X_Nm is the mth element of vector X_N, and b_m is the mth element of
vector b.
If the group under consideration is "cyclic," d_m is equal to D (the
order of the Abelian group) and any solution of problem (3b) is a
valid solution of problem (3a). However, in the case of a "cyclic
sub-group," only some of the solutions of problem (3b) are feasible for
problem (3a), but all the solutions of problem (3a) are feasible for
problem (3b). This can be proven easily, since the constraint of
problem (3b) is a sub-set of the set of constraints given in the group
knapsack problem.
According to basic results of number theory, the mth modular
constraint of problem (3b) can be transformed into a linear equation
using a non-negative integer variable I. This will yield what we call
the special knapsack problem, given below:

    (3c)  Minimize    C̄_N X_N
          Subject to  R_m N_m X_Nm - d_m I = R_m b_m
                      X_N >= 0 and integer,

where 0 <= I <= d_m. This is exactly an equivalent representation of
problem (3b).
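The equivalence between the modular form (3b) and the linear form (3c) can be checked numerically for a single constraint. The following Python sketch is illustrative only: the coefficients a, b, and d and the search range are invented, not taken from the dissertation. It enumerates non-negative x with a·x ≡ b (mod d) and confirms that each such x also satisfies a·x − d·I = b for some non-negative integer I.

```python
# Hedged sketch: verify that the modular constraint a*x = b (mod d)
# and the linear form a*x - d*I = b describe the same solution set.
# All numeric values below are illustrative examples.

def modular_solutions(a, b, d, x_max):
    """Non-negative x <= x_max with a*x congruent to b (mod d)."""
    return [x for x in range(x_max + 1) if (a * x - b) % d == 0]

def linear_solutions(a, b, d, x_max):
    """Non-negative x <= x_max such that a*x - d*I = b for some integer I >= 0."""
    sols = []
    for x in range(x_max + 1):
        if a * x >= b and (a * x - b) % d == 0:
            # I = (a*x - b) // d is then a non-negative integer
            sols.append(x)
    return sols

a, b, d, x_max = 7, 3, 5, 20
print(modular_solutions(a, b, d, x_max))  # same list from both forms
print(linear_solutions(a, b, d, x_max))
```

For these example values both functions return the same list of solutions, illustrating the equivalence claimed above.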
APPENDIX B
THE PARTIAL DYNAMIC ENUMERATION ALGORITHM (PDEA)
This algorithm has been developed especially to find the optimal
or sub-optimal solution of the non-basic variables (X_N) of the group
knapsack problem that in turn yields non-negative values for the basic
variables (X_B) of the LP problem. The PDEA solves the group knapsack
problem by first transforming it into the special knapsack problem
defined in Appendix A. Then it generates, in sub-sets, the integral
optimal and/or sub-optimal solutions of the special knapsack problem,
using a modified dynamic enumeration scheme that sorts out or maps the
non-negative integral solutions of the linear constraint. The upper
bounds on the variables of the constraint are used to avoid enumerating
non-promising solutions, which in turn enhances the speed of the
algorithm in generating the desired non-negative solutions. If the
considered group is a "cyclic sub-group," the feasibility of each
solution generated is checked against the other constraints of the
group knapsack problem. Only feasible solutions of X_N are used to
test whether any of them yields non-negative values of X_B; if one
does, it is saved. The same procedure continues until the optimal
solution is found or an external criterion is used for termination.
The steps of the PDEA are grouped and described in two phases. In
Phase I the group knapsack problem is transformed into the special
knapsack problem and the parameters are initialized. In Phase II the
partial dynamic enumeration scheme is invoked. The following
simplified notation of the special knapsack problem is necessary for
presentation of the algorithm.
The special knapsack problem can be written as

    Min.  Y_0 = C_1 X_1 + C_2 X_2 + ... + C_j X_j + ... + C_n X_n

    S.T.  a_1 X_1 + a_2 X_2 + ... + a_j X_j + ... + a_n X_n + d_m I' = RHS,

          0 <= X_j <= UB_j   (j = 1, ..., n),

          0 <= I' <= UB_0    (I' = UB_0 - I),
where C_j, a_j, d_m, RHS, and UB_j are known non-negative integer
constants, and X_j and I are non-negative integer variables. UB_0 is
an integer constant, which is calculated according to the following
relation:

    UB_0 = Min( Σ_{j=1}^{n} a_j X_j (mod d_m), D ).
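The UB_0 relation can be illustrated in Python. Note two assumptions in this sketch: the numeric values of a_j, UB_j, d_m, and D are invented for the example, and the summation is taken with each X_j at its upper bound UB_j, which is one plausible reading of the relation above rather than something stated explicitly in the text.

```python
# Hedged sketch of the UB_0 relation. The values of a_j, UB_j, d_m,
# and D are illustrative, and evaluating the sum with each X_j at its
# upper bound is an assumption about the intended formula.

def compute_ub0(a, ub, d_m, D):
    """UB_0 = min( (sum_j a_j * UB_j) mod d_m, D )."""
    total = sum(aj * ubj for aj, ubj in zip(a, ub))
    return min(total % d_m, D)

a   = [3, 5, 2]   # illustrative constraint coefficients a_j
ub  = [4, 2, 6]   # illustrative upper bounds UB_j
d_m = 7           # illustrative group factor d_m
D   = 10          # illustrative group order D
print(compute_ub0(a, ub, d_m, D))
```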
The Steps of The PDEA
Phase I
Step 1. Arrange the non-basic variables of the objective equation and
the mth constraint of the group knapsack problem in descending
order, according to the values of their coefficients in the
objective equation.
Step 2. Transform this mth modular constraint into a linear equation,
using a non-negative integer variable I, where 0 <= I <= d_m.
Step 3. Obtain a value for UB_0 using the above relation and use it as
an upper bound to change the sign of I and update RHS.
Step 4. Set X_1, X_2, ..., X_j, ..., X_n of the linear constraint to
zero. Set NS equal to the desired number of solutions to be
generated in a sub-set. Set k = n and i = n.
Phase II
Step 1. Set X_k = X_k + 1, update RHS = RHS - a_k X_k,
and go to Step 2.
Step 2. If RHS/d_m is an integer, then set I' = RHS/d_m and record the
solution with the variables at their current values.
Otherwise go to Step 3.
Step 3. If the number of solutions recorded equals NS, then go to
Step 9.
Otherwise go to Step 4.
Step 4. If X_k < UB_k, then go to Step 1.
Otherwise go to Step 5.
Step 5. If i = k, then set i = i-1 and go to Step 7.
Otherwise, set k = k - 1, and go to Step 6.
1 This phase is a modification of the Dynamic Enumeration Scheme, originally developed by Kuzdrall and Ruparel [1982].
83
Step 6. If X_k < UB_k, then set X_k = X_k + 1, update RHS = RHS - a_k X_k,
set k = n, and go to Step 2.
Otherwise go to Step 5.
Step 7. If i = 0, then stop.
Otherwise go to Step 8.
Step 8. Set X_{i+1}, ..., X_n = 0, set X_i = X_i + 1,
update RHS = RHS - a_i X_i, and go to Step 2.
Step 9. If the group is a "cyclic sub-group," check the feasibility of
each solution; if none is feasible, then delete the recorded
solutions and go to Step 4.
Otherwise go to Step 10.
Step 10. If none of the feasible solutions generated yields non-negative
integer values for X_B, then delete the recorded solutions
and go to Step 4.
Otherwise go to Step 11.
Step 11. Stop, if the optimal solution is found or time has expired.
Otherwise go to Step 4.
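The enumeration at the heart of Phase II can be sketched in Python. This is a simplified, hedged rendering: instead of the incremental pointer bookkeeping of Steps 1-8 it uses a plain bounded enumeration, but the divisibility test of Step 2 and the batching of NS solutions into sub-sets (Step 3) follow the description above. All numeric values are invented for illustration.

```python
# Hedged, simplified sketch of Phase II of the PDEA: enumerate
# non-negative integer solutions of
#   a_1*x_1 + ... + a_n*x_n + d_m*I' = RHS,  0 <= x_j <= UB_j,  I' >= 0,
# recording them in batches of NS, as in Steps 2-3 of the text.
from itertools import product

def pdea_phase2(a, ub, d_m, rhs, ns):
    """Yield batches of up to `ns` solutions (x, I') of the special
    knapsack constraint, where x is a tuple (x_1, ..., x_n)."""
    batch = []
    for x in product(*(range(u + 1) for u in ub)):
        residual = rhs - sum(aj * xj for aj, xj in zip(a, x))
        # Step 2: record the solution when the residual is a
        # non-negative multiple of d_m, i.e., I' = residual / d_m
        # is a valid non-negative integer.
        if residual >= 0 and residual % d_m == 0:
            batch.append((x, residual // d_m))
            if len(batch) == ns:   # Step 3: a full sub-set of NS solutions
                yield batch
                batch = []
    if batch:                      # flush any final partial sub-set
        yield batch

a, ub, d_m, rhs, ns = [3, 5], [4, 4], 7, 13, 2
for batch in pdea_phase2(a, ub, d_m, rhs, ns):
    print(batch)
```

In the full algorithm each yielded sub-set would then be screened as in Steps 9-11: against the remaining group constraints when the group is a cyclic sub-group, and for non-negativity of the implied basic variables X_B.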
APPENDIX C
FLOW-CHART OF THE GTPEA
[The flow-chart is a diagram in the original; its boxes and decision
points are summarized below in text form.]

1. Start; read the problem data or a filename.
2. Solve the relaxed LP; if no solution exists, terminate.
3. Transform the LP optimal basis (RBC = B).
4. Construct the group knapsack problem.
5. If the user wishes to enter upper bounds for X_N, accept them;
   otherwise, obtain implicit upper bounds from the original data.
6. Invoke the Partial Dynamic Enumeration Algorithm.
7. If a common value is reached, terminate; otherwise, return to
   Step 6.