GENERALIZATION OF SIMPLEX METHOD WITH ANALYTICAL AND
COMPUTATIONAL TECHNIQUES FOR SOLVING LINEAR
PROGRAMMING PROBLEM
By
MONJUR MORSHED Student No. 100609003P
Registration No. 100609003P, Session: October-2006
MASTER OF PHILOSOPHY IN
MATHEMATICS
Department of Mathematics
Bangladesh University of Engineering & Technology Dhaka-1000, Bangladesh
December, 2010
GENERALIZATION OF SIMPLEX METHOD WITH ANALYTICAL AND
COMPUTATIONAL TECHNIQUES FOR SOLVING LINEAR
PROGRAMMING PROBLEM
A thesis submitted to the
Department of Mathematics, BUET, Dhaka-1000 in partial fulfillment of the requirements for the award of the degree of
MASTER OF PHILOSOPHY IN
MATHEMATICS
By
MONJUR MORSHED Student No. 100609003P
Registration No. 100609003P, Session: October-2006
Under the supervision of
Dr. Md. Abdul Alim Associate Professor
Department of Mathematics
Bangladesh University of Engineering & Technology Dhaka-1000, Bangladesh
December, 2010
The thesis titled
GENERALIZATION OF SIMPLEX METHOD WITH
ANALYTICAL AND COMPUTATIONAL TECHNIQUES FOR
SOLVING LINEAR PROGRAMMING PROBLEM
Submitted by MONJUR MORSHED
Student No. 100609003P, Registration No. 100609003P, Session: October-2006, a part-time student of M. Phil. (Mathematics), has been accepted as satisfactory in partial
fulfillment of the requirements for the degree of Master of Philosophy in Mathematics
on December 11, 2010
BOARD OF EXAMINERS
1. ______________________________________
   Dr. Md. Abdul Alim                          Chairman
   Associate Professor (Supervisor)
   Department of Mathematics, BUET, Dhaka

2. ______________________________________
   Head                                        Member
   Department of Mathematics, BUET, Dhaka      (Ex-Officio)

3. ______________________________________
   Dr. Md. Mustafa Kamal Chowdhury             Member
   Professor
   Department of Mathematics, BUET, Dhaka

4. ______________________________________
   Dr. Md. Elias                               Member
   Professor
   Department of Mathematics, BUET, Dhaka

5. ______________________________________
   Dr. Mohammad Babul Hasan                    Member
   Assistant Professor (External)
   Department of Mathematics, Dhaka University, Dhaka.
DEDICATION
Dedicated To
My Parents
Abstract
In this thesis we study the established traditional simplex method of Dantzig for
solving the linear programming problem (LP), in which one basic variable is replaced by
one non-basic variable at each simplex iteration. We propose a generalization of the
traditional simplex method that replaces more than one (P, where P ≥ 1) basic variables
by non-basic variables at each simplex iteration, and we compare the methods with one
another. To apply these methods to large-scale real-life linear programming problems,
computer-oriented programs of these methods are needed. To fulfill this purpose, we
developed computer programs for these methods in the Mathematica language and
applied them to sizable large-scale real-life linear programming problems: a production
problem of a garment industry and a textile mill scheduling problem. In this thesis we
also developed a computational technique, using Mathematica codes, that shows the
feasible region of two-dimensional linear programming problems accurately and also
gives the optimal solution. Finally, conclusions are drawn in favour of the developed
generalized simplex method.
Author’s Declaration
This is to certify that the work presented in this thesis is the outcome of the investigation
carried out by the author under the supervision of Dr. Md. Abdul Alim, Associate Professor,
Department of Mathematics, Bangladesh University of Engineering and Technology
(BUET), Dhaka-1000 and that it has not been submitted anywhere for the award of any
degree or diploma.
Monjur Morshed
Date: 11 December, 2010
Acknowledgements
The author would like to mention with gratitude Almighty ALLAH’S continual
kindness without which no work would reach its goal.
The author is highly grateful and obliged to his honorable supervisor Dr. Md. Abdul
Alim, Associate Professor, Department of Mathematics, BUET, Dhaka for his continuous
guidance, constant support, supervision, valuable suggestions, inspiration, infinite patience,
friendship and enthusiastic encouragement throughout this work.
The author expresses his deep regards to his honorable teacher, Dr. Md. Abdul Hakim
Khan, Professor and Head, Department of Mathematics, Bangladesh University of
Engineering and Technology for providing help, advice and necessary research facilities.
The author is also grateful to Prof. Dr. Md. Mustafa Kamal Chowdhury, the former
Head of the Department of Mathematics and Prof. Dr. Md. Elias, Prof. Dr. Md. Abdul
Maleque, Prof. Dr. Monirul Alam Sarkar, Prof. Dr. Nilufar Farhat Hossain, Department of
Mathematics, BUET, Dhaka for their wise and liberal co-operation in providing him all
necessary help from the department during his M. Phil. program. The author
would also like to extend his thanks to all respectable teachers, Department of Mathematics,
BUET, Dhaka for their constant encouragement.
The author thanks the members of the Board of Examiners, namely Prof. Dr. Md.
Abdul Hakim Khan, Prof. Dr. Md. Mustafa Kamal Chowdhury, Prof. Dr. Md. Elias, and Dr.
Mohammad Babul Hasan, for their contributions and for their flexibility and understanding
in helping him meet such an ambitious schedule.
The foundation for his education and success started at home. The author credits his
parents, Muhammed Sirajul Islam and Maleka Begum for shaping him into the person he is
today. Their unwavering love and support throughout his life have given him the confidence
and ability to pursue his academic and personal interests. The author expresses his heartfelt
gratitude and thanks to his beloved wife, sisters, family members and friends for their
constant encouragement during this work.
Finally, the author acknowledges the help and co-operation of all office staff of this
Department.
Contents
Abstract ............................................................................................................... v
Author’s Declaration ....................................................................................... vi
Acknowledgements ......................................................................................... vii
NOMENCLATURE ......................................................................................... xi
CHAPTER 1 ....................................................................................................... 1
INTRODUCTION ................................................................................................................... 1
1.1 Introduction: ....................................................................................................................... 1
1.2 Mathematical Model: ......................................................................................................... 3
1.3 Mathematical Programming: .............................................................................................. 4
1.4 Mathematical Programming problem or Mathematical Program (MP): ............................ 4
1.5 General Mathematical form of Linear Programming (LP): ............................................... 6
1.6 Formulation of Linear Programming Problem: .................................................................. 8
1.7 Standard Linear Programming: ........................................................................................ 10
1.7.1 Reduction to Standard Form: .................................................................................... 12
1.7.2 Feasible Canonical Form: ......................................................................................... 13
1.7.3 Relative Profit Factors: ............................................................................................. 14
1.7.4 Some Important Theorems of Standard Linear Program: ......................................... 15
1.8 A Real Life Production Problem of a Garment Industry (Standard Group): ................... 15
1.9 A Real Life Problem of a Textile Mill: ............................................................................ 20
1.9.1 Introduction: .............................................................................................................. 20
1.9.2 Textile Mill Scheduling problem: ............................................................................. 20
1.9.3 Formulation of the Textile Mill Scheduling problem: .............................................. 21
Chapter 2 .......................................................................................................... 25
Linear Programming Models: Graphical and Computer Methods ......................................... 25
2.1 Steps in Developing a Linear Programming (LP) Model: ............................................... 25
2.1.1 Properties of Linear Programming Models: ............................................................. 25
2.1.2 Mathematical Formulation of Linear Programming problem: .................................. 25
2.2 Graphical Method: ........................................................................................................... 26
2.2.1 Real Life Example of Model Formulation (Otobi Furniture Co.): ............................ 26
2.2.2 Graphical Solution: ................................................................................................... 28
2.3 LP Characteristics: ........................................................................................................... 30
2.3.1 Special Situation in LP: ............................................................................................ 30
2.4 Numerical Example-1: ..................................................................................................... 32
2.5 Mathematica Codes for Graphical Representation of Feasible Region: .......................... 33
2.5.1 Numerical Example- 2: ............................................................................................. 33
2.5.2 Numerical Example- 3: ............................................................................................. 35
2.6 Conclusion: ...................................................................................................................... 37
Chapter 3 .......................................................................................................... 38
SIMPLEX METHOD AND COMPUTER ORIENTED ALGORITHM FOR SOLVING LINEAR PROGRAMMING PROBLEMS ............................................................................ 38
3.1 Introduction: ..................................................................................................................... 38
3.2 Simplex Method: .............................................................................................................. 38
3.2.1 Computational steps for solving (LP) in simplex method: ....................................... 40
3.2.2 Properties of the Simplex Method: ........................................................................... 41
3.2.3 The standard form of (LP) is in canonical form: ...................................................... 42
3.2.4 The Standard Form of (LP) is Not in a Canonical Form: ......................................... 43
3.3 Artificial Variable Technique: ......................................................................................... 43
3.3.1 The Big-M Simplex Method: .................................................................................... 43
3.3.2 The Two-Phase Simplex Method: ............................................................................. 44
Chapter 4 .......................................................................................................... 46
MORE THAN ONE BASIC VARIABLES REPLACEMENT IN SIMPLEX METHOD FOR SOLVING LINEAR PROGRAMMING PROBLEMS ................................................ 46
4.1 Paranjape’s Two-Basic Variables Replacement Method for Solving (LP): ..................... 46
4.1.1 Algorithm: ................................................................................................................. 46
4.1.2 New Optimizing Value: ............................................................................................ 48
4.1.3 Optimality Condition: ............................................................................................... 49
4.1.4 Criterion-1: (Choices of the entering variables into the basis): ................................ 50
4.1.5 Criterion-2: (Choices of the outgoing variables from the basis): ............................. 50
4.2 Agrawal and Verma’s Three Basic Variables Replacement Method for Solving (LP): .. 51
4.2.1 Algorithm: ................................................................................................................. 51
4.2.2 New Optimizing Value: ............................................................................................ 54
4.2.3 Optimality Condition: ............................................................................................... 55
4.2.4 Criterion-1: (Choices of the entering variables into the basis): ................................ 55
4.2.5 Criterion-2: (Choices of the outgoing variables from the basis): ............................. 56
4.3 Numerical example: ......................................................................................................... 56
Chapter 5 .......................................................................................................... 60
GENERALIZATION OF SIMPLEX METHOD FOR SOLVING LINEAR PROGRAMMING PROBLEMS ........................................................................................... 60
5.1 P-Basic Variables Replacement Method for Solving (LP): ............................................. 60
5.1.1 Algorithm: ................................................................................................................. 60
5.1.2 New Optimizing Value: ............................................................................................ 65
5.1.3 Optimality Condition: ............................................................................................... 66
5.1.4 Criterion-1: (Choices of the entering variables into the basis): ................................ 66
5.1.5 Criterion-2: (Choices of the outgoing variables from the basis): ............................. 67
5.2 The Combined Algorithm: ............................................................................................... 67
5.3 Mathematica Codes: ......................................................................................................... 68
5.3.1 The combined program in Mathematica (Eugere, Wolfram): ................................... 69
5.3.2 Numerical Examples and Comparison: ..................................................................... 74
5.4 Solution of LP on a production problem of a garment industry (Standard Group) using combined program: ................................................................................................................ 76
5.5 Solution of LP on Textile Mill Scheduling problem using combined program: ... 78
5.6 Conclusion: ...................................................................................................................... 79
Chapter 6 .......................................................................................................... 81
COUNTER EXAMPLES OF MORE THAN ONE BASIC VARIABLES REPLACEMENT AT EACH ITERATION OF SIMPLEX METHOD .............................................................. 81
6.1 Introduction: ..................................................................................................................... 81
6.1.1 Numerical Example 1: ............................................................................................... 81
6.1.2 Numerical Example 2: ............................................................................................... 86
6.2 Conclusion: ....................................................................................................................... 91
Chapter 7 .......................................................................................................... 92
CONCLUSION ...................................................................................................................... 92
References .............................................................................................................................. 94
NOMENCLATURE
OR Operations Research
LP Linear Programming
LPP Linear Programming Problem
FPP Fractional Programming Problems
LFP Linear Fractional Program
LFPP Linear Fractional Programming Problem
MP Mathematical Program
NLP Non-Linear Program
NLPP Non-Linear Programming Problem
QPP Quadratic Programming Problem
CHAPTER 1
INTRODUCTION
1.1 Introduction:
Mathematical programming or linear programming is one of the most widely
used techniques in operations research. Many practical problems in operations research
can be expressed as linear programming (LP) problems. Certain special cases of linear
programming, such as network flow problems and multicommodity flow problems, are
considered important enough to have generated much research on specialized algorithms
for their solution. In many cases its application has been so successful that its use has
become an accepted routine planning tool. It is therefore rather surprising that
comparatively little attention has been paid to the problems of formulating and building
mathematical programming models, as well as to developing computer techniques for
solving linear programming problems.
The study of operations research is of great importance to researchers because
of its applications in many branches of science and engineering. Some of the earlier
researchers studied problems related to optimization techniques. George Bernard
Dantzig first developed the simplex method in 1947. The simplex method is an iterative
procedure for solving a linear program in a finite number of steps, and it provides all the
information about the program. Dantzig (1962) developed a solution method for solving
the linear programming problem (LP) by replacing one basic variable by one non-basic
variable at each simplex iteration. Assuming the compactness of the constraint set S and
applying the transformation y = tx, t > 0, Charnes and Cooper (1962) transformed the
linear fractional program (LFP) into two linear programs; solving either or both of these
linear programs solves the LFP. Paranjape (1965) developed a method which
replaces two basic variables by two non-basic variables at each iteration of the simplex
method for solving LP. Agrawal and Verma (1977) generalized the method of Paranjape
for solving LP by replacing three basic variables at each iteration. Kanchan (1976) extended
Paranjape’s method for solving the LFP, and Gupta and Sharma (1983) further extended
Kanchan’s method for solving the quadratic programming problem (QP). Forhad (2004)
compared different methods for solving the linear fractional programming problem.
In this research we have generalized the simplex method of one-variable
replacement to a simplex method of P-variable replacement, where P ≥ 1. We have also
developed a computer technique for solving LP problems by replacing more than one
basic variable by non-basic variables at each simplex iteration.
For the sake of self-containedness of the thesis, we first briefly discuss linear
programming models as well as graphical and computer methods in Chapter 2. In that
chapter we develop a computational technique, using Mathematica codes, that shows
the feasible region of two-dimensional linear programming problems and also
gives the optimal solution.
In Chapter 3, we briefly discuss the usual simplex method and a computer-oriented
algorithm for solving linear programming problems.
In Chapter 4, we present the more-than-one basic variable replacement methods of
Paranjape and of Agrawal and Verma for solving the linear programming problem (LP). We
also give a numerical example to demonstrate both methods.
In Chapter 5, we present the generalization of the simplex method for solving linear
programming problems. We also give a combined program in Mathematica for solving
large-scale real-life problems by more-than-one basic variable replacement methods.
In Chapter 6, we illustrate some counter examples concerning more-than-one
basic variable replacement at each iteration of the simplex method, treated graphically,
numerically, and by using our combined program in the programming language Mathematica.
Thus the method developed in this thesis is an extension of the traditional simplex-
type method, replacing more than one basic variable by non-basic variables at each
simplex iteration. A large-scale LP problem, which involves a large amount of data,
constraints and variables, cannot be handled analytically with pencil and paper. To
overcome the complexities of large-scale Linear Programming (LP) problems, we
develop a combined program in Mathematica for solving LP by more-than-one basic
variable replacement at each iteration of the simplex method. To illustrate this, we
solve a sizable large-scale LP problem, the Textile Mill Scheduling problem, which
is formulated in Section 1.9. To present our study, we require the following
prerequisites:
1.2 Mathematical Model:
Many applications of science make use of models. The term ‘model’ is usually
used for a structure that has been built purposely to exhibit features and characteristics of some
other object. Generally only some of these features and characteristics will be retained
in the model, depending upon the use to which it is to be put. More often in Operations
Research we will be concerned with abstract models. These models will usually be
mathematical, in that algebraic symbolism will be used to mirror the internal relationships
in the object (often an organization) being modeled. Our attention will mainly be
confined to such mathematical models, although the term ‘model’ is sometimes used
more widely to include purely descriptive models.
The essential feature of a mathematical model in Operations Research is that it
involves a set of mathematical relationships (such as equations, inequalities, logical
dependencies, etc.) which correspond to some down-to-earth relationships in the real world
(such as technological relationships, physical laws, marketing constraints, etc.).
There are a number of motives for building such models:
• The actual exercise of building a model often reveals relationships, which were not apparent to many people. As a result a greater understanding is achieved of the object being modeled.
• Having built a model, it is usually possible to analyse it mathematically to help suggest courses of action which might not otherwise be apparent.
• Experimentation is possible with a model whereas it is often not possible or desirable to experiment with the object being modeled. It would clearly be politically difficult as well as undesirable to experiment with unconventional economic measures in a country if there was a high probability of disastrous failure. The pursuit of such courageous experiments would be more (though not perhaps totally) acceptable on a mathematical model.
It is important to realize that a model is really defined by the relationships which
it incorporates. These relationships are, to a large extent, independent of the data in the model.
A model may be used on many different occasions with differing data, e.g. costs,
technological coefficients, resource availabilities, etc. We would usually still think of it
as the same model even though some coefficients had changed. This distinction is not, of
course, total: radical changes in the data would usually be thought of as changing the
relationships, and therefore the model.
1.3 Mathematical Programming:
Mathematical programming is one of the most widely used techniques in
Operations Research. In many cases its application has been so successful that its use has
passed out of Operations Research departments to become an accepted routine planning
tool. It is therefore rather surprising that comparatively little attention has been paid in
the literature to the problems of formulating and building mathematical programming
models, or even to deciding when such a model is applicable.
It should be pointed out immediately that mathematical programming is very
different from computer programming. Mathematical programming is ‘programming’ in
the sense of ‘planning’. As such, it need have nothing to do with computers. The
confusion over the use of the word ‘programming’ is widespread and unfortunate.
Inevitably, mathematical programming becomes involved with computing, since practical
problems almost always involve large quantities of data and arithmetic which can only
reasonably be tackled by the calculating power of a computer. The correct relationship
between computers and mathematical programming should, however, be understood.
The common feature which mathematical programming models share is that they all
involve optimization: we want to maximize or minimize something. The quantity which we
want to maximize or minimize is known as an objective function. Unfortunately, the realization
that mathematical programming is concerned with optimizing an objective often leads
people to summarily dismiss mathematical programming as inapplicable in
practical situations where there is no clear objective or there is a multiplicity of
objectives.
In this thesis we confine our attention to a special sort of mathematical
programming model, called a linear programming model, and its related problems.
1.4 Mathematical Programming problem or Mathematical Program (MP):
A Mathematical Programming problem or Mathematical Program (MP) deals with
the optimization (maximization or minimization) of a function of several variables
subject to a set of constraints (inequalities or equalities) imposed on the values of the
variables.
A general mathematical programming problem can be stated as follows:

(MP)  Maximize f(x)                                                  (1.1)
      Subject to x ∈ S = { x : g_i(x) ≤ 0, i = 1, 2, …, m }          (1.2)

where x = (x_1, x_2, …, x_n)^T is the vector of unknown decision variables and
f(x), g_i(x) (i = 1, 2, …, m) are real-valued functions of the n real variables
x_1, x_2, …, x_n. The function f is called the objective function and (1.2) is referred
to as the constraints.
The mathematical program in which all the functions appearing in it are linear in
the decision variables x is called a linear programming problem (LP). Among
mathematical programs, the linear programming problem (LP) is a well-known
optimization technique. The mathematical model of a linear programming problem (in its
canonical form) is as follows:

(LP)  Maximize Z = c^T x                                             (1.3)
      Subject to x ∈ S = { x ∈ R^n : Ax ≤ b, x ≥ 0 }                 (1.4)

where A is an m × n matrix, x, c ∈ R^n, b ∈ R^m, and c^T denotes the transpose of c.
The set S is normally taken as a connected subset of R^n.

We have stated the MP as a maximization problem. This has been done without any
loss of generality, since a minimization problem can always be converted into a
maximization problem using the identity

min f(x) = −max(−f(x)),

i.e., the minimization of f(x) is equivalent to the maximization of −f(x). Here the set S
is taken as the entire space R^n. The set X = { x ∈ S : g_i(x) ≤ 0, i = 1, 2, …, m } is
known as the feasible region, feasible set or constraint set of the program MP, and any
point x ∈ X, which satisfies all the constraints of MP, is a feasible solution or feasible
point of the program MP. If the constraint set X is empty (i.e. X = φ), then there is no
feasible solution; in this case the program MP is inconsistent.
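The identity min f(x) = −max(−f(x)) can be checked numerically. The following standalone sketch is illustrative only (the thesis's own programs are in Mathematica; the candidate points and objective here are invented for illustration): it compares direct minimization of a linear objective over a finite set of feasible points with maximization of its negation.

```python
# Illustrative candidate points (vertices of a small feasible region)
pts = [(0, 0), (4, 0), (0, 2), (3, 1)]

def f(p):
    # An illustrative linear objective f(x1, x2) = 3*x1 + 2*x2
    return 3 * p[0] + 2 * p[1]

min_f = min(f(p) for p in pts)       # direct minimization
max_neg = max(-f(p) for p in pts)    # maximize -f instead
assert min_f == -max_neg             # min f(x) = -max(-f(x))
```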
A feasible point x^0 ∈ X is known as a global optimal solution to the program MP if

f(x) ≤ f(x^0)  for all  x ∈ X.                                       (1.5)

A global optimal solution x^0 of the program MP is indeed a global maximum point
of the program MP. A point x^0 is said to be a strict global maximum point of f(x) over X
if the strict inequality (<) in (1.5) holds for all x ∈ X with x ≠ x^0.

A point x* ∈ X is a local or relative maximum point of f(x) over X if there exists
some ε > 0 such that

f(x) ≤ f(x*)  for all  x ∈ X ∩ N_ε(x*),

where N_ε(x*) is the neighborhood of x* having radius ε. Similarly, global
minimum and local minimum points can be defined by changing the sense of the inequality.

The MP can be broadly classified into two categories: unconstrained
optimization problems and constrained optimization problems. If the constraint set X is
the whole space R^n, the program MP is known as an unconstrained optimization
problem; in this case, we are interested in finding a point of R^n at which the objective
function has an optimum value. On the contrary, if X is a proper subset of R^n, the
program MP is a constrained optimization problem.
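The distinction between the two categories can be illustrated with a one-dimensional sketch (the function and interval are hypothetical examples, not from the thesis): minimizing f(x) = (x − 3)² over all of R gives the unconstrained optimum x = 3, while restricting X to [0, 1] moves the optimum to the boundary point x = 1.

```python
def f(x):
    # Illustrative objective with unconstrained minimum at x = 3
    return (x - 3) ** 2

# Approximate R by a fine grid; the grid contains x = 3 exactly.
xs = [i / 100 for i in range(-500, 1001)]
x_unc = min(xs, key=f)                              # unconstrained: x = 3
# Constrained problem: restrict to X = [0, 1]; f decreases toward 3,
# so the optimum sits on the boundary x = 1.
x_con = min((x for x in xs if 0 <= x <= 1), key=f)
assert x_unc == 3.0 and x_con == 1.0
```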
1.5 General Mathematical form of Linear Programming (LP):
If both the objective function and the constraint set are linear, then MP is called a
linear programming problem (LPP) or a linear program (LP).
On the other hand, non-linearity of the objective function or of the constraints gives rise
to a non-linear programming problem or non-linear program (NLP). Several
algorithms have been developed to solve certain NLPs.
The mathematical expression of a general linear programming problem (LP) is
as follows:

(LP)  Maximize (or Minimize)  Z = ∑_{j=1}^{n} c_j x_j

      Subject to  ∑_{j=1}^{n} a_{ij} x_j (≤, =, ≥) b_i,  i = 1, 2, …, m,      (1.6)

where one and only one of the signs ≤, =, ≥ holds for each constraint in (1.6),
and the sign may vary from one constraint to another.
Here c_j (j = 1, 2, …, n) are called profit (or cost) coefficients and x_j (j = 1, 2, …, n) are
called decision variables. The set of feasible solutions to (LP) is

S = { (x_1, x_2, …, x_n)^T : (x_1, x_2, …, x_n)^T ∈ R^n and (1.6) holds at (x_1, x_2, …, x_n)^T }.

The set S is called the constraint set, feasible set or feasible region of (LP).
In matrix-vector notation the above problem can be expressed as:

      Maximize (or Minimize)  Z = cx
      Subject to  Ax (≤, =, ≥) b,

where A is an m × n matrix, x is an (n × 1) column vector, b is an (m × 1) column vector
and c is a (1 × n) row vector.
Convex Set:
A set S ⊆ R^n is called a convex set if x_1, x_2 ∈ S implies λx_1 + (1 − λ)x_2 ∈ S for all 0 ≤ λ ≤ 1.
The empty and singleton sets are treated as convex sets. A set S is thus convex if the
line segment joining any two points of S lies in S. It should be noted that the number of points in
a convex set is zero, one or infinite.
Extreme Point:
Let S ⊆ R^n be a convex set. A point x ∈ S is called an extreme point or vertex of S if there
exist no two distinct points x_1 and x_2 in S such that

x = λx_1 + (1 − λ)x_2  for some 0 < λ < 1.
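The convexity definition can be made concrete numerically. The sketch below uses an illustrative polyhedral region (invented data, not a problem from the thesis) and checks that every convex combination λx₁ + (1 − λ)x₂ of two feasible points remains feasible:

```python
def feasible(x):
    # Illustrative region S: x1 + x2 <= 4, x1 + 3*x2 <= 6, x1, x2 >= 0
    x1, x2 = x
    eps = 1e-9
    return (x1 + x2 <= 4 + eps and x1 + 3 * x2 <= 6 + eps
            and x1 >= -eps and x2 >= -eps)

p, q = (4.0, 0.0), (0.0, 2.0)            # two feasible points of S
for k in range(11):
    lam = k / 10
    mid = (lam * p[0] + (1 - lam) * q[0],
           lam * p[1] + (1 - lam) * q[1])
    assert feasible(mid)                 # the whole segment lies in S
```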
1.6 Formulation of Linear Programming Problem:
The procedure for the mathematical formulation of a linear programming problem consists of
the following major steps:
Step 1:
Identify the unknown variables to be determined (decision variables) and represent them
in terms of algebraic symbols.
Step 2:
Formulate the other conditions of the problem, such as resource limitations, market
constraints and inter-relations between variables, as linear equations or inequalities in
terms of the decision variables.
Step 3:
Identify the objective or criterion and represent it as a linear function of the decision
variables, which is to be maximized or minimized.
Step 4:
Add the ‘non-negativity’ constraints, from the consideration that negative values of the
decision variables do not have any valid physical interpretation.
The objective function, the set of constraints and the non-negativity constraints together
form a linear programming problem.
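As a concrete illustration of Steps 1-4, the sketch below encodes a small hypothetical product-mix problem (all numbers and resource names are invented for illustration) as data, and checks a candidate plan against the constraints and objective:

```python
# Step 1: decision variables x = (x1, x2), units of two hypothetical products.
# Step 2: resource constraints, each stored as ((a1, a2), b),
#         meaning a1*x1 + a2*x2 <= b.
constraints = [((2.0, 1.0), 100.0),   # e.g. machine hours
               ((1.0, 3.0), 90.0)]    # e.g. labour hours
# Step 3: objective -- profit coefficients c, to be maximized.
c = (3.0, 5.0)

def feasible(x):
    # Step 4: non-negativity, together with the resource constraints.
    return (all(a1 * x[0] + a2 * x[1] <= b for (a1, a2), b in constraints)
            and all(v >= 0 for v in x))

def profit(x):
    return c[0] * x[0] + c[1] * x[1]

plan = (42.0, 16.0)                   # a candidate production plan
assert feasible(plan)
assert profit(plan) == 206.0
```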
We now recall the following basic results on the linear programming problem (LP)
from Kambo [1984] and Gass [1984].
Theorem 1.1
The constraint or feasible set of a linear programming problem is a convex set.
Proof:
Consider the linear programming problem

(LP)  Minimize z = ∑_{j=1}^{n} c_j x_j
      Subject to x ∈ S = { x : ∑_{j=1}^{n} a_{ij} x_j (≤, =, ≥) b_i, i = 1, 2, …, m }
                       = { x : a_i^T x (≤, =, ≥) b_i, i = 1, 2, …, m }.

We have to prove that S is a convex set. The definition tells us that S is an intersection
of hyperplanes H and closed half-spaces H+ and H−. By a previous theorem we know that the
sets H, H+, H−, H+0 and H−0 are all convex sets; in particular H, H+ and H− are convex sets.
Also, by another theorem, the intersection of any collection of convex sets is a convex set.
So S is a convex set. Hence the theorem is proved.
Theorem 1.2
The set of optimal solutions to the linear programming problem (LP) is a convex set.
Proof:
Let x0 = (x1^0, x2^0, ........., xn^0)T and y0 = (y1^0, y2^0, ........., yn^0)T be two optimal solutions to
program (LP). Then cTx0 = cTy0 = min z,
where c = (c1, c2, ......, cn)T. Since x0 and y0 are feasible for (LP) and the feasible set S is
convex, λ x0 + (1 − λ) y0 ∈ S for 0 ≤ λ ≤ 1.
Also cT (λ x0 + (1- λ) y0 )
= λ cTx0 + (1- λ)cTy0
= λ min z + (1- λ) minz
= min z
Hence λ x0+(1- λ)y0 is also an optimal solution for all 0≤ λ ≤1. This means that the set of all
optimal solutions to the linear programming Problem is a convex set.
Theorem 1.3 : (Fundamental Theorem)
Let the constraint set T be non-empty, closed and bounded. Then an optimal solution to
the Linear Problem (LP) exists and it is attained at a vertex of the constraint set T.
Proof:
Since T is non-empty and compact and Z = cT x is continuous, an optimal solution
exists. The number of vertices of the convex polyhedron T is finite. Let the vertices of T be
x1, x2, ........, xk (xi ∈ Rn for all i). Then the set T is equal to the convex hull of the points x1, x2,
............., xk. Thus any feasible point x ∈ T can be written as
x = Σ (i = 1 to k) λi xi,
where λi ≥ 0 (i = 1, ..........., k) and Σ (i = 1 to k) λi = 1.
Let Z0 = min { cT xi , i = 1, .........., k }. Then for any x ∈ T we obtain
Z = cT x = cT Σ (i = 1 to k) λi xi = Σ (i = 1 to k) λi cT xi
= λ1 cT x1 + λ2 cT x2 + ........... + λk cT xk
≥ (λ1 + λ2 + ........... + λk) Z0 = Z0.
Hence the minimum value of cT x over T is Z0 and is attained at a vertex of T.
1.7 Standard Linear Programming:
A problem of the form
(LP1) Maximize z = cT x
Subject to:
Ax = b    (1.7)
x ≥ 0    (1.8)
is known as a linear program in standard form. The characteristics of this form are:
All the constraints are expressed in the form of equations, except for the non-negative
restrictions.
The right hand side of each constraint equation is non-negative.
In (LP1), the m × n matrix A = (aij) is the coefficient matrix of the equality
constraints, b = (b1, b2, ......, bm)T is the vector of right hand side constants, the
component of c are the profit factors, x = (x1, x2, ......, xn)T ∈ Rn is the vector of
variables, called the decision variables and (1.8) are the non-negativity constraints. The
column vectors of the matrix A are referred to as activity vectors. We recall the following
definition for standard linear program.
Feasible Solution:
A vector x = (x1, x2, ......, xn)T is a feasible solution of the standard linear program (LP1) if it
satisfies conditions (1.7) and (1.8).
Basic Solution:
A basic solution to (1.7) is a solution obtained by setting (n − m) variables equal to
zero and solving for the remaining m variables, provided the determinant of the
coefficients of these m variables is non-zero. The m variables are called basic variables.
Basic Feasible Solution:
A basic feasible solution is a basic solution, which also satisfies (1.8) that is, all
basic variables are non-negative.
Degenerate Solution:
A basic feasible solution to (1.7) is called degenerate if one or more of the basic
variables are zero.
Non- degenerate Basic Feasible Solution:
A non-degenerate basic feasible solution is a basic feasible solution with exactly
m positive xi, that is, all basic variables are positive.
Optimal Solution:
A basic feasible solution is said to be an optimal solution if it maximizes the
objective function while satisfying conditions (1.7) and (1.8), provided the maximum
value exists.
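The definitions above can be made concrete in code. The sketch below (Python; the 2 × 4 system Ax = b is an invented example, not taken from this thesis) enumerates every choice of m = 2 basic variables, solves for them with the remaining n − m = 2 variables set to zero, and keeps the basic feasible solutions:

```python
# Sketch: enumerate all basic solutions of an invented system Ax = b with
# m = 2 equations and n = 4 variables, keeping the basic feasible ones.
from itertools import combinations

A = [[1, 1, 1, 0],   # x1 + x2 + x3      = 4
     [1, 3, 0, 1]]   # x1 + 3x2     + x4 = 6
b = [4, 6]

def solve2x2(B, rhs):
    """Solve a 2x2 system B y = rhs by Cramer's rule; None if B is singular."""
    det = B[0][0]*B[1][1] - B[0][1]*B[1][0]
    if abs(det) < 1e-12:
        return None
    y1 = (rhs[0]*B[1][1] - B[0][1]*rhs[1]) / det
    y2 = (B[0][0]*rhs[1] - rhs[0]*B[1][0]) / det
    return [y1, y2]

basic_feasible = []
for cols in combinations(range(4), 2):          # choose the 2 basic variables
    B = [[A[i][j] for j in cols] for i in range(2)]
    y = solve2x2(B, b)
    if y is None:
        continue                                 # singular basis: no basic solution
    x = [0.0]*4                                  # non-basic variables are zero
    for val, j in zip(y, cols):
        x[j] = val
    if min(y) >= 0:                              # feasibility: basic variables >= 0
        basic_feasible.append(tuple(round(v, 6) for v in x))
print(basic_feasible)
```

Of the six candidate bases, two give a negative basic variable and are rejected, so this system has four basic feasible solutions; none of them is degenerate.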
1.7.1 Reduction to Standard Form:
Every general linear program can be reduced to an equivalent standard linear
program as explained below.
(i) Conversion of right hand side constants to non-negative
If the right hand side constant of a constraint is negative, it can be made non-negative by
multiplying both sides of the constraint by −1 (if necessary).
( ii) Conversion of inequality constraint to equality
Slack Variable:
For an inequality constraint of the form
Σ (j = 1 to n) aij xj ≤ bi   (i = 1, 2, ......, m ; bi ≥ 0),
adding a non-negative variable xn+i to the left side turns it into the equation
Σ (j = 1 to n) aij xj + xn+i = bi   (i = 1, 2, ......, m),
and the non-negative variable xn+i is called a slack variable.
Surplus Variable :
For an inequality constraint of the form
Σ (j = 1 to n) aij xj ≥ bi   (i = 1, 2, ......, m ; bi ≥ 0),
subtracting a non-negative variable xn+i from the left side turns it into the equation
Σ (j = 1 to n) aij xj − xn+i = bi   (i = 1, 2, ......, m),
and the non-negative variable xn+i is called a surplus variable.
(iii) Making All Variables Non-Negative
All variables in the equivalent linear program can be made non-negative as follows:
i) If xi ≤ 0, then put xi′ = −xi; clearly xi′ ≥ 0.
ii) If xi is unrestricted in sign (i.e. a free variable), then
put xi = xi′ − xi′′, where xi′, xi′′ ≥ 0.
(iv) Conversion of a Minimization Problem
Since Min f(x) = −Max {−f(x)},
the minimization of f(x) over F is equivalent to the maximization of −f(x) over F. This
enables us to convert a minimization problem into an equivalent maximization problem
(if necessary).
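The conversions above are mechanical and can be sketched as a small routine. The following Python helper is an illustration (the three constraints in the example are invented); it first appends a slack or surplus variable to each inequality and then, since every row is an equation at that point, safely multiplies any row with a negative right hand side by −1:

```python
# Sketch: convert mixed inequality constraints into the equality form Ax = b
# by appending slack (for <=) or surplus (for >=) variables; the example
# data at the bottom are invented for illustration.
def to_standard_form(rows, senses, rhs):
    """rows: coefficient lists; senses: '<=', '>=' or '='; rhs: constants."""
    n = len(rows[0])
    extra = sum(1 for s in senses if s != "=")   # slack/surplus columns needed
    A, b = [], []
    k = 0                                        # next slack/surplus column
    for row, s, bi in zip(rows, senses, rhs):
        new = row + [0.0]*extra
        if s == "<=":
            new[n + k] = 1.0                     # slack variable added
            k += 1
        elif s == ">=":
            new[n + k] = -1.0                    # surplus variable subtracted
            k += 1
        if bi < 0:                               # row is now an equation, so it
            new = [-a for a in new]              # may be negated to make the
            bi = -bi                             # right hand side non-negative
        A.append(new)
        b.append(bi)
    return A, b

A, b = to_standard_form([[1, 2], [3, 1], [1, 1]], ["<=", ">=", "="], [10, 6, -4])
print(A)
print(b)
```

Note that the sign flip is done only after the inequality has become an equation; negating an inequality directly would reverse its sense.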
1.7.2 Feasible Canonical Form:
Assume that the constraints (1.7), i.e. Ax = b, are consistent and rank (A) = m (< n).
Let B be any non-singular m × m submatrix made up of columns of A and let R be the
remaining portion of A. Further, suppose that xB is the vector of variables associated
with the columns of B and xNB the vector of the remaining variables. Then (1.7) can be written as
[B R] (xB, xNB)T = b,
or, B xB + R xNB = b.
That is, the solution of (1.7) is given by
xB = B⁻¹b − B⁻¹R xNB,
or, xB + B⁻¹R xNB = B⁻¹b    (1.9)
where the (n − m) variables xNB can be assigned arbitrary values. The form (1.9) of the
constraints is called the canonical form in the variables xB. The particular solution of
(1.7) given by
xB = B⁻¹b , xNB = 0    (1.10)
is called the basic solution to the system Ax = b with respect to the basic matrix B. The
variables xNB are known as the non basic variables and the variables xB are said to be
the basic variables. It should be noted that the columns of A associated with the basic
matrix B are linearly independent and that all non-basic variables are zero in a basic
solution. The basic solution given by (1.10) is feasible if xB ≥ 0.
1.7.3 Relative Profit Factors:
Suppose that there exists a feasible solution to the constraints (1.7) and (1.8). The
coefficients of the variables in the objective function z, after the basic variables
have been eliminated from it, are called relative profit factors. In order to find the relative profit
factors corresponding to the basis matrix B, we partition the profit vector c as
cT = (cBT, cNBT), where cB and cNB are the profit vectors corresponding to the variables
xB and xNB. The objective function then is
z = cTx = cBTxB + cNBTxNB    (1.11)
Substituting in this equation the value of xB from (1.9), we get
z = cBTB⁻¹b − cBTB⁻¹R xNB + cNBTxNB
= z̄ − (cBTB⁻¹R − cNBT) xNB
= z̄ − c̄NBTxNB
= z̄ − c̄Tx,
where c̄ = (c̄B, c̄NB)T, c̄B = 0, c̄NBT = cBTB⁻¹R − cNBT and z̄ = cBTB⁻¹b.
Here c̄ is the vector of relative profit factors corresponding to the basis matrix B,
and z̄ is the value of the objective function at the basic solution given by (1.10). Observe
that the components of c̄ corresponding to the basic variables are zero, which ought to be
the case, as is evident from the definition of c̄.
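The quantities xB = B⁻¹b, z̄ = cBT B⁻¹b and c̄NBT = cBT B⁻¹R − cNBT can be verified numerically. A sketch with NumPy follows; the standard-form data below are an invented example, not from this thesis:

```python
# Sketch: compute the basic solution and relative profit factors for an
# invented standard-form LP, using the partition A = [B R], c = (cB, cNB).
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([3.0, 5.0, 0.0, 0.0])    # profit vector (maximization)

basic = [0, 1]                         # choose x1, x2 as basic variables
nonbasic = [2, 3]
B, R = A[:, basic], A[:, nonbasic]
cB, cNB = c[basic], c[nonbasic]

xB = np.linalg.solve(B, b)             # basic solution: xB = B^-1 b, xNB = 0
zbar = cB @ xB                         # objective value at this basic solution
cbarNB = cB @ np.linalg.inv(B) @ R - cNB   # relative profit factors (non-basic)
print(xB, zbar, cbarNB)
```

For this data the basic solution is xB = (3, 1) with z̄ = 14, and the relative profit factors of the non-basic variables are (2, 1); the components of c̄ for the basic variables are zero by definition.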
1.7.4 Some Important Theorems of Standard Linear Program:
We now state the following results from Kambo [1984].
Theorem 1.4: If a standard linear program with the constraints Ax = b and x ≥ 0,
where A is an m × n matrix of rank m, has a feasible solution, then it also has a basic
feasible solution.
Theorem 1.5: Let F be a convex polyhedron consisting of all vectors x ∈ Rn satisfying
the system Ax = b, x ≥ 0, where A is an m × n matrix of rank m. Then x is an extreme
point of F if and only if x is a basic feasible solution to the system.
The above theorem ensures that every basic feasible solution to an (LP) is an extreme
point of the convex set of feasible solutions, and that every extreme point of the convex
set of feasible solutions corresponds to a basic feasible solution, and vice versa.
1.8 A Real Life Production Problem of a Garment Industry (Standard Group):
Standard Group is one of the prominent garment industries in Bangladesh. It makes
a large contribution to our national GDP. The company wishes to expand its
business activities across national boundaries. The owner of Standard Group has
$400,000 with which he can produce a maximum of 1500 pieces of garment items per day. The
owner wishes to produce different garment items (Men's long sleeve shirt, Men's short
sleeve shirt, Men's long pant, Men's shorts, Ladies long pant, Ladies shorts, Boys long
pant, Boys shorts, Men's boxer, Men's fleece jacket, Men's jacket, Ladies jacket, Boys
jacket). He has the following data for each piece:
S/N | Name of Garment items | Fabrics cost ($) | Accessories cost ($) | Washing cost ($) | Packaging cost ($) | Labor/CM cost ($) | Management/production/overhead cost ($) | Total cost ($) | Return ($)
1 | Men's long sleeve shirt | 2.90 | .25 | .18 | .18 | .90 | .18 | 4.59 | 7.09
2 | Men's short sleeve shirt | 2.20 | .22 | .25 | .20 | 1.0 | .25 | 4.12 | 6.32
3 | Men's long pant | 3.50 | .30 | .20 | .17 | .85 | .22 | 5.24 | 8.24
4 | Men's shorts | 3.0 | .25 | .22 | .19 | 1.0 | .23 | 4.89 | 7.59
5 | Ladies long pant | 3.20 | .30 | .18 | .18 | .90 | .20 | 4.96 | 7.76
6 | Ladies shorts | 2.75 | .06 | .20 | .18 | .90 | .18 | 4.27 | 6.57
7 | Boys long pant | 2.70 | .25 | .17 | .18 | .80 | .16 | 4.26 | 7.46
8 | Boys shorts | 2.20 | .15 | .05 | .10 | .25 | .05 | 2.80 | 4.90
9 | Men's boxer | 1.0 | .30 | .15 | .20 | .80 | .20 | 2.65 | 3.65
10 | Men's fleece jacket | 3.20 | .75 | .40 | .45 | 2.0 | .50 | 7.30 | 10.80
11 | Men's jacket | 5.20 | .60 | .35 | .40 | 1.80 | .40 | 8.75 | 13.75
12 | Ladies jacket | 4.40 | .50 | .30 | .35 | 1.50 | .30 | 7.35 | 12.85
13 | Boys jacket | 3.70 | .20 | .20 | 1.0 | .20 | .25 | 5.55 | 11.55
In addition, the group of industries has the following limitations of expenditures:
Maximum investment for fabrics is $ 4050
Maximum investment for accessories is $ 1200
Maximum investment for washing is $ 800
Maximum investment for packaging is $ 720
Maximum investment for labor/CM is $ 2200
Maximum investment for Management/production/overhead is $ 880
And the industry has a fixed expenditure of $4300 for each day.
.
17
Determine how many of each garment item should be produced for maximum daily
profit.
The objective is to maximize the profit. This leads to a LP.
Formulation:
The three basic steps in constructing a LP model are as follows:
Step1: Identify the unknown variables to be determined (decision variables) and
represent them in terms of algebraic symbols.
Step 2: Identify all the restrictions or constraints in the problem and express them as
linear equations or inequalities, which are linear functions of the unknown variables.
Step 3: Identify the objective or criterion and represent it as a linear function of the
decision variables, which is to be maximized (or minimized).
Now, we shall formulate the above problem as follows:
Step 1: (Identify the Decision variables)
For this problem the unknown variables are the numbers of RMG items produced for
the different products. So, let
x1 = The number of RMG items- Men’s long sleeve shirt need to be produced
x2 = The number of RMG items- Men’s short sleeve shirt need to be
produced
x3 = The number of RMG items- Men’s long pant need to be produced
x4 = The number of RMG items- Men’s shorts need to be produced
x5 = The number of RMG items- Ladies long pant need to be produced
x6 = The number of RMG items- Ladies shorts need to be produced
x7 = The number of RMG items- Boys long pant need to be produced
x8 = The number of RMG items- Boys shorts need to be produced
x9 = The number of RMG items- Men’s boxer need to be produced
x10 = The number of RMG items- Men’s fleece jacket need to be produced
x11 = The number of RMG items- Men’s jacket need to be produced
x12 = The number of RMG items- Ladies jacket need to be produced
and x13 = The number of RMG items- Boys jacket need to be produced
.
18
Step 2: (Identify the Constraints)
In this problem the constraints are the limited availability of funds for different purposes, as
follows:
1. Since the company wishes to produce a maximum of 1500 pieces of RMG items, we have
x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8 + x9 + x10 + x11 + x12 + x13 ≤ 1500
2. Since the company's maximum investment for fabrics is $4050, we have
2.90x1 + 2.20x2 + 3.50x3 + 3.0x4 + 3.20x5 + 2.75x6 + 2.70x7 + 2.20x8 + 1.0x9 + 3.20x10 + 5.20x11 + 4.40x12 + 3.70x13 ≤ 4050
3. Since the company's maximum investment for accessories is $1200, we have
.25x1 + .22x2 + .30x3 + .25x4 + .30x5 + .06x6 + .25x7 + .15x8 + .30x9 + .75x10 + .60x11 + .50x12 + .20x13 ≤ 1200
4. Since the company's maximum investment for washing is $800, we have
.18x1 + .25x2 + .20x3 + .22x4 + .18x5 + .20x6 + .17x7 + .05x8 + .15x9 + .40x10 + .35x11 + .30x12 + .20x13 ≤ 800
5. Since the company's maximum investment for packaging is $720, we have
.18x1 + .20x2 + .17x3 + .19x4 + .18x5 + .18x6 + .18x7 + .10x8 + .20x9 + .45x10 + .40x11 + .35x12 + 1.0x13 ≤ 720
6. Since the company's maximum investment for labor/CM is $2200, we have
.90x1 + 1.0x2 + .85x3 + 1.0x4 + .90x5 + .90x6 + .80x7 + .25x8 + .80x9 + 2.0x10 + 1.80x11 + 1.50x12 + .20x13 ≤ 2200
7. Since the company's maximum investment for management/production/overhead is $880, we have
.18x1 + .25x2 + .22x3 + .23x4 + .20x5 + .18x6 + .16x7 + .05x8 + .20x9 + .50x10 + .40x11 + .30x12 + .25x13 ≤ 880
We must assume that the variables xi , i=1,2, …….,13 are not allowed to be negative.
That is, we do not make negative quantities of any product.
Step 3: (Identify the objective)
In this case, the objective is to maximize the profit by different RMG items. That is,
Maximize F(x) = 2.5x1 + 2.2x2 + 3x3 + 2.7x4 + 2.8x5 + 2.3x6 + 3.2x7 + 2.1x8 + 1.0x9 + 3.5x10 + 5x11 + 5.5x12 + 6x13
Now, we have expressed our problem as a mathematical model. Since the objective
function is to maximize the profit from different RMG items and all of the constraint
functions are linear, the problem can be modeled as the following LP model:
Maximize F(x) = 2.5x1 + 2.2x2 + 3x3 + 2.7x4 + 2.8x5 + 2.3x6 + 3.2x7 + 2.1x8 + 1.0x9 + 3.5x10 + 5x11 + 5.5x12 + 6x13
Subject to
x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8 + x9 + x10 + x11 + x12 + x13 ≤ 1500
2.90x1 + 2.20x2 + 3.50x3 + 3.0x4 + 3.20x5 + 2.75x6 + 2.70x7 + 2.20x8 + 1.0x9 + 3.20x10 + 5.20x11 + 4.40x12 + 3.70x13 ≤ 4050
.25x1 + .22x2 + .30x3 + .25x4 + .30x5 + .06x6 + .25x7 + .15x8 + .30x9 + .75x10 + .60x11 + .50x12 + .20x13 ≤ 1200
.18x1 + .25x2 + .20x3 + .22x4 + .18x5 + .20x6 + .17x7 + .05x8 + .15x9 + .40x10 + .35x11 + .30x12 + .20x13 ≤ 800
.18x1 + .20x2 + .17x3 + .19x4 + .18x5 + .18x6 + .18x7 + .10x8 + .20x9 + .45x10 + .40x11 + .35x12 + 1.0x13 ≤ 720
.90x1 + 1.0x2 + .85x3 + 1.0x4 + .90x5 + .90x6 + .80x7 + .25x8 + .80x9 + 2.0x10 + 1.80x11 + 1.50x12 + .20x13 ≤ 2200
.18x1 + .25x2 + .22x3 + .23x4 + .20x5 + .18x6 + .16x7 + .05x8 + .20x9 + .50x10 + .40x11 + .30x12 + .25x13 ≤ 880
x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13 ≥ 0
Thus the given problem has been formulated as a LP. We will solve this formulated
problem by using our developed computer program.
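Although this thesis solves the model with its own Mathematica program, the formulation can also be checked independently. The sketch below feeds the same objective and seven constraints to SciPy's linprog; using SciPy here is an assumption of this illustration, not the thesis's method:

```python
# Sketch: an independent feasibility/boundedness check of the Standard Group
# garment LP using SciPy (the thesis itself uses a Mathematica program).
from scipy.optimize import linprog

profit = [2.5, 2.2, 3.0, 2.7, 2.8, 2.3, 3.2, 2.1, 1.0, 3.5, 5.0, 5.5, 6.0]
A_ub = [
    [1]*13,                                                                          # <= 1500 pieces
    [2.90, 2.20, 3.50, 3.0, 3.20, 2.75, 2.70, 2.20, 1.0, 3.20, 5.20, 4.40, 3.70],    # fabrics
    [.25, .22, .30, .25, .30, .06, .25, .15, .30, .75, .60, .50, .20],                # accessories
    [.18, .25, .20, .22, .18, .20, .17, .05, .15, .40, .35, .30, .20],                # washing
    [.18, .20, .17, .19, .18, .18, .18, .10, .20, .45, .40, .35, 1.0],                # packaging
    [.90, 1.0, .85, 1.0, .90, .90, .80, .25, .80, 2.0, 1.80, 1.50, .20],              # labor/CM
    [.18, .25, .22, .23, .20, .18, .16, .05, .20, .50, .40, .30, .25],                # overhead
]
b_ub = [1500, 4050, 1200, 800, 720, 2200, 880]
# linprog minimizes, so negate the profit vector to maximize.
res = linprog([-p for p in profit], A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)]*13, method="highs")
print(res.status, round(-res.fun, 2))   # 0 means an optimal solution was found
```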
1.9 A Real Life Problem of a Textile Mill:
1.9.1 Introduction:
Linear programming has proven to be one of the most successful quantitative
approaches to decision making. Applications have been reported in almost every
industry. Problems studied include production scheduling, media selection, financial
planning, capital budgeting, product mix, blending and many others. As the variety of
applications suggests, linear programming is a flexible problem-solving tool.
In this section we present a real life problem, called Textile Mill Scheduling, introduced
by Jeffrey D. Camm, P. M. Dearing and Suresh K. Tadisina, which is given as an exercise
in Anderson [2000]. We formulate this problem and solve it by using our Mathematica
computer program.
1.9.2 Textile Mill Scheduling problem:
The Scottsville Textile Mill produces five different fabrics. Each fabric can be
woven on one or more of the mill's 38 looms. The sales department has forecast demand
for the next month. The demand data are shown in Table 1, along with data on the selling
price per yard, manufacturing cost per yard and purchase price per yard. The mill
operates 24 hours a day and is scheduled for 30 days during the coming month.
Table 1
Fabric | Demand (yards) | Selling Price ($/yard) | Manufacturing Cost ($/yard) | Purchase Price ($/yard)
1 | 16,500 | 0.99 | 0.66 | 0.80
2 | 22,000 | 0.86 | 0.55 | 0.70
3 | 62,000 | 1.10 | 0.49 | 0.60
4 | 7,500 | 1.24 | 0.51 | 0.70
5 | 62,000 | 0.70 | 0.50 | 0.70
The mill has two types of looms: dobbie and regular. The dobbie looms are more
versatile and can be used for all five fabrics. The regular looms can produce only three of
the fabrics.
The mill has a total of 38 looms: 8 are dobbie and 30 are regular. The rate of
production for each fabric on each type of loom is given in Table 2. The time required to
change over from producing one fabric to another is negligible and does not have to be
considered.
Table 2
Loom Production Rates (yards/hour)
Fabric | Dobbie | Regular
1 | 4.63 | —
2 | 4.63 | —
3 | 5.23 | 5.23
4 | 5.23 | 5.23
5 | 4.17 | 4.17
The Scottsville Textile Mill satisfies all demand with either its own fabric or fabric
purchased from another mill. That is, fabric that cannot be woven at the Scottsville mill
because of limited loom capacity will be purchased from another mill.
Determine how many yards of each fabric should be woven and how many should be
purchased for maximum monthly profit.
1.9.3 Formulation of the Textile Mill Scheduling problem:
Step1: Identify the Decision Variables
Let x11 = amount of Fabric-1 Woven by Dobbie looms
x12 = amount of Fabric-1 Purchased from another mill
x21 = amount of Fabric-2 Woven by Dobbie looms.
x22 = amount of Fabric-2 Purchased from another mill
x31 = amount of Fabric-3 Woven by Dobbie looms.
x32 = amount of Fabric-3 Woven by Regular looms
x33 = amount of Fabric-3 Purchased from another mill
x41 = amount of Fabric-4 Woven by Dobbie looms
x42 = amount of Fabric-4 Woven by Regular looms
x43 = amount of Fabric-4 Purchased from another mill
x51 = amount of Fabric-5 Woven by Dobbie looms
x52 = amount of Fabric-5 Woven by Regular looms
x53 = amount of Fabric-5 Purchased from another mill.
Step 2: Identify the Constraints
The market demand for Fabric-1 is 16,500 yards, and the textile mill satisfies all
demand with either its own fabric or fabric purchased from another mill. Thus the
demand constraint for Fabric-1 is
x11 + x12 = 16,500
The other demand constraints, for Fabrics 2, 3, 4 and 5, are
x21 + x22 = 22,000
x31 + x32 + x33 = 62,000
x41 + x42 + x43 = 7,500
x51 + x52 + x53 = 62,000
There are 8 dobbie looms and every loom works 24 hours a day and 30 days in a month.
Thus the total dobbie loom time = 8 × 24 × 30 = 5760 hours.
All five fabrics can be woven on the dobbie looms. The loom production rate for
Fabric-1 is 4.63 yards/hour. Thus 4.63 yards of Fabric-1 are produced in 1 hour, so x11
yards of Fabric-1 require x11/4.63 hours. Hence the time requirement for Fabric-1 is
about 0.22 x11 hours. Similarly, Fabric-2, Fabric-3, Fabric-4 and Fabric-5 need
0.22 x21 hours, 0.20 x31 hours, 0.20 x41 hours and 0.24 x51 hours respectively.
Thus the total time requirement is 0.22x11 + 0.22x21 + 0.20x31 + 0.20x41 + 0.24x51,
which should not exceed the available dobbie loom time of 5760 hours. So the constraint becomes
0.22x11 + 0.22x21 + 0.20x31 + 0.20x41 + 0.24x51 ≤ 5760.
Again, the 30 regular looms have 30 × 24 × 30 = 21600 hours. The regular looms can
weave Fabric-3, Fabric-4 and Fabric-5, which require 0.20x32 hours, 0.20x42 hours and
0.24x52 hours respectively.
Thus the time constraint becomes
0.20 x32 + 0.20 x42 + 0.24 x52 ≤ 21600
Step 3 : Identify Objective function
The objective function is to maximize the total profit from sales. The selling
price of Fabric-1 is 0.99 $/yard, so the profit from manufacturing Fabric-1 is
0.99 − 0.66 = 0.33 $/yard, and the profit from x11 yards is 0.33 x11 $.
Again, the purchasing cost is 0.80 $/yard, so the profit from purchasing Fabric-1 is
0.99 − 0.80 = 0.19 $/yard, and the profit from x12 yards is 0.19 x12 $.
Similarly, the profits from the rest of the decision variables are
0.31x21, 0.16x22, 0.61x31, 0.61x32, 0.50x33, 0.73x41, 0.73x42, 0.54x43, 0.20x51, 0.20x52,
0.0 x53.
Thus the objective function to maximize the total profit is
Z = 0.33 x11 + 0.19x12 + 0.31x21 + 0.16 x22 + 0.61 x31 + 0.61 x32 +
0.50 x33+ 0.73 x41 + 0.73x42 + 0.54 x43 + 0.20x51 + 0.20x52 + 0.00 x53.
Step-4 : Identify non-negative constraints
Since the amounts of fabric xij cannot be negative, we have to restrict the
variables to be non-negative. That is, xij ≥ 0 (where i = 1,2,3,4,5 and j = 1,2)
and x33, x43, x53 ≥ 0.
Hence the linear programming model for our Textile mill Scheduling problem becomes
Maximize
Z = 0.33 x11 + 0.19 x12 + 0.31 x21 + 0.16 x22 + 0.61 x31 + 0.61 x32 + 0.50
x33 + 0.73 x41 + 0.73 x42 + 0.54 x43 + 0.20 x51 + 0.20 x52 + 0.00 x53
Subject to
x11 + x12 = 16500
x21 + x22 = 22000
x31 + x32 + x33 = 62000
x41 + x42 + x43 = 7500
x51 + x52 + x53 = 62000
0.22x11 + 0.22x21 + 0.20x31 + 0.20x41 + 0.24x51 ≤ 5760.
0.20 x32 + 0.20 x42 + 0.24 x52 ≤ 21600
xij ≥ 0 (where i = 1,2,3,4,5 & j = 1,2) & x33, x43, x53 ≥ 0.
Thus the given problem has been formulated as a LP. We will solve this formulated
problem by using our developed computer program.
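As with the garment problem, this model can be checked independently of the thesis's Mathematica program. The sketch below passes the demand equalities and the two loom-hour constraints to SciPy's linprog (an illustrative assumption, not the thesis's method):

```python
# Sketch: an independent check of the Textile Mill Scheduling LP with SciPy
# (the thesis solves it with its own Mathematica program).
from scipy.optimize import linprog

# variable order: x11,x12, x21,x22, x31,x32,x33, x41,x42,x43, x51,x52,x53
c = [-0.33, -0.19, -0.31, -0.16, -0.61, -0.61, -0.50,
     -0.73, -0.73, -0.54, -0.20, -0.20, 0.0]        # negated per-yard profits
A_eq = [
    [1,1, 0,0, 0,0,0, 0,0,0, 0,0,0],   # fabric 1 demand
    [0,0, 1,1, 0,0,0, 0,0,0, 0,0,0],   # fabric 2 demand
    [0,0, 0,0, 1,1,1, 0,0,0, 0,0,0],   # fabric 3 demand
    [0,0, 0,0, 0,0,0, 1,1,1, 0,0,0],   # fabric 4 demand
    [0,0, 0,0, 0,0,0, 0,0,0, 1,1,1],   # fabric 5 demand
]
b_eq = [16500, 22000, 62000, 7500, 62000]
A_ub = [
    [0.22,0, 0.22,0, 0.20,0,0, 0.20,0,0, 0.24,0,0],  # dobbie loom hours
    [0,0, 0,0, 0,0.20,0, 0,0.20,0, 0,0.24,0],        # regular loom hours
]
b_ub = [5760, 21600]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)]*13, method="highs")
print(res.status, round(-res.fun, 2))   # 0 means an optimal solution was found
```

Because purchased fabric is unlimited, the model is always feasible, and the demand equalities keep the profit bounded.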
Chapter 2
LINEAR PROGRAMMING MODELS: GRAPHICAL AND COMPUTER METHODS
2.1 Steps in Developing a Linear Programming (LP) Model:
There are three steps in developing a Linear Programming (LP) model:
1) Formulation
2) Solution
(i) Graphical Method
(ii) Numerical Method
3) Interpretation and Sensitivity Analysis
2.1.1 Properties of Linear Programming Models:
1) Seek to minimize or maximize
2) Include “constraints” or limitations
3) There must be alternatives available
4) All equations are linear
2.1.2 Mathematical Formulation of Linear Programming problem:
Linear programming deals with the optimization of a function of variables, known
as the objective function, subject to a set of linear equalities/inequalities known as constraints.
The objective function may be profit, loss, cost, production capacity or any other measure
of effectiveness which is to be obtained in the best possible or optimal manner. The
constraints may be imposed by different sources such as market demand, production
processes and equipment, storage capacity, raw material availability, etc. By linearity is
meant a mathematical expression in which the variables have unit power only.
Linear programming is used for optimization problems that satisfy the following
conditions:
1. There is a well defined objective function to be optimized and which can be
expressed as a linear function of decision variables.
2. There are constraints on the attainment of the objective and they are capable of
being expressed as linear equalities/inequalities in terms of variables.
3. There are alternative courses of action.
4. The decision variables are interrelated and non-negative.
5. Resources are in limited supply.
2.2 Graphical Method:
A linear programming problem with only two variables presents a simple case, for
which the solution can be derived using a graphical method. This method consists of
the following steps:
Step-1. Represent the given problem in mathematical form, i.e. , formulate an L.P.
model for the given problem.
Step-2. Represent the given constraints as equalities on the x1, x2 co-ordinate plane and
find the convex region formed by them.
Step-3. Plot the objective function.
Step-4. Find the vertices of the convex region and also the value of the objective function
at each vertex. The vertex that gives the optimum value of the objective function gives the
optimal solution to the problem.
In general, a linear programming problem may have
(i) a definite and unique optimal solution,
(ii) an infinite number of optimal solutions,
(iii) an unbounded solution, and
(iv) no solution.
2.2.1 Real Life Example of Model Formulation (Otobi Furniture Co.):
Otobi is one of the largest and most reputed furniture companies in Bangladesh. It has
been in operation since 1975 and produces diversified furniture products in different
sections. Here we collected data from one section, which produces only chairs and
tables. The company has the following data:
 | Tables (per table) | Chairs (per chair) | Hours Available
Profit Contribution | $7 | $5 |
Carpentry | 3 hrs | 4 hrs | 2400
Painting | 2 hrs | 1 hr | 1000
Other Limitations:
• Make no more than 450 chairs
• Make at least 100 tables
Determine how many of each furniture item should be produced for maximum daily
profit.
Formulation:
Decision Variables:
T = Num. of tables to make
C = Num. of chairs to make
Objective Function: Maximize Profit
Maximize $7 T + $5 C
Constraints:
• Have 2400 hours of carpentry time available
3 T + 4 C ≤ 2400 (hours)
• Have 1000 hours of painting time available
2 T + 1 C ≤ 1000 (hours)
More Constraints:
• Make no more than 450 chairs
C ≤ 450 (num. chairs)
• Make at least 100 tables
T ≥ 100 (num. tables)
Nonnegativity:
Cannot make a negative number of chairs or tables
T ≥ 0
C ≥ 0
2.2.2 Graphical Solution:
Model Summary:
Max 7T + 5C (profit)
Subject to the constraints:
3T + 4C ≤ 2400 (carpentry hrs)
2T + 1C ≤ 1000 (painting hrs)
C ≤ 450 (max. chairs)
T ≥ 100 (min. tables)
T, C ≥ 0 (non-negativity)
• Graphing an LP model helps provide insight into LP models and their solutions.
• While this can only be done in two dimensions, the same properties apply to all LP
models and solutions.
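The corner point property can also be verified directly in code. The following Python sketch (an illustrative check, not part of the thesis's programs) intersects every pair of constraint lines of the Otobi model, keeps the feasible intersection points, and evaluates the profit at each:

```python
# Sketch: verify the Otobi solution by brute-force corner-point enumeration.
from itertools import combinations

# each constraint as (a, b, rhs), meaning a*T + b*C <= rhs
cons = [(3, 4, 2400),   # carpentry hours
        (2, 1, 1000),   # painting hours
        (0, 1, 450),    # at most 450 chairs
        (-1, 0, -100),  # at least 100 tables (T >= 100)
        (0, -1, 0)]     # C >= 0

def feasible(T, C):
    return all(a*T + b*C <= r + 1e-9 for a, b, r in cons)

best = None
for (a1, b1, r1), (a2, b2, r2) in combinations(cons, 2):
    det = a1*b2 - a2*b1
    if abs(det) < 1e-12:
        continue                       # parallel constraint lines: no vertex
    T = (r1*b2 - b1*r2) / det          # Cramer's rule for the intersection
    C = (a1*r2 - r1*a2) / det
    if feasible(T, C):
        profit = 7*T + 5*C
        if best is None or profit > best[0]:
            best = (profit, T, C)
print(best)   # → (4040.0, 320.0, 360.0)
```

The best corner point is (T, C) = (320, 360) with profit $4040, agreeing with the graphical solution.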
Figure-1: Carpentry constraint line 3T + 4C = 2400, with intercepts (T = 0, C = 600) and
(T = 800, C = 0). Points using ≤ 2400 hrs are feasible; points using > 2400 hrs are infeasible.
Figure-2: Painting constraint line 2T + 1C = 1000, with intercepts (T = 0, C = 1000) and
(T = 500, C = 0).
Figure-3: The max chair line C = 450, the min table line T = 100, and the resulting
feasible region.
Figure-4: The objective function line 7T + 5C = Profit and the optimal point (T = 320, C = 360).
The company should produce 320 tables and 360 chairs for maximum daily profit.
2.3 LP Characteristics:
• Feasible Region: The set of points that satisfies all constraints
• Corner Point Property: An optimal solution must lie at one or more corner points
• Optimal Solution: The corner point with the best objective function value is optimal
2.3.1 Special Situation in LP:
1. Redundant Constraints - do not affect the feasible region
Example: x ≤ 10
x ≤ 12
The second constraint is redundant because it is less restrictive.
2. Infeasibility – when no feasible solution exists (there is no feasible region)
Example: x ≤ 10
x ≥ 15
3. Alternate Optimal Solutions – when there is more than one optimal solution
Figure-5: Max 2T + 2C subject to T + C ≤ 10, T ≤ 5, C ≤ 6, T, C ≥ 0. All points on the
highlighted segment of T + C = 10 are optimal.
4. Unbounded Solutions – when nothing prevents the solution from becoming infinitely large
Figure-6: Max 2T + 2C subject to 2T + 3C ≥ 6, T, C ≥ 0. The solution can grow without
bound in the direction of increasing T and C.
2.4 Numerical Example-1:
Maximize Z =5x1 + 8x2
Subject to 3x1 + 2x2 ≤ 36
x1 + 2x2 ≤ 20
3x1 + 4x2 ≤ 42
x1, x2 ≥ 0
Solution of the above program in graphical method:
The solution space satisfying the given constraints and meeting the non-negativity
restrictions x1, x2 ≥ 0 is shown shaded in Figure-7 below. Any point in this shaded region is
a feasible solution to the given problem.
Figure-7
Feasible region for example - 1
The vertices of the convex feasible region OABCD are O(0,0), A(12,0), B(10,3), C(2,9) and D(0,10).
The value of the objective function at these points are:
Z(O)=0 , Z(A)=60, Z(B)=74, Z(C)=82, and Z(D)=80 .
Since the maximum value of the objective function is 82 and it occurs at C(2,9), the optimal
solution to the given problem is x1 = 2, x2 = 9 with Zmax = 82.
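This hand computation is easy to double-check. A short Python sketch (illustrative only) evaluating Z = 5x1 + 8x2 at the five vertices of the convex region:

```python
# Check of Example-1: evaluate Z = 5*x1 + 8*x2 at each vertex of OABCD.
vertices = {"O": (0, 0), "A": (12, 0), "B": (10, 3), "C": (2, 9), "D": (0, 10)}
Z = {name: 5*x1 + 8*x2 for name, (x1, x2) in vertices.items()}
best = max(Z, key=Z.get)
print(Z)              # {'O': 0, 'A': 60, 'B': 74, 'C': 82, 'D': 80}
print(best, Z[best])  # C 82
```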
2.5 Mathematica Codes for Graphical Representation of Feasible Region:
In this section we have developed a computational technique using Mathematica codes to
show the feasible region of two-dimensional linear programming problems. This method also gives
the optimal solution. We have illustrated two numerical examples (maximization and minimization) to
demonstrate our method.
2.5.1 Numerical Example-2:
Maximize Z = 2x1 + 3x2
Subject to x1 + x2 ≤ 30
x2 ≥ 3
x2 ≤ 12
x1-x2 ≥ 0
0 ≤ x1 ≤ 20.
Solution:
The solution space satisfying the given constraints and meeting the non-negativity restrictions
x1 ≥ 0 and x2 ≥ 0 is shown shaded in Fig. 8. Any point in this shaded region is a feasible solution to
the given problem.
Mathematica Codes for Graphical Representation:
<<Graphics`ImplicitPlot`
<<Graphics`Colors`
<<Graphics`Arrow`
l1 = ImplicitPlot [{ x1+x2 == 30 , x2 == 3 , x2 == 12 , x1-x2 == 0 , x1 == 20 },
{x1, 0 ,25} , {x2, 0 ,25} , PlotStyle -> {Blue , Maroon , Green , Brown , Purple} ,
DisplayFunction -> Identity] ;
p1 = Graphics [{Maroon , Polygon[{{3,3} , {12,12} , {18,12} , {20,10} , {20,3}}]}] ;
t1 = Graphics [{Text [“A(3,3)”, {3.5 , 2.5}] , Text [“B(12,12)”, {12.5 , 12.5}],
Text [“C(18,12)”, {18.6 , 12.5}] , Text [“D(20,10)”, {22.5, 10.5}] ,
Text [“E(20,3)”, {22.2 , 2.5}]}] ;
t2 = Graphics [{Text [“x2• 3”,{23 , 3.5}] , Text [“x2• 12”, {23, 12.5}] ,
Text [“x1-x2• 0”, {5.2, 8}] , Text[“x1+x2• 30”, {13, 20}] ,
Text [“x1• 20”, {21.5, 6}]}] ;
a1 = Graphics [{Arrow [{5, 25}, {4, 24}, HeadScaling -> Relative] ,
Arrow [{25, 25} , {26, 24}, HeadScaling -> Relative] ,
Arrow [{25, 12} , {25, 11}, HeadScaling -> Relative] ,
Arrow [{20, 25} , {19, 25}, HeadScaling -> Relative]}] ;
Show [{l1 , p1 , t1 ,t2 , a1} , AxesLabel -> {“x1” , “x2”} ,
Ticks -> {{3 ,6 , 9 , 12 ,15 , 18 , 21} , {3 , 6 , 9 , 12 , 15 , 18 , 21}} ,
DisplayFunction -> $DisplayFunction]
Figure-8
Feasible region for example 2
The co-ordinates of the five vertices of the convex region ABCDE are A(3,3),
B(12,12), C(18,12), D(20,10) and E(20,3).
Mathematica Codes for Optimal Value of the Objective Function:
INPUT:
z [ x1_, x2_] : = 2 x1 + 3 x2 ;
v = { z[ 3 , 3] , z[ 12 , 12 ] , z[ 18 , 12] , z[ 20 , 10 ] , z[ 20 , 3]}
optimal = Max [v]
OUTPUT:
{ 15 , 60 , 72 , 70 , 49 }
72
Since the maximum value of Z is 72, which occurs at the point C(18,12), the solution to the
given problem is x1 = 18, x2 = 12 with Zmax = 72.
Remark-1: If we solve this problem by the usual simplex method, we need to use artificial
variables and to apply the two-phase simplex method or the Big-M simplex method, which
needs 7 iterations; this is time-consuming and clumsy.
2.5.2 Numerical Example-3:
(LP) Minimize Z = - x1 + 2x2
Subject to - x1 + 3x2 ≤ 10
x1 + x2 ≤ 6
x1 - x2 ≤ 2
x1, x2 ≥ 0
Solution :
The solution space satisfying the given constraints and meeting the non-negativity
restrictions x1 ≥0 and x2 ≥0 is shown shaded in Fig. 9. Any point in this feasible region is
a feasible solution to the given problem.
Mathematica Codes for Graphical Representation:
l3 = ImplicitPlot [{-x1+3*x2 == 10 , x1+x2 == 6 , x1-x2 == 2 }, {x1, -2 ,10 } ,
{x2, -2, 8} , PlotStyle -> { Blue , Maroon , Green } , DisplayFunction -> Identity] ;
p3 = Graphics [{ Hue [ .55] , Polygon [{{0, 0} , {0, 10/3} , {2, 4} , {4, 2} , {2, 0}}]}] ;
t5 = Graphics [{Text [“O(0,0)”, {-.15 , -.2 }] , Text [“A(0, 10/3)”, {1.2 , 3.5}] ,
Text [“B(2, 4)”, {2.8 , 4.4}] , Text [“C(4, 2)”, {5, 2.1}] ,
Text [“D(2,0)”, {3.2 , .2}]}] ;
t6 = Graphics [{Text [“-x1+3*x2≤10”, {4.5 , 5.5}] , Text [“x1+x2≤6”, {6.5, .8}] ,
Text [“x1-x2≤2”, {7.5, 4}]}] ;
a3 = Graphics [{Arrow [{-2, 8}, {-2.5, 7.5}, HeadScaling -> Relative] ,
Arrow [{10, 8} , {9.5, 8.5}, HeadScaling -> Relative] ,
Arrow [{10, 6.7} , {10.2, 5.5}, HeadScaling -> Relative]}] ;
Show [{l3 , p3 , t5 , t6 , a3} , AxesLabel -> {“x1” , “x2”} ,
Ticks -> {{2 , 4 , 6 , 8 ,10} , {2 , 4, 6 , 8}} ,
DisplayFunction -> $DisplayFunction]
Figure-9
Feasible region for example 3
The coordinates of the vertices of the convex polygon OABCD are O(0,0), A(0,10/3), B(2,4),
C(4,2), and D(2,0)
Mathematica Codes for Optimal Value of the Objective Function:
INPUT:
z [ x1_, x2_] : = -x1 + 2 x2 ;
v = { z[ 0 , 0] , z[ 0 , 10/3 ] , z[ 2 , 4] , z[ 4 , 2 ] , z[ 2 , 0]}
optimal = Min [v]
OUTPUT:
{ 0 , 20/3 , 6 , 0 , –2 }
-2
Since the minimum value of Z is −2, which occurs at the vertex D(2,0), the
solution to the given problem is x1 = 2, x2 = 0 with Zmin = −2.
2.6 Conclusion:
The solution of a linear programming problem can be found by the graphical
method as well as by numerical methods. However, the graphical method can only be used when the
problem is two-dimensional. To solve a LP problem by the graphical method, it is
necessary to plot the graph accurately, which is very difficult and also time-consuming. To
overcome these difficulties, in this section we developed a computational technique using
Mathematica codes to show the feasible region of two-dimensional linear programming
problems, and our Mathematica codes also give the optimal solution. In the usual simplex
method, when the set of constraints is not in canonical form, we need to use artificial variables and
have to apply the two-phase simplex method or the Big-M simplex method, which needs many
iterations and is also time-consuming and clumsy. But by applying our computational technique
using Mathematica codes we can solve such two-dimensional problems easily.
Chapter 3
SIMPLEX METHOD AND COMPUTER ORIENTED ALGORITHM FOR SOLVING LINEAR PROGRAMMING PROBLEMS
3.1 Introduction:
The simplex method is an iterative procedure for solving a linear program in a finite number of steps, and it provides all the information about the program. It also indicates whether or not the program is feasible. If the program is feasible, it either finds an optimal solution or indicates that an unbounded solution exists. G. B. Dantzig first developed this method in 1950. Following Dantzig [1963], Kambo [1984] and Gillet [1988], we describe the simplex method as below.
3.2 Simplex Method:
Basically the simplex method is an iterative procedure that can be used to solve
any linear programming model if the needed computer time and storage are available. It is
assumed that the original linear programming model
Maximize $z = \sum_{j=1}^{n} c_j x_j$
Subject to: $\sum_{j=1}^{n} a_{ij} x_j \; (\le, =, \ge) \; b_i$, $\; b_i > 0$, $\; i = 1, 2, \ldots, m$
all $x_j \ge 0$
has been converted to the equivalent standard LP model
Maximize $z = \sum_{j=1}^{n} c_j x_j$
Subject to: $\sum_{j=1}^{n} a_{ij} x_j = b_i$, $\; i = 1, 2, \ldots, m$
all $x_j \ge 0$
which includes slack variables that have been added to the left side of each less-than-or-equal-to constraint, surplus variables that have been subtracted from the left side of each greater-than-or-equal-to constraint, and artificial variables that have been added to the left side of each greater-than-or-equal-to constraint and each equality. It is assumed that the profit coefficients for the slack and surplus variables are zero, while the profit coefficients for the artificial variables are arbitrarily small negative numbers (algebraically), say -M. The equivalent model necessarily assures us that each equation contains a variable with a coefficient of 1 in that equation and a coefficient of zero in each of the other equations. If the original constraint was a less-than-or-equal-to constraint, the slack variable in the corresponding equation satisfies the condition just stated. Likewise, the artificial variables added to the greater-than-or-equal-to constraints and equalities satisfy the condition for each of the remaining equations in the equivalent model. These slack and artificial variables are the basic variables in the initial basic feasible solution of the equivalent problem.
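As a quick illustration of this conversion, a minimal Python sketch (the constraint encoding, function name and column ordering are our own assumptions, not from the thesis):

```python
# Convert constraints of type <=, >=, = into equations by appending
# slack (+1), surplus (-1), and artificial (+1) variable columns, as above.
def to_standard_form(constraints):
    """constraints: list of (coeffs, sense, rhs) with sense in {'<=','>=','='}.
    Returns (rows, labels): each row is (augmented coefficient list, rhs)."""
    labels = []
    for _, sense, _ in constraints:
        if sense == '<=':
            labels.append(['slack'])                 # slack joins the basis
        elif sense == '>=':
            labels.append(['surplus', 'artificial'])
        else:
            labels.append(['artificial'])
    extra = sum(len(l) for l in labels)              # total added columns
    out, col = [], 0
    for (coeffs, sense, rhs), added in zip(constraints, labels):
        row = list(coeffs) + [0] * extra
        for kind in added:
            row[len(coeffs) + col] = -1 if kind == 'surplus' else 1
            col += 1
        out.append((row, rhs))
    return out, labels

eqs, labels = to_standard_form([([1, 3], '<=', 8), ([0, 2], '>=', 4)])
print(labels)   # [['slack'], ['surplus', 'artificial']]
print(eqs[1])   # ([0, 2, 0, -1, 1], 4)
```

Note that only the ">=" and "=" rows receive an artificial variable; the "<=" rows already contain a ready-made basic variable (the slack).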
The equivalent model is now rewritten as
Maximize: $z$
Subject to:
$z - \sum_{j=1}^{n} c_j x_j = 0 \qquad (3.1)$
$\sum_{j=1}^{n} a_{ij} x_j = b_i, \quad i = 1, 2, \ldots, m \qquad (3.2)$
all $x_j \ge 0$.
Since $c_j = -M$ for each artificial variable, we must multiply by $-M$ each equation represented by (3.2) that contains an artificial variable and add the resulting equations to equation (3.1), to give
Maximize: $z$
Subject to:
$z - \sum_{j=1}^{n} \bar{c}_j x_j = b_0 \qquad (3.3)$
$\sum_{j=1}^{n} a_{ij} x_j = b_i, \quad i = 1, 2, \ldots, m \qquad (3.4)$
all $x_j \ge 0$,
where $b_0 = -M \sum_{*} b_k$ and $*$ represents the equations containing artificial variables.
This assures us that each equation in (3.4) contains a slack or artificial variable that has a coefficient of 1 in that equation and a coefficient of zero in each of the other equations in (3.4), as well as in equation (3.3). Equation (3.3) will be referred to as the objective function equation. We will now present the general simplex method.
3.2.1 Computational steps for solving (LP) in simplex method:
The computational steps of the simplex method for solving an (LP) which is in canonical form are as follows (for a maximization problem):
Step 1: Express the problem in standard form.
Step 2: Start with an initial basic feasible solution in canonical form and set up the initial table.
Step 3: Use the inner-product rule to find the relative profit factors $\bar{c}_j$ as follows: $\bar{c}_j = c_j - z_j = c_j -$ (inner product of $c_B$ and the column corresponding to $x_j$ in the canonical system).
Step 4 (choice of the entering variable into the basis): If all $\bar{c}_j \le 0$, the current basic feasible solution is optimal. Otherwise, select the nonbasic variable with the most positive $\bar{c}_j$ to enter the basis; the corresponding column is called the pivot column.
Step 5 (choice of the outgoing variable from the basis): To determine the outgoing variable, examine each element of the pivot column to observe how far the nonbasic variable can be increased. For those constraints in which the nonbasic variable has a positive coefficient, the limit is given by the ratio of the R.H.S. constant to that positive coefficient; for the other constraints the limit is set to $\infty$. The constraint with the lowest limit determines the pivot row, and the basic variable in that constraint is replaced by the nonbasic variable. The element at the intersection of the pivot row and the pivot column is called the pivot element. Since the determination of the variable to leave the basis involves the calculation of ratios and the selection of the minimum ratio, this rule is generally called the minimum ratio rule.
Step 6: Perform the pivot operation to get the new table and the new basic feasible solution. That is,
(1) Divide all elements of the pivot row by the pivot element.
(2) Then, in order to obtain zeros in the other places of the pivot column, add suitable multiples of the transformed pivot row to the remaining rows.
Step 7: Compute the relative profit factors by using the inner-product rule and return to Step 4.
Remark 2.2.1: Each sequence of Step 4 to Step 7 is called an iteration of the simplex method. Thus each iteration gives a new table and an improved basic feasible solution.
Remark 2.2.2: An alternative optimal solution is indicated whenever there exists a nonbasic variable whose relative profit factor $\bar{c}_j$ is zero in the optimal table. Otherwise the solution is unique.
Remark 2.2.3: If all the elements in the pivot column are non-positive, the problem has an unbounded solution.
3.2.2 Properties of the Simplex Method:
The important properties of the simplex method are summarized here for convenient ready
reference.
i) The simplex method for maximizing the objective function starts at a basic feasible solution for the equivalent model and moves to an adjacent basic feasible solution that does not decrease the value of the objective function. If such a solution does not exist an optimal solution for the equivalent model has been reached. That is, if all of the coefficients of the non-basic variables in the objective function equation are greater than or equal to zero at some point, then all optimal solution for the equivalent model has been reached.
ii) If an artificial variable is in an optimal solution of the equivalent model at a non-zero level, then no feasible solution for the original model exists. On the contrary, if the optimal solution of the equivalent model does not contain an artificial variable at a non-zero level, the solution is also optimal for the original model.
iii) If all of the slack, surplus, and artificial variables are zero when an optimal solution of the equivalent model is reached, then all of the constraints in the original model are strict "equalities" for the values of the variables that optimize the objective function.
iv) If a non-basic variable has a zero coefficient in the objective function equation when an optimal solution is reached, there are multiple optimal solutions. In fact, there is an infinity of optimal solutions. The simplex method finds only one optimal solution and stops.
v) Once an artificial variable leaves the set of basic variables (the basis), it will never enter the basis again. So all calculations for that variable can be ignored in future steps.
vi) When selecting the variable to leave the current basis:
a) If two or more ratios are smallest, choose one arbitrarily.
b) If a positive ratio does not exist, the objective function in the original model is not bounded by the constraints. Thus, a finite optimal solution for the original model does not exist.
vii) If a basis has a variable at the zero level, it is called a degenerate basis.
viii) Although cycling is possible, there have never been any practical problems for which the simplex method failed to converge.
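The iteration loop of subsection 3.2.1 can be sketched in code. Below is a minimal Python implementation for problems already in canonical form (all constraints of "≤" type with non-negative right-hand sides); the function name and tableau representation are our own, and degeneracy/cycling safeguards are omitted, so this is a sketch rather than production code. It is run here on the numerical example max Z = 3x1 + 5x2 + 4x3 used later in Chapter 4:

```python
# A minimal tableau simplex for  max c.x  s.t.  A x <= b, x >= 0  with b >= 0,
# following Steps 1-7 above. Exact Fraction arithmetic throughout.
from fractions import Fraction as F

def simplex(c, A, b):
    m, n = len(A), len(c)
    # Canonical start: slack variables x_{n+1},...,x_{n+m} form the basis.
    T = [[F(A[i][j]) for j in range(n)]
         + [F(1) if i == k else F(0) for k in range(m)]
         + [F(b[i])] for i in range(m)]
    cost = [F(-cj) for cj in c] + [F(0)] * (m + 1)   # the row  z - c.x = 0
    basis = list(range(n, n + m))
    while True:
        # Step 4: entering variable = most negative entry of the cost row
        # (i.e. the most positive relative profit factor).
        piv_col = min(range(n + m), key=lambda j: cost[j])
        if cost[piv_col] >= 0:
            break                                    # all factors <= 0: optimal
        # Step 5: minimum ratio rule over positive pivot-column entries.
        ratios = [(T[i][-1] / T[i][piv_col], i)
                  for i in range(m) if T[i][piv_col] > 0]
        if not ratios:
            raise ValueError("objective unbounded")
        _, piv_row = min(ratios)
        # Step 6: pivot operation.
        p = T[piv_row][piv_col]
        T[piv_row] = [v / p for v in T[piv_row]]
        for i in range(m):
            if i != piv_row and T[i][piv_col] != 0:
                f = T[i][piv_col]
                T[i] = [a - f * r for a, r in zip(T[i], T[piv_row])]
        f = cost[piv_col]
        cost = [a - f * r for a, r in zip(cost, T[piv_row])]
        basis[piv_row] = piv_col
    x = [F(0)] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return x, sum(F(ci) * xi for ci, xi in zip(c, x))

x, z = simplex([3, 5, 4], [[1, 3, 0], [0, 2, 5], [3, 2, 4]], [8, 10, 15])
print(x, z)   # [Fraction(89, 43), Fraction(85, 43), Fraction(52, 43)] 900/43
```

Exact fractions keep the tableau entries identical to the hand computations (e.g. the 43rds appearing in the optimal table of Chapter 4), which floating-point arithmetic would obscure.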
The standard form of the linear programming problem (LP) may be either
1) in canonical form, or
2) not in canonical form.
3.2.3 The standard form of (LP) is in canonical form:
The standard linear programming problem (LP) is of the canonical form if
(LP1) Maximize $Z = cx$
Subject to $I_m x_B + N x_N = b$
where $I_m$ is the $m \times m$ identity matrix, $x_B = (x_1, x_2, \ldots, x_m)$ is the vector of basic variables, and $N = (a_{ij})$ is an $m \times (n-m)$ submatrix formed by the remaining columns of $A$.
3.2.4 The Standard Form of (LP) is Not in a Canonical Form:
If all of the constraints are of "≤" type, or can be converted to "≤" type, and all R.H.S. constants $b_i$ ($i = 1, 2, \ldots, m$) are non-negative, the canonical form can easily be obtained. Then we can form the initial basic feasible simplex table.
Some linear programming problems may not have a readily available canonical form. In these problems at least one of the constraints is of either "=" or "≥" type. In such a problem one has to find a basic feasible solution in canonical form before starting the initial simplex table. In such cases we follow the artificial variable technique.
3.3 Artificial Variable Technique:
In this technique, the linear programming problem is first converted to standard form, and then each constraint is examined for the existence of a basic variable. If none is available, a new variable is added to act as the basic variable in that constraint. These new variables are termed artificial variables. There are two methods available to solve such problems:
i) the Big-M simplex method;
ii) the two-phase simplex method.
3.3.1 The Big-M Simplex Method:
This method consists of the following basic steps:
Step 1: Express the linear programming problem (LP) in standard form.
Step 2: Add an artificial variable $w_i$ to the left-hand side of each constraint of "=" or "≥" type in the original problem. We would like to get rid of these variables and would not allow them to appear in the final solution. To do so, the artificial variables are assigned the cost $M$ in a minimization problem and the profit $-M$ in a maximization problem, with the assumption that $M$ is a very large positive number.
Step 3: Continue with the regular steps of the simplex method of subsection 3.2.1.
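Step 2's cost assignment is easy to mechanize. A small Python sketch for a maximization problem (the numeric stand-in for the symbolic M, the column-kind labels and the function name are our own assumptions):

```python
# Assign Big-M profit coefficients for a maximization problem:
# original variables keep c_j, slack/surplus get 0, artificials get -M.
M = 10**6   # a "very large positive number" standing in for the symbolic M

def big_m_costs(c, kinds):
    """kinds: 'original', 'slack', 'surplus' or 'artificial' for each column."""
    it = iter(c)
    out = []
    for kind in kinds:
        if kind == 'original':
            out.append(next(it))       # keep the original profit coefficient
        elif kind == 'artificial':
            out.append(-M)             # heavily penalized in the objective
        else:
            out.append(0)              # slack and surplus carry zero profit
    return out

costs = big_m_costs([3, 5], ['original', 'original', 'slack', 'surplus', 'artificial'])
print(costs)   # [3, 5, 0, 0, -1000000]
```

In exact or symbolic implementations M is kept as a formal symbol rather than a fixed number, which avoids choosing a magnitude "large enough" for the data.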
While making iterations using the simplex method, one of the following cases may arise:
Case I: If no artificial variable remains at a positive level and the optimality condition is satisfied, then the solution is optimal.
Case II: When the Big-M simplex method terminates with an optimal table, it is sometimes possible for one or more artificial variables to remain as basic variables at a positive level. This implies that the original problem is infeasible.
Remark 2.2.4: Remark 2.2.2 and Remark 2.2.3 are also applicable here.
3.3.2 The Two-Phase Simplex Method:
In this method the linear programming problem is solved in two phases.
Phase I: As in the Big-M simplex method, one has to add an artificial variable $w_i$ to each of the constraints of "≥" or "=" type in the original problem. Instead of the original objective function, an artificial objective function $y = \sum w_i$ is introduced and is then minimized subject to the constraints of the original problem (for a minimization problem). For a maximization problem the artificial objective function is the negative of that of the minimization problem.
Then the following cases arise (for a maximization problem):
Case 1: If max $y = -\sum w_i = 0$ and no artificial variable appears in the basis, then a basic feasible solution to the original problem has been obtained. We then move to Phase II.
Case 2: If max $y = -\sum w_i < 0$, that is, at least one of the artificial variables appears in the basis at a positive level, then the original problem has no feasible solution and the procedure terminates.
Phase II: In this phase, the basic feasible solution found at the end of Phase I is optimized with respect to the original objective function. The simplex method is once again applied to determine the optimal solution as in subsection 3.2.1.
Remark 2.2.5: The artificial objective function can always be minimized whatever the objective function of the original problem, and thus one can avoid the negative sign in the artificial objective function.
Remark 2.2.6: Remark 2.2.2 and Remark 2.2.3 are also applicable here.
Chapter 4
MORE THAN ONE BASIC VARIABLES REPLACEMENT IN SIMPLEX METHOD FOR SOLVING LINEAR PROGRAMMING PROBLEMS
4.1 Paranjape’s Two-Basic Variables Replacement Method for Solving (LP):
In this section we present the work of Paranjape, in which he studied the replacement of two basic variables by two non-basic variables at each iteration of the simplex method for solving (LP).
4.1.1 Algorithm:
Let $\hat{x}_B$ be another basic feasible solution to the (LP), where $\hat{B} = \left( \hat{b}_1, \hat{b}_2, \ldots, \hat{b}_m \right)$ is the basis in which $b_{r_1}$ and $b_{r_2}$ are replaced by $a_{u_1}$ and $a_{u_2}$ respectively, columns of $A$ but not of $B$.
The columns of $\hat{B}$ are given by
$\hat{b}_i = b_i$ for $i \ne r_1, r_2$; $\quad \hat{b}_{r_1} = a_{u_1}$; $\quad \hat{b}_{r_2} = a_{u_2}$.
Then the new basic variables can be expressed in terms of the original ones and $y_{iu_1}$ and $y_{iu_2}$,
i.e. $a_{u_1} = \sum_{i=1}^{m} y_{iu_1} b_i$
$\Rightarrow \; y_{r_1u_1} b_{r_1} + y_{r_2u_1} b_{r_2} = a_{u_1} - \sum_{i \ne r_1,r_2}^{m} y_{iu_1} b_i \qquad (4.1)$
Similarly, $\; y_{r_1u_2} b_{r_1} + y_{r_2u_2} b_{r_2} = a_{u_2} - \sum_{i \ne r_1,r_2}^{m} y_{iu_2} b_i \qquad (4.2)$
Multiplying equation (4.1) by $y_{r_2u_2}$ and (4.2) by $y_{r_2u_1}$ and subtracting (and likewise eliminating $b_{r_1}$), we have, in determinant form,

$b_{r_1} = \dfrac{1}{k} \begin{vmatrix} a_{u_1} - \sum_{i \ne r_1,r_2}^{m} y_{iu_1} b_i & y_{r_2u_1} \\ a_{u_2} - \sum_{i \ne r_1,r_2}^{m} y_{iu_2} b_i & y_{r_2u_2} \end{vmatrix}$

Similarly,

$b_{r_2} = \dfrac{1}{k} \begin{vmatrix} y_{r_1u_1} & a_{u_1} - \sum_{i \ne r_1,r_2}^{m} y_{iu_1} b_i \\ y_{r_1u_2} & a_{u_2} - \sum_{i \ne r_1,r_2}^{m} y_{iu_2} b_i \end{vmatrix}$

where $k = \begin{vmatrix} y_{r_1u_1} & y_{r_1u_2} \\ y_{r_2u_1} & y_{r_2u_2} \end{vmatrix}$. (The coefficient array of the system (4.1)-(4.2) is the transpose of this array; since a determinant is unchanged by transposition, its determinant is again $k$.)
Now $x_B = B^{-1} b \Rightarrow b = B x_B = \sum_{i=1}^{m} x_{B_i} b_i = \sum_{i \ne r_1,r_2}^{m} x_{B_i} b_i + x_{B_{r_1}} b_{r_1} + x_{B_{r_2}} b_{r_2}$

Substituting the determinant expressions for $b_{r_1}$ and $b_{r_2}$ and collecting the coefficients of $b_i$, $a_{u_1}$ and $a_{u_2}$, we get

$b = \sum_{i \ne r_1,r_2}^{m} \hat{x}_{B_i} b_i + \hat{x}_{B_{r_1}} a_{u_1} + \hat{x}_{B_{r_2}} a_{u_2} \qquad (4.3)$

where

$\hat{x}_{B_i} = x_{B_i} - \left( y_{iu_1} \hat{x}_{B_{r_1}} + y_{iu_2} \hat{x}_{B_{r_2}} \right) \qquad (4.4)$

$\hat{x}_{B_{r_1}} = \dfrac{1}{k} \begin{vmatrix} x_{B_{r_1}} & y_{r_1u_2} \\ x_{B_{r_2}} & y_{r_2u_2} \end{vmatrix} = \theta_1 \text{ (say)}, \qquad \hat{x}_{B_{r_2}} = \dfrac{1}{k} \begin{vmatrix} y_{r_1u_1} & x_{B_{r_1}} \\ y_{r_2u_1} & x_{B_{r_2}} \end{vmatrix} = \theta_2 \text{ (say)} \qquad (4.5)$

Also

$x_{B_{r_1}} = y_{r_1u_1} \hat{x}_{B_{r_1}} + y_{r_1u_2} \hat{x}_{B_{r_2}}, \qquad x_{B_{r_2}} = y_{r_2u_1} \hat{x}_{B_{r_1}} + y_{r_2u_2} \hat{x}_{B_{r_2}} \qquad (4.6)$
4.1.2 New Optimizing Value:
$\hat{z} = \sum_{i=1}^{m} \hat{c}_{B_i} \hat{x}_{B_i} = \sum_{i \ne r_1,r_2}^{m} \hat{c}_{B_i} \hat{x}_{B_i} + \hat{c}_{B_{r_1}} \hat{x}_{B_{r_1}} + \hat{c}_{B_{r_2}} \hat{x}_{B_{r_2}}$

$= \sum_{i \ne r_1,r_2}^{m} c_{B_i} \left( x_{B_i} - y_{iu_1} \hat{x}_{B_{r_1}} - y_{iu_2} \hat{x}_{B_{r_2}} \right) + c_{u_1} \hat{x}_{B_{r_1}} + c_{u_2} \hat{x}_{B_{r_2}}$

where $\hat{c}_{B_i} = c_{B_i}$ for $i \ne r_1, r_2$, $\hat{c}_{B_{r_1}} = c_{u_1}$ and $\hat{c}_{B_{r_2}} = c_{u_2}$.

Adding and subtracting the terms for $i = r_1, r_2$ and using (4.6),

$\hat{z} = \sum_{i=1}^{m} c_{B_i} x_{B_i} - \hat{x}_{B_{r_1}} \sum_{i=1}^{m} c_{B_i} y_{iu_1} - \hat{x}_{B_{r_2}} \sum_{i=1}^{m} c_{B_i} y_{iu_2} + c_{u_1} \hat{x}_{B_{r_1}} + c_{u_2} \hat{x}_{B_{r_2}}$

$\therefore \; \hat{z} = z + \left( c_{u_1} - z_{u_1} \right) \hat{x}_{B_{r_1}} + \left( c_{u_2} - z_{u_2} \right) \hat{x}_{B_{r_2}}$,

where $z_{u_1} = \sum_{i=1}^{m} c_{B_i} y_{iu_1}$ and $z_{u_2} = \sum_{i=1}^{m} c_{B_i} y_{iu_2}$.
4.1.3 Optimality Condition:
The value of the objective function will improve if $\hat{z} > z$
$\Rightarrow \; z + \left( c_{u_1} - z_{u_1} \right) \hat{x}_{B_{r_1}} + \left( c_{u_2} - z_{u_2} \right) \hat{x}_{B_{r_2}} > z$
$\Rightarrow \; \left( c_{u_1} - z_{u_1} \right) \hat{x}_{B_{r_1}} + \left( c_{u_2} - z_{u_2} \right) \hat{x}_{B_{r_2}} > 0$
Therefore we get
(a) $\hat{z} = z$ when $\hat{x}_{B_{r_1}}$ and $\hat{x}_{B_{r_2}}$ are both equal to zero;
(b) $\hat{z} > z$ if
(i) $\left( c_{u_1} - z_{u_1} \right) > 0$
(ii) $\left( c_{u_2} - z_{u_2} \right) > 0$
In general, $c_j - z_j > 0$.
4.1.4 Criterion-1 (Choice of the entering variables into the basis):
(i) Choose the $u_1$th column of $A$ for which $c_{u_1} - z_{u_1}$ is the greatest positive of $c_j - z_j$, $j = 1, 2, \ldots, n$.
(ii) Choose the $u_2$th column of $A$ for which $c_{u_2} - z_{u_2}$ is the greatest positive of $c_j - z_j$, $j = 1, 2, \ldots, n$, $j \ne u_1$.
4.1.5 Criterion-2 (Choice of the outgoing variables from the basis):
Since the coefficient of $b_i$ in the expression of $b$ should always be non-negative, the conditions on the choice of the $r_1$th and $r_2$th columns of $B$ are
(i) $\hat{x}_{B_i} \ge 0$
(ii) $\hat{x}_{B_{r_1}} \ge 0$
(iii) $\hat{x}_{B_{r_2}} \ge 0$
The above inequalities lead to the conditions for the selection of $x_{B_{r_1}}$ and $x_{B_{r_2}}$ respectively as
(i) Choose $x_{B_{r_1}}$ for which $\dfrac{x_{B_{r_1}}}{y_{r_1u_1}} = \min_i \left\{ \dfrac{x_{B_i}}{y_{iu_1}} : y_{iu_1} > 0 \right\}$
(ii) Choose $x_{B_{r_2}}$ for which $\dfrac{x_{B_{r_2}}}{y_{r_2u_2}} = \min_i \left\{ \dfrac{x_{B_i}}{y_{iu_2}} : y_{iu_2} > 0 \right\}$
Remark 4.1.1: All the elements in the expression of $k$ are to be non-negative. This difficulty can be overcome as follows: after choosing the $u_1$th and $u_2$th columns and forming $k$, if one sees that not all the elements of $k$ are non-negative, then choose the $v'$th column of $A$ in lieu of the $u_2$th column by the alternative criterion: choose as the $v'$th that column of $A$ for which $c_{v'} - z_{v'}$ is the greatest positive $c_j - z_j$; $j = 1, 2, \ldots, n$; $j \ne u_1, u_2$.
Remark 4.1.2: If the two non-basic variables happen to replace the same basic variable, i.e. $\min_i \left\{ \dfrac{x_{B_i}}{y_{iu_1}} : y_{iu_1} > 0 \right\}$ and $\min_i \left\{ \dfrac{x_{B_i}}{y_{iu_2}} : y_{iu_2} > 0 \right\}$ occur for the same value of $i$, then one can overcome this difficulty by the same procedure as in Remark 4.1.1.
Remark 4.1.3: In the above discussion we expressed the mathematical expressions in determinant form, which is easily accessible to readers.
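Equation (4.5) is just Cramer's rule on the 2×2 pivot block, and (4.4) then updates the remaining basic variables. A minimal Python sketch with exact fractions (the function names and the small test data are our own, not from the thesis):

```python
# Two-basic-variable replacement update (eqs. 4.4-4.5): given the pivot-column
# entries y[i] = [y_iu1, y_iu2] and the current basic values x_B, compute the
# new basic values via Cramer's rule on the 2x2 block of the leaving rows.
from fractions import Fraction as F

def det2(a, b, c, d):
    return a * d - b * c

def two_var_update(x_B, y, r1, r2):
    """x_B: current basic values; y: per-row [y_iu1, y_iu2]; r1, r2: leaving rows."""
    k = det2(y[r1][0], y[r1][1], y[r2][0], y[r2][1])
    # theta1 replaces the u1-column by (x_Br1, x_Br2); theta2 the u2-column.
    theta1 = det2(x_B[r1], y[r1][1], x_B[r2], y[r2][1]) / k
    theta2 = det2(y[r1][0], x_B[r1], y[r2][0], x_B[r2]) / k
    new = {}
    for i in range(len(x_B)):
        if i == r1:
            new[i] = theta1
        elif i == r2:
            new[i] = theta2
        else:   # eq. (4.4) for the rows that stay in the basis
            new[i] = x_B[i] - (y[i][0] * theta1 + y[i][1] * theta2)
    return new

new = two_var_update([F(4), F(6), F(5)], [[F(2), F(0)], [F(0), F(2)], [F(1), F(1)]], 0, 1)
print(new)   # {0: Fraction(2, 1), 1: Fraction(3, 1), 2: Fraction(0, 1)}
```

Equation (4.6) provides a built-in check: multiplying the new values back through the pivot block must reproduce the old basic values.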
4.2 Agrawal and Verma’s Three Basic Variables Replacement Method for Solving (LP):
In this section we present the work of Agrawal and Verma, in which they studied the replacement of three basic variables by three non-basic variables at each iteration of the simplex method for solving (LP).
4.2.1 Algorithm:
Let $\hat{x}_B$ be another basic feasible solution, where $\hat{B} = \left( \hat{b}_1, \hat{b}_2, \ldots, \hat{b}_m \right)$ is the basis in which $b_{r_1}, b_{r_2}$ and $b_{r_3}$ are replaced by $a_{u_1}, a_{u_2}$ and $a_{u_3}$ respectively, columns of $A$ but not of $B$.
The columns of $\hat{B}$ are given by
$\hat{b}_i = b_i$ for $i \ne r_1, r_2, r_3$; $\quad \hat{b}_{r_1} = a_{u_1}$; $\quad \hat{b}_{r_2} = a_{u_2}$; $\quad \hat{b}_{r_3} = a_{u_3}$.
Then the new basic variables can be expressed in terms of the original ones and $y_{iu_1}, y_{iu_2}$ and $y_{iu_3}$,
i.e. $a_{u_1} = \sum_{i=1}^{m} y_{iu_1} b_i$
$\Rightarrow \; y_{r_1u_1} b_{r_1} + y_{r_2u_1} b_{r_2} + y_{r_3u_1} b_{r_3} = a_{u_1} - \sum_{i \ne r_1,r_2,r_3}^{m} y_{iu_1} b_i \qquad (4.7)$
Similarly, $\; y_{r_1u_2} b_{r_1} + y_{r_2u_2} b_{r_2} + y_{r_3u_2} b_{r_3} = a_{u_2} - \sum_{i \ne r_1,r_2,r_3}^{m} y_{iu_2} b_i \qquad (4.8)$
and $\; y_{r_1u_3} b_{r_1} + y_{r_2u_3} b_{r_2} + y_{r_3u_3} b_{r_3} = a_{u_3} - \sum_{i \ne r_1,r_2,r_3}^{m} y_{iu_3} b_i \qquad (4.9)$
Solving the above three equations for $b_{r_1}, b_{r_2}$ and $b_{r_3}$, and writing $A_j = a_{u_j} - \sum_{i \ne r_1,r_2,r_3}^{m} y_{iu_j} b_i$ for $j = 1, 2, 3$, we have

$b_{r_1} = \dfrac{1}{k} \begin{vmatrix} A_1 & y_{r_2u_1} & y_{r_3u_1} \\ A_2 & y_{r_2u_2} & y_{r_3u_2} \\ A_3 & y_{r_2u_3} & y_{r_3u_3} \end{vmatrix}, \quad b_{r_2} = \dfrac{1}{k} \begin{vmatrix} y_{r_1u_1} & A_1 & y_{r_3u_1} \\ y_{r_1u_2} & A_2 & y_{r_3u_2} \\ y_{r_1u_3} & A_3 & y_{r_3u_3} \end{vmatrix}, \quad b_{r_3} = \dfrac{1}{k} \begin{vmatrix} y_{r_1u_1} & y_{r_2u_1} & A_1 \\ y_{r_1u_2} & y_{r_2u_2} & A_2 \\ y_{r_1u_3} & y_{r_2u_3} & A_3 \end{vmatrix}$

where $k = \begin{vmatrix} y_{r_1u_1} & y_{r_1u_2} & y_{r_1u_3} \\ y_{r_2u_1} & y_{r_2u_2} & y_{r_2u_3} \\ y_{r_3u_1} & y_{r_3u_2} & y_{r_3u_3} \end{vmatrix}$. (The coefficient array of the system (4.7)-(4.9) is the transpose of this array, whose determinant is again $k$.)
Now $x_B = B^{-1} b \Rightarrow b = B x_B = \sum_{i=1}^{m} x_{B_i} b_i = \sum_{i \ne r_1,r_2,r_3}^{m} x_{B_i} b_i + x_{B_{r_1}} b_{r_1} + x_{B_{r_2}} b_{r_2} + x_{B_{r_3}} b_{r_3}$

Substituting the determinant expressions for $b_{r_1}, b_{r_2}$ and $b_{r_3}$ and collecting the coefficients of $b_i$, $a_{u_1}$, $a_{u_2}$ and $a_{u_3}$, we get

$b = \sum_{i \ne r_1,r_2,r_3}^{m} \hat{x}_{B_i} b_i + \hat{x}_{B_{r_1}} a_{u_1} + \hat{x}_{B_{r_2}} a_{u_2} + \hat{x}_{B_{r_3}} a_{u_3}$
where

$\hat{x}_{B_i} = x_{B_i} - \left( y_{iu_1} \hat{x}_{B_{r_1}} + y_{iu_2} \hat{x}_{B_{r_2}} + y_{iu_3} \hat{x}_{B_{r_3}} \right) \qquad (4.10)$

$\hat{x}_{B_{r_1}} = \dfrac{1}{k} \begin{vmatrix} x_{B_{r_1}} & y_{r_1u_2} & y_{r_1u_3} \\ x_{B_{r_2}} & y_{r_2u_2} & y_{r_2u_3} \\ x_{B_{r_3}} & y_{r_3u_2} & y_{r_3u_3} \end{vmatrix} = \theta_1 \text{ (say)}$

$\hat{x}_{B_{r_2}} = \dfrac{1}{k} \begin{vmatrix} y_{r_1u_1} & x_{B_{r_1}} & y_{r_1u_3} \\ y_{r_2u_1} & x_{B_{r_2}} & y_{r_2u_3} \\ y_{r_3u_1} & x_{B_{r_3}} & y_{r_3u_3} \end{vmatrix} = \theta_2 \text{ (say)}$

$\hat{x}_{B_{r_3}} = \dfrac{1}{k} \begin{vmatrix} y_{r_1u_1} & y_{r_1u_2} & x_{B_{r_1}} \\ y_{r_2u_1} & y_{r_2u_2} & x_{B_{r_2}} \\ y_{r_3u_1} & y_{r_3u_2} & x_{B_{r_3}} \end{vmatrix} = \theta_3 \text{ (say)} \qquad (4.11)$

Also,

$x_{B_{r_1}} = y_{r_1u_1} \hat{x}_{B_{r_1}} + y_{r_1u_2} \hat{x}_{B_{r_2}} + y_{r_1u_3} \hat{x}_{B_{r_3}}$
$x_{B_{r_2}} = y_{r_2u_1} \hat{x}_{B_{r_1}} + y_{r_2u_2} \hat{x}_{B_{r_2}} + y_{r_2u_3} \hat{x}_{B_{r_3}}$
$x_{B_{r_3}} = y_{r_3u_1} \hat{x}_{B_{r_1}} + y_{r_3u_2} \hat{x}_{B_{r_2}} + y_{r_3u_3} \hat{x}_{B_{r_3}} \qquad (4.12)$
4.2.2 New Optimizing Value:
Substituting the new values of the variables in the objective function, we get the new objective function as follows:

$\hat{z} = \sum_{i=1}^{m} \hat{c}_{B_i} \hat{x}_{B_i} = \sum_{i \ne r_1,r_2,r_3}^{m} \hat{c}_{B_i} \hat{x}_{B_i} + \hat{c}_{B_{r_1}} \hat{x}_{B_{r_1}} + \hat{c}_{B_{r_2}} \hat{x}_{B_{r_2}} + \hat{c}_{B_{r_3}} \hat{x}_{B_{r_3}}$

$= \sum_{i \ne r_1,r_2,r_3}^{m} c_{B_i} \left( x_{B_i} - y_{iu_1} \hat{x}_{B_{r_1}} - y_{iu_2} \hat{x}_{B_{r_2}} - y_{iu_3} \hat{x}_{B_{r_3}} \right) + c_{u_1} \hat{x}_{B_{r_1}} + c_{u_2} \hat{x}_{B_{r_2}} + c_{u_3} \hat{x}_{B_{r_3}}$

where $\hat{c}_{B_i} = c_{B_i}$ for $i \ne r_1, r_2, r_3$, $\hat{c}_{B_{r_1}} = c_{u_1}$, $\hat{c}_{B_{r_2}} = c_{u_2}$ and $\hat{c}_{B_{r_3}} = c_{u_3}$.

Adding and subtracting the terms for $i = r_1, r_2, r_3$ and using (4.12),

$\hat{z} = \sum_{i=1}^{m} c_{B_i} x_{B_i} - \hat{x}_{B_{r_1}} \sum_{i=1}^{m} c_{B_i} y_{iu_1} - \hat{x}_{B_{r_2}} \sum_{i=1}^{m} c_{B_i} y_{iu_2} - \hat{x}_{B_{r_3}} \sum_{i=1}^{m} c_{B_i} y_{iu_3} + c_{u_1} \hat{x}_{B_{r_1}} + c_{u_2} \hat{x}_{B_{r_2}} + c_{u_3} \hat{x}_{B_{r_3}}$

$\therefore \; \hat{z} = z + \left( c_{u_1} - z_{u_1} \right) \hat{x}_{B_{r_1}} + \left( c_{u_2} - z_{u_2} \right) \hat{x}_{B_{r_2}} + \left( c_{u_3} - z_{u_3} \right) \hat{x}_{B_{r_3}}$
4.2.3 Optimality Condition:
The value of the objective function will improve if $\hat{z} > z$
$\Rightarrow \; z + (c_{u_1} - z_{u_1}) \hat{x}_{B_{r_1}} + (c_{u_2} - z_{u_2}) \hat{x}_{B_{r_2}} + (c_{u_3} - z_{u_3}) \hat{x}_{B_{r_3}} > z$
$\Rightarrow \; (c_{u_1} - z_{u_1}) \hat{x}_{B_{r_1}} + (c_{u_2} - z_{u_2}) \hat{x}_{B_{r_2}} + (c_{u_3} - z_{u_3}) \hat{x}_{B_{r_3}} > 0$
But for the non-degenerate case $\hat{x}_{B_{r_1}}, \hat{x}_{B_{r_2}}, \hat{x}_{B_{r_3}} > 0$. Hence we must have
(i) $(c_{u_1} - z_{u_1}) > 0$
(ii) $(c_{u_2} - z_{u_2}) > 0$
(iii) $(c_{u_3} - z_{u_3}) > 0$
In general, $c_j - z_j > 0$.
4.2.4 Criterion-1 (Choice of the entering variables into the basis):
(i) Choose the $u_1$th column of $A$ for which $c_{u_1} - z_{u_1}$ is the greatest positive of $c_j - z_j$, $j = 1, 2, \ldots, n$.
(ii) Choose the $u_2$th column of $A$ for which $c_{u_2} - z_{u_2}$ is the greatest positive of $c_j - z_j$, $j = 1, 2, \ldots, n$, $j \ne u_1$.
(iii) Choose the $u_3$th column of $A$ for which $c_{u_3} - z_{u_3}$ is the greatest positive of $c_j - z_j$, $j = 1, 2, \ldots, n$, $j \ne u_1, u_2$.
4.2.5 Criterion-2 (Choice of the outgoing variables from the basis):
(i) Choose $x_{B_{r_1}}$ for which $\dfrac{x_{B_{r_1}}}{y_{r_1u_1}} = \min_i \left\{ \dfrac{x_{B_i}}{y_{iu_1}} : y_{iu_1} > 0 \right\}$
(ii) Choose $x_{B_{r_2}}$ for which $\dfrac{x_{B_{r_2}}}{y_{r_2u_2}} = \min_i \left\{ \dfrac{x_{B_i}}{y_{iu_2}} : y_{iu_2} > 0 \right\}$
(iii) Choose $x_{B_{r_3}}$ for which $\dfrac{x_{B_{r_3}}}{y_{r_3u_3}} = \min_i \left\{ \dfrac{x_{B_i}}{y_{iu_3}} : y_{iu_3} > 0 \right\}$
Remark 4.2.1: Remark 4.1.1-Remark 4.1.3 are also applicable here.
Remark 4.2.2: In the calculation of $\hat{x}_{B_i}$ $(i = 1, 2, \ldots, m)$ we replaced the corresponding column of $k$ by $x_{B_i}$ (the column $b$), instead of only the first column as was done by Agrawal & Verma [I], and thus avoid an unnecessary negative sign, simplifying the notation throughout the chapter and henceforth.
Numerical example:
Maximize $Z = 3x_1 + 5x_2 + 4x_3$
Subject to
$x_1 + 3x_2 \le 8$
$2x_2 + 5x_3 \le 10$
$3x_1 + 2x_2 + 4x_3 \le 15$
$x_1, x_2, x_3 \ge 0$
Adding slack variables to the constraints we get the initial table as follows:

Table-1 (Initial Table; pivot elements shown in parentheses)

  c_B   Basis |  x_1    x_2    x_3    x_4   x_5   x_6 | Constant b
        c_j → |   3      5      4      0     0     0  |
   0    x_4   |   1     (3)     0      1     0     0  |     8
   0    x_5   |   0      2     (5)     0     1     0  |    10
   0    x_6   |  (3)     2      4      0     0     1  |    15
  c̄_j = c_j - z_j |   3      5      4      0     0     0  |   Z = 0
  entering    |  u_3    u_1    u_2
Table-2 (Optimal Table)

  c_B   Basis |  x_1   x_2   x_3    x_4      x_5      x_6   | Constant b
        c_j → |   3     5     4      0        0        0    |
   5    x_2   |   0     1     0    15/43    4/43    -5/43   |   85/43
   4    x_3   |   0     0     1    -6/43    7/43     2/43   |   52/43
   3    x_1   |   1     0     0    -2/43  -12/43    15/43   |   89/43
  c̄_j = c_j - z_j |   0     0     0   -45/43  -12/43   -28/43  |  Z = 900/43

Since in Table-2 all $\bar{c}_j = c_j - z_j \le 0$, this table gives the optimal solution to the given linear programming problem. Therefore the optimal solution is $x_1 = 89/43$, $x_2 = 85/43$, $x_3 = 52/43$, with $Z_{\max} = 900/43$.
Calculations

Step 1 (choice of the entering variables into the basis): Since in Table-1 $c_j - z_j$ is positive for $j = 1, 2, 3$, we consider $x_1, x_2, x_3$ to enter into the basis.

Step 2 (choice of the outgoing variables from the basis): Applying the minimum ratio rule (as in Criterion-2) we see that $x_4, x_5$ and $x_6$ are going to leave the basis: $x_1$ replaces $x_6$, $x_2$ replaces $x_4$ and $x_3$ replaces $x_5$. The entries $y_{ij}$ at the intersections of the entering and leaving variables are called pivot elements; they are shown in parentheses in Table-1.

Step 3 (formation of $k$): Form $k$ with the entries $y_{r_iu_j}$ by the following formula:

$k = \begin{vmatrix} y_{r_1u_1} & y_{r_1u_2} & y_{r_1u_3} \\ y_{r_2u_1} & y_{r_2u_2} & y_{r_2u_3} \\ y_{r_3u_1} & y_{r_3u_2} & y_{r_3u_3} \end{vmatrix} = \begin{vmatrix} 3 & 0 & 1 \\ 2 & 5 & 0 \\ 2 & 4 & 3 \end{vmatrix} = 43$

Step 4 (formation of the new table): We first calculate the new basic variables, i.e. the components of the constant vector $b$:

$x_2 = \hat{x}_{B_{r_1}} = \dfrac{1}{43} \begin{vmatrix} 8 & 0 & 1 \\ 10 & 5 & 0 \\ 15 & 4 & 3 \end{vmatrix} = \dfrac{85}{43}, \quad x_3 = \hat{x}_{B_{r_2}} = \dfrac{1}{43} \begin{vmatrix} 3 & 8 & 1 \\ 2 & 10 & 0 \\ 2 & 15 & 3 \end{vmatrix} = \dfrac{52}{43}, \quad x_1 = \hat{x}_{B_{r_3}} = \dfrac{1}{43} \begin{vmatrix} 3 & 0 & 8 \\ 2 & 5 & 10 \\ 2 & 4 & 15 \end{vmatrix} = \dfrac{89}{43}$

We now calculate the first column of Table-2, i.e. $\hat{y}_{i1}$, $i = 1, 2, 3$. Consider the part of the initial table formed by the pivot columns $x_2, x_3, x_1$,

$\begin{pmatrix} 3 & 0 & 1 \\ 2 & 5 & 0 \\ 2 & 4 & 3 \end{pmatrix}$,

together with the $x_1$ column of Table-1, $(1, 0, 3)^{\mathsf T}$. Then

$\hat{y}_{11} = \dfrac{1}{43} \begin{vmatrix} 1 & 0 & 1 \\ 0 & 5 & 0 \\ 3 & 4 & 3 \end{vmatrix} = 0, \quad \hat{y}_{21} = \dfrac{1}{43} \begin{vmatrix} 3 & 1 & 1 \\ 2 & 0 & 0 \\ 2 & 3 & 3 \end{vmatrix} = 0, \quad \hat{y}_{31} = \dfrac{1}{43} \begin{vmatrix} 3 & 0 & 1 \\ 2 & 5 & 0 \\ 2 & 4 & 3 \end{vmatrix} = 1$

Note: To compute $\hat{y}_{11}$ we replace the first column of $k$ by $(1, 0, 3)^{\mathsf T}$, since the pivot element 3 of the first outgoing row lies in the first column of $k$; the second and third columns are replaced in turn for $\hat{y}_{21}$ and $\hat{y}_{31}$.

Similarly we calculate the other columns of Table-2.

Remark 4.3.1: If $m >$ (number of replacement variables), then the relations
$\hat{x}_{B_i} = x_{B_i} - \left( y_{iu_1} \hat{x}_{B_{r_1}} + y_{iu_2} \hat{x}_{B_{r_2}} + y_{iu_3} \hat{x}_{B_{r_3}} \right)$
$\hat{y}_{ij} = y_{ij} - \left( y_{iu_1} \hat{y}_{r_1j} + y_{iu_2} \hat{y}_{r_2j} + y_{iu_3} \hat{y}_{r_3j} \right)$
are used. Since $m = 3$ in the above example, these relations are not used here. Similarly, Remark 4.1.1 and Remark 4.1.2 are not necessary for the above example.
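The determinant arithmetic above is easy to double-check mechanically. A small Python sketch with exact fractions (the 3×3 determinant helper and column-replacement routine are our own):

```python
# Verify the worked example: k = 43, the new basic values 85/43, 52/43, 89/43,
# and the new x1 column (0, 0, 1), all by replacing columns of k.
from fractions import Fraction as F

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def replace_col(M, j, col):
    return [row[:j] + [col[i]] + row[j + 1:] for i, row in enumerate(M)]

K = [[3, 0, 1], [2, 5, 0], [2, 4, 3]]   # pivot-column entries y_{r_i u_j}
b = [8, 10, 15]                          # constant column of Table-1
k = det3(K)
print(k)   # 43

new_values = [F(det3(replace_col(K, j, b)), k) for j in range(3)]
print(new_values)   # x2, x3, x1 = [Fraction(85, 43), Fraction(52, 43), Fraction(89, 43)]

x1_col = [1, 0, 3]                       # x1 column of Table-1
new_col = [F(det3(replace_col(K, j, x1_col)), k) for j in range(3)]
print(new_col)      # [Fraction(0, 1), Fraction(0, 1), Fraction(1, 1)]
```

The computed column (0, 0, 1) matches the x1 column of Table-2, confirming the column-replacement rule used in Step 4.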
Chapter 5
GENERALIZATION OF SIMPLEX METHOD FOR SOLVING LINEAR PROGRAMMING PROBLEMS
5.1 P-Basic Variables Replacement Method for Solving (LP):
In this section, we generalize the simplex method from the replacement of one basic variable by a non-basic variable to the replacement of more than one ($p$, where $p \ge 1$) basic variables by non-basic variables at each iteration of the simplex method for solving (LP).
5.1.1 Algorithm:
Let $\hat{x}_B$ be another basic feasible solution, where $\hat{B} = \left( \hat{b}_1, \hat{b}_2, \ldots, \hat{b}_m \right)$ is the basis in which $b_{r_1}, b_{r_2}, \ldots, b_{r_p}$ are replaced by $a_{u_1}, a_{u_2}, \ldots, a_{u_p}$ respectively, columns of $A$ but not of $B$.
The columns of $\hat{B}$ are given by
$\hat{b}_i = b_i$ for $i \ne r_1, r_2, \ldots, r_p$; $\quad \hat{b}_{r_1} = a_{u_1}$, $\hat{b}_{r_2} = a_{u_2}$, $\ldots$, $\hat{b}_{r_p} = a_{u_p}$.
Then the new basic variables can be expressed in terms of the original ones and $y_{iu_1}, y_{iu_2}, \ldots, y_{iu_p}$,
i.e. $a_{u_1} = \sum_{i=1}^{m} y_{iu_1} b_i$
$\Rightarrow \; y_{r_1u_1} b_{r_1} + y_{r_2u_1} b_{r_2} + \cdots + y_{r_pu_1} b_{r_p} = a_{u_1} - \sum_{i \ne r_1,\ldots,r_p}^{m} y_{iu_1} b_i \qquad (5.1)$
Similarly, $\; y_{r_1u_2} b_{r_1} + y_{r_2u_2} b_{r_2} + \cdots + y_{r_pu_2} b_{r_p} = a_{u_2} - \sum_{i \ne r_1,\ldots,r_p}^{m} y_{iu_2} b_i \qquad (5.2)$
$y_{r_1u_3} b_{r_1} + y_{r_2u_3} b_{r_2} + \cdots + y_{r_pu_3} b_{r_p} = a_{u_3} - \sum_{i \ne r_1,\ldots,r_p}^{m} y_{iu_3} b_i \qquad (5.3)$
and, in general, $\; y_{r_1u_p} b_{r_1} + y_{r_2u_p} b_{r_2} + \cdots + y_{r_pu_p} b_{r_p} = a_{u_p} - \sum_{i \ne r_1,\ldots,r_p}^{m} y_{iu_p} b_i \qquad (5.4)$
Solving the above $p$ equations for $b_{r_1}, b_{r_2}, \ldots, b_{r_p}$, and writing $A_j = a_{u_j} - \sum_{i \ne r_1,\ldots,r_p}^{m} y_{iu_j} b_i$ for $j = 1, 2, \ldots, p$, we have

$b_{r_1} = \dfrac{1}{k} \begin{vmatrix} A_1 & y_{r_2u_1} & \cdots & y_{r_pu_1} \\ A_2 & y_{r_2u_2} & \cdots & y_{r_pu_2} \\ \vdots & \vdots & & \vdots \\ A_p & y_{r_2u_p} & \cdots & y_{r_pu_p} \end{vmatrix}, \qquad b_{r_2} = \dfrac{1}{k} \begin{vmatrix} y_{r_1u_1} & A_1 & \cdots & y_{r_pu_1} \\ y_{r_1u_2} & A_2 & \cdots & y_{r_pu_2} \\ \vdots & \vdots & & \vdots \\ y_{r_1u_p} & A_p & \cdots & y_{r_pu_p} \end{vmatrix}$

and similarly up to

$b_{r_p} = \dfrac{1}{k} \begin{vmatrix} y_{r_1u_1} & \cdots & y_{r_{p-1}u_1} & A_1 \\ y_{r_1u_2} & \cdots & y_{r_{p-1}u_2} & A_2 \\ \vdots & & \vdots & \vdots \\ y_{r_1u_p} & \cdots & y_{r_{p-1}u_p} & A_p \end{vmatrix}$

where $k = \begin{vmatrix} y_{r_1u_1} & y_{r_1u_2} & \cdots & y_{r_1u_p} \\ y_{r_2u_1} & y_{r_2u_2} & \cdots & y_{r_2u_p} \\ \vdots & \vdots & & \vdots \\ y_{r_pu_1} & y_{r_pu_2} & \cdots & y_{r_pu_p} \end{vmatrix}$. (The coefficient array of the system (5.1)-(5.4) is the transpose of this array, whose determinant is again $k$.)
Now $x_B = B^{-1} b \Rightarrow b = B x_B = \sum_{i=1}^{m} x_{B_i} b_i = \sum_{i \ne r_1,\ldots,r_p}^{m} x_{B_i} b_i + x_{B_{r_1}} b_{r_1} + x_{B_{r_2}} b_{r_2} + \cdots + x_{B_{r_p}} b_{r_p}$

Substituting the determinant expressions for $b_{r_1}, \ldots, b_{r_p}$ and collecting the coefficients of $b_i$ and $a_{u_1}, \ldots, a_{u_p}$, we get

$b = \sum_{i \ne r_1,\ldots,r_p}^{m} \hat{x}_{B_i} b_i + \hat{x}_{B_{r_1}} a_{u_1} + \hat{x}_{B_{r_2}} a_{u_2} + \cdots + \hat{x}_{B_{r_p}} a_{u_p}$

where

$\hat{x}_{B_i} = x_{B_i} - \left( y_{iu_1} \hat{x}_{B_{r_1}} + y_{iu_2} \hat{x}_{B_{r_2}} + \cdots + y_{iu_p} \hat{x}_{B_{r_p}} \right) \qquad (5.5)$
$\hat{x}_{B_{r_1}} = \dfrac{1}{k} \begin{vmatrix} x_{B_{r_1}} & y_{r_1u_2} & \cdots & y_{r_1u_p} \\ x_{B_{r_2}} & y_{r_2u_2} & \cdots & y_{r_2u_p} \\ \vdots & \vdots & & \vdots \\ x_{B_{r_p}} & y_{r_pu_2} & \cdots & y_{r_pu_p} \end{vmatrix} = \theta_1 \text{ (say)}$

and in general $\hat{x}_{B_{r_j}} = \theta_j$ (say) $= \dfrac{1}{k} \times$ (the determinant obtained from $k$ by replacing its $j$th column by the column $\left( x_{B_{r_1}}, x_{B_{r_2}}, \ldots, x_{B_{r_p}} \right)^{\mathsf T}$), $\; j = 1, 2, \ldots, p \qquad (5.6)$

Also,

$x_{B_{r_i}} = y_{r_iu_1} \hat{x}_{B_{r_1}} + y_{r_iu_2} \hat{x}_{B_{r_2}} + \cdots + y_{r_iu_p} \hat{x}_{B_{r_p}}, \quad i = 1, 2, \ldots, p \qquad (5.7)$
5.1.2 New Optimizing Value:
Substituting the new values of the variables in the objective function, we get the new objective function as follows:

$\hat{z} = \sum_{i=1}^{m} \hat{c}_{B_i} \hat{x}_{B_i} = \sum_{i \ne r_1,\ldots,r_p}^{m} \hat{c}_{B_i} \hat{x}_{B_i} + \hat{c}_{B_{r_1}} \hat{x}_{B_{r_1}} + \hat{c}_{B_{r_2}} \hat{x}_{B_{r_2}} + \cdots + \hat{c}_{B_{r_p}} \hat{x}_{B_{r_p}}$

$= \sum_{i \ne r_1,\ldots,r_p}^{m} c_{B_i} \left( x_{B_i} - y_{iu_1} \hat{x}_{B_{r_1}} - \cdots - y_{iu_p} \hat{x}_{B_{r_p}} \right) + c_{u_1} \hat{x}_{B_{r_1}} + \cdots + c_{u_p} \hat{x}_{B_{r_p}}$

where $\hat{c}_{B_i} = c_{B_i}$ for $i \ne r_1, \ldots, r_p$ and $\hat{c}_{B_{r_j}} = c_{u_j}$ for $j = 1, 2, \ldots, p$.

Adding and subtracting the terms for $i = r_1, \ldots, r_p$ and using (5.7),

$\hat{z} = \sum_{i=1}^{m} c_{B_i} x_{B_i} - \hat{x}_{B_{r_1}} \sum_{i=1}^{m} c_{B_i} y_{iu_1} - \cdots - \hat{x}_{B_{r_p}} \sum_{i=1}^{m} c_{B_i} y_{iu_p} + c_{u_1} \hat{x}_{B_{r_1}} + \cdots + c_{u_p} \hat{x}_{B_{r_p}}$

$\therefore \; \hat{z} = z + \left( c_{u_1} - z_{u_1} \right) \hat{x}_{B_{r_1}} + \left( c_{u_2} - z_{u_2} \right) \hat{x}_{B_{r_2}} + \cdots + \left( c_{u_p} - z_{u_p} \right) \hat{x}_{B_{r_p}}$
5.1.3 Optimality Condition:
The value of the objective function will improve if zz >∧
( ) ( ) ( ) ( ) zxzcxzcxzcxzczprpprrr BuuBuuBuuBuu >−++−+−+−+=>
∧∧∧∧
333222111
( ) ( ) ( ) ( ) 0333222111
>−++−+−+−=>∧∧∧∧
prpprrr BuuBuuBuuBuu xzcxzcxzcxzc
But for non-degenerate case 0,, ,,321
>∧∧∧∧
prrrr BBBB xxxx . Hence we must have
(i) ( ) 011
>− uu zc
(ii) ( ) 0
22>− uu zc
(iii) ( ) 0
33>− uu zc
Similarly (iv) ( ) 0>−
pp uu zc
In general cj-zj
5.1.4 Criterion-1: (Choices of the entering variables into the basis):
>0
(i) Choose the $u_1$th column of A for which $c_{u_1}-z_{u_1}$ is the greatest positive of $c_j-z_j$, $j=1,2,\ldots,n$.

(ii) Choose the $u_2$th column of A for which $c_{u_2}-z_{u_2}$ is the greatest positive of $c_j-z_j$, $j=1,2,\ldots,n$, $j\neq u_1$.

(iii) Choose the $u_3$th column of A for which $c_{u_3}-z_{u_3}$ is the greatest positive of $c_j-z_j$, $j=1,2,\ldots,n$, $j\neq u_1, u_2$.

Similarly (iv) choose the $u_p$th column of A for which $c_{u_p}-z_{u_p}$ is the greatest positive of $c_j-z_j$, $j=1,2,\ldots,n$, $j\neq u_1, u_2, u_3, \ldots, u_{p-1}$.
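Criterion-1 amounts to picking the p columns with the largest positive reduced costs. A minimal Python sketch (illustrative only; the function name is not from the thesis):

```python
# Criterion-1 as code: pick the p columns with the largest positive
# reduced costs c_j - z_j, excluding columns that are not positive.
def entering_columns(cbar, p):
    positive = [j for j, c in enumerate(cbar) if c > 0]
    return sorted(positive, key=lambda j: -cbar[j])[:p]

# Initial reduced costs of Example 6.1.1: (-2, 5, 3, 0, 0, 0, 0).
print(entering_columns([-2, 5, 3, 0, 0, 0, 0], p=2))   # [1, 2], i.e. x2 and x3
```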
5.1.5 Criterion-2: (Choices of the outgoing variables from the basis):

(i) Choose $x_{B_{r_1}}$ for which
$$\frac{x_{B_{r_1}}}{y_{r_1u_1}} = \min_i\left\{\frac{x_{B_i}}{y_{iu_1}} : y_{iu_1} > 0\right\}$$

(ii) Choose $x_{B_{r_2}}$ for which
$$\frac{x_{B_{r_2}}}{y_{r_2u_2}} = \min_i\left\{\frac{x_{B_i}}{y_{iu_2}} : y_{iu_2} > 0\right\}$$

(iii) Choose $x_{B_{r_3}}$ for which
$$\frac{x_{B_{r_3}}}{y_{r_3u_3}} = \min_i\left\{\frac{x_{B_i}}{y_{iu_3}} : y_{iu_3} > 0\right\}$$

Similarly

(iv) Choose $x_{B_{r_p}}$ for which
$$\frac{x_{B_{r_p}}}{y_{r_pu_p}} = \min_i\left\{\frac{x_{B_i}}{y_{iu_p}} : y_{iu_p} > 0\right\}$$
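Criterion-2 is the usual minimum-ratio test applied to each entering column. A short Python sketch (illustrative; the function name is an assumption, not the thesis's code):

```python
# Criterion-2 as code: for an entering column u, the leaving row is the one
# minimizing x_B_i / y_iu over rows with y_iu > 0 (the minimum ratio test).
def leaving_row(xB, column):
    ratios = [(x / y, i) for i, (x, y) in enumerate(zip(xB, column)) if y > 0]
    if not ratios:
        return None            # no positive entry: unbounded in this direction
    return min(ratios)[1]

# Example 6.1.1, entering column x2 with y = (4, 2, -3, 1) and x_B = (8, 7, 2, 4):
print(leaving_row([8, 7, 2, 4], [4, 2, -3, 1]))   # 0, so x4 leaves the basis
```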
Remark 5.1.1: Remark 4.1.1-Remark 4.1.3 are also applicable here.
Remark 5.1.2: In the calculation of $\hat{x}_{B_i}\ (i = 1, 2, \ldots, m)$ we replaced the corresponding column of K by $x_{B_i}$ (column b), instead of only the first column as was done by Agrawal & Verma, and thus avoid an unnecessary negative sign, thereby simplifying the notation throughout the chapter and henceforth.
5.2 The Combined Algorithm:
In this section, we develop our combined algorithm for solving LP problems.
The algorithmic steps are presented below.
Step 1: Define the types of the constraints and express the problem in its standard
form.
Step 2: Start with an initial feasible solution in canonical form and set up initial table.
Step 3: Use the inner product rule to find the relative profit factors c̄j as follows:
c̄j = cj − zj = cj − (inner product of cB and the column corresponding to xj in the
canonical system).
Step 4: If all c̄j ≤ 0 (maximization), the current basic feasible solution is optimal;
stop. If there is a single c̄j > 0, perform a one-variable replacement and go to Step 6.
Otherwise go to Step 5.
Step 5:
Substep 1: Select the non-basic variables with the most and second-most positive c̄j
to enter the basis.
Substep 2: Choose the P outgoing variables from the basis by the minimum ratio test.
If the selected columns give the same minimum ratio in more than one row, choose
distinct rows.
Substep 3: Perform P basic variable replacement operations to get simplex
table.
Substep 4: Go to Step 4.
Step 6: Select the non basic variable to enter the basis.
Substep 1: Choose the out going variable from the basis by minimum ratio test.
Substep 2: Perform the pivot operation to get the table and basic feasible solution.
Substep 3: Go to Step 4.
Step 7: If any jc corresponding to non basic variable is zero, take this column as pivot
column (for alternative solution) and go to Step 6.
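The block pivot at the heart of Step 5 (Substep 3) can be sketched in Python for the P = 2 case. The data below are Example 6.1.1 of Chapter 6; this is an illustrative sketch with assumed names, not the thesis's Mathematica program:

```python
from fractions import Fraction as F

def block_pivot(T, b, rows, cols):
    """Replace the basic variables in `rows` by the nonbasic columns `cols`
    in one step: invert the 2x2 pivot block K, rewrite the two pivot rows,
    then eliminate the two entering columns from every other row."""
    (r1, r2), (u1, u2) = rows, cols
    K = [[T[r1][u1], T[r1][u2]],
         [T[r2][u1], T[r2][u2]]]
    det = K[0][0]*K[1][1] - K[0][1]*K[1][0]
    Kinv = [[ K[1][1]/det, -K[0][1]/det],
            [-K[1][0]/det,  K[0][0]/det]]
    n = len(T[0])
    # New pivot rows = K^{-1} times the old pivot rows (same for the rhs).
    new1 = [Kinv[0][0]*T[r1][j] + Kinv[0][1]*T[r2][j] for j in range(n)]
    new2 = [Kinv[1][0]*T[r1][j] + Kinv[1][1]*T[r2][j] for j in range(n)]
    nb1 = Kinv[0][0]*b[r1] + Kinv[0][1]*b[r2]
    nb2 = Kinv[1][0]*b[r1] + Kinv[1][1]*b[r2]
    T[r1], T[r2], b[r1], b[r2] = new1, new2, nb1, nb2
    for i in range(len(T)):
        if i in (r1, r2):
            continue
        a1, a2 = T[i][u1], T[i][u2]
        T[i] = [T[i][j] - a1*new1[j] - a2*new2[j] for j in range(n)]
        b[i] = b[i] - a1*nb1 - a2*nb2

# Example 6.1.1: max Z = -2x1 + 5x2 + 3x3, slack columns x4..x7 appended.
T = [[F(2), F(4), F(-1), F(1), F(0), F(0), F(0)],
     [F(2), F(2), F(-3), F(0), F(1), F(0), F(0)],
     [F(1), F(-3), F(0), F(0), F(0), F(1), F(0)],
     [F(4), F(1), F(3), F(0), F(0), F(0), F(1)]]
b = [F(8), F(7), F(2), F(4)]

# Chapter 6's choice: x2 (col 1) replaces x4 (row 0), x3 (col 2) replaces x7 (row 3).
block_pivot(T, b, rows=(0, 3), cols=(1, 2))

x2, x3 = b[0], b[3]                     # new basic values
Z = F(5)*x2 + F(3)*x3
print(x2, x3, Z)                        # 28/13 8/13 164/13
```

One block pivot reaches the optimum here, whereas single pivots would take two iterations.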
5.3 Mathematica Codes:
Now we shall present our combined program in the programming language
Mathematica (Eugere, Wolfram). The program is written in Mathematica 5.2 for Students
version. In this program, we have used eight module functions: maketble[t_],
rowoperation[t_], morebasic[t_], onebsop[t_], alter[t_], morebsop[t_], main[morebasic_] and
vinpt[m_,n_]. The function vinpt[m_,n_] takes the inputs: it asks the user for the number
of rows, the number of columns, the number of greater-than-type constraints, the input
row by row, the right-hand-side constants, the cost vector, and the type of each
constraint, e.g. 'l' for less-than type, 'g' for greater-than type and 'e' for equality-type
constraints respectively. Our program is case sensitive and minimizes the tedious work of
data input by generating slack or artificial variables itself. The function maketble[t_]
builds the tables, and the function rowoperation[t_] performs all calculations needed for a
single-variable replacement. The module function morebasic[t_] is used for replacing more
than one basic variable in a single iteration. If the case arises that a simplex
table ends with only one positive c̄j, then, to handle the problem with single-variable
replacement, we have introduced the function onebsop[t_], which controls all
necessary operations for a single-variable replacement. The function alter[t_] identifies
alternative solutions (if any) in either single-variable or more-than-one-basic-variable
replacement. The module function morebsop[t_] does the preparatory work for the
function morebasic[t_]. Finally, the function main[morebasic_] calls all the functions
discussed above and controls the program.
5.3.1 The combined program in Mathematica (Eugere, Wolfram):
vinpt[m_,n_]:=Module[{},
For[i=1;str={},i<=m,i++,
str=Append[str,InputString["Input type of constraints"]] ];
cstr={};
t=Table[ Input["Enter row elements"],{i,1,m},{j,1,n}];
tb=Transpose[t];rhs=Table[ Input["Right hand Constant"], {i,1,m}];
ceff=Table[ Input["Cost Vector"],{i,1,n}];tbcef=ceff;
For[ i=1;cstr={};bindx={},i<=m,i++,
If[ StringMatchQ[ str[[i]],"l"]==True,
cstr=Append[cstr,Subscript[S,i ]];For[k=1;s={},k<=m,k++,
If[i==k, s=Append[s,1],s=Append[s,0]] ];
tb=Append[tb,s];bindx=Append[bindx,Length[tb]] ;
ceff=Append[ceff,0];tbcef=Append[tbcef,0],
If[ StringMatchQ[ str[[i]],"g"]==True,
cstr=Append[cstr,Subscript[S,i ]];
cstr=Append[cstr,Subscript[A,i ]];For[k=1;s={};a={},k<=m,k++,
If[ i==k,s=Append[s,-1];
a=Append[a,1],s=Append[s,0];a=Append[a,0] ] ];
tb=Append[tb,s];tb=Append[tb,a];
bindx=Append[bindx,Length[tb] ];
ceff=Append[ceff,0];
ceff=Append[ceff,-10^10];tbcef=Append[tbcef,0];
tbcef=Append[tbcef,-M],
cstr=Append[cstr,Subscript[A,i ]];
For[k=1;a={},k<=m,k++,If[ i==k,a=Append[a,1],
a=Append[a,0] ]];tb=Append[tb,a];
bindx=Append[bindx,Length[tb]];
ceff=Append[ceff,-10^10];tbcef=Append[tbcef,-M] ];
] ];
For[j=n,j>=1,j--,cstr=Prepend[cstr,Subscript[X,j]] ];
tble=Transpose[tb];
Off[General::spell]
]
maketble[t_]:=Module[{},
For[j=1;coount={},j<=m+n+pp,j++,
coount=Append[coount,j]];
fb={"Cj","Basis","CjZj"};fcj={"","CB","Cj"};
fr={"RHS","--","Z"};
For[i=1;cb={};tcbf={};cbv={};B={},i<=m,i++,
For[j=1,j<=m+n+pp,j++,
If[bindx[[i]]==coount[[j]],cb= Append[cb,cstr[[j]] ];
cbv=Append[cbv, ceff[[j]] ];
tcbf=Append[tcbf, tbcef[[j]] ]; B=Append[B,tb[[j]] ], ];
];fb= Insert[ fb,cb[[i]], i+2];
fcj=Insert[fcj,tcbf[[i]],i+2];fr=Insert[fr,rhs[[i]], i+2]; ];
fr=ReplacePart[fr,tcbf.rhs,-1]; B=Transpose[B];
For[ i=1;fbcjr={};cjbar={},i<=m+n+pp,i++,
cjbar=Append[ cjbar, ceff[[i]]-cbv.Inverse[B].tb[[i]] ];
fbcjr=Append[ fbcjr, ( tbcef[[i]]-tcbf.Inverse[B].tb[[i]])
//Simplify ]; ];
tbfom=Prepend[ tble,cstr];
tbfom=Prepend[tbfom, tbcef];tbfom= Append[ tbfom, fbcjr];
tbfom2=Prepend[Transpose[tbfom],fb];
tbfom2=Prepend[tbfom2,fcj];tbfom2=Append[tbfom2,fr];hed++;
Print[" Table ",hed," "];
Print[];
Print[TableForm [ Transpose[tbfom2],
TableAlignments->Center,TableSpacing->{1,3}]];
Print["----------------------------------------------------------------"];
Print[];
For[i=1;nofe=0,i<=m,i++, If[tcbf[[i]]==-M,nofe=1]];
If[ Max[cjbar]>0,Print["Feasible Solution = ", tcbf.rhs],
Print["Solution Point"];
For[i=1;k=0,i<=m+n+pp,i++,For[j=1,j<=m,j++,
If[i==bindx[[j]],
Print[ cb[[j]], " = ", rhs[[j]]," (Basic Variable)" ];k=1 ] ];
If[k==1,,
Print[cstr[[i]], " = 0 (Non Basic Variable )" ] ];k=0];
If[nofe==0, Print["All Cj <= 0 & Optimal Value ",tcbf.rhs],
Print["Though all Cj <= 0, but no feasible solution"]] ];
Off[General::spell]
]
morebasic[t_]:=Module[{},
If[ Max[cjbar]>0 ,p=u[1];For[j=1,j<=2,j++,
For[i=1;teta={},i<=m,i++,If[y[[i,p]]>0, teta=Append[teta, rhs[[i]]/y[[i,p]]], teta=Append[teta, 10^6]]; ];
If[Min[teta]!=10^6,rr=Position[teta, Min[teta]][[1,1]];rc[j]=Position[teta, Min[teta]],
Print["Ratio with "," " , p ," th column is not possible"
];s=1;pd=u[j];Goto["end"]];
r[j]=rr;p=u[2];];
If[r[1]==r[2]&&Length[rc[1]]>Length[rc[2]],r[1]=rc[1][[2,1]]];
If[r[1]==r[2]&&Length[rc[1]]<Length[rc[2]],r[2]=rc[2][[2,1]]];
k=Transpose[{{y[[ r[1],u[1] ]],y[[ r[1],u[2] ]]},{y[[ r[2],u[1] ]],y[[ r[2],u[2] ]]}}];
rh={rhs[[ r[1] ]],rhs[[ r[2] ]]};
For[j=1,j<=2,j++,
kop=Transpose[ReplacePart[k,rh,j]];
rhs[[ r[j] ]]=(1/Det[k])*Det[kop]];
For[i=1,i<=m,i++,If[i!=r[1]&&i!=r[2],rhs[[i]]=rhs[[i]]-
(y[[i,u[1] ]]*rhs[[ r[1] ]]+y[[i,u[2] ]]*rhs[[ r[2] ]]) ] ];
For[ i=1,i<=m+n+pp,i++,yrep={y[[ r[1],i ]],y[[ r[2],i ]]};
For[j=1,j<=2,j++,kop=Transpose[ReplacePart[k,yrep,j]];
yy[j]=(1/Det[k])*Det[kop]];
tble[[ r[1],i ]]=yy[1];
tble[[ r[2],i ]]=yy[2];
For[p=1,p<=m,p++,If[p!=r[1]&&p!=r[2], tble[[ p,i]]=y[[p,i]]-
(y[[ p,u[1] ]]*yy[1]+y[[ p,u[2] ]]*yy[2]) ]]
];Label["end"]; ];]
rowoperation[t_]:=Module[{},
If[ Max[cjbar]>0 ,
For[i=1;teta={},i<=m,i++,If[ tble[[i,pcol]]>0,
teta=Append[teta, rhs[[i]]/tble[[i,pcol]]],
teta=Append[teta, 10^6] ]; ];
If[Min[teta]==10^6,Print["Ratio is not possible; Unbounded Solution"];
st=1;Goto["end"]];
pro=Position[teta, Min[teta]][[1,1]];
rhs[[pro]]=rhs[[pro]]/tble[[pro,pcol]];
tble[[pro]]=(1/tble[[pro,pcol]])*tble[[pro]]; For[i=1,i<=m,i++,
If[i==pro,,rhs[[i]]=rhs[[i]]-tble[[i,pcol]]*rhs[[pro]];
tble[[i]]=tble[[i]]-tble[[i,pcol]]*tble[[pro]]; ] ], ];
Label["end"];
Off[General::spell]
]
onebsop[t_]:=Module[{},Print["One basic var replacement"];Print[];While[
Max[cjbar]> 0,
pcol=Position[cjbar, Max[cjbar]][[1,1]];rowoperation[tble];
If[st!=1,bindx=ReplacePart[bindx, coount[[pcol]],pro];
maketble[tble], Return[] ] ];
]
alter[t_]:=Module[{},For[i=1;nofe=0,i<=m,i++, If[ceff[[bindx[[i]]]]==-10^10,nofe=1]];
nbindx=Complement[coount,bindx]; For[i=1;alt=0,i<=Length[nbindx],i++,
If[ cjbar[[nbindx[[i]]]]==0,alt=1;
Print["Alternative Solution"]; pcol=nbindx[[i]];
cjbar=ReplacePart[cjbar, 10^6, pcol];rowoperation[tble];
If[st!=1,bindx=ReplacePart[bindx, coount[[pcol]],pro];
maketble[tble], Goto["lst"] ], ];Label["lst"]; ];
If[alt==0,Print["No Alternative Solution"]];
]
morebsop[t_]:=Module[{},tr=0;s=0;
Print["More than one basic var replacement"];Print[];
While[Max[cjbar]>0,
u1=Max[cjbar];u[1]=Position[cjbar, Max[cjbar]][[1,1]];
cjbar=ReplacePart[cjbar,0,u[1]];u2=Max[cjbar];
u[2]=Position[cjbar, Max[cjbar]][[1,1]];
If[u1>0&&u2>0,
cjbar=ReplacePart[cjbar,ceff[[ u[1] ]],u[1]];y=tble;morebasic[y];
If[st!=1&&s==0,For[i=1,i<=2,i++,
bindx=ReplacePart[bindx,coount[[ u[i] ]],r[i] ]]; maketble[tble],
Print["More than one basic var replacement not possible"];Return[]],
Print["After that more than one basic var replacement not possible"];
cjbar=ReplacePart[cjbar,0,u[1]];tr=1 ] ];
If[tr==1,cjbar=ReplacePart[cjbar,u1,u[1]];
onebsop[tble];alter[tble],alter[tble]];
]
main[morebasic_]:=Module[{},Clear["Context`*"];
m=Input["No of Rows"];n=Input["No of Columns"];
pp=Input["No of >= constraints"];hed=0;st=0;
vinpt[m,n];maketble[tble];
d=Input["Choose method \n'1' for one basic var \n '2' for more basic var"];
If[d==2,morebsop[tble],onebsop[tble];alter[tble]];
If[s==1,onebsop[tble];alter[tble]];
]
Clear[u,r,y]
main[morebasic];
5.3.2 Numerical Examples and Comparison:
In this section, we compare the results obtained by our method with those of
Dantzig's method, and we also show the differences between these methods with illustrative
numerical examples. Our method takes fewer iterations than Dantzig's one-basic-variable
replacement method. The main shortcoming of Paranjape's method is that if a simplex
table ends with one positive c̄j, the method fails. Our method overcomes this
problem easily.
Example 1:
This example is taken from Paranjape.
[Paranjape's example: Maximize Z over the nine variables x1, ..., x9, with xi ≥ 0 for all i, subject to seven less-than-type constraints; the coefficients of the objective and constraints are not legible in this copy.]
The above LP takes four iterations (excluding the initial table) in Dantzig's method,
whereas it takes only two iterations (excluding the initial table) in our method. Using our
program, we input 7, 9, 0 respectively to indicate that the LP has 7 constraints with 9
variables and no greater-than-type constraints. (If there were any greater-than-type
constraints, we would input their number.) We input 'l' seven times to
indicate that all constraints are of less-than type, together with the coefficient matrix 'A',
the right-hand-side constants 'b' and the cost coefficients 'C'. The program generates the
required number of slack variables.
We obtained the optimal solution of the above problem after four iterations
(excluding the initial table) by Dantzig's single-variable replacement method. We solved the
same problem by our method and obtained the optimal solution after two iterations
(excluding the initial table). We see that our method reduces the number of iterations by 50%.
The optimal tables of the one-basic-variable replacement method and of our method are as follows.
Optimal table of one basic variable replacement method:
Remark 1: The Table number refers to the number of iterations.
Optimal table of our method:
Example 2:
We shall now show the failure of Paranjape's method. For the following LP,
Paranjape's method fails after one iteration because there exists only one positive c̄j at
that point, as shown in Table 2, whereas our method solves the same problem effectively;
the result is shown in Table 4.
Maximize Z = 2x1 + 3x2 + 4x3

s/t x1 + x2 + x3 ≥ 5
2x1 + x2 = 7
5x1 − 2x2 + 3x3 ≤ 9
x1, x2, x3 ≥ 0
The second table of Paranjape’s method:
Optimal table of our method:
5.4 Solution of LP on a production problem of a garment industry (Standard Group) using combined program:
Now, applying the above program to solve the production problem of the
garment industry (Standard Group) formed in section 1.8 of Chapter-1, we may
rearrange the computer solution in the following way:
Z = 5837.68
x1 = 0.0    x2 = 0.0
x3 = 0.0    x4 = 0.0
x5 = 0.0    x6 = 0.0    x7 = 0.0
x8 = 0.0    x9 = 0.0    x10 = 0.0
x11 = 0.0   x12 = 446.377   x13 = 563.768
Illustrated Answer:

Fabric-1

Men's long sleeve shirt: this RMG item should not be produced.
Men's short sleeve shirt: this RMG item should not be produced.
Men's long pant: this RMG item should not be produced.
Men's shorts: this RMG item should not be produced.
Ladies long pant: this RMG item should not be produced.
Ladies shorts: this RMG item should not be produced.
Boys long pant: this RMG item should not be produced.
Boys shorts: this RMG item should not be produced.
Men's boxer: this RMG item should not be produced.
Men's fleece jacket: this RMG item should not be produced.
Men's jacket: this RMG item should not be produced.
Ladies jacket: 446.377 pieces of this RMG item should be produced.
Boys jacket: 563.768 pieces of this RMG item should be produced.
Maximum Profit

The maximum total profit is $5837.68 per day.

Note: If we solve the problem by Dantzig's one-variable replacement method, it takes
10,636 iterations, which is very time consuming to solve by hand calculation. But by
applying our combined program we can easily solve these types of large-scale real-life
problems.
5.5 Solution of LP on Textile Mill Scheduling problem using combined program:
Now, applying the above program to solve the Textile Mill Scheduling
problem formed in section 1.9 of Chapter-1, we may rearrange the computer solution in
the following way:
Z= 62286.45
x11 =4181.82 x12 =12318.18
x21 =22000.00 x22 =0.0
x31 =0.0 x32 =26100.0 x33 = 35900.00
x41 =0.0 x42 =7500.0 x43 =0.0
x51 = 0.0 x52 =26100.0 x53 = 0.0
Illustrated Answer:
Fabric-1
Should be woven 4181.82 yards on dobbie loom
Should be purchased 12318.18 yards from another mill
Fabric-2
Should be woven 22000.00 yards on dobbie loom
Should not be purchased from another mill.
Fabric-3
Should not be woven on dobbie loom
Should be woven 26100.00 yards on regular loom
Should be purchased 35900.00 yards from another mill
Fabric-4
Should not be woven on dobbie loom
Should be woven 7500.00 yards on regular loom
Should not be purchased from another mill.
Fabric-5
Should not be woven on dobbie loom
Should be woven 26100.00 yards on regular loom
Should not be purchased from another mill
Maximum Profit

The maximum monthly total profit is $62,286.45.

Note: If we solve the problem by Dantzig's one-variable replacement method, it takes
10,835 iterations, which is not possible to solve by hand calculation. But by applying our
combined program we can easily solve these types of large-scale real-life problems.
5.6 Conclusion:

In this chapter, we compared the results obtained by our method with those of
Dantzig's method and showed the differences between these methods with illustrative
numerical examples. Our method takes fewer iterations than Dantzig's one-basic-variable
replacement method. The main shortcoming of Paranjape's method is that if a simplex
table ends with one positive c̄j, the method fails; our method overcomes this problem
easily. We obtained the optimal solution of Example 1 after four iterations (excluding
the initial table) by Dantzig's single-variable replacement method, while our method
obtained the optimal solution of the same problem after two iterations (excluding the
initial table), reducing the number of iterations by 50%. Moreover, in Example 2 we
showed the failure of Paranjape's method: it fails after one iteration because there exists
only one positive c̄j at that point, as shown in Table 2, whereas our method solves the
same problem effectively. We can therefore say that our method is more effective than
the other replacement methods discussed.
Chapter 6
COUNTER EXAMPLES OF MORE THAN ONE BASIC VARIABLES REPLACEMENT AT EACH ITERATION OF SIMPLEX METHOD
6.1 Introduction:
We discussed the two- and three-basic-variable replacement methods of Paranjape and of
Agrawal and Verma for solving linear programming problems (LP) in Chapter 4.
In this chapter we present some numerical examples to compare the different methods for
solving linear programming problems by replacing more than one basic variable at
each simplex iteration. For clarity we first solve the same example by the graphical method
and by the usual simplex method of Dantzig. We also examine the corresponding claim for
the more-than-one-basic-variable replacement methods for solving LP.
6.1.1 Numerical Example 1:
Maximize Z= -2x1+5x2 +3x3
Subject to 2x1 + 4x2 − x3 ≤ 8
2x1 + 2x2 − 3x3 ≤ 7
x1 − 3x2 ≤ 2
4x1 + x2 + 3x3 ≤ 4
x1, x2, x3 ≥ 0
Solution of the above problem in usual simplex method:
Adding slack variables x4 , x5, x6, x7 to the constraints we get the initial basic
feasible solution as the following table:
Table-1 (Initial Table)

cB    xB↓    Cj →    -2    5    3    0    0    0    0    Constant b
 0    x4              2    4   -1    1    0    0    0        8
 0    x5              2    2   -3    0    1    0    0        7
 0    x6              1   -3    0    0    0    1    0        2
 0    x7              4    1    3    0    0    0    1        4
c̄j = cj − zj         -2    5    3    0    0    0    0      Z = 0

Table-2

cB    xB↓    Cj →    -2     5     3     0    0    0    0    Constant b
 5    x2             1/2    1   -1/4   1/4   0    0    0        2
 0    x5              1     0   -5/2  -1/2   1    0    0        3
 0    x6             5/2    0   -3/4   3/4   0    1    0        8
 0    x7             7/2    0   13/4  -1/4   0    0    1        2
c̄j = cj − zj        -9/2    0   17/4  -5/4   0    0    0      Z = 10

Table-3 (Optimal Table)

cB    xB↓    Cj →     -2      5    3     0     0    0     0      Constant b
 5    x2             10/13    1    0    3/13   0    0    1/13      28/13
 0    x5             48/13    0    0   -9/13   1    0   10/13      59/13
 0    x6             43/13    0    0    9/13   0    1    3/13     110/13
 3    x3             14/13    0    1   -1/13   0    0    4/13       8/13
c̄j = cj − zj       -118/13    0    0  -12/13   0    0  -17/13   Z = 164/13

(The pivot elements are 4 in Table-1 and 13/4 in Table-2.)
Since in Table-3 all cj − zj ≤ 0, this table gives the optimal solution to the given
problem. Therefore the optimal solution to the given problem is
x1 = 0, x2 = 28/13, x3 = 8/13, with Zmax = 164/13.
Solution of example 6.1.1 in Paranjape's Two-Basic-Variables Replacement
Method:
We now want to solve the above problem by Paranjape’s method. The initial table of the
problem is as follows:
Table-1 (Initial Table)

cB    xB↓    Cj →    -2    5    3    0    0    0    0    Constant b
 0    x4              2    4   -1    1    0    0    0        8
 0    x5              2    2   -3    0    1    0    0        7
 0    x6              1   -3    0    0    0    1    0        2
 0    x7              4    1    3    0    0    0    1        4
c̄j = cj − zj         -2    5    3    0    0    0    0      Z = 0
Here we see that x2 replaces x4 and x3 replaces x7. The solution of the above problem is as follows:
Optimal Table

cB    xB↓    Cj →     -2      5    3     0     0    0     0      Constant b
 5    x2             10/13    1    0    3/13   0    0    1/13      28/13
 0    x5             48/13    0    0   -9/13   1    0   10/13      59/13
 0    x6             43/13    0    0    9/13   0    1    3/13     110/13
 3    x3             14/13    0    1   -1/13   0    0    4/13       8/13
c̄j = cj − zj       -118/13    0    0  -12/13   0    0  -17/13   Z = 164/13
Since in the above table all cj − zj ≤ 0, this table gives the optimal solution. Therefore
the optimal solution to the given problem is x1 = 0, x2 = 28/13, x3 = 8/13 with Zmax = 164/13.

Calculations

$$K = \begin{vmatrix} y_{r_1u_1} & y_{r_1u_2}\\ y_{r_2u_1} & y_{r_2u_2} \end{vmatrix} = \begin{vmatrix} 4 & -1\\ 1 & 3 \end{vmatrix} = 13$$

$$\hat{x}_{B_{r_1}} = \hat{x}_2 = \frac{1}{K}\begin{vmatrix} x_{B_{r_1}} & y_{r_1u_2}\\ x_{B_{r_2}} & y_{r_2u_2} \end{vmatrix} = \frac{1}{13}\begin{vmatrix} 8 & -1\\ 4 & 3 \end{vmatrix} = \frac{28}{13}$$

$$\hat{x}_{B_{r_2}} = \hat{x}_3 = \frac{1}{K}\begin{vmatrix} y_{r_1u_1} & x_{B_{r_1}}\\ y_{r_2u_1} & x_{B_{r_2}} \end{vmatrix} = \frac{1}{13}\begin{vmatrix} 4 & 8\\ 1 & 4 \end{vmatrix} = \frac{8}{13}$$

$$\hat{x}_{B_2} = \hat{x}_5 = x_{B_2} - y_{2u_1}\hat{x}_{B_{r_1}} - y_{2u_2}\hat{x}_{B_{r_2}} = 7 - 2\cdot\frac{28}{13} - (-3)\cdot\frac{8}{13} = \frac{59}{13}$$

Similarly

$$\hat{x}_{B_3} = \hat{x}_6 = 2 - (-3)\cdot\frac{28}{13} - 0\cdot\frac{8}{13} = \frac{110}{13}$$

Calculation for the first column of the optimal table:

$$\hat{y}_{11} = \frac{1}{13}\begin{vmatrix} 2 & -1\\ 4 & 3 \end{vmatrix} = \frac{10}{13}$$

Note: since $y_{11} = 2$ corresponds to the pivot element 4, which is in the first column of K,
we replace the first column of K by the column $(2, 4)^T$.

Similarly

$$\hat{y}_{41} = \frac{1}{13}\begin{vmatrix} 4 & 2\\ 1 & 4 \end{vmatrix} = \frac{14}{13}$$

$$\hat{y}_{21} = 2 - 2\cdot\frac{10}{13} - (-3)\cdot\frac{14}{13} = \frac{48}{13}$$

$$\hat{y}_{31} = 1 - (-3)\cdot\frac{10}{13} - 0\cdot\frac{14}{13} = \frac{43}{13}$$

Similarly we calculate the other columns of the optimal table.
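The 2×2 Cramer computations above are easy to check mechanically; a short Python sketch (illustrative only, using the same K and the right-hand sides of the two pivot rows):

```python
from fractions import Fraction as F

def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]."""
    return a*d - b*c

# K from the x4 and x7 rows of the initial table of Example 6.1.1.
K = det2(F(4), F(-1), F(1), F(3))            # = 13
x2_hat = det2(F(8), F(-1), F(4), F(3)) / K   # replace 1st column of K by (8, 4)
x3_hat = det2(F(4), F(8), F(1), F(4)) / K    # replace 2nd column of K by (8, 4)
print(K, x2_hat, x3_hat)                     # 13 28/13 8/13
```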
Solution of example 6.1.1 by the more-than-one-basic-variable replacement method,
using our combined program in the programming language Mathematica:
Output:

Table 1

Cj            -2    5    3    0    0    0    0    RHS
CB   Basis    X1   X2   X3   S1   S2   S3   S4
 0   S1        2    4   -1    1    0    0    0      8
 0   S2        2    2   -3    0    1    0    0      7
 0   S3        1   -3    0    0    0    1    0      2
 0   S4        4    1    3    0    0    0    1      4
Cj = Cj-Zj    -2    5    3    0    0    0    0      0
-----------------------------------------------------------
Feasible Solution = 0
More than one basic var replacement
Table 2
Cj              -2      5      3       0      0      0       0       RHS
CB   Basis      X1     X2     X3      S1     S2     S3      S4
 5   X2       10/13    1      0     3/13     0      0     1/13     28/13
 0   S2       48/13    0      0    -9/13     1      0    10/13     59/13
 0   S3       43/13    0      0     9/13     0      1     3/13    110/13
 3   X3       14/13    0      1    -1/13     0      0     4/13      8/13
Cj = Cj-Zj  -118/13    0      0   -12/13     0      0   -17/13    164/13
-----------------------------------------------------------
Solution Point
X1 = 0 (Non Basic Variable)
X2 = 28/13 (Basic Variable)
X3 = 8/13 (Basic Variable)
S1 = 0 (Non Basic Variable)
S2 = 59/13 (Basic Variable)
S3 = 110/13 (Basic Variable)
S4 = 0 (Non Basic Variable)
All Cj <= 0 & Optimal Value = 164/13
No Alternative Solution
6.1.2 Numerical Example 2:
Maximize Z= 3x1+2x2
Subject to 2x1+x2 ≤ 4
-3x1+5x2 ≤ 15
3x1-x2 ≤ 3
x1, x2 ≥ 0
Solution of the above problem in Graphical Method:
The solution space satisfying the given constraints and meeting the non-negativity
restrictions x1, x2 ≥ 0 is shown shaded in the figure below. Any point in this shaded
region is a feasible solution to the given problem.

[Figure-1: Feasible region for example 6.1.2 — the region OABCD with vertices
O(0,0), A(1,0), B(7/5, 6/5), C(5/13, 42/13) and D(0,3).]
The vertices of the convex feasible region OABCD are O(0,0), A(1,0),
B(7/5, 6/5), C (5/13, 42/13) and D(0,3).
The values of the objective function at these points are:

Z(O) = 0
Z(A) = 3(1) + 2(0) = 3
Z(B) = 3(7/5) + 2(6/5) = 33/5
Z(C) = 3(5/13) + 2(42/13) = 99/13
Z(D) = 3(0) + 2(3) = 6

Since the maximum value of the objective function is 99/13 and it occurs at
C(5/13, 42/13), the optimal solution to the given problem is x1 = 5/13, x2 = 42/13 with
Zmax = 99/13.
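The vertex evaluation above can be recomputed exactly; a small Python sketch (illustrative only, not part of the thesis's Mathematica program):

```python
from fractions import Fraction as F

# Vertices of the feasible region OABCD of example 6.1.2 and the
# objective Z = 3*x1 + 2*x2 evaluated exactly at each of them.
vertices = {"O": (F(0), F(0)), "A": (F(1), F(0)), "B": (F(7, 5), F(6, 5)),
            "C": (F(5, 13), F(42, 13)), "D": (F(0), F(3))}
Z = {name: 3*x1 + 2*x2 for name, (x1, x2) in vertices.items()}
best = max(Z, key=Z.get)
print(best, Z[best])   # C 99/13
```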
Solution of example 6.1.2 in the usual simplex method:

Inserting the slack variables x3, x4, x5 ≥ 0 into the 1st, 2nd and 3rd constraints of
example 6.1.2, we first transform the example to standard form as follows:

Maximize Z = 3x1 + 2x2
Subject to 2x1 + x2 + x3 = 4
−3x1 + 5x2 + x4 = 15
3x1 − x2 + x5 = 3
x1, x2, x3, x4, x5 ≥ 0
Then we get the initial table as below:
Table-1 (Initial Table)

cB    xB↓    Cj →    3    2    0    0    0    Constant b
 0    x3             2    1    1    0    0        4
 0    x4            -3    5    0    1    0       15
 0    x5             3   -1    0    0    1        3
c̄j = cj − zj         3    2    0    0    0      Z = 0

Table-2

cB    xB↓    Cj →    3     2     0    0     0    Constant b
 0    x3             0    5/3    1    0   -2/3       2
 0    x4             0     4     0    1     1       18
 3    x1             1   -1/3    0    0    1/3       1
c̄j = cj − zj         0     3     0    0    -1      Z = 3

Table-3

cB    xB↓    Cj →    3    2      0     0     0     Constant b
 2    x2             0    1     3/5    0   -2/5       6/5
 0    x4             0    0   -12/5    1   13/5      66/5
 3    x1             1    0     1/5    0    1/5       7/5
c̄j = cj − zj         0    0    -9/5    0    1/5     Z = 33/5

Table-4 (Optimal Table)

cB    xB↓    Cj →    3    2       0      0     0     Constant b
 2    x2             0    1     3/13   2/13    0      42/13
 0    x5             0    0   -12/13   5/13    1      66/13
 3    x1             1    0     5/13  -1/13    0       5/13
c̄j = cj − zj         0    0   -21/13  -1/13    0     Z = 99/13

(The pivot elements are 3 in Table-1, 5/3 in Table-2 and 13/5 in Table-3.)

Since in Table-4 all the relative profit factors are non-positive, i.e. all c̄j ≤ 0, this
table gives the optimal solution to the given problem. Hence the optimal solution is
x1 = 5/13, x2 = 42/13 with Zmax = 99/13.
Solution of example 6.1.2 in Paranjape's Two-Basic-Variables Replacement
Method:

We now attempt to solve the above problem following the two-basic-variables
replacement method of Paranjape.
The initial table of the problem is as follows:
Table-1 (Initial Table)

cB    xB↓    Cj →    3    2    0    0    0    Constant b
 0    x3             2    1    1    0    0        4
 0    x4            -3    5    0    1    0       15
 0    x5             3   -1    0    0    1        3
c̄j = cj − zj         3    2    0    0    0      Z = 0
                    u1   u2
There are two positive c̄j = cj − zj in the above table, viz. c1 − z1 and c2 − z2, the first
being numerically the larger of the two. We choose the 1st column of A as the u1th
column and the 2nd column as the u2th column. Then by the minimum ratio rule (as in
Criterion 2) we see that x2 replaces x4 and x1 replaces x5. Then

$$K = \begin{vmatrix} y_{r_1u_1} & y_{r_1u_2}\\ y_{r_2u_1} & y_{r_2u_2} \end{vmatrix} = \begin{vmatrix} 3 & -1\\ -3 & 5 \end{vmatrix}.$$

Here we see that not all the elements in the expression of K are non-negative, but
according to Paranjape's method all the elements in the expression of K are to be non-negative.
Paranjape also described a way to overcome this difficulty: he suggested choosing
the v'th column of A by the alternative criterion, namely, choose as the v'th that column
of A for which cv' − zv' is the greatest positive cj − zj, j = 1, 2, ..., n; j ≠ u1, u2.

But in the above example there is no such v'th column, so one cannot move anywhere
to solve the above problem by replacing two basic variables at an iteration, and Paranjape
gave no instruction to overcome this kind of difficulty.

So one cannot solve the above problem by applying Paranjape's method, although the usual
simplex method and the graphical method show that the above problem has an optimal
solution. Therefore Paranjape's method fails here.
But we can solve example 6.1.2 by using our combined program in the
programming language Mathematica, because our method is a combined method which
incorporates both Dantzig's and Paranjape's approaches.
Solution of example 6.1.2 by using our combined program in programming language
Mathematica:
Output:

Table 1

Cj             3    2    0    0    0    RHS
CB   Basis    X1   X2   S1   S2   S3
 0   S1        2    1    1    0    0      4
 0   S2       -3    5    0    1    0     15
 0   S3        3   -1    0    0    1      3
Cj = Cj-Zj     3    2    0    0    0      0

-----------------------------------------------------------
Feasible Solution = 0

Table 2

Cj             3     2     0    0     0     RHS
CB   Basis    X1    X2    S1   S2    S3
 0   S1        0   5/3     1    0  -2/3      2
 0   S2        0     4     0    1     1     18
 3   X1        1  -1/3     0    0   1/3      1
Cj = Cj-Zj     0     3     0    0    -1      3

-----------------------------------------------------------
Feasible Solution = 3

Table 3

Cj             3     2      0    0      0     RHS
CB   Basis    X1    X2     S1   S2     S3
 2   X2        0     1    3/5    0   -2/5     6/5
 0   S2        0     0  -12/5    1   13/5    66/5
 3   X1        1     0    1/5    0    1/5     7/5
Cj = Cj-Zj     0     0   -9/5    0    1/5    33/5

-----------------------------------------------------------
Feasible Solution = 33/5
Table 4

Cj             3     2       0      0     0     RHS
CB   Basis    X1    X2      S1     S2    S3
 2   X2        0     1    3/13   2/13    0    42/13
 0   S3        0     0  -12/13   5/13    1    66/13
 3   X1        1     0    5/13  -1/13    0     5/13
Cj = Cj-Zj     0     0  -21/13  -1/13    0    99/13

-----------------------------------------------------------
Solution Point
X1 = 5/13 (Basic Variable)
X2 = 42/13 (Basic Variable)
S1 = 0 (Non Basic Variable)
S2 = 0 (Non Basic Variable)
S3 = 66/13 (Basic Variable)
All Cj <= 0 & Optimal Value = 99/13
No Alternative Solution
6.2 Conclusion:

In this chapter we presented some numerical examples to compare the different
methods for solving linear programming problems by replacing more than one basic
variable at each simplex iteration. For clarity we first solved the same example by the
graphical method and by the usual simplex method of Dantzig. In example 6.1.2 there is
no v'th column of the kind Paranjape's alternative criterion requires, so one cannot move
anywhere to solve the problem by replacing two basic variables at an iteration, and
Paranjape gave no instruction to overcome this kind of difficulty. Hence the problem
cannot be solved by Paranjape's method, although the usual simplex method and the
graphical method show that it has an optimal solution; Paranjape's method therefore fails
here. But we can solve example 6.1.2 by using our combined program in the programming
language Mathematica, because our method is a combined method incorporating both
Dantzig's and Paranjape's approaches. We can therefore say that our method can handle
these difficulties in solving LP problems.
Chapter 7
CONCLUSION
In this research, we have generalized the simplex method from the replacement of one
basic variable by a non-basic variable to the replacement of more than one (P, where P ≥ 1)
basic variables by non-basic variables, as discussed in Chapter 5. We also developed a
computer technique for solving LP problems by replacing more than one basic variable with
non-basic variables at each simplex iteration. It is also applicable in the case where
Paranjape's method stops at a table having only one basic variable to be replaced; our
method incorporates the usual simplex method to overcome that problem. We compared the
results obtained by our method with those of Dantzig's method and showed the differences
between these methods with illustrative numerical examples: our method takes fewer
iterations than Dantzig's one-basic-variable replacement method, which reduces iteration
time, labor, and computational cost.
We can solve a linear programming problem by the graphical method as well as by
numerical methods. The graphical method can be used when the problem is two-dimensional,
but it requires plotting the graph accurately, which is difficult and time consuming. To
overcome these difficulties, in this thesis we developed a computational technique using
Mathematica code that shows the feasible region of two-dimensional linear programming
problems and also gives the optimal solution. In the usual simplex method, we need to use
artificial variables and apply the two-phase simplex method or the Big-M simplex method
when the set of constraints is not in canonical form; this needs many iterations, which is
also time consuming and clumsy. But by applying our computational technique using
Mathematica code we can solve such problems easily, as discussed in Chapter 2 with
illustrative examples.
The practical purpose of optimization is to formulate a mathematical programming
model of a real-life problem, solve it, and use the solution from different organizational
points of view. In this thesis, we first formulated two linear programming models for
sizeable large-scale real-life LP problems, which involve a large amount of data,
constraints, and variables. A small problem can be solved with the help of pencil and
paper, but large-scale real-life problems cannot be solved by hand calculation. To
overcome the complexities of large-scale linear programming (LP) problems, a computer-
oriented solution is required, and our computer technique easily overcomes these
complexities.
Our computer technique can solve LP problems of any dimension, including those
where the set of constraints is not in canonical form. The simplex method starts directly
only when the set of constraints is in canonical form (i.e., Ax ≤ b, x ≥ 0); otherwise
another device is necessary for solving the problem, but our technique is applicable in
both types of problems.
Our computer technique also introduces a decision-making rule that defines the
different variables and types of variables of a system so that an objective defined by the
decision maker is optimized. It also reduces the work of expressing an LP problem in
standard form by introducing slack, surplus, or artificial variables where necessary.
Finally, we may conclude that linear programming together with our computer
program is a powerful method for large-scale real-life optimization problems wherever it
can be applied. To do this, one has to build the required mathematical programming
model of the problem and the required computer program. Hence our computer-oriented
solution procedure saves time and labor, and it can solve problems of any dimension.
REFERENCES
Agarwal, S.C. and Verma, R.K., "3-Variables Replacement in Linear Programming Problems", Acta Ciencia Indica, Vol. 3, No. 1, pp. 181-188 (1977).

Dantzig, G.B., Linear Programming and Extensions, Princeton University Press, Princeton, N.J. (1962).

Eugere, D., Schaum's Outline: Mathematica, McGraw-Hill, New York (2001).

Hadley, G., Linear Programming, Addison-Wesley Pub. Co., London (1972).

Kambo, N.S., Mathematical Programming Techniques, Affiliated East-West Press PVT. LTD., New Delhi (1984).

Kambo, N.S., Introduction to Operations Research, McGraw-Hill, Inc., New York (1988).

Marcus, M., A Survey of Finite Mathematics, Houghton Mifflin Co., Boston (1969), pp. 312-316.

Paranjape, S.R., "Simplex Method: Two Variables Replacement", Management Science, Vol. 12, No. 1, pp. 135-141 (1965).

Swarup, K., CCERO (Belgium), Vol. 8, No. 2, pp. 132-136 (1966).

Winston, W.L., Operations Research: Applications and Algorithms, International Thomson Publishing, California, USA (1993).

Wolfram, S., Mathematica, Addison-Wesley Publishing Company, Menlo Park, California (2000).