Introduction · Price Endogenous · Improve Efficiency · Linear Approximation
Solving Non-linear Programming Problems
Pei Huang1
1Department of Agricultural Economics
Texas A&M University
Based on materials written by Gillig & McCarl and improved upon by many previous lab instructors
Special thanks to Mario Andres Fernandez
Pei Huang | Texas A&M University | AGEC 641 Lab Session, Fall 2013 1 / 19
Outline
1 Introduction
2 Price Endogenous Problem
3 Starting Points and Bounds
4 Linear Approximation
Non-Linear Programming
We often encounter problems that cannot be solved by LP algorithms,
because the objective function or constraints take non-linear forms.
Algebraically, the optimality conditions are characterized by the KKT conditions
(see Chapter 12 of the McCarl and Spreen book).
Empirically, search algorithms such as hill climbing are used to find the
optimal solution.
NLP Set-up
As before:
Define sets
Data assignment
Define variables
Define equations (now non-linear)
Model statement
Except now the last step is: Solve modelname using NLP maximizing variable
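A minimal sketch of these steps in GAMS (the set, parameter, and model names here are illustrative, not from the slides):

```gams
* Define sets
set i "products" /corn, wheat/;

* Data assignment
parameter price(i) /corn 3.0, wheat 4.5/;

* Define variables
positive variable x(i) "production level";
variable z "objective value";

* Define equations (now non-linear: note the squared term)
equation objective;
objective.. z =e= sum(i, price(i)*x(i) - 0.1*sqr(x(i)));

* Model statement
model mymodel /all/;

* Except now the solve uses NLP instead of LP
solve mymodel using nlp maximizing z;
```

The only structural change from an LP file is the non-linear equation body and `using nlp` in the solve statement.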
Price Endogenous Problem
Demand: Pd = ad − bd·Qd
Supply: Ps = as + bs·Qs
The mathematical formulation for this problem is:
max  ad·Qd − (bd/2)·Qd² − as·Qs − (bs/2)·Qs²
s.t. Qd − Qs ≤ 0,  Qd, Qs ≥ 0
The problem maximizes welfare (the sum of consumer and producer surplus).
Example
max  6·Qd − 0.15·Qd² − Qs − 0.1·Qs²
s.t. Qd − Qs ≤ 0,  Qd, Qs ≥ 0
Use the NLP solver in GAMS.
The starting values for the search are set to zeros.
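The GAMS code shown on the slide is an image and is not reproduced in this text; a sketch that matches the stated problem (the variable and model names are my own):

```gams
positive variables Qd "quantity demanded", Qs "quantity supplied";
variable W "welfare";
equations obj "welfare objective", balance "demand cannot exceed supply";

obj..     W =e= 6*Qd - 0.15*sqr(Qd) - Qs - 0.1*sqr(Qs);
balance.. Qd - Qs =l= 0;

model market /obj, balance/;
* positive variables start at their lower bound, so the search begins at zero
solve market using nlp maximizing W;
```

Setting demand price equal to supply price, 6 − 0.3Q = 1 + 0.2Q gives Q = 10 and P = 3, so the solver should report Qd = Qs = 10.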
NLP Solvers
Gradient-based search
Starting from an initial point, the algorithm repeatedly moves in an
improving direction in search of an optimum.
Potential problems
The solver may only find a local solution.
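One common safeguard (my own illustration, not from the slides) is to re-solve from several starting points and compare the resulting objective values, assuming an NLP model named market with variables Qd, Qs and objective W as in the earlier example:

```gams
set k "trial starting points" /k1*k3/;
parameter start(k) "starting values" /k1 0, k2 10, k3 50/;
parameter report(k) "objective value reached from each start";

loop(k,
* set the initial level for each variable, then solve
  Qd.L = start(k);
  Qs.L = start(k);
  solve market using nlp maximizing W;
  report(k) = W.L;
);
display report;
```

If the reported objective values differ across starts, the solver is finding different local optima and the best of them should be kept.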
Starting Points
Initial values for variables
Syntax:
variablename.L(setdependency) = startingvalue;
The default starting point is zero or the variable's lower bound.
A starting point in the right neighborhood is likely to return a desirable
solution.
Initial values close to the optimal ones reduce the work required to find the
optimal solution.
Poor initial values can lead to numerical problems; good starting points
help avoid them.
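For example, with a scalar variable Qd and a set-dependent variable x(i) (illustrative names, assuming a parameter initval(i) holds the desired starting levels):

```gams
* scalar variable: start the search near the expected equilibrium
Qd.L = 10;

* set-dependent variable: one starting value for every member of i
x.L(i) = 5;

* or initialize from data already in the model
x.L(i) = initval(i);
```

The .L attribute holds the variable's level; before a solve it is the starting point, and after a solve it holds the solution value.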
Example
Suppose we solve a simple consumer-plus-producer-surplus (CS-PS)
maximization problem without specifying starting points.
The NLP solver takes 22 iterations to find the solution.
If we specify starting points before solving the model
The NLP solver takes only 8 iterations to find the solution.
Bounds
Set upper and lower bounds for specific variables
More realistic
Keeps the algorithm in a range
Improves solution feasibility and presolve performance
Syntax:
variablename.LO(setdependency) = lowerbound;
variablename.UP(setdependency) = upperbound;
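For example (illustrative names, assuming a parameter capacity(i) holds physical limits):

```gams
* keep production non-negative and below capacity
x.LO(i) = 0;
x.UP(i) = capacity(i);

* bound a scalar variable
Qd.UP = 100;
```

Tight, realistic bounds shrink the region the NLP algorithm has to search and often prevent it from wandering into regions where the non-linear functions are undefined or badly scaled.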
Linear Approximation
Generally, GAMS takes much more time to solve an NLP problem than
an LP problem.
We can sometimes linearly approximate the NLP problem and then
solve it as an LP problem.
A typical example in our class is the MOTAD model.
MOTAD Model
The algebraic form
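The algebraic form on the slide is an image and is not reproduced in this text. As a reference point, the MOTAD model as usually presented (e.g., in the McCarl and Spreen book; the notation here is mine) maximizes expected income less a risk penalty on mean absolute deviation:

```latex
\max_{X,\;Dev} \;\; \sum_{j} \bar{c}_{j} X_{j} \;-\; \Phi \,\frac{\sum_{k} Dev_{k}}{s}
\quad \text{s.t.} \quad
\sum_{j} a_{ij} X_{j} \le b_{i} \;\; \forall i, \qquad
\sum_{j} \left( c_{kj} - \bar{c}_{j} \right) X_{j} + Dev_{k} \ge 0 \;\; \forall k, \qquad
X_{j},\, Dev_{k} \ge 0,
```

where $c_{kj}$ is the return of activity $j$ under state of nature $k$, $\bar{c}_j$ its mean over the $s$ states, $\Phi$ the risk-aversion parameter, and $Dev_k$ the negative income deviation in state $k$. Because the absolute-value risk term is replaced by the linear deviation variables $Dev_k$, the whole model can be solved as an LP.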
Linear Approximation for MOTAD