  • Appendix A Solving Systems of Nonlinear Equations

    Chapter 4 of this book describes and analyzes the power flow problem. In its ac version, this problem is a system of nonlinear equations. This appendix describes the most common method for solving a system of nonlinear equations, namely, the Newton-Raphson method. This is an iterative method that uses initial values for the unknowns and, then, at each iteration, updates these values until no change occurs in two consecutive iterations.

    For the sake of clarity, we first describe the working of this method for the case of just one nonlinear equation with one unknown. Then, the general case of n nonlinear equations and n unknowns is considered.

    We also explain how to directly solve systems of nonlinear equations using appropriate software.

    A.1 Newton-Raphson Algorithm

    The Newton-Raphson algorithm is described in this section.

    A.1.1 One Unknown

    Consider a nonlinear function f(x): R → R. We aim at finding a value of x so that:

    f(x) = 0. (A.1)

    To do so, we first consider a given value of x, e.g., x^(0). In general, we have that f(x^(0)) ≠ 0. Thus, it is necessary to find Δx^(0) so that f(x^(0) + Δx^(0)) = 0.

    © Springer International Publishing AG 2018. A.J. Conejo, L. Baringo, Power System Operations, Power Electronics and Power Systems, https://doi.org/10.1007/978-3-319-69407-8


    Using a Taylor series, we can express f(x^(0) + Δx^(0)) as:

    f(x^(0) + Δx^(0)) = f(x^(0)) + Δx^(0) [df(x)/dx]^(0) + ((Δx^(0))^2 / 2) [d^2 f(x)/dx^2]^(0) + ... (A.2)

    Considering only the first two terms in Eq. (A.2), and since we seek to find Δx^(0) so that f(x^(0) + Δx^(0)) = 0, we can approximately compute Δx^(0) as:

    Δx^(0) ≈ − f(x^(0)) / [df(x)/dx]^(0). (A.3)

    Next, we can update x as:

    x^(1) = x^(0) + Δx^(0). (A.4)

    Then, we check if f(x^(1)) = 0. If so, we have found a value of x that satisfies f(x) = 0. If not, we repeat the above step to find Δx^(1) so that f(x^(1) + Δx^(1)) = 0, and so on.

    In general, we can compute x^(ν+1) as:

    x^(ν+1) = x^(ν) − f(x^(ν)) / [df(x)/dx]^(ν), (A.5)

    where ν is the iteration counter.

    Considering the above, the Newton-Raphson method consists of the following steps:

    • Step 0: initialize the iteration counter (ν = 0) and provide an initial value for x, i.e., x = x^(ν) = x^(0).
    • Step 1: compute x^(ν+1) using Eq. (A.5).
    • Step 2: check if the difference between the values of x in two consecutive iterations is lower than a prespecified tolerance ε, i.e., check if |x^(ν+1) − x^(ν)| < ε. If so, the algorithm has converged and the solution is x^(ν+1). If not, continue at Step 3.
    • Step 3: update the iteration counter ν ← ν + 1 and continue at Step 1.

    Illustrative Example A.1 Newton-Raphson algorithm for a one-unknown problem

    We consider the following quadratic function:

    f(x) = x^2 − 3x + 2,

  • A Solving Systems of Nonlinear Equations 273

    whose first derivative is:

    df(x)/dx = 2x − 3.

    The Newton-Raphson algorithm proceeds as follows:

    • Step 0: we initialize the iteration counter (ν = 0) and provide an initial value for x, e.g., x^(ν) = x^(0) = 0.
    • Step 1: we compute x^(1) using the equation below:

    x^(1) = x^(0) − ((x^(0))^2 − 3x^(0) + 2) / (2x^(0) − 3) = 0 − (0^2 − 3 · 0 + 2) / (2 · 0 − 3) = 0.6667.

    • Step 2: we compute the absolute value of the difference between x^(1) and x^(0), i.e., |0.6667 − 0| = 0.6667. Since this difference is not small enough, we continue at Step 3.
    • Step 3: we update the iteration counter ν = 0 + 1 = 1 and continue at Step 1.
    • Step 1: we compute x^(2) using the equation below:

    x^(2) = x^(1) − ((x^(1))^2 − 3x^(1) + 2) / (2x^(1) − 3) = 0.6667 − (0.6667^2 − 3 · 0.6667 + 2) / (2 · 0.6667 − 3) = 0.9333.

    • Step 2: we compute the absolute value of the difference between x^(2) and x^(1), i.e., |0.9333 − 0.6667| = 0.2666. Since this difference is not small enough, we continue at Step 3.
    • Step 3: we update the iteration counter ν = 1 + 1 = 2 and continue at Step 1.

    This iterative algorithm is repeated until the difference between the values of x in two consecutive iterations is small enough. Table A.1 summarizes the results. The algorithm converges in four iterations for a tolerance of 1 × 10^−4.

    Note that the number of iterations needed for convergence by the Newton-Raphson algorithm is small.

    Table A.1 Illustrative Example A.1: results

    Iteration   x
    0           0
    1           0.6667
    2           0.9333
    3           0.9961
    4           1.0000
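The iteration in Eq. (A.5) can be sketched in a few lines of code and used to reproduce the results of this example (a Python sketch; the function and variable names below are ours, not from the book):

```python
def newton_1d(f, df, x0, tol=1e-4, max_iter=50):
    """Newton-Raphson for one unknown: iterate x <- x - f(x)/df(x)."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        # Stop when two consecutive iterates differ by less than tol
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("Newton-Raphson did not converge")

# Illustrative Example A.1: f(x) = x^2 - 3x + 2, starting at x = 0
root = newton_1d(lambda x: x**2 - 3*x + 2, lambda x: 2*x - 3, 0.0)
print(round(root, 4))  # 1.0
```

Starting from x^(0) = 0, the iterates follow Table A.1 and converge to the root x = 1.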


    A.1.2 Many Unknowns

    The Newton-Raphson method described in the previous section is extended in this section to the general case of a system of n nonlinear equations with n unknowns, as the one described below:

    f1(x1, x2, ..., xn) = 0,
    f2(x1, x2, ..., xn) = 0,
        ...
    fn(x1, x2, ..., xn) = 0, (A.6)

    where fi(x1, x2, ..., xn): R^n → R, i = 1, ..., n, are nonlinear functions.

    The system of equations (A.6) can be rewritten in compact form as:

    f(x) = 0, (A.7)

    where:

    • f(x) = [f1(x) f2(x) ... fn(x)]^T: R^n → R^n,
    • x = [x1 x2 ... xn]^T,
    • 0 = [0 0 ... 0]^T, and
    • ^T denotes the transpose operator.

    Given an initial value for vector x, i.e., x^(0), we have, in general, that f(x^(0)) ≠ 0. Thus, we need to find Δx^(0) so that f(x^(0) + Δx^(0)) = 0. Using the first-order Taylor series, f(x^(0) + Δx^(0)) can be approximately expressed as:

    f(x^(0) + Δx^(0)) ≈ f(x^(0)) + J^(0) Δx^(0), (A.8)

    where J is the n × n Jacobian matrix:

        [ ∂f1(x)/∂x1   ∂f1(x)/∂x2   ...   ∂f1(x)/∂xn ]
    J = [ ∂f2(x)/∂x1   ∂f2(x)/∂x2   ...   ∂f2(x)/∂xn ] . (A.9)
        [     ...           ...      ...       ...    ]
        [ ∂fn(x)/∂x1   ∂fn(x)/∂x2   ...   ∂fn(x)/∂xn ]

    Since we seek f(x^(0) + Δx^(0)) = 0, from Eq. (A.8) we can compute Δx^(0) as:

    Δx^(0) ≈ −[J^(0)]^(−1) f(x^(0)). (A.10)

  • A Solving Systems of Nonlinear Equations 275

    Then, we can update vector x as:

    x^(1) = x^(0) + Δx^(0). (A.11)

    In general, we can update vector x as:

    x^(ν+1) = x^(ν) − [J^(ν)]^(−1) f(x^(ν)), (A.12)

    where ν is the iteration counter.

    Considering the above, the Newton-Raphson algorithm consists of the following steps:

    • Step 0: initialize the iteration counter (ν = 0) and provide an initial value for vector x, i.e., x = x^(ν) = x^(0).
    • Step 1: compute the Jacobian J^(ν) using (A.9).
    • Step 2: compute x^(ν+1) using matrix equation (A.12).
    • Step 3: check whether every element of the absolute value of the difference between the values of vector x in two consecutive iterations is lower than a prespecified tolerance ε, i.e., check if |x^(ν+1) − x^(ν)| < ε. If so, the algorithm has converged and the solution is x^(ν+1). If not, continue at Step 4.
    • Step 4: update the iteration counter ν ← ν + 1 and continue at Step 1.

    For the sake of clarity, this iterative algorithm is schematically described through the flowchart in Fig. A.1.

    Illustrative Example A.2 Newton-Raphson algorithm for a two-unknown problem

    We consider the following system of two equations and two unknowns:

    f1(x, y) = x + xy − 4,
    f2(x, y) = x + y − 3.

    We aim at finding the values of x and y so that f1(x, y) = 0 and f2(x, y) = 0. To do so, we use the Newton-Raphson method.

    First, we compute the partial derivatives:

    ∂f1(x, y)/∂x = 1 + y,   ∂f1(x, y)/∂y = x,
    ∂f2(x, y)/∂x = 1,       ∂f2(x, y)/∂y = 1.

    Fig. A.1 Algorithm flowchart for the Newton-Raphson method

    Second, we build the Jacobian matrix:

    J = [ 1 + y   x ]
        [   1     1 ] .

    Then, we follow the iterative procedure described above:

    • Step 0: we initialize the iteration counter (ν = 0) and provide initial values for variables x and y, e.g., x^(ν) = x^(0) = 1.98 and y^(ν) = y^(0) = 1.02, respectively.
    • Step 1: we compute the Jacobian matrix J at iteration ν = 0:

    J^(0) = [ 1 + y^(0)   x^(0) ] = [ 1 + 1.02   1.98 ] = [ 2.02   1.98 ]
            [     1         1   ]   [     1        1  ]   [   1      1  ] .


    Table A.2 Illustrative Example A.2: results

    Iteration   x        y
    0           1.9800   1.0200
    1           1.9900   1.0100
    2           1.9950   1.0050
    3           1.9975   1.0025
    4           1.9987   1.0013
    5           1.9994   1.0006
    6           1.9997   1.0003
    7           1.9998   1.0002
    8           1.9999   1.0001
    9           2.0000   1.0000

    • Step 2: we compute x^(1) and y^(1) using the matrix equation below:

    [ x^(1) ]   [ x^(0) ]   [ 1 + y^(0)   x^(0) ]^(−1) [ x^(0) + x^(0) y^(0) − 4 ]
    [ y^(1) ] = [ y^(0) ] − [     1          1  ]      [ x^(0) + y^(0) − 3       ]

              = [ 1.98 ] − [ 2.02   1.98 ]^(−1) [ −4 × 10^−4 ] = [ 1.9900 ]
                [ 1.02 ]   [   1      1  ]      [      0     ]   [ 1.0100 ] .

    • Step 3: we compute the absolute value of the difference between x^(1) and x^(0), i.e., |1.9900 − 1.98| = 0.01, as well as the absolute value of the difference between y^(1) and y^(0), i.e., |1.0100 − 1.02| = 0.01. Since these differences are not small enough, we continue with Step 4.
    • Step 4: we update the iteration counter ν = 0 + 1 = 1 and continue with Step 1.

    This iterative algorithm is repeated until the differences between the values of x and y in two consecutive iterations are small enough. Table A.2 provides the evolution of the values of these unknowns. The algorithm converges in nine iterations for a tolerance of 1 × 10^−4.

    Note that the number of iterations needed by the Newton-Raphson algorithm is rather small.

    Next, we consider a different initial solution. Table A.3 provides the results. In this case, the algorithm converges in 11 iterations for a tolerance of 1 × 10^−4.

    We conclude that the initial solution does not have an important impact on the number of iterations required for convergence, provided that convergence is attained. However, convergence is not necessarily guaranteed, and the Jacobian may be singular at any iteration. Further details on convergence guarantees and on convergence speed are available in [1].


    Table A.3 Illustrative Example A.2: results considering a different initial solution

    Iteration   x        y
    0           2.1000   0.9000
    1           2.0500   0.9500
    2           2.0250   0.9745
    3           2.0125   0.9875
    4           2.0062   0.9938
    5           2.0031   0.9969
    6           2.0016   0.9984
    7           2.0008   0.9992
    8           2.0004   0.9996
    9           2.0002   0.9998
    10          2.0001   0.9999
    11          2.0000   1.0000
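The vector iteration (A.12) can be sketched as follows (a Python/NumPy sketch of the steps above; the names are ours). Rather than forming [J^(ν)]^(−1) explicitly, it is customary to solve the linear system J^(ν) Δx^(ν) = −f(x^(ν)) at each iteration:

```python
import numpy as np

def newton_nd(f, jac, x0, tol=1e-4, max_iter=100):
    """Newton-Raphson for n unknowns: x <- x - J(x)^{-1} f(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # Solve J dx = -f instead of inverting the Jacobian
        dx = np.linalg.solve(jac(x), -f(x))
        x = x + dx
        # Converged when every component changes by less than tol
        if np.all(np.abs(dx) < tol):
            return x
    raise RuntimeError("Newton-Raphson did not converge")

# Illustrative Example A.2: f1 = x + x*y - 4, f2 = x + y - 3
f = lambda v: np.array([v[0] + v[0] * v[1] - 4.0, v[0] + v[1] - 3.0])
jac = lambda v: np.array([[1.0 + v[1], v[0]], [1.0, 1.0]])
sol = newton_nd(f, jac, [1.98, 1.02])  # approaches (x, y) = (2, 1)
```

Starting from (1.98, 1.02), the iterates reproduce Table A.2 and approach the solution (2, 1), at which, incidentally, the Jacobian is singular, which explains the slow (linear) convergence observed in the tables.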

    A.2 Direct Solution

    Generally, the Newton-Raphson method does not need to be implemented. An off-the-shelf routine (in GNU Octave [2] or MATLAB [3]) embodying the Newton-Raphson algorithm can be used to solve systems of nonlinear equations.

    Illustrative Examples A.1 and A.2 are solved below using GNU Octave routines.

    A.2.1 One Unknown

    The GNU Octave [2] routines below solve Illustrative Example A.1:

    clc
    fun = @NR1;
    x0 = [0]; x = fsolve(fun,x0)

    function F = NR1(x)
    %
    F(1)=x(1)*x(1)-3*x(1)+2;

    The solution provided by GNU Octave is:

    x = 1.00000


    A.2.2 Many Unknowns

    The GNU Octave routines below solve Illustrative Example A.2:

    clc
    fun = @NR2;
    x0 = [1.98,1.02]; x = fsolve(fun,x0)

    function F = NR2(x)
    %
    F(1)=x(1)+x(1)*x(2)-4;
    F(2)=x(1)+x(2)-3;

    The solution provided by GNU Octave is:

    x =
       1.9994   1.0006
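The same direct approach carries over to other environments. For instance, SciPy's `scipy.optimize.fsolve` plays the role of the Octave routine (a Python sketch; it assumes SciPy is installed):

```python
from scipy.optimize import fsolve

# One unknown (Illustrative Example A.1): f(x) = x^2 - 3x + 2
root = fsolve(lambda x: x**2 - 3*x + 2, x0=0.0)

# Two unknowns (Illustrative Example A.2)
def nr2(v):
    x, y = v
    return [x + x*y - 4, x + y - 3]

sol = fsolve(nr2, x0=[1.98, 1.02])  # close to (2, 1)
```

Note that `fsolve` returns an array even for the one-unknown case, and, like any Newton-type routine, its accuracy near the double root of Example A.2 is limited by the singular Jacobian there.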

    A.3 Summary and Further Reading

    This appendix describes the Newton-Raphson method, which is the most common method for solving systems of nonlinear equations, such as those considered in Chap. 4 of this book. The Newton-Raphson method is based on an iterative procedure that updates the value of the unknowns involved until the changes in their values in two consecutive iterations are small enough.

    Different illustrative examples are used to show the working of the Newton-Raphson method. Additionally, this appendix also explains how to directly solve a system of nonlinear equations using appropriate software, such as GNU Octave [2].

    Additional details can be found in the monograph by Chapra and Canale on numerical methods in engineering [1].

    References

    1. Chapra, S.C., Canale, R.P.: Numerical Methods for Engineers, 6th edn. McGraw-Hill, New York (2010)
    2. GNU Octave (2016): Available at www.gnu.org/software/octave
    3. MATLAB (2016): Available at www.mathworks.com/products/matlab

  • Appendix B Solving Optimization Problems

    This appendix provides an overview of the general structure of some of the optimization problems considered through the chapters of this book, namely, linear programming, mixed-integer linear programming, and nonlinear programming problems.

    B.1 Linear Programming Problems

    The simplest instance of an optimization problem is a linear programming (LP) problem. All variables of an LP problem are continuous, and its objective function and constraints are linear.

    B.1.1 Formulation

    The general formulation of an LP problem is as follows:

    min_{xi, ∀i}  Σi Ci xi  (B.1a)

    subject to

    Σi Aij xi = Bj,  j = 1, ..., m,  (B.1b)
    Σi Dik xi ≤ Ek,  k = 1, ..., o,  (B.1c)
    xi ∈ R,  i = 1, ..., n,  (B.1d)



    where:

    • R is the set of real numbers,
    • Ci, ∀i, are the cost coefficients of variables xi, ∀i, in the objective function (B.1a),
    • Aij, ∀i, and Bj are the coefficients that define equality constraints (B.1b), ∀j,
    • Dik, ∀i, and Ek are the coefficients that define inequality constraints (B.1c), ∀k,
    • n is the number of continuous optimization variables,
    • m is the number of equality constraints, and
    • o is the number of inequality constraints.

    In compact form, the LP problem (B.1) can be written as:

    min_x  C^T x  (B.2a)

    subject to

    A x = B,  (B.2b)
    D x ≤ E,  (B.2c)
    x ∈ R^(n×1),  (B.2d)

    where:

    • superscript ^T denotes the transpose operator,
    • C ∈ R^(n×1) is the cost coefficient vector of the variable vector x in the objective function (B.2a),
    • A ∈ R^(m×n) and B ∈ R^(m×1) are the matrix and the vector of coefficients that define equality constraint (B.2b), and
    • D ∈ R^(o×n) and E ∈ R^(o×1) are the matrix and the vector of coefficients that define inequality constraint (B.2c).

    Some examples of LP problems are the dc optimal power flow problem analyzed in Chap. 6 or the economic dispatch problem described in Chap. 7 of this book.

    B.1.2 Solution

    One of the most common and efficient methods for solving LP problems is the simplex method [2]. A detailed description of this method can be found, for instance, in [4].

    LP problems can also be solved using one of the many commercially available software tools. For example, in this book we use CPLEX [5] under GAMS [3].

  • Illustrative Example B.1 Linear programming

    We consider a generating unit with a capacity of 10 MW and a variable cost of $21/MWh. This generating unit has to decide its power output for the following 6 h, knowing that the electric energy prices in these hours are $10/MWh, $15/MWh, $22/MWh, $30/MWh, $24/MWh, and $20/MWh, respectively.

    Considering these data, we formulate the following LP problem:

    max_{p1, p2, p3, p4, p5, p6}  10 p1 + 15 p2 + 22 p3 + 30 p4 + 24 p5 + 20 p6 − 21 (p1 + p2 + p3 + p4 + p5 + p6)

    subject to

    0 ≤ p1 ≤ 10,
    0 ≤ p2 ≤ 10,
    0 ≤ p3 ≤ 10,
    0 ≤ p4 ≤ 10,
    0 ≤ p5 ≤ 10,
    0 ≤ p6 ≤ 10.

    The solution of this problem is (note that a superscript * in the variables below indicates optimal value):

    p1* = 0, p2* = 0, p3* = 10 MW, p4* = 10 MW, p5* = 10 MW, p6* = 0.

    This solution renders an objective function value of $130.

    A simple input GAMS [3] file to solve Illustrative Example B.1 is provided below:

    variables z, p1, p2, p3, p4, p5, p6;

    equations fobj, eq1a, eq1b, eq2a, eq2b, eq3a, eq3b, eq4a, eq4b, eq5a, eq5b, eq6a, eq6b;

    fobj.. z=e=10*p1+15*p2+22*p3+30*p4+24*p5+20*p6-21*(p1+p2+p3+p4+p5+p6);

    eq1a.. 0=l=p1;
    eq1b.. p1=l=10;
    eq2a.. 0=l=p2;
    eq2b.. p2=l=10;
    eq3a.. 0=l=p3;
    eq3b.. p3=l=10;
    eq4a.. 0=l=p4;
    eq4b.. p4=l=10;
    eq5a.. 0=l=p5;
    eq5b.. p5=l=10;
    eq6a.. 0=l=p6;
    eq6b.. p6=l=10;

    model example_lp /all/;
    solve example_lp using lp maximizing z;

    display z.l, p1.l, p2.l, p3.l, p4.l, p5.l, p6.l;

    The part of the GAMS output file that provides the optimal solution is given below:

    ---- 23 variable z.l  = 130.000
            variable p1.l = 0.000
            variable p2.l = 0.000
            variable p3.l = 10.000
            variable p4.l = 10.000
            variable p5.l = 10.000
            variable p6.l = 0.000
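Because the constraints in this LP are simple bounds, the problem decouples by hour: the unit runs at full capacity whenever the price exceeds the variable cost and is idle otherwise. This can be checked with a dependency-free script (a Python sketch; the names are ours):

```python
prices = [10.0, 15.0, 22.0, 30.0, 24.0, 20.0]  # $/MWh, one per hour
var_cost = 21.0                                # $/MWh
capacity = 10.0                                # MW

# Optimal bound-constrained dispatch: full output when the hourly margin is positive
p = [capacity if lam > var_cost else 0.0 for lam in prices]
profit = sum((lam - var_cost) * pt for lam, pt in zip(prices, p))
print(profit)  # 130.0
```

The resulting schedule, [0, 0, 10, 10, 10, 0] MW with a profit of $130, matches the GAMS solution above.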

    B.2 Mixed-Integer Linear Programming Problems

    A mixed-integer linear programming (MILP) problem is an LP problem in which some of the optimization variables are not continuous but integer.

    B.2.1 Formulation

    The general formulation of a MILP problem is as follows:

    min_{xi, ∀i; yℓ, ∀ℓ}  Σi Ci xi + Σℓ Rℓ yℓ  (B.3a)

    subject to

    Σi Aij xi + Σℓ Gℓj yℓ = Bj,  ∀j,  (B.3b)
    Σi Dik xi + Σℓ Hℓk yℓ ≤ Ek,  ∀k,  (B.3c)
    xi ∈ R,  ∀i,  (B.3d)
    yℓ ∈ I,  ℓ = 1, ..., p,  (B.3e)

    where:

    • I is the set of integer variables,
    • Ci, ∀i, and Rℓ, ∀ℓ, are the cost coefficients of variables xi, ∀i, and yℓ, ∀ℓ, respectively, in the objective function (B.3a),
    • Aij, ∀i; Gℓj, ∀ℓ; and Bj are the coefficients that define equality constraints (B.3b), ∀j,
    • Dik, ∀i; Hℓk, ∀ℓ; and Ek are the coefficients that define inequality constraints (B.3c), ∀k, and
    • p is the number of integer optimization variables.

    In compact form, MILP problem (B.3) can be written as:

    min_{x,y}  C^T x + R^T y  (B.4a)

    subject to

    A x + G y = B,  (B.4b)
    D x + H y ≤ E,  (B.4c)
    x ∈ R^(n×1),  (B.4d)
    y ∈ I^(p×1),  (B.4e)

    where:

    • C ∈ R^(n×1) and R ∈ R^(p×1) are the cost coefficient vectors of the variable vectors x and y, respectively, in objective function (B.4a),
    • A ∈ R^(m×n), G ∈ R^(m×p), and B ∈ R^(m×1) are the matrices and vector of coefficients that define equality constraint (B.4b), and
    • D ∈ R^(o×n), H ∈ R^(o×p), and E ∈ R^(o×1) are the matrices and vector of coefficients that define inequality constraint (B.4c).

    Some examples of MILP problems are the unit commitment problem described in Chap. 7 or the self-scheduling problem analyzed in Chap. 8 of this book.


    B.2.2 Solution

    MILP problems can be solved using branch-and-cut methods. A detailed description of these methods can be found, for instance, in [4].

    MILP problems can also be solved using one of the many commercially available software tools. For example, in this book we use CPLEX [5] under GAMS [3].

  • Illustrative Example B.2 Mixed-integer linear programming

    We consider again the data of Illustrative Example B.1. However, in this case, we assume that the generating unit has a minimum power output of 2 MW and a fixed cost of $25.

    Considering these data, we formulate the following MILP problem:

    max_{p1, ..., p6, u1, ..., u6}  10 p1 + 15 p2 + 22 p3 + 30 p4 + 24 p5 + 20 p6 − 21 (p1 + p2 + p3 + p4 + p5 + p6) − 25 (u1 + u2 + u3 + u4 + u5 + u6)

    subject to

    2 u1 ≤ p1 ≤ 10 u1,
    2 u2 ≤ p2 ≤ 10 u2,
    2 u3 ≤ p3 ≤ 10 u3,
    2 u4 ≤ p4 ≤ 10 u4,
    2 u5 ≤ p5 ≤ 10 u5,
    2 u6 ≤ p6 ≤ 10 u6,
    u1, u2, u3, u4, u5, u6 ∈ {0, 1}.

    In this example it is necessary to include binary variables to represent the on/off status of the generating unit at each time period.

    The solution of this problem is (note that a superscript * in the variables below indicates optimal value):

    p1* = 0, p2* = 0, p3* = 0, p4* = 10 MW, p5* = 10 MW, p6* = 0,
    u1* = 0, u2* = 0, u3* = 0, u4* = 1, u5* = 1, u6* = 0.

    Contrary to the solution of Illustrative Example B.1, we can see that in this example it is not optimal to turn on the generating unit in the third time period, as a result of its fixed cost.

    This solution renders an objective function value of $70.

    A simple input GAMS [3] file to solve Illustrative Example B.2 is provided below:

    variables z, p1, p2, p3, p4, p5, p6;

    binary variables u1, u2, u3, u4, u5, u6;

    equations fobj, eq1a, eq1b, eq2a, eq2b, eq3a, eq3b, eq4a, eq4b, eq5a, eq5b, eq6a, eq6b;

    fobj.. z=e=10*p1+15*p2+22*p3+30*p4+24*p5+20*p6-21*(p1+p2+p3+p4+p5+p6)-25*(u1+u2+u3+u4+u5+u6);

    eq1a.. 2*u1=l=p1;
    eq1b.. p1=l=10*u1;
    eq2a.. 2*u2=l=p2;
    eq2b.. p2=l=10*u2;
    eq3a.. 2*u3=l=p3;
    eq3b.. p3=l=10*u3;
    eq4a.. 2*u4=l=p4;
    eq4b.. p4=l=10*u4;
    eq5a.. 2*u5=l=p5;
    eq5b.. p5=l=10*u5;
    eq6a.. 2*u6=l=p6;
    eq6b.. p6=l=10*u6;

    model example_milp /all/;
    solve example_milp using mip maximizing z;

    display z.l, p1.l, p2.l, p3.l, p4.l, p5.l, p6.l, u1.l, u2.l, u3.l, u4.l, u5.l, u6.l;


    The part of the GAMS output file that provides the optimal solution is given below:

    ---- 25 variable z.l  = 70.000
            variable p1.l = 0.000
            variable p2.l = 0.000
            variable p3.l = 0.000
            variable p4.l = 10.000
            variable p5.l = 10.000
            variable p6.l = 0.000
            variable u1.l = 0.000
            variable u2.l = 0.000
            variable u3.l = 0.000
            variable u4.l = 1.000
            variable u5.l = 1.000
            variable u6.l = 0.000
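Since only the per-hour bounds link p_t and u_t, this particular MILP also decouples by hour: the unit should be committed in hour t only if the best achievable margin in that hour covers the $25 fixed cost. The optimal solution can be checked with a dependency-free script (a Python sketch; the names are ours):

```python
prices = [10.0, 15.0, 22.0, 30.0, 24.0, 20.0]  # $/MWh, one per hour
var_cost, fixed_cost = 21.0, 25.0              # $/MWh and $ per committed hour
p_min, p_max = 2.0, 10.0                       # MW

profit = 0.0
schedule = []  # (u_t, p_t) per hour
for lam in prices:
    margin = lam - var_cost
    # If committed, the best output is p_max when the margin is positive,
    # otherwise the minimum output p_min
    best_p = p_max if margin > 0 else p_min
    hour_profit = margin * best_p - fixed_cost
    if hour_profit > 0:  # commit only when the hour is profitable
        profit += hour_profit
        schedule.append((1, best_p))
    else:
        schedule.append((0, 0.0))
print(profit)  # 70.0
```

Hour 3 yields a margin of only $10, below the $25 fixed cost, so the unit stays off there; hours 4 and 5 contribute $65 and $5, for a total profit of $70, matching the GAMS solution.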

    B.3 Nonlinear Programming Problems

    A nonlinear programming (NLP) problem is an optimization problem in which the objective function and/or some of the constraints are nonlinear.

    B.3.1 Formulation

    The general formulation of an NLP problem is as follows:

    min_{xi, ∀i}  f(x1, ..., xn)  (B.5a)

    subject to

    Aj(x1, ..., xn) = 0,  ∀j,  (B.5b)
    Dk(x1, ..., xn) ≤ 0,  ∀k,  (B.5c)
    xi ∈ R,  ∀i,  (B.5d)

    where:

    • f(x1, ..., xn): R^n → R is the nonlinear objective function (B.5a),
    • Aj(x1, ..., xn): R^n → R are the nonlinear functions that define equality constraints (B.5b), ∀j, and
    • Dk(x1, ..., xn): R^n → R are the nonlinear functions that define inequality constraints (B.5c), ∀k.


    Problem (B.5) can be rewritten in compact form as:

    min_x  f(x)  (B.6a)

    subject to

    A(x) = 0,  (B.6b)
    D(x) ≤ 0,  (B.6c)
    x ∈ R^n,  (B.6d)

    where:

    • f(x): R^n → R is the nonlinear objective function (B.6a),
    • A(x): R^n → R^m is the nonlinear function that defines equality constraint (B.6b), and
    • D(x): R^n → R^o is the nonlinear function that defines inequality constraint (B.6c).

    Some examples of NLP problems are the state estimation problem described in Chap. 5 or the ac optimal power flow problem analyzed in Chap. 6 of this book.

    B.3.2 Solution

    Solving NLP problems is generally more complicated than solving LP problems or MILP problems.

    NLP problems can be solved using one of the many commercially available software tools. For example, in this book we use CONOPT [1] under GAMS [3].

    Further information about NLP problems can be found, for instance, in [4].

  • Illustrative Example B.3 Nonlinear programming

    We consider again the data of Illustrative Example B.1. However, in this case, we assume that the generating unit has a quadratic cost function so that the cost is:

    ct = 15 pt + 2 pt^2,  t = 1, ..., 6.

    Considering these data, we formulate the following NLP problem:

    max_{p1, p2, p3, p4, p5, p6}  10 p1 + 15 p2 + 22 p3 + 30 p4 + 24 p5 + 20 p6 − 15 (p1 + p2 + p3 + p4 + p5 + p6) − 2 (p1^2 + p2^2 + p3^2 + p4^2 + p5^2 + p6^2)

    subject to

    0 ≤ p1 ≤ 10,
    0 ≤ p2 ≤ 10,
    0 ≤ p3 ≤ 10,
    0 ≤ p4 ≤ 10,
    0 ≤ p5 ≤ 10,
    0 ≤ p6 ≤ 10.

    The solution of this problem is (note that a superscript * in the variables below indicates optimal value):

    p1* = 0, p2* = 0, p3* = 1.75 MW, p4* = 3.75 MW, p5* = 2.25 MW, p6* = 1.25 MW.

    This solution renders an objective function value of $47.5.

    A simple input GAMS [3] file to solve Illustrative Example B.3 is provided below:

    variables z, p1, p2, p3, p4, p5, p6;

    equations fobj, eq1a, eq1b, eq2a, eq2b, eq3a, eq3b, eq4a, eq4b, eq5a, eq5b, eq6a, eq6b;

    fobj.. z=e=10*p1+15*p2+22*p3+30*p4+24*p5+20*p6-15*(p1+p2+p3+p4+p5+p6)-2*(p1*p1+p2*p2+p3*p3+p4*p4+p5*p5+p6*p6);

    eq1a.. 0=l=p1;
    eq1b.. p1=l=10;
    eq2a.. 0=l=p2;
    eq2b.. p2=l=10;
    eq3a.. 0=l=p3;
    eq3b.. p3=l=10;
    eq4a.. 0=l=p4;
    eq4b.. p4=l=10;
    eq5a.. 0=l=p5;
    eq5b.. p5=l=10;
    eq6a.. 0=l=p6;
    eq6b.. p6=l=10;

    model example_nlp /all/;
    solve example_nlp using nlp maximizing z;

    display z.l, p1.l, p2.l, p3.l, p4.l, p5.l, p6.l;

    The part of the GAMS output file that provides the optimal solution is given below:

    ---- 23 variable z.l  = 47.500
            variable p1.l = 0.000
            variable p2.l = 0.000
            variable p3.l = 1.750
            variable p4.l = 3.750
            variable p5.l = 2.250
            variable p6.l = 1.250
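Since the constraints are again simple bounds, each hour of this NLP can be solved in closed form: maximizing the concave function (λt − 15) pt − 2 pt^2 over 0 ≤ pt ≤ 10 gives pt = (λt − 15)/4, clipped to the bounds. This can be checked with a dependency-free script (a Python sketch; the names are ours):

```python
prices = [10.0, 15.0, 22.0, 30.0, 24.0, 20.0]  # $/MWh, one per hour

def best_output(lam, p_max=10.0):
    """Maximizer of (lam - 15) p - 2 p^2 on [0, p_max]: stationary point clipped."""
    return min(max((lam - 15.0) / 4.0, 0.0), p_max)

p = [best_output(lam) for lam in prices]
# Revenue minus quadratic cost c_t = 15 p_t + 2 p_t^2
obj = sum(lam * pt - (15.0 * pt + 2.0 * pt**2) for lam, pt in zip(prices, p))
print(p, obj)  # [0.0, 0.0, 1.75, 3.75, 2.25, 1.25] 47.5
```

The outputs and the objective value of $47.5 match the GAMS solution above; note that, unlike the LP case, the optimum is interior to the bounds in hours 3 to 6.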

    B.4 Summary and Further Reading

    This appendix provides brief formal descriptions of the three types of optimization problems considered in this book, namely, LP problems, MILP problems, and NLP problems. A detailed description of these problems can be found in the monograph by Sioshansi and Conejo [4].

    References

    1. CONOPT (2016). Available at www.conopt.com/
    2. Dantzig, G.B.: Linear Programming and Extensions. Princeton University Press, Princeton, NJ (1963)
    3. GAMS (2016). Available at www.gams.com/
    4. Sioshansi, R., Conejo, A.J.: Optimization in Engineering. Models and Algorithms. Springer, New York (2017)
    5. The ILOG CPLEX (2016). Available at www.ilog.com/products/cplex/


