
INTERIOR-POINT METHODS FOR NONLINEAR, SECOND-ORDER CONE, AND SEMIDEFINITE PROGRAMMING

Hande Yurttan Benson

A DISSERTATION PRESENTED TO THE FACULTY OF PRINCETON UNIVERSITY IN CANDIDACY FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

RECOMMENDED FOR ACCEPTANCE BY THE DEPARTMENT OF OPERATIONS RESEARCH AND FINANCIAL ENGINEERING

June 2001

© Copyright by Hande Yurttan Benson, 2001. All rights reserved.

Abstract

Interior-point methods have been a re-emerging field in optimization since the mid-1980s. We will present here ways of improving the performance of these algorithms for nonlinear optimization and extending them to different classes of problems and application areas.

At each iteration, an interior-point algorithm computes a direction in which to proceed, and then must decide how long of a step to take. The traditional approach to choosing a steplength is to use a merit function, which balances the goals of improving the objective function and satisfying the constraints. Recently, Fletcher and Leyffer reported success with using a filter method, in which improvement in either the objective function or the constraint infeasibility is sufficient. We have combined these two approaches and, for the first time, applied them to interior-point methods, with good results.

Another issue in nonlinear optimization is the emergence of several popular problem classes and their specialized solution algorithms. Two such problem classes are Second-Order Cone Programming (SOCP) and Semidefinite Programming (SDP). In the second part of this dissertation, we show that problems from both of these classes can be reformulated as smooth convex optimization problems and solved using a general-purpose interior-point algorithm for nonlinear optimization.

Acknowledgements

I cannot thank Bob Vanderbei and Dave Shanno enough for their brilliant mentorship and for teaching me everything I know about optimization. Much of the research presented here has been a direct result of their ideas and experience. I have very much appreciated their advice, support, and encouragement.

I would like to thank my other committee members, C.A. Floudas and E. Cinlar, for their insightful questions and comments on my work.

Financial support during the completion of this research has mainly come from Princeton University in the form of teaching assistantships. I would like to express my gratitude to DIMACS for providing a graduate fellowship, and to NSF and ONR grants for supporting my research activities at various times.

I would also like to thank all of the professors and fellow students of the Operations Research and Financial Engineering department for all that I have learned over the last four years.

This dissertation would not have been possible without the help of my family. I would like to thank my husband, Dan, who provides endless love and patience and much needed advice and support, and the whole Benson family for giving me yet another wonderful family to be a part of. I would also like to thank my brother, Ersin, and his wife, Vanessa, for all of their love and support. Most importantly, this work is dedicated to my parents, Iffet and Necdet Yurttan, who have made more sacrifices and given me more love and support than any child could have asked of her parents.

Contents

Abstract
Acknowledgements
List of Figures
List of Tables

Chapter 1. Introduction
  1. Thesis Outline.

Part 1. The interior-point algorithm: description and improvements

Chapter 2. Interior-Point Methods for Nonlinear Programming.
  1. The Interior-Point Algorithm.
  2. The barrier parameter.

Chapter 3. Steplength control: Background.
  1. Merit functions.
  2. Filter Methods.

Chapter 4. Steplength control in interior-point methods.
  1. Three Hybrid Methods.
  2. Sufficient reduction and other implementation details.

Chapter 5. Numerical Results: Comparison of Steplength Control Methods.

Part 2. Extensions to Other Problem Classes.

Chapter 6. Extension to Second-Order Cone Programming.
  1. Key issues in nonsmoothness.
  2. Alternate Formulations of Second-Order Cone Constraints.

Chapter 7. Numerical Results: Second-Order Cone Programming
  1. Antenna Array Weight Design
  2. Grasping Force Optimization
  3. FIR Filter Design
  4. Portfolio Optimization
  5. Truss Topology Design
  6. Equilibrium of a system of piecewise linear springs
  7. Euclidean Single Facility Location
  8. Euclidean Multiple Facility Location
  9. Steiner Points
  10. Minimal Surfaces
  11. Plastic Collapse
  12. Results of numerical testing.

Chapter 8. Extension to Semidefinite Programming.
  1. Characterizations of Semidefiniteness
  2. The Concavity of the d_j's.

Chapter 9. Numerical Results for Semidefinite Programming
  1. The AMPL interface.
  2. Algorithm modification for step shortening.
  3. Applications.
  4. Results of Numerical Testing.

Chapter 10. Future Research Directions.

Chapter 11. Conclusions.

Bibliography

Appendix A. Numerical Results for Steplength Control.

Appendix B. Solving SDPs using AMPL.
  1. The SDP model.
  2. The AMPL function definition.
  3. Step-shortening in LOQO.

List of Figures

1 Fletcher and Leyffer's filter method adapted to the barrier objective.
2 A barrier objective filter that is updated with the barrier parameter µ at each iteration.
1 Performance profiles of LOQO and the hybrid algorithms with respect to runtime.
2 Performance profiles of LOQO and the hybrid algorithms with respect to iteration counts.

List of Tables

1 Comparison of LOQO to FB on commonly solved problems.
2 Comparison of LOQO to FO on commonly solved problems.
3 Comparison of LOQO to FP on commonly solved problems.
4 Comparison of FB to FO on commonly solved problems.
5 Comparison of FB to FP on commonly solved problems.
6 Comparison of FO to FP on commonly solved problems.
1 Runtimes for models which can be formulated as SOCPs.
1 Iteration counts and runtimes for semidefinite programming models from various application areas.
2 Iteration counts and runtimes for small truss topology problems from the SDPLib test suite.
1 Comparative results for different steplength control methods on the CUTE test suite.

CHAPTER 1

Introduction

Much of the theory used in nonlinear programming today dates back to Newton and Lagrange. Newton's Method for finding the roots of an equation has been used to find the roots of the first-order derivatives in an unconstrained nonlinear optimization problem. The theory of Lagrange multipliers has expanded the range of problems to which Newton's Method can be applied to include constrained problems.

In the middle part of the 20th century, Frisch [28] proposed logarithmic barrier methods to transform optimization problems with inequality constraints into unconstrained optimization problems. Fiacco and McCormick's important work [23] on this approach made it the focus of much of the research in nonlinear optimization in the 1960s. However, complications arising from ill-conditioning in the numerical algebra made logarithmic barrier methods fall into disfavor. For much of the 1970s and early 1980s, the field of nonlinear optimization was dominated by augmented Lagrangian methods and sequential quadratic programming.

In the meantime, the field of linear programming had placed much of its focus on Dantzig's simplex method [19]. Introduced in 1947, this method was the standard for linear optimization problems, and it performed quite well in practice. With the advances made in complexity theory, however, good empirical performance was no longer enough: an acceptably fast algorithm also had to have a theoretical worst-case runtime that was polynomial in the size of the problem. Klee and Minty provided an example in [42] for which the simplex method had exponential runtime.

In 1980, Khachian proposed the ellipsoid method for linear programming [41]. It received much acclaim, due mainly to the fact that it had a theoretical worst-case complexity which was polynomial in the problem size. However, further work to implement the algorithm showed that it performed close to its worst case in practice.

The big advance in linear programming came with Karmarkar's seminal 1984 paper [40], in which he proposed a projective interior-point method that had a polynomial worst-case complexity, and the polynomial was of lower order than Khachian's. The linear programming community was naturally excited about the result, but there was much curiosity as to how it would behave in practice. In a follow-up to his paper, Karmarkar et al. [1] presented claims that an implementation of the algorithm performed up to 50 times faster than the simplex method. There was a rush by researchers to verify this claim. Although it was not verified by the teams of Vanderbei et al. [65] and Gill et al. [30], the empirical studies showed, nonetheless, that it was faster than the simplex method on some problems and comparable to it on the rest.

Thus, the linear programming community had found itself an algorithm that performed well in practice and had satisfactory theoretical complexity. An important result that was part of Gill et al.'s [30] work was to show that Karmarkar's projective interior-point method was equivalent to Fiacco and McCormick's logarithmic barrier method. With this result, earlier work on the logarithmic barrier method was revived, and interior-point methods for nonlinear programming became an active field of research. In fact, the premiere archive of papers in this field, Interior-Point Methods Online [12], which starts in 1994, and Kranich's bibliographic collection [43], which covers previous works, together now boast over 2500 papers. Most of these papers have been written after 1984.

While reviving the research in barrier methods, however, researchers had to examine the suspected effects of ill-conditioning. Surprisingly, in [69], Wright showed that such effects were quite benign, and the theory that the undesirable behavior of these methods was due to their primal nature became more commonplace. This explained the success of primal-dual interior-point methods, which are the focus of much of nonlinear programming research today.

With the vast improvements in computer power, such as faster CPUs and larger memory, the emergence of elegant and powerful modelling environments, such as ampl [27], and the development of large test sets, such as CUTE [17] and SDPLib [10], the development and implementation of algorithms have become top priorities. Many of the algorithms differ from each other in a handful of aspects:

(1) Treatment of equality constraints/free variables
(2) Numerical algebra routines
(3) Treatment of indefiniteness
(4) Method of choosing a stepsize

Items (1) and (2) deal with the internal formulation of the problem for the algorithm and the efficient and reliable design of its numerical aspects, respectively. The treatment of indefiniteness is required for general-purpose algorithms that handle nonconvexity. The last item, the method of choosing a stepsize for the solution of the Newton system, has received much interest with the discussion of merit functions and the recent emergence of filter methods by Fletcher and Leyffer [26], and it will be the focus of Part I of this dissertation.

With the advances in nonlinear programming research, one class of problems in particular, convex programming problems, has been extensively studied. The pivotal complexity results by Nesterov and Nemirovskii [52] showed that by using self-concordant barrier functions, it is possible to construct interior-point algorithms that have worst-case polynomial runtimes. In their work, they proposed such barriers for several subclasses of problems, including second-order cone programming and semidefinite programming. Both of these subclasses have been the focus of much research in the last decade. In fact, DIMACS held a special year to study these subclasses and even hosted a computational challenge [18] where algorithms to solve them were presented.

Second-order cone programming problems arise in many important engineering applications, ranging from financial optimization to structural design. Many of these examples are surveyed in two recent papers by Lobo et al. [46] and Vanderbei and Benson [67]. Similarly, semidefinite programming problems have also been the subject of much research, partly due to the fact that many NP-hard combinatorial problems have relaxations which are semidefinite programming problems. An example of such a relaxation arises in the case of the Max-Cut problem, and it is described in detail by Goemans and Williamson [31]. Both second-order cone programming and semidefinite programming problems are large-scale convex optimization problems from real-world applications; it is therefore important to solve them efficiently. Using Nesterov and Nemirovskii's complexity results, numerous specialized algorithms have been developed, such as Andersen et al.'s mosek [2] and its add-on as described in [21] for second-order cone programming, Benson and Ye's dsdp [56] and Helmberg's SBmethod [34] for semidefinite programming, and Sturm's SeDuMi [62] for both second-order cone and semidefinite programming. There are also algorithms available for specific problem instances, such as Burer, Monteiro and Zhang's bmz [13] for the Max-Cut SDP relaxation.

The issue with these subclasses is that Nesterov and Nemirovskii's results for the self-concordant barrier only allow for specialized algorithms. However, after the emergence of Karmarkar's work, interior-point methods unified the fields of linear and nonlinear programming in terms of solution methods, and one would expect that an algorithm can be made general enough to handle all of these different classes of problems and still be efficient and reliable. In fact, both second-order cone programming and semidefinite programming (which can be seen as a generalization of linear programming) can be handled as general-purpose nonlinear programming problems, and it is the goal of Part II of this dissertation to outline this process and provide empirical results.

1. Thesis Outline.

In this dissertation, we will present a state-of-the-art interior-point method, along with ways to improve its performance and extend its usage beyond the nonlinear programming paradigm.

Part I will be focused on the algorithm itself. In Chapter 2, we will present a primal-dual interior-point algorithm, which is a simplified version of the one currently implemented in Vanderbei's loqo [64]. As discussed in the introduction, we will give ways to improve this algorithm's method for choosing step sizes. In Chapter 3, we will introduce both the traditional merit function approach and the filter methods recently introduced [26] by Fletcher and Leyffer. In Chapter 4, we will present and discuss in some detail new hybrid methods for choosing the step size. There, we will show that the use of such a hybrid method can indeed allow for more aggressive steps and improve the performance of the algorithm. The numerical results given in Chapter 5 will support this conclusion.

In Part II, we will focus on extending this algorithm to the two subclasses discussed above, second-order cone programming and semidefinite programming, in Chapters 6 (with numerical results in Chapter 7) and 8 (with numerical results in Chapter 9), respectively. We will present ways to smooth the second-order cone constraints to allow for the use of an interior-point algorithm to solve them, and we will also reformulate the semidefinite programming problem as a standard-form, convex, smooth nonlinear programming problem.

The extensive numerical results presented throughout this dissertation provide much support for the theoretical conclusions given in Parts I and II. The comparative testing for the hybrid methods presented in Part I is performed on problems from the CUTE [17], Hock and Schittkowski [35], and Schittkowski [58] test suites. Numerical results for the reformulated second-order cone problems and semidefinite programming problems include problems from the DIMACS Challenge set [18] and SDPLib [10].

Finally, we will discuss future research directions in Chapter 10.

Part 1

The interior-point algorithm: description and improvements

CHAPTER 2

Interior-Point Methods for Nonlinear Programming.

In its standard form, a nonlinear programming problem (NLP) is

(1)    minimize    f(x)
       subject to  h(x) ≥ 0,

where x ∈ R^n, f : R^n → R, and h : R^n → R^m. When f is a convex function and the components of h are concave functions of x, the problem is said to be convex. Also, for reasons which will become clear during the discussion of the interior-point algorithm, f and h are assumed to be twice continuously differentiable.
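As a concrete illustration of standard form (1), the sketch below defines a small convex instance. The names f and h mirror the notation above, but the problem itself is our own toy example, not one taken from the text:

```python
import numpy as np

# A hypothetical instance of standard form (1): minimize f(x) s.t. h(x) >= 0.
# f is convex and h is concave, so the problem is convex.

def f(x):
    return 0.5 * x @ x - x[0]           # convex quadratic, twice differentiable

def h(x):
    return np.array([1.0 - x @ x])      # concave, so h(x) >= 0 is a convex region

x0 = np.zeros(2)
strictly_feasible = bool(h(x0)[0] > 0)  # x0 lies strictly inside the feasible set
```

A strictly feasible starting point, as checked here, is exactly what an interior-point method requires.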

1. The Interior-Point Algorithm.

In this section, we will outline a primal-dual interior-point algorithm to solve the optimization problem given by (1). A more detailed version of this algorithm, which handles equality constraints and free variables as well, is implemented in loqo. More information on those features can be found in [64].

First, slack variables, w_i, are added to each of the constraints to convert them to equalities:

    minimize    f(x)
    subject to  h(x) − w = 0,
                w ≥ 0,

where w ∈ R^m. Then, the nonnegativity constraints on the slack variables are eliminated by placing them in a barrier objective function, giving the Fiacco and McCormick [23] logarithmic barrier problem:

    minimize    f(x) − µ ∑_{i=1}^m log w_i
    subject to  h(x) − w = 0.
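To see numerically why the logarithmic barrier enforces the eliminated constraint w ≥ 0, the following sketch (using a toy convex problem of our own, not one from the text) evaluates the barrier objective at an interior point and at a near-boundary point; the barrier term grows without bound as any w_i → 0:

```python
import numpy as np

# Toy problem (illustrative only): f(x) = 0.5*x'x - x[0], h(x) = 1 - x'x >= 0,
# with the slack chosen so that h(x) - w = 0 holds exactly.

def barrier_objective(x, w, mu):
    f = 0.5 * x @ x - x[0]
    return f - mu * np.sum(np.log(w))   # f(x) - mu * sum_i log(w_i)

mu = 0.1
x_interior = np.array([0.5, 0.0])        # h = 0.75, well inside the region
x_near_boundary = np.array([0.999, 0.0]) # h ~ 0.002, almost on the boundary

phi_interior = barrier_objective(x_interior, np.array([1.0 - x_interior @ x_interior]), mu)
phi_near = barrier_objective(x_near_boundary, np.array([1.0 - x_near_boundary @ x_near_boundary]), mu)
# The -mu*log(w) term dominates near the boundary, so phi_near > phi_interior,
# which is what keeps the iterates in the interior.
```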

The scalar µ is called the barrier parameter. Now that we have an optimization problem with no inequalities, we form the Lagrangian

    L_µ(x, w, y) = f(x) − µ ∑_{i=1}^m log w_i − y^T (h(x) − w),

where y ∈ R^m are called the Lagrange multipliers or the dual variables.

In order to achieve a stationary point of the Lagrangian function, we need the first-order optimality conditions:

    ∂L/∂x = ∇f(x) − A(x)^T y = 0
    ∂L/∂w = −µW^{-1}e + y = 0
    ∂L/∂y = h(x) − w = 0,

where

    A(x) = ∇h(x)

is the Jacobian of the constraint functions h(x), W is the diagonal matrix with elements w_i, and e is the vector of all ones of appropriate dimension. The first and the third equations are more commonly referred to as the dual and primal feasibility conditions. When µ = 0, the second equation gives the complementarity condition that w_i y_i = 0 for i = 1, ..., m.

Before we begin to solve this system of equations, we multiply the second set of equations by W to give

    −µe + WYe = 0,

where Y is the diagonal matrix with elements y_i. Note that this equation implies that y is nonnegative, and this is consistent with the fact that it is the vector of Lagrange multipliers associated with a set of constraints that were initially inequalities.

We now have the standard primal-dual system:

(2)    ∇f(x) − A(x)^T y = 0
       −µe + WYe = 0
       h(x) − w = 0.

In order to solve this system, we use Newton's Method. Doing so gives the following system to solve:

    [ H(x,y)    0    −A(x)^T ] [ ∆x ]   [ −∇f(x) + A(x)^T y ]
    [   0       Y       W    ] [ ∆w ] = [      µe − WYe      ]
    [ A(x)     −I       0    ] [ ∆y ]   [     −h(x) + w      ],

where the Hessian, H, is given by

    H(x, y) = ∇²f(x) − ∑_{i=1}^m y_i ∇²h_i(x).

We symmetrize this system by multiplying the first equation by −1 and the second equation by −W^{-1}:

    [ −H(x,y)      0        A(x)^T ] [ ∆x ]   [  σ ]
    [    0     −W^{-1}Y      −I    ] [ ∆w ] = [ −γ ]
    [  A(x)       −I          0    ] [ ∆y ]   [  ρ ],

where σ := ∇f(x) − A(x)^T y, γ := µW^{-1}e − y, and ρ := w − h(x). Here, σ, γ, and ρ depend on x, y, and w, even though we do not show this dependence explicitly in our notation. Note that ρ measures primal infeasibility, and, using an analogy with linear programming, we refer to σ as the dual infeasibility.

It is easy to eliminate ∆w from this system without producing any additional fill-in in the off-diagonal entries. Thus, ∆w is given by

    ∆w = WY^{-1} (γ − ∆y).

After the elimination, the resulting set of equations is the reduced KKT system:

(3)    [ −H(x,y)   A(x)^T  ] [ ∆x ]   [       σ       ]
       [  A(x)    WY^{-1}  ] [ ∆y ] = [ ρ + WY^{-1} γ ].

This system is solved by using an LDL^T factorization, which is a modified version of Cholesky factorization, and then performing a backsolve to obtain the step directions.

The algorithm starts at an initial solution (x^(0), w^(0), y^(0)) and proceeds iteratively toward the solution through a sequence of points which are determined by the search directions obtained from the reduced KKT system as follows:

    x^(k+1) = x^(k) + α^(k) ∆x^(k),
    w^(k+1) = w^(k) + α^(k) ∆w^(k),
    y^(k+1) = y^(k) + α^(k) ∆y^(k),

where 0 < α^(k) ≤ 1 is the steplength and the superscripts denote the iteration number. Currently, in loqo, the steplength is chosen using a merit function which ensures that a balanced improvement toward optimality and feasibility is achieved at each iteration.

It is obvious that the steplength, α, can have a large effect on the number of iterations required to reach the optimum. In Chapter 3, we will present the traditional merit function approach and the recent filter approach by Fletcher and Leyffer [26]. We will discuss several variants of both approaches in order to find an aggressive yet reliable way to pick the steplength, α.
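The steps above can be sketched end to end for a toy problem. The code below is our own minimal illustration, not loqo's implementation (it uses a dense solve in place of the LDL^T factorization, and the toy problem and function names are ours): it assembles the reduced KKT system (3), recovers ∆w by back-substitution, and applies a ratio test so that w and y stay strictly positive.

```python
import numpy as np

# Toy problem: minimize 0.5*x'x - x[0] subject to h(x) = 1 - x'x >= 0.

def grad_f(x):              # gradient of f(x) = 0.5*x'x - x[0]
    g = x.copy(); g[0] -= 1.0; return g

def h(x):                   # single concave constraint
    return np.array([1.0 - x @ x])

def jac_h(x):               # A(x), the Jacobian of h, shape (m, n)
    return -2.0 * x.reshape(1, -1)

def hess_lagrangian(x, y):  # H(x,y) = grad^2 f - sum_i y_i grad^2 h_i
    n = len(x)
    return np.eye(n) - y[0] * (-2.0 * np.eye(n))

def newton_step(x, w, y, mu):
    A = jac_h(x)
    H = hess_lagrangian(x, y)
    sigma = grad_f(x) - A.T @ y      # dual infeasibility
    gamma = mu / w - y               # gamma = mu*W^{-1}e - y
    rho = w - h(x)                   # primal infeasibility
    # Reduced KKT system: [[-H, A^T], [A, W*Y^{-1}]] [dx; dy] = [sigma; rho + W*Y^{-1}*gamma]
    n, m = len(x), len(w)
    K = np.block([[-H, A.T], [A, np.diag(w / y)]])
    rhs = np.concatenate([sigma, rho + (w / y) * gamma])
    sol = np.linalg.solve(K, rhs)    # dense stand-in for the LDL^T factorization
    dx, dy = sol[:n], sol[n:]
    dw = (w / y) * (gamma - dy)      # back-substitution for dw
    return dx, dw, dy

# One iteration from a strictly interior starting point.
x = np.zeros(2); w = h(x).copy(); y = np.ones(1); mu = 0.1
dx, dw, dy = newton_step(x, w, y, mu)
# Ratio test: shrink alpha so that w and y remain strictly positive.
alpha = 1.0
for v, dv in ((w, dw), (y, dy)):
    neg = dv < 0
    if neg.any():
        alpha = min(alpha, 0.95 * np.min(-v[neg] / dv[neg]))
x, w, y = x + alpha * dx, w + alpha * dw, y + alpha * dy
```

The ratio test shown here is what steplength control reduces to in linear and quadratic programming; the merit-function and filter machinery of the following chapters refines it for the general nonlinear case.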

2. The barrier parameter.

Before we finish this chapter on our interior-point algorithm, it is important that we examine the barrier parameter, µ, in some detail. Traditionally, µ is chosen to be

    µ = λ (w^T y) / m,

where 0 ≤ λ < 1. As reported by Vanderbei and Shanno [66], the interior-point algorithm presented above performs best when the complementarity products w_i y_i go to zero at a uniform rate; when at a point that is far from uniformity, a large µ promotes uniformity for the next iteration. We measure the distance from uniformity by

    ξ = min_i (w_i y_i) / (w^T y / m).

This means that 0 < ξ ≤ 1, and ξ = 1 only when the products w_i y_i are equal over all values of i. Therefore, Vanderbei and Shanno [66] use the following heuristic to compute the barrier parameter at each iteration:

    µ = λ min( (1 − r)(1 − ξ)/ξ, 2 )³ (w^T y) / m,

where r is a steplength parameter set to 0.95 and λ is set to 0.1. This computation is performed at the beginning of each iteration, using the values of w and y computed with the step taken at the end of the previous iteration.
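The heuristic above can be transcribed directly. The function below is our own sketch of the formula (the defaults r = 0.95 and λ = 0.1 follow the text; the function name is ours):

```python
import numpy as np

# Sketch of the barrier-parameter heuristic described above:
#   mu = lambda * min((1 - r)*(1 - xi)/xi, 2)^3 * (w'y)/m,
# where xi = min_i(w_i*y_i) / (w'y/m) measures distance from uniformity.

def barrier_parameter(w, y, lam=0.1, r=0.95):
    m = len(w)
    avg = (w @ y) / m                  # average complementarity product
    xi = np.min(w * y) / avg           # 0 < xi <= 1; xi = 1 means uniform products
    return lam * min((1 - r) * (1 - xi) / xi, 2.0) ** 3 * avg

# Uniform products give xi = 1 and hence mu = 0 by this formula; a badly
# skewed set of products hits the cap of 2 and yields a much larger mu.
mu_uniform = barrier_parameter(np.ones(3), np.ones(3))
mu_skewed = barrier_parameter(np.ones(3), np.array([1.0, 1.0, 0.01]))
```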

CHAPTER 3

Steplength control: Background.

With interior-point methods for linear and quadratic programming, the steplength is controlled using a ratio test which ensures that the nonnegative variables stay nonnegative. However, with general nonlinear programming, the situation is more complicated. While computing a steplength for an interior-point iterate, one sometimes faces a contradiction between reducing the objective function and satisfying the constraints. In fact, it may be the case that a small reduction in the objective function leads to a large increase in the infeasibility. It is important to have a method to balance these contradicting goals. In this chapter, we will outline two such existing methods: merit functions and filter methods.

1. Merit functions.

Traditionally, merit functions have been the method of choice to provide the balance between optimality and feasibility. A merit function consists of some combination of a measure of optimality and a measure of feasibility, and a step is taken if and only if it leads to a sufficient reduction in the merit function. In order to achieve sufficient reduction, backtracking, that is, systematically reducing the steplength, may be necessary.

One example of a merit function is Han's ℓ1 exact merit function [33]:

    ψ_1(x, β) = f(x) + β ‖ρ(x, w)‖_1,

where ρ(x, w) = w − h(x). The term exact refers to the fact that for any β within a certain range, a minimizer of the original optimization problem is guaranteed to be a local minimum of ψ_1(x, β). This is a very good property to have; however, the ℓ1 exact merit function is nondifferentiable due to the norm, which results in numerical problems in practice.

Another merit function is the ℓ2 merit function used by El-Bakry, Tapia, Tsuchiya and Zhang [22]:

    ψ_2(x, w, y) = ‖∇f(x) − A(x)^T y‖_2^2 + ‖WYe‖_2^2 + ‖ρ(x, w)‖_2^2.

El-Bakry et al. presented a globally convergent algorithm using this merit function under the usual conditions, provided that H(x, y) + A(x)^T W^{-1} Y A(x) remained nonsingular throughout. However, Shanno and Simantiraki [59] showed that on the Hock and Schittkowski test suite [35] a variant of this algorithm fails on some problems due to singularity. Also, while the algorithm is usually efficient, it sometimes converges to local maxima or saddle points.

Because of the drawbacks of these two merit functions, Vanderbei and Shanno [66] refer back to Fiacco and McCormick's [23] penalty function for equality-constrained problems and the following merit function:

    ψ_3(x, w, β) = f(x) + (β/2) ‖ρ(x, w)‖_2^2.

Vanderbei and Shanno's algorithm was presented in Chapter 2, and it uses a logarithmic barrier objective function. The merit function in the context of their algorithm is

    ψ_{β,µ}(x, w) = f(x) − µ ∑_{i=1}^m log w_i + (β/2) ‖ρ(x, w)‖_2^2.

ψ_{β,µ} has the disadvantage that β is required to go to infinity in order to guarantee convergence to a feasible point, which is hoped to be a local minimum of the original problem.

In practice, Vanderbei and Shanno report that this disadvantage seems to be quite unimportant, which is why they have chosen this merit function for use in the loqo algorithm.

In [66], Vanderbei and Shanno also present theoretical results about the behavior of the merit function ψ_{β,µ}. We will repeat them here. The matrix H(x, y) + A(x)^T W^{-1} Y A(x), more commonly referred to as the dual normal matrix, will also play a special role in the following theorem:

Theorem 1 (Vanderbei and Shanno). Suppose that the dual normal matrix is positive definite. Let

    ρ = ρ(x, w) = w − h(x)   and   b = b_µ(x, w) = f(x) − µ ∑_{i=1}^m log w_i.

Then the search directions have the following properties:

(1) If ρ = 0, then

    ∇_x b^T ∆x + ∇_w b^T ∆w ≤ 0.

(2) There exists β_min ≥ 0 such that, for every β > β_min,

    ∇_x ψ_{β,µ}^T ∆x + ∇_w ψ_{β,µ}^T ∆w ≤ 0.

In both cases, equality holds if and only if (x, w) satisfies (2) for some y.

This theorem suggests that when the problem is strictly convex, the search directions given by (3) are descent directions for ψ_{β,µ} for a large enough β. The positive definiteness condition on the dual normal matrix, however, may not always hold. The authors then propose using

    Ĥ(x, y) = H(x, y) + λI,   λ ≥ 0,

instead of H(x, y) in the definition of the search directions. The diagonal perturbation λ is chosen large enough that Ĥ(x, y) is positive definite, and Theorem 1 follows. Note that a sufficiently large λ makes the matrix diagonally dominant, and hence positive definite, so such a λ can always be found.

It is Vanderbei and Shanno's merit function, ψ_{β,µ}, that will be considered the state of the art for this dissertation. We will use it to construct our hybrid methods in the next chapter and compare against it when presenting numerical results in Chapter 5. One thing to note about the merit function is that at each iteration, the steplength is chosen such that the new iterate provides a sufficient reduction in ψ_{β,µ}. The amount of reduction required is determined by an Armijo rule [6]:

(4)    ψ_{β,µ}(x^{(k+1)}, w^{(k+1)}) < ψ_{β,µ}(x^{(k)}, w^{(k)}) + εα ((∇_x ψ_{β,µ}(x^{(k)}, w^{(k)}))^T ∆x^{(k)} + (∇_w ψ_{β,µ}(x^{(k)}, w^{(k)}))^T ∆w^{(k)}),

where ε is a small constant, chosen in our implementation to be 10^{−6}, and α is the steplength. Note that by Theorem 1, the last term is negative, so the condition does indeed require a reduction.
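As an illustrative sketch (not loqo's actual implementation), the Armijo backtracking loop described above can be written as follows. The function names and the toy quadratic merit function in the usage example are hypothetical; only the condition itself, with ε = 10^{−6}, follows the text.

```python
import numpy as np

def armijo_backtrack(psi, grad_psi, z, dz, alpha0=1.0, eps=1e-6, shrink=0.5, max_cuts=10):
    """Backtrack from alpha0 until the Armijo condition (4) holds.

    psi      : merit function, maps the iterate z = (x, w) to a scalar
    grad_psi : gradient of psi at z
    dz       : search direction (assumed a descent direction, cf. Theorem 1)
    """
    slope = grad_psi(z) @ dz           # directional derivative; negative for descent
    alpha = alpha0
    for _ in range(max_cuts):          # loqo likewise caps the number of step cuts
        if psi(z + alpha * dz) < psi(z) + eps * alpha * slope:
            return alpha               # sufficient decrease achieved
        alpha *= shrink                # cut the step and try again
    return alpha                       # give up after max_cuts step cuts

# Toy usage: psi(z) = ||z||^2 from z = (1, 1) along the steepest-descent direction.
z = np.array([1.0, 1.0])
alpha = armijo_backtrack(lambda v: v @ v, lambda v: 2 * v, z, -2 * z)
```

With the toy data, the full step lands at the same merit value, so one step cut is taken.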

2. Filter Methods.

Recently, Fletcher and Leyffer [26] studied solving the nonlinear programming problem (1) using a sequential quadratic programming (SQP) algorithm that employed a different type of steplength control. An SQP algorithm is an active-set method that tries to locate the optimal solution by finding the inequality constraints that hold as equalities at the optimum. Since there are possibly exponentially many sets of these active constraints, a smart way to pick a set is to work with a quadratic approximation to the problem, which is easier to solve:

     min   (1/2) ∆x^T Q ∆x + ∇f(x)^T ∆x
(5)  s.t.  A(x) ∆x + h(x) ≥ 0,
           ‖∆x‖_2 ≤ r,

where Q is some positive definite approximation to the Hessian matrix of f(x), and r is

the trust region radius. SQP algorithms have also traditionally used a merit function to balance the goals of reducing the objective function and reducing infeasibility. One such merit function is

(6)    ψ_β(x) = f(x) + (β/2) ‖ρ⁻(x)‖²,

where ρ⁻(x) is the vector with elements ρ⁻_i(x) = min(h_i(x), 0). Here, reducing ψ_β(x) clearly ensures that either the objective function or the infeasibility is reduced.

The goal of Fletcher and Leyffer's work was to replace the use of a merit function in their SQP algorithm with a requirement that improvement be made over all previous iterations in either of two components: (a) a measure of objective progress and (b) a measure of progress toward feasibility. They define a filter to be a set of pairs (f(x^{(k)}), ‖ρ⁻(x^{(k)})‖), and a new point x^{(k+1)} is admitted to the filter if it is not dominated by any point already in the filter. A point x^{(j)} is said to dominate x^{(k+1)} if

(7)    ‖ρ⁻(x^{(j)})‖_2 ≤ ‖ρ⁻(x^{(k+1)})‖_2   and   f(x^{(j)}) ≤ f(x^{(k+1)}).

If there is a point x^{(j)} in the filter such that (7) is satisfied, an acceptable point is determined either by reducing the trust region radius r or by a feasibility restoration step.
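A minimal sketch of the dominance test (7) and the filter update, with hypothetical helper names; Fletcher and Leyffer's actual method additionally maintains an envelope and the trust-region machinery, both omitted here.

```python
def dominates(p, q):
    """Point p = (objective, infeasibility) dominates q if it is no worse in both, cf. (7)."""
    return p[0] <= q[0] and p[1] <= q[1]

def acceptable(filter_pts, candidate):
    """A candidate pair is acceptable if no filter entry dominates it."""
    return not any(dominates(p, candidate) for p in filter_pts)

def add_to_filter(filter_pts, candidate):
    """Accept an acceptable candidate and discard entries it now dominates."""
    if not acceptable(filter_pts, candidate):
        return filter_pts, False
    kept = [p for p in filter_pts if not dominates(candidate, p)]
    return kept + [candidate], True

# Usage with made-up (f, ||rho^-||) pairs: the candidate is dominated by neither entry.
flt = [(3.0, 0.5), (1.0, 2.0)]
flt, ok = add_to_filter(flt, (2.0, 1.0))
```

The update keeps the filter small: any old entry dominated by the newcomer is redundant and can be dropped without changing which future points are acceptable.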

In order to ensure that there is sufficient progress towards the optimal solution at each

iteration, Fletcher and Leyffer have modified their filter to include an Armijo Rule. Doing


this allows one to define an envelope around the filter, so that new points that are arbitrarily

close to the filter are not admitted.

Also, when a sufficient level of feasibility is reached, it can be the case that a minuscule improvement in feasibility greatly increases the objective function. At such a point, however, we should not be concerned with improving feasibility any further. To avoid such a deviation from the optimal solution, Fletcher and Leyffer include a condition in the filter that when the norm of the infeasibility is sufficiently small, a reduction in the objective function is required, subject to an Armijo rule.

In [26], Fletcher and Leyffer report good numerical results with this filter approach on

problems from the CUTE test suite [17]. Their new code filterSQP consistently outperforms

their previous code `1SQP, which employs the merit function given by (6). Encouraged by

these results, we have decided to try the filter method approach in the context of interior-

point methods and the implementation of our algorithm, loqo. As we will describe in

the next chapter, it is not possible to apply filter methods to loqo without modification.

Therefore, we will propose several hybrid methods to control steplength.


CHAPTER 4

Steplength control in interior-point methods.

As Theorem 1 in the previous chapter states, there exists a β at each iteration such that the search direction which solves (3) is a descent direction for the merit function ψ_{β,µ}(x, w) given by (6). This implies that a steplength α can be found at each iteration to reduce ψ_{β,µ}(x, w). The reduction can come from two sources: the barrier objective function

    b_µ(x, w) = f(x) − µ Σ_{i=1}^m log(w_i)

or the norm of the infeasibility, ‖ρ(x, w)‖.

Therefore, Theorem 1 guarantees that at least one of these quantities will be reduced at each iteration, and this immediately suggests using a filter consisting of pairs of points (b^{(k)}_µ, ‖ρ^{(k)}‖), where

    b^{(k)}_µ = b_µ(x^{(k)}, w^{(k)})   and   ‖ρ^{(k)}‖ = ‖ρ(x^{(k)}, w^{(k)})‖.

An example of such a filter consisting of four points is shown in Figure 1.

In interior-point methods, however, the barrier parameter changes from one iteration to the next. We will denote by µ^{(k−1)} the barrier parameter used in iteration k, since it is computed from (x^{(k−1)}, w^{(k−1)}). As discussed above, a steplength α exists at iteration k+1 that reduces either b^{(k)}_{µ^{(k)}} or ‖ρ^{(k)}‖. But, since b^{(k)}_{µ^{(k)}} is different from the b^{(k)}_{µ^{(k−1)}} that was accepted into the filter, we might not find a steplength that will give a point acceptable to the filter at iteration k+1. In fact, Figure 1 depicts two possible locations for (b^{(k)}_{µ^{(k)}}, ‖ρ^{(k)}‖),


[Figure 1 plots the filter points (b^{(k)}_{µ^{(k−1)}}, ‖ρ^{(k)}‖), k = 1, ..., 4, in the (b, ‖ρ‖) plane, along with two candidate locations, Case 1 and Case 2, for the recomputed point (b^{(4)}_{µ^{(4)}}, ‖ρ^{(4)}‖).]

Figure 1. Fletcher and Leyffer's filter method adapted to the barrier objective.

where k = 4. In Case 1, we have b^{(4)}_{µ^{(4)}} < b^{(4)}_{µ^{(3)}}, and we are guaranteed to find a point acceptable to the filter in iteration 5. However, in Case 2, it is impossible to find a steplength that will give us such a point.

In general, in order to guarantee that we can find a point (b^{(k+1)}_{µ^{(k)}}, ‖ρ^{(k+1)}‖_2) that is acceptable to the filter, it is sufficient to have

(8)    b^{(k)}_{µ^{(k)}} < b^{(k)}_{µ^{(k−1)}}.

This inequality holds if

    µ^{(k)} < µ^{(k−1)}   and   − Σ_{i=1}^m log(w^{(k)}_i) ≥ 0.

In fact, it is usually the case that the barrier parameter, µ, is monotonically decreasing, and

always so as the optimum is approached. Also, in loqo, the treatment of free variables


and equality constraints, as described in [66] and [39], ensures that slack variables will

approach zero. Thus, (8) will hold as the algorithm approaches the optimum, and the

suggested filter is plausible.
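A quick numerical check of this argument, with made-up values: at a fixed point, decreasing µ decreases b_µ exactly when −Σ log(w_i) ≥ 0 (e.g. when all slacks are at most 1), which is condition (8); when the slacks are large, the inequality can reverse.

```python
import math

def b(f, w, mu):
    """Barrier objective b_mu(x, w) = f(x) - mu * sum_i log(w_i)."""
    return f - mu * sum(math.log(wi) for wi in w)

# Slacks below 1: -sum(log w_i) >= 0, so b decreases as mu decreases.
f_val, w_small, w_big = 1.0, [0.5, 0.25], [2.0, 4.0]
b_before, b_after = b(f_val, w_small, 0.2), b(f_val, w_small, 0.1)

# Slacks above 1: the sign flips and decreasing mu increases b.
b_before_big, b_after_big = b(f_val, w_big, 0.2), b(f_val, w_big, 0.1)
```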

However, (8) may not always hold: loqo does not reduce the barrier parameter µ monotonically, and at early iterations µ can increase from one iteration to the next. We cannot, therefore, implement a filter method in our algorithm without modifying Fletcher and Leyffer's [26] approach or modifying the µ calculation; we did not try the latter. In the rest of this chapter, we will present three filter-based algorithms and discuss their properties.

1. Three Hybrid Methods.

1.1. Hybrid #1: Filter method using the barrier objective. As the first variation on Fletcher and Leyffer's filter method, we have created a filter saving three values at each point: (f(x^{(k)}), Σ_{i=1}^m log(w^{(k)}_i), ‖ρ(x^{(k)}, w^{(k)})‖). Each time a new µ^{(k)} is calculated, each barrier objective function is recalculated using this new value, and a new filter is constructed. A new point (x^{(k+1)}, w^{(k+1)}) is admitted to the filter if there is no point in the filter satisfying

(9)    b_{µ^{(k)}}(x^{(j)}, w^{(j)}) ≤ b_{µ^{(k)}}(x^{(k+1)}, w^{(k+1)})

and

    ‖ρ(x^{(j)}, w^{(j)})‖_2 ≤ ‖ρ(x^{(k+1)}, w^{(k+1)})‖_2.

This filter is shown in Figure 2.

Note that requiring condition (9) be satisfied imposes the stronger condition that if the

new point reduces the current barrier objective function, it must reduce it over all previous

points for this same value of the barrier parameter. However, there is still no guarantee


[Figure 2 plots the old filter points (b^{(k)}_{µ^{(3)}}, ‖ρ^{(k)}‖) and the new filter points (b^{(k)}_{µ^{(4)}}, ‖ρ^{(k)}‖), k = 1, ..., 4, in the (b, ‖ρ‖) plane.]

Figure 2. A barrier objective filter that is updated with the barrier parameter µ at each iteration.

that a new point acceptable to the filter can be found at each iteration. In Figure 2, we depict one possible scenario, in which b^{(1)} is reduced by such an amount that b^{(4)} is no longer in the filter. In that situation, we cannot find a steplength that will give a point acceptable to the filter.

The question then arises as to what to do when a new trial point (x^{(k+1)}, w^{(k+1)}) is not acceptable to the filter. Since we know that there exists a β such that the search vector (∆x, ∆w, ∆y) is a descent vector for ψ_{β,µ}(x, w), one strategy is to compute β as in standard loqo [66] and perform a linesearch to reduce ψ_{β,µ}(x, w). While the new point must improve either the infeasibility or the barrier objective over the previous point, it need not be acceptable to the filter. Nonetheless, we accept the new point as the current point and continue.

To summarize, our first hybrid approach uses the objective function, the barrier term, and the norm of the infeasibility to create a filter of triples. The filter is updated with the current value of the barrier parameter, µ, at each iteration. Since it may still be the case that we cannot find a steplength to give a point acceptable to the filter at some iteration, we also employ a merit function to test and accept a new point.
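The µ-dependent acceptance test (9) can be sketched as follows. Storing the triple (f, Σ log w_i, ‖ρ‖) is what makes the re-evaluation cheap: the barrier objective for any µ is recovered without touching the model. All names here are hypothetical; this is an illustration, not loqo's code.

```python
def barrier_value(f, log_sum_w, mu):
    """b_mu = f - mu * sum_i log(w_i), recovered from the stored log-sum."""
    return f - mu * log_sum_w

def acceptable_hybrid1(filter_triples, new_triple, mu):
    """Hybrid #1 test (9): re-evaluate every entry's barrier objective with the
    current mu, then check that no entry dominates the candidate."""
    f_new, ls_new, rho_new = new_triple
    b_new = barrier_value(f_new, ls_new, mu)
    for f_j, ls_j, rho_j in filter_triples:
        if barrier_value(f_j, ls_j, mu) <= b_new and rho_j <= rho_new:
            return False               # dominated by an existing entry
    return True

# Made-up data showing that acceptability depends on the current mu:
flt = [(1.0, 0.0, 1.0)]                # one stored triple (f, sum log w, ||rho||)
cand = (2.0, 2.0, 1.5)
ok_small_mu = acceptable_hybrid1(flt, cand, mu=0.1)
ok_large_mu = acceptable_hybrid1(flt, cand, mu=1.0)
```

With µ = 0.1 the candidate's barrier value stays above the stored entry's and it is rejected; with µ = 1.0 the same candidate is accepted, illustrating why the filter must be rebuilt whenever µ changes.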

1.2. Hybrid #2: Filter using the objective function. A second possibility for a filter algorithm is simply to keep the pairs (f(x^{(k)}), ‖ρ(x^{(k)}, w^{(k)})‖), and admit a new point to the filter if there is no point (x^{(j)}, w^{(j)}) with

    f(x^{(j)}) ≤ f(x^{(k+1)})   and   ‖ρ(x^{(j)}, w^{(j)})‖_2 ≤ ‖ρ(x^{(k+1)}, w^{(k+1)})‖_2.

The justification for this approach follows from the fact that if µ^{(k)} ≤ µ^{(k+1)} and the pair (x^{(k+1)}, w^{(k+1)}) is feasible and minimizes b(x, w, µ^{(k)}), then f(x^{(k+1)}) ≤ f(x^{(k)}) (see Fiacco and McCormick [23]). If the new point is not feasible, then it may be admitted to the filter for reducing infeasibility. If infeasibility is not reduced, then the barrier objective must be reduced, and a sufficient reduction should also reduce the objective function.

However, it may still be the case that we cannot find a steplength to give a point

acceptable to the filter. Again, we employ a merit function as in Hybrid #1 to resolve this

issue.

1.3. Hybrid #3: Filter based only on the previous iteration. In the case that we cannot find a steplength to give a point acceptable to the filter in Hybrid #1, another possibility is simply to backtrack by reducing the step size until either the infeasibility or the barrier objective function is sufficiently reduced from the previous iteration. Clearly, if ψ_{β,µ}(x, w) can be reduced, then for some steplength α we can achieve such a reduction, and no penalty parameter β need be computed.


Early success with the strategy of backtracking using ψ_{β,µ}(x, w) even if the new point was not necessarily acceptable to the filter led us to try one more strategy, one that uses no filter whatsoever. Instead, at each iteration we simply backtrack until a reduction in either the infeasibility or the barrier objective over the previous point is achieved, that is, until

(10)    b_{µ^{(k)}}(x^{(k+1)}, w^{(k+1)}) ≤ b_{µ^{(k)}}(x^{(k)}, w^{(k)})

or

    ‖ρ(x^{(k+1)}, w^{(k+1)})‖_2 ≤ ‖ρ(x^{(k)}, w^{(k)})‖_2.

This approach clearly avoids the need for the penalty parameter in the merit function, and is in the spirit of the filter, but is less complex.
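A sketch of the Hybrid #3 backtracking loop under condition (10), with hypothetical names and toy functions standing in for the barrier objective and the infeasibility norm; the real algorithm of course works on the loqo iterates and imposes the sufficient-decrease safeguards discussed in the next section.

```python
def hybrid3_step(b_mu, infeas, z, dz, alpha0=1.0, shrink=0.5, max_cuts=10):
    """Backtrack until either the barrier objective or the infeasibility is
    reduced relative to the previous iterate, condition (10); no filter memory
    and no penalty parameter are needed."""
    b_prev, r_prev = b_mu(z), infeas(z)
    alpha = alpha0
    for _ in range(max_cuts):
        z_new = [zi + alpha * di for zi, di in zip(z, dz)]
        if b_mu(z_new) <= b_prev or infeas(z_new) <= r_prev:
            return z_new, alpha        # accepted: one of the two measures improved
        alpha *= shrink                # cut the step and retry
    return z, 0.0                      # no acceptable step found

# Toy usage: the step worsens the "infeasibility" but improves the "barrier
# objective", so it is accepted at full length.
z_new, alpha = hybrid3_step(lambda z: z[0] ** 2, lambda z: abs(z[1]),
                            [2.0, 1.0], [-1.0, 0.5])
```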

2. Sufficient reduction and other implementation details.

In all of the above, we require a sufficient decrease in either the infeasibility or the barrier objective. In practice, we impose an Armijo-type condition on the decrease. Specifically, in Hybrid #1 we require that either

(11)    b^{(k+1)}_{µ^{(k)}} ≤ b^{(j)}_{µ^{(k)}} + εα ((∇_x b^{(k)}_{µ^{(k)}})^T ∆x^{(k)} + (∇_w b^{(k)}_{µ^{(k)}})^T ∆w^{(k)})

or

(12)    ‖ρ^{(k+1)}‖_2² ≤ ‖ρ^{(j)}‖_2² + 2εα ρ^{(k)T} (∇_x ρ^{(k)} ∆x^{(k)} + ∇_w ρ^{(k)} ∆w^{(k)})

for all (x^{(j)}, w^{(j)}) in the filter.

Note that the Armijo conditions imposed on the barrier objective and the infeasibility are different from the standard Armijo condition. In its standard form, a measure of sufficient decrease from the jth iteration would be based on ∇b^{(j)}_{µ^{(k)}}, ∇ρ^{(j)}, ∆x^{(j)}, and ∆w^{(j)}. However, we are not guaranteed that (∆x^{(j)}, ∆w^{(j)}) are descent directions for b^{(j)}_{µ^{(k)}}. To see this, note that (∆x^{(j)}, ∆w^{(j)}) are indeed descent directions for b^{(j)}_{µ^{(j)}}, so the following inequality holds:

    ∇f(x^{(j)})^T ∆x^{(j)} − µ^{(j)} e^T (W^{(j)})^{−1} ∆w^{(j)} ≤ 0.

The inequality that we want to hold is

    ∇f(x^{(j)})^T ∆x^{(j)} − µ^{(k)} e^T (W^{(j)})^{−1} ∆w^{(j)} ≤ 0.

If e^T (W^{(j)})^{−1} ∆w^{(j)} ≤ 0, this inequality is guaranteed to hold only if µ^{(k)} ≤ µ^{(j)}. Otherwise, it is guaranteed to hold only if µ^{(k)} > µ^{(j)}. Since neither of these conditions can be assumed to hold, we cannot use (∆x^{(j)}, ∆w^{(j)}) as descent directions for b^{(j)}_{µ^{(k)}}.

However, since the aim of the Armijo condition is to create an "envelope" around the filter, it is sufficient to note that ∆x^{(k)} and ∆w^{(k)} are descent directions for either b^{(k)}_{µ^{(k)}} or ‖ρ^{(k)}‖². Therefore, the condition given by (11) achieves our goal and is easy to implement.

In the case where ∆x^{(k)} and ∆w^{(k)} are not descent directions for b^{(k)}_{µ^{(k)}}, it is still easy to approximate a sufficient reduction. First, we note that

    ρ^{(k)T} (∇_x ρ^{(k)} ∆x^{(k)} + ∇_w ρ^{(k)} ∆w^{(k)}) ≈ −ρ^{(k)T} ρ^{(k)} ≤ 0.

The approximation follows from Newton's method: the Newton system sets the linearized infeasibility to zero, so ∇_x ρ^{(k)} ∆x^{(k)} + ∇_w ρ^{(k)} ∆w^{(k)} ≈ −ρ^{(k)}. Therefore, ∆x^{(k)} and ∆w^{(k)} are descent directions for ‖ρ^{(k)}‖². We can always define a valid envelope for the infeasibility,


so we can use this information to approximate what the corresponding envelope for the

barrier objective would be. The dual variables are a measure of the proportion of marginal

change in the barrier objective to marginal change in the infeasibility. Using the `∞ norm

of the vector of dual variables to obtain a proportional envelope for the barrier objective

will suffice.
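The descent property used above is easy to verify numerically. In the sketch below, ρ and its Jacobian are a made-up two-dimensional example, not the thesis's residual; the point is that for a Newton step solving J ∆z = −ρ, the directional derivative of (1/2)‖ρ‖² equals −ρ^T ρ ≤ 0.

```python
import numpy as np

# Toy residual rho(z) = (z0^2 - 1, z0*z1) and its Jacobian; purely illustrative.
def rho(z):
    return np.array([z[0] ** 2 - 1.0, z[0] * z[1]])

def jac(z):
    return np.array([[2 * z[0], 0.0],
                     [z[1],     z[0]]])

z = np.array([2.0, 1.0])
r = rho(z)
dz = np.linalg.solve(jac(z), -r)       # Newton step: J dz = -rho

# Directional derivative of (1/2)||rho||^2 along dz is rho^T J dz = -rho^T rho,
# so the Newton direction is always a descent direction for the infeasibility.
slope = r @ (jac(z) @ dz)
```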

We have thus shown that we can guarantee the existence of an envelope around the filter simply by using information from the previous iteration. Furthermore, we are guaranteed that we can always find an α that gives a sufficient decrease over the previous iteration, so a sufficient decrease is obtained at each iteration.

For the filter algorithm that uses the objective function, (11) is replaced with

(13)    f^{(k+1)} ≤ f^{(j)} + εα (∇f^{(j)})^T ∆x^{(j)},

which is the standard Armijo condition.

For the third filter-based algorithm, we only compare against the last iterate, so (11) is replaced with

(14)    b^{(k+1)}_{µ^{(k)}} ≤ b^{(k)}_{µ^{(k)}} + εα ((∇_x b^{(k)}_{µ^{(k)}})^T ∆x^{(k)} + (∇_w b^{(k)}_{µ^{(k)}})^T ∆w^{(k)}),

and (12) is replaced with

(15)    ‖ρ^{(k+1)}‖_2² ≤ ‖ρ^{(k)}‖_2² + 2εα ρ^{(k)T} (∇_x ρ^{(k)} ∆x^{(k)} + ∇_w ρ^{(k)} ∆w^{(k)}).

Note that the last two expressions correspond to the standard Armijo condition.

We have also incorporated into our code measures to avoid a large increase in either

the barrier objective (or objective) or the infeasibility in exchange for a small decrease in


the other. If we have a solution that satisfies

    ‖ρ(x^{(k)}, w^{(k)})‖ ≤ 10^{−10},

then we require, for the next iteration, either a sufficient decrease in the barrier objective (or objective) function or a decrease in the infeasibility of at least one order of magnitude.

Also, if the primal infeasibility is exactly 0, we insist on a sufficient decrease in the barrier objective (or objective) function for the next iteration.

Finally, we should note that in order to save time, a maximum of 10 step cuts is performed at each iteration. This is the default value in loqo, and it has been retained in the hybrid algorithms as well.

2.1. Feasibility Restoration. Hybrid #3, as presented above, may run into further numerical difficulties when the iterates are feasible and close to optimality. It may, in fact, be the case that the current point is superoptimal and the infeasibility is a very small value, less than 10^{−10}. Then, the maximum of 10 step cuts may not reduce the barrier objective (or objective) function below the superoptimal value, and we may not be able to reduce the infeasibility by an order of magnitude, either. The algorithm simply gets stuck at this point, performing 10 step cuts at each iteration, and it will either fail without achieving the default levels of accuracy or slow down considerably.

However, this is an easy situation to remedy. When the infeasibility is this small and 10 step cuts are being performed at each iteration, the feasibility improvement required is changed from one order of magnitude back to the Armijo condition, and this allows the


algorithm to step back to a more feasible solution and attain an optimal solution with the

default levels of accuracy.

In this chapter, we have presented three filter-based algorithms for use with interior-point methods. All three have been implemented within the interior-point algorithm of loqo. Extensive numerical testing has been performed comparing these variants to each other and to the original algorithm, which uses a merit function. The results are presented in the next chapter.


CHAPTER 5

Numerical Results: Comparison of Steplength Control Methods.

Fletcher and Leyffer report encouraging numerical results for the performance of their filter-method sequential quadratic programming code, filterSQP, as compared to their original code, ℓ1SQP, and to the Conn-Gould-Toint trust-region code, Lancelot. In this study, it is our goal to ascertain the effects of using filter-based methods versus a merit function in an interior-point setting. To the best of our knowledge, no previous study exists that compares the two approaches as implemented within the framework of the same interior-point algorithm.

We have tried many variants of the filter-based approach, and, in this section, we will discuss numerical results for the three best versions, described in the previous sections. We will provide pairwise comparisons between these methods and the current version of loqo, and also among each other. Thus, we compare the four implementations:

• No filter, with merit function (LOQO)
• Filter using the barrier objective function, with merit function (FB)
• Filter using the objective function, with merit function (FO)
• Filter on previous iteration only, no merit function (FP)

As any code using Newton's method requires second partial derivatives, we have chosen to formulate the models in AMPL [27], a modelling language that provides analytic first and second partial derivatives. In order to construct a meaningful test suite, we have been engaged in reformulating from standard input format (SIF) into AMPL all models in the CUTE [17] (constrained and unconstrained testing environment) test suite. To date, we have converted and validated 699 models. For those problems with variable size, we have used the largest suggested number of variables and constraints, except in the case of the ncvxqp family of problems and fminsurf, where the largest suggested sizes were beyond the capabilities of all solvers. In addition, we have expressed the entire Schittkowski [58] test suite in AMPL. Together, this comprises a test suite of 889 AMPL models, which form the test set for this study. These models vary greatly in size and difficulty and have proved useful in drawing meaningful conclusions. All of the AMPL models used in our testing are available at [63].

The CUTE suite contains some problems that are excluded from our set. We have not yet converted to AMPL models that require special functions, nor some of the more complex models. We will continue to convert the remainder of the suite to AMPL as time allows, but believe that the results of this section show that the current test suite is sufficiently rich to provide meaningful information.

We have built the algorithm variants from loqo Version 5.06, which was called from AMPL Version 20000814. All testing was conducted on a Sun SPARCstation running SunOS 5.8 with 4GB of main memory and a 400MHz clock speed.

Since detailed results are too voluminous to present here, we provide summary statistics and pairwise comparisons of the algorithms in Tables 1-6. Tables with more detailed comparisons can be found in Appendix 1. Each comparison is broken down by the size of the problems, where we define the size as the number of variables plus the number of constraints in the model. "Small" problems have size less than 100, "Medium" problems have size 100 to less than 1000, "Large" problems have size 1000 to less than 10000, and "Very Large" problems have size 10000 or more. Note that the size reported may differ from the model itself, since AMPL preprocesses a problem before passing it to the solvers. The total number of problems in each category is as follows:

    Small        584
    Medium       100
    Large        108
    Very Large    97

In Tables 1-6, we provide total iteration counts and runtimes for those problems where one of the solvers took fewer iterations to reach the optimum than the other. Since these are pairwise comparisons, each table contains information on a different group of problems; that is, the 18 problems where (FB) outperforms (LOQO) as reported in Table 1 and the 18 where (FO) outperforms (LOQO) as reported in Table 2 are not the same set of problems. That is why the iteration and runtime totals differ.

We have included problems that could not be solved with the original settings of the loqo parameters but were solved after tuning. The parameters that we most often tune are bndpush, the initial value of the slack variables; inftol, the primal and dual infeasibility tolerance; and sigfig, the number of digits of agreement between the primal and dual solutions. For a summary of which problems need tuning and their respective tuning parameters, see [39]. In our pairwise comparisons, we only include those problems that either required no tuning for both solvers or used the same tuning parameters for both.


Size        Winner       Probs   LOQO Iter   LOQO Time   FB Iter   FB Time
Small       LOQO Better     41        1487       35.66      2009     47.03
Small       FB Better       86        5429       12.19      3493      5.17
Medium      LOQO Better      8         515       44.32       611     52.76
Medium      FB Better       18        1606       64.26      1289     49.38
Large       LOQO Better      9         769      346.05       820    356.53
Large       FB Better       17        2102     1252.68      2017   1173.42
Very Large  LOQO Better      2          94       39.07       135     53.73
Very Large  FB Better       25        1152     2099.54       832   1280.19

Table 1. Comparison of LOQO to FB on commonly solved problems.

Size        Winner       Probs   LOQO Iter   LOQO Time   FO Iter   FO Time
Small       LOQO Better     44        1344       34.89      1895     44.64
Small       FO Better       71        4097       10.89      2345      4.53
Medium      LOQO Better      7         491       43.98       559     48.82
Medium      FO Better       18        1811      136.96      1303     73.06
Large       LOQO Better      7         526     1850.43       617   3567.35
Large       FO Better       21        2653     1903.04      2558   1752.32
Very Large  LOQO Better      4         326      757.80       387    811.64
Very Large  FO Better       25        1104     1631.07       796    832.19

Table 2. Comparison of LOQO to FO on commonly solved problems.


Size        Winner       Probs   LOQO Iter   LOQO Time   FP Iter   FP Time
Small       LOQO Better     47        1969       36.77      2649     46.13
Small       FP Better      105        6533       13.73      4292      5.75
Medium      LOQO Better      8         802       50.42       868     55.22
Medium      FP Better       15        1154       29.93       972     25.95
Large       LOQO Better      5         222       18.09       254     20.00
Large       FP Better       23        2590     1566.87      2278   1296.61
Very Large  LOQO Better      4         168      202.60       205    255.86
Very Large  FP Better       27        1214     2101.54       854   1202.33

Table 3. Comparison of LOQO to FP on commonly solved problems.

Size        Winner      Probs   FB Iter   FB Time   FO Iter   FO Time
Small       FB Better      38      2383      3.37      3021      6.04
Small       FO Better      22      1150      2.00      1005      1.74
Medium      FB Better       5       734    104.68      1043    106.42
Medium      FO Better       5       803     66.40       741     61.68
Large       FB Better       6       716   1852.40       793   3563.77
Large       FO Better       9       825    683.33       778    623.26
Very Large  FB Better       7       558   1253.43       574   1297.49
Very Large  FO Better       5       441   2131.25       254    889.05

Table 4. Comparison of FB to FO on commonly solved problems.


Size        Winner      Probs   FB Iter   FB Time   FP Iter   FP Time
Small       FB Better      25       853      1.35      1223      2.09
Small       FP Better      53      2915      4.46      2485      3.38
Medium      FB Better       6       621     13.44       645     13.10
Medium      FP Better       4       702     65.39       642     60.58
Large       FB Better       2        77      7.97        79      7.42
Large       FP Better      11       918    387.40       667    217.38
Very Large  FB Better       3       136    178.25       168    228.73
Very Large  FP Better      17      1299   3955.88       740   2220.56

Table 5. Comparison of FB to FP on commonly solved problems.

Size        Winner      Probs   FO Iter   FO Time   FP Iter   FP Time
Small       FO Better      29      1513      3.09      2072      4.92
Small       FP Better      75      5341      9.22      4325      5.44
Medium      FO Better       7       684     27.87       796     13.34
Medium      FP Better       3       508     81.79       373     81.12
Large       FO Better       6       239    334.62       254    340.12
Large       FP Better      13      1520   3894.62      1225   2046.96
Very Large  FO Better       4       154    384.49       200    337.35
Very Large  FP Better      18       914   1763.67       744   1368.88

Table 6. Comparison of FO to FP on commonly solved problems.


Using a filter can allow the algorithm to take bolder steps. A step that might otherwise have been shortened because it did not reduce a linear combination of the barrier objective and the primal infeasibility may be allowed through the filter by reducing either one. This behavior is even more evident in (FP), where any reduction over the previous iterate is admissible. There are six issues to consider here:

(1) Tables 1 through 6 show that, in general, taking bolder steps allows the algorithm to reach the optimum more quickly, as the filter algorithms outperform (LOQO) more often than vice versa. In fact, the bolder the step, the better: (FP) appears superior to the other three algorithms in the number of problems solved in fewer iterations. An example of such a problem is discs: (LOQO) solves it in 389 iterations, (FO) in 126, (FB) in 62, and (FP) in 59.

(2) On the other hand, taking a large step in the wrong direction may make the algorithm stray and take extra iterations to reach the optimum, or even fail. The numbers of cases where this behavior occurs are provided in the tables. An example of such a problem is bt8: (LOQO) solves it in 292 iterations and (FO) in 169, but (FP) takes 328, and (FB) fails without tuning.

(3) More freedom in taking steps at the beginning reduces the need to tune the algorithm for the initialization of slack variables. The problems that (FB), (FO), and (FP) can solve with default parameters, but that need bndpush tuning for (LOQO), are as follows:

(FB): eigenbco, hs107, meyer3, s237, s361, s376, steenbre
(FO): eigenbco, meyer3, s237, s361
(FP): eigenbco, hs107, meyer3, s237, s355, s361

(4) Taking large steps in the wrong direction may cause the algorithm to fail, but we can

remedy it by changing the initialization of the slacks: The problems that require

(FB), (FO), and (FP) to be tuned for bndpush, but can be solved by (LOQO) with

default parameters are as follows:

(FB): bt8, ncvxqp5, trimloss, twirism1

(FO): hvycrash, ncvxqp5, oet7, trimloss, twirism1

(FP): concon, cresc100, fminsrf2, hvycrash, mconcon, ncvxqp5, s373,

twirism1

Also, all three hybrids require a different bndpush of 0.1 rather than the 10

required by (LOQO) on palmer5b and a bndpush of 0.1 rather than the 1000

required by (LOQO) on steenbrg. (FP) requires a bndpush of 0.01 rather than

the 0.1 required by the other three solvers on mdhole, a bndpush of 0.01 rather

than the 100 required by the other three for orthregd, and a bndpush of 10000

rather than the 1000 required by the other three on steenbre.

(5) Taking bolder steps may allow us to reach better accuracy levels: The problems on

which (FB), (FO), or (FP) obtain better levels of accuracy than (LOQO) are as

follows:

(FB): brainpc2, brainpc6, brainpc9, kissing, orthrds2, palmer7e

(FO): brainpc2, brainpc6, brainpc9, kissing, palmer7e, vanderm3

(FP): orthrds2, palmer7e, steenbrd, vanderm3


(6) Less restricted steps may keep us from reaching good accuracy levels: The problems

on which (FB), (FO), or (FP) obtain worse levels of accuracy than (LOQO) are

as follows:

(FB): bloweyc, ncvxqp6, vanderm1, vanderm2

(FO): bloweyc, orthrgds, vanderm1, vanderm2

(FP): bloweyc, ncvxqp6, vanderm1, vanderm2

It can be seen from the numerical results in the tables and the tuning details given above

that situations (1), (3), and (5) occur more frequently than (2), (4), and (6), and, in general,

filter-based methods outperform (LOQO), with (FP) emerging as the winner.

Since the test set contains nonconvex problems, the algorithms may end up at different

optima, and the quality of the solution attained may also be different. The problems on

which (FB), (FO), or (FP) obtain better optima than (LOQO) are as follows:

(FB): deconvc, hs015, hs070, ngone, robot

(FO): deconvc, hs015, ngone, robot

(FP): deconvc, himmelp6, hs015, hs070, hs098, ngone, orthrgds, s365mod

However, it is just as likely that the different optima reported by the filter-based algo-

rithms will be worse than (LOQO)’s. The problems on which (FB), (FO), or (FP) obtain

worse optima than (LOQO) are as follows:

(FB): deconvb, ncvxqp9, oet7, orthrege

(FO): deconvb, ncvxqp9, oet7, orthrege, s393

(FP): deconvb, haldmals, ncvxqp9, oet7, orthrege, s393

Also, on some of the problems, the filter-based algorithms may reach worse optima

than (LOQO) without tuning, whereas (LOQO) reaches the better optimum only when

tuned and fails otherwise. These problems are brainpc5 for (FO) and (FP), and brainpc7

and s374 for just (FP). Conversely, (LOQO) may have reached a local optimum on some


problems, and the other algorithms reach a better optimum with tuning and fail other-

wise. These problems are brainpc3 and brainpc8 for (FB) and (FP), and brainpc3 and

orthrgdm for (FO).

Finally, none of the filter-based algorithms are able to solve spiral. This is a problem

where conservative steps are required to reach the optimum, but the filter-based algorithms

stray too much and cannot come back to an optimal solution. On the other hand, (FB)

and (FO) are the only algorithms that can solve s374, and (FO) is the only algorithm that

can solve allinitc.

0.2. Performance Profiles. Recently, Dolan and Moré proposed in [20] a new way

to compare the performance of several algorithms running on the same set of problems.

Their approach is simply to compute an estimate of the probability that an algorithm

performs within a multiple of the runtime or iteration count (or any other metric) of the

best algorithm.

Assume that we are comparing the performance of ns solvers on np problems. For the

case of using runtime as the metric in the profile, let

t_{p,s} = computing time required to solve problem p by solver s.

Then, the performance ratio of solver s on problem p is defined by

ρ_{p,s} = t_{p,s} / min{ t_{p,s′} : 1 ≤ s′ ≤ n_s }.

Here, we also assume that the set of problems is such that all solvers can find a solution to every problem. Thus, ρ_{p,s} < ∞.


In order to evaluate the performance of the algorithm on the whole set of problems, one

can use the quantity

φ_s(τ) = (1/n_p) size{ p : 1 ≤ p ≤ n_p, ρ_{p,s} ≤ τ }.

The function φ_s : R → [0, 1] is called the performance profile and represents the cumulative

distribution function of the performance ratio.
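As a concrete illustration (not part of the thesis), the profile can be computed directly from a table of per-problem metric values. The solver names and counts below are hypothetical stand-ins for the tables in this chapter:

```python
# Illustrative sketch: computing performance profiles phi_s(tau) from a table
# of per-problem metric values. The data below are made up, not thesis results.

def performance_profiles(metric, taus):
    """metric[s] lists one metric value (e.g. iteration count) per problem,
    aligned across solvers; returns {s: [phi_s(tau) for tau in taus]}."""
    solvers = list(metric)
    n_p = len(next(iter(metric.values())))
    # best value achieved on each problem by any solver
    best = [min(metric[s][p] for s in solvers) for p in range(n_p)]
    profiles = {}
    for s in solvers:
        ratios = [metric[s][p] / best[p] for p in range(n_p)]  # rho_{p,s}
        profiles[s] = [sum(r <= tau for r in ratios) / n_p for tau in taus]
    return profiles

iters = {"LOQO": [389, 292, 40], "FP": [59, 328, 40]}   # hypothetical counts
phis = performance_profiles(iters, taus=[1.0, 1.5, 7.0])
# phi_s is nondecreasing in tau and reaches 1 once tau covers the worst ratio
```

As assumed in the text, every solver must solve every problem for the ratios to stay finite.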

In Figures 1 and 2, we present performance profiles using two different metrics. In this

study, we have included problems that were either solved untuned by all solvers or used

the same tuning for all, and we also required that all of the solvers find the same optimum.

Figure 1 shows the performance profiles of the four solvers with respect to runtime. There

are 186 models considered here—the reason for this number being so low is that we have

not included models whose runtimes were less than 1 CPU second. On such problems,

small differences in system performance would have provided an unfair comparison. For a

more detailed comparison, we have included Figure 2, which uses the iteration counts as

the system metric. This study included 784 models.

It is clear from both figures that (FP) is superior to the other three codes, both in terms

of runtime and iteration counts. On the other hand, (FO) seems to be superior to (FB)

in terms of runtime. This may be because the two perform quite similarly in iteration

counts, while (FO) requires less work per iteration.


Figure 1. Performance profiles of LOQO and the hybrid algorithms with respect to runtime.

Figure 2. Performance profiles of LOQO and the hybrid algorithms with respect to iteration counts.


Part 2

Extensions to Other Problem Classes.


CHAPTER 6

Extension to Second-Order Cone Programming.

Second-order cone programming has been an area of intense research in the last decade.

These problems arise in a number of application areas, ranging from financial engineering

to civil engineering to electrical engineering. Lobo, et al. have published a survey paper

[46] formulating many of these problems as second-order cone programming problems.

In standard form, a second-order cone programming problem (SOCP) is defined as

follows:

(16)    minimize    gᵀx
        subject to  c_iᵀx + d_i − ‖A_i x + b_i‖ ≥ 0,    i = 1, . . . , m.

Here, x ∈ Rⁿ, g ∈ Rⁿ, and c_i ∈ Rⁿ, d_i ∈ R, A_i ∈ R^{k_i×n}, and b_i ∈ R^{k_i}, i = 1, . . . , m. The

norm used is the Euclidean norm, defined by ‖u‖ = (uᵀu)^{1/2}.

The Hessian of the constraint function h_i is given by

∇²h_i = −( A_iᵀA_i / ‖A_i x + b_i‖ − [A_iᵀ(A_i x + b_i)][A_iᵀ(A_i x + b_i)]ᵀ / ‖A_i x + b_i‖³ ).

This indicates that the SOCP has concave constraint functions. Since it also has a linear

objective function, the problem is convex.
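Concavity can also be checked numerically (an illustration, not thesis code): for any direction v, the central second difference of h(x) = cᵀx + d − ‖Ax + b‖ along v approximates vᵀ∇²h v and should be nonpositive.

```python
# Illustrative check (not from the thesis): the SOC constraint function
# h(x) = c'x + d - ||Ax + b|| satisfies v' (grad^2 h) v <= 0 for every
# direction v, approximated here by a central second difference along v.
import random

def h(x, A, b, c, d):
    u = [sum(A[i][j] * x[j] for j in range(len(x))) + b[i] for i in range(len(b))]
    return sum(c[j] * x[j] for j in range(len(x))) + d - sum(ui * ui for ui in u) ** 0.5

random.seed(0)
n, k = 3, 4
A = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(k)]
b = [random.uniform(-1, 1) for _ in range(k)]
c = [random.uniform(-1, 1) for _ in range(n)]
d = 0.5
x = [random.uniform(-1, 1) for _ in range(n)]
v = [random.uniform(-1, 1) for _ in range(n)]

eps = 1e-4
xp = [x[j] + eps * v[j] for j in range(n)]
xm = [x[j] - eps * v[j] for j in range(n)]
second_diff = (h(xp, A, b, c, d) - 2 * h(x, A, b, c, d) + h(xm, A, b, c, d)) / eps ** 2
# second_diff approximates v' (grad^2 h) v; up to finite-difference noise
# it is nonpositive, since the linear part of h cancels exactly
```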

Constraints of the form

cᵀx + d − ‖Ax + b‖ ≥ 0


are called second-order cone constraints because the affinely defined variables u = Ax + b

and t = cᵀx + d are constrained to belong to the second-order cone defined by

{ (u, t) ∈ R^{k+1} : ‖u‖ ≤ t }.

This cone is also called the Lorentz cone or, because of its shape, the ice cream cone.

The standard form of the SOCP as given in (16) fits the NLP paradigm given by

(1). Moreover, it is a convex programming problem, so a general purpose NLP solver

such as loqo should find a global optimum for these problems quite efficiently. However,

since we are using an interior-point algorithm, we require that the constraints be twice

continuously differentiable—the Euclidean norm fails that criterion. In fact, the second-

order cone constraints are nonsmooth when u = 0.

The nonsmoothness can cause problems if

(1) an intermediate solution is at a place of nondifferentiability

(2) an optimal solution is at a place of nondifferentiability.

The first problem is easily handled for the case of interior-point algorithms by simply

randomizing the initial solution. Doing so means that the probability of encountering a

problematic intermediate solution is zero. However, if the nondifferentiability is at an

optimal solution, it is harder to avoid. In fact, it will be shown in the numerical results

section that the second case occurs often, and the algorithm fails on these problems. We

will now examine the nature of this nondifferentiability and propose ways to avoid it.

1. Key issues in nonsmoothness.

We start by proposing a small sample problem, which is a nonsmooth SOCP. We will

then look at several solution methods.


1.1. Illustrative example. Here is an example illustrating the difficulty of nondifferentiability at optimality:

(17)    minimize    a x_1 + x_2
        subject to  |x_1| ≤ x_2.

The parameter a is a given real number satisfying −1 < a < 1. The optimal solution

is easily seen to be (0, 0). Part of the stopping rule for any primal-dual interior-point

algorithm is the attainment of dual feasibility; that is, σ = ∇f(x) + Aᵀ(x)y = 0. For the

problem under consideration, the condition for dual feasibility is

(18)    ( a, 1 )ᵀ + ( sgn(x_1), −1 )ᵀ y = 0.

We have written sgn(x1) for the derivative of |x1|, but what about x1 = 0? Here, any value

between −1 and 1 is a valid subgradient. But a specific choice must be selected a priori,

since solvers don’t work with set-valued functions. For lack of a better choice, suppose that

we pick 0 as a specific subgradient of the absolute value function at the origin. That is, we

adopt the common convention that sgn(0) = 0.

From the second equation in (18), we see that y = 1. Therefore, the first equation

reduces to sgn(x1) = a. But, at optimality, x1 = 0 and so in order to get dual feasibility

we must have that

sgn(0) = a.

Of course, if solvers worked with set-valued functions, then sgn(0) would be the interval

[−1, 1] and this condition would be a ∈ [−1, 1] and there would be no problem. But for

now solvers work only with singly valued functions and the particular value we picked for


sgn(0), namely 0, might not be equal to a in which case an interior-point method can’t

possibly produce a dual feasible solution.

Note that there are two important properties at work in this example:

(1) the constraint function failed to be differentiable at the optimal point, and

(2) that constraint’s inequality was tight at optimality, thereby forcing the correspond-

ing dual variable to be nonzero.
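Both properties can be seen numerically in problem (17) (an illustration, not thesis code): with the forced multiplier y = 1 and the convention sgn(0) = 0, the first component of the dual residual at the optimum (0, 0) equals a, which cannot be driven to zero when a ≠ 0.

```python
# Illustration (not from the thesis): dual residual of problem (17) at the
# optimum x = (0, 0), using the convention sgn(0) = 0 and the multiplier y = 1.

def sgn(z):
    return (z > 0) - (z < 0)  # subgradient choice: sgn(0) = 0

def dual_residual(a, x1, y):
    # components of (a, 1)' + (sgn(x1), -1)' y from equation (18)
    return (a + sgn(x1) * y, 1.0 - y)

a = 0.5                                   # any a in (-1, 1)
r1, r2 = dual_residual(a, x1=0.0, y=1.0)
# r2 == 0, so y = 1 is forced; but r1 == a != 0: dual feasibility fails
```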

2. Alternate Formulations of Second–Order Cone Constraints.

The type of behavior seen in the example given above is sometimes visible in the

different types of problems presented by Lobo, et al. in their paper [46]. There are two

ways we propose to avoid the problem:

(1) reformulate the cone constraint to have a smooth concave function

(2) when available, use a formulation of the problem that does not involve a nondif-

ferentiable term

The latter way to avoid nondifferentiability is problem specific and cannot be generalized

for the purposes of this chapter, but we will return to it in the numerical results chapter

when discussing several application areas for SOCPs. Quite often, a differentiable linear or

nonlinear version of the problem is the original formulation, and perhaps not surprisingly,

can be solved more efficiently and accurately than the SOCP version.

In this section, we consider a few alternatives for expressing SOCPs as smooth and

convex problems. The first two alternatives are perturbation techniques, where the resulting

second-order cone constraints are smooth, but they may not be quite equivalent to the

original constraints. The other three are reformulation techniques, where the new problem

is indeed equivalent to the original.


2.1. Smoothing by Perturbation. One way to avoid the nondifferentiability prob-

lem is simply to have a positive constant in the Euclidean norm. That is, we replace the

original constraint with

√(ε² + ‖u‖²) ≤ t,

where ε is a small constant, usually chosen to be around 10⁻⁶. The perturbation yields a

second-order cone constraint, and, therefore, the resulting problem is convex. With the

perturbation, it is also smooth.

Even though ε may be small enough so that the perturbation is absorbed into the

numerical accuracy level of the algorithm, this reformulation is nonetheless not exactly

equivalent to the original constraint. Because of this reason, we would like to keep the

perturbation as small as possible. This may, however, require trying out different values of

ε until we find the smallest one for which the algorithm can find an optimal solution to the

problem. Especially for a large problem, though, it may be rather time consuming to solve

the same problem several times. Therefore, the perturbation can instead be a variable,

that is, we can replace the second-order cone constraint with

√(v² + ‖u‖²) ≤ t,

v > 0,

where v ∈ R is a variable. Without the positivity constraint on the perturbation, the

dual feasibility conditions of the problem given in the previous section reduce to (18). The

positivity constraint, without loss of generality, allows us to solve the problem, and the

strict inequality is not a concern for interior–point methods.
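The effect of the perturbation is visible in the gradient (a sketch, not thesis code): the gradient of √(ε² + ‖u‖²) with respect to u is u/√(ε² + ‖u‖²), which is well defined even at u = 0, whereas the gradient of ‖u‖ is not.

```python
# Sketch (not from the thesis): the perturbed norm sqrt(eps^2 + ||u||^2) is
# differentiable at u = 0, while ||u|| itself is not.
import math

def grad_norm(u):
    """Gradient of ||u||; division by zero at u = 0."""
    r = math.sqrt(sum(ui * ui for ui in u))
    return [ui / r for ui in u]

def grad_perturbed_norm(u, eps=1e-6):
    """Gradient of sqrt(eps^2 + ||u||^2); well defined everywhere."""
    r = math.sqrt(eps * eps + sum(ui * ui for ui in u))
    return [ui / r for ui in u]

g = grad_perturbed_norm([0.0, 0.0, 0.0])    # well defined: the zero vector
try:
    grad_norm([0.0, 0.0, 0.0])
    smooth = True
except ZeroDivisionError:
    smooth = False                          # the unperturbed norm fails at 0
```

Away from the origin the two gradients agree to within the perturbation, which is why a tiny ε is absorbed into the solver's accuracy level.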


2.2. Smoothing by Reformulation. Although the perturbation approach works

quite well in practice, one may argue that it is better to have a problem that is exactly

equivalent to the original one, but smooth. Of course, it would also be good to keep the

favorable characteristic of convexity in the problem. Keeping this in mind, we now present

three different reformulation alternatives.

2.2.1. Smoothing by Squaring. The most natural reformulation of the second-order cone

constraint is

‖u‖² − t² ≤ 0,

t ≥ 0.

The nonnegativity constraint is required for the feasible region to stay convex, and the

constraint function

γ(u, t) = ‖u‖² − t²

is smooth everywhere. However, it is not convex as its Hessian clearly shows:

∇γ = 2 ( u, −t )ᵀ,    ∇²γ = 2 [ I  0 ; 0  −1 ].

Even though the feasible region is convex, its representation as the intersection of nonconvex

inequalities can lead to slow convergence to dual feasibility, as described in [66]. Therefore,

one would expect this reformulation not to work well.

2.2.2. Convexification by Exponentiation. Using γ, we were able to get a smooth re-

formulation of the second-order cone constraint. To overcome the nonconvexity of γ, we

can compose a smooth convex function that maps the negative halfline into the negative


halfline with it. The exponential function is such a smooth convex function, so let

ψ(u, t) = e^{(‖u‖²−t²)/2} − 1.

To check the convexity of ψ(u, t):

∇ψ = e^{(‖u‖²−t²)/2} ( u, −t )ᵀ,

∇²ψ = e^{(‖u‖²−t²)/2} [ I + uuᵀ  −tu ; −tuᵀ  1 + t² ] = e^{(‖u‖²−t²)/2} ( I + vvᵀ ),  where v = ( u, −t )ᵀ.

The second expression for the Hessian clearly shows that ∇²ψ is positive definite.

Even though the exponential function gives a reformulation that is both smooth and

convex, it does not behave well in practice because of scaling issues. In fact, when ‖u‖ is

on the order of 10, e^{‖u‖²} is a very large value. Therefore, this approach rarely works in

practice.
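This scaling failure is easy to reproduce in double-precision arithmetic (an illustration, not thesis code):

```python
# Illustration (not from the thesis): psi(u, t) = exp((||u||^2 - t^2)/2) - 1
# overflows IEEE double precision once the exponent exceeds about 709,
# i.e. already for ||u|| around 40 with t small.
import math

def psi(norm_u, t):
    return math.exp((norm_u ** 2 - t ** 2) / 2.0) - 1.0

moderate = psi(10.0, 1.0)       # exp(49.5): already on the order of 10^21
try:
    psi(40.0, 1.0)              # exp(799.5) cannot be represented
    overflowed = False
except OverflowError:
    overflowed = True
```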

2.2.3. Convexification by Ratios. Another way to avoid the trouble of nonconvexity is

to use

‖u‖²/t − t ≤ 0,

t > 0.

Indeed, the constraint function

η(u, t) = ‖u‖²/t − t,


is convex:

∇η = ( 2u/t, −‖u‖²/t² − 1 )ᵀ,

∇²η = (2/t) [ I  −u/t ; −uᵀ/t  ‖u‖²/t² ] = (2/t) [ I ; −uᵀ/t ] [ I  −u/t ].

The second expression for the Hessian clearly shows that it is positive semidefinite. The

strict inequality constraint on t is not a concern when using interior–point methods, and, in

fact, many SOCPs only have a constant term in the affine expression that defines t. Now,

we have convex functions defining our constraints and we have eliminated the problem of

nonsmoothness.

Note that an additional benefit of this approach is that it yields a sparser Hessian than

the original formulation when u ∈ Rn, n ≥ 4. The sparsity patterns for the Hessians of the

cone constraint and the ratio constraint are as follows:

(19)

         u1  u2  u3  u4  t                   u1  u2  u3  u4  t
    u1 [  ∗   ∗   ∗   ∗     ]           u1 [  ∗               ∗ ]
    u2 [  ∗   ∗   ∗   ∗     ]           u2 [      ∗           ∗ ]
    u3 [  ∗   ∗   ∗   ∗     ]           u3 [          ∗       ∗ ]
    u4 [  ∗   ∗   ∗   ∗     ]           u4 [              ∗   ∗ ]
    t  [                    ]           t  [  ∗   ∗   ∗   ∗   ∗ ]
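The two patterns in (19) can be reproduced from the analytic Hessians (a sketch using our own helper functions, not thesis code): the cone constraint t − ‖u‖ has a fully dense u-block, while the ratio constraint couples each component of u only to itself and to t.

```python
# Sketch (not from the thesis): nonzero patterns, in variables (u_1..u_n, t),
# of the Hessians of the cone constraint t - ||u|| and of
# eta(u, t) = ||u||^2 / t - t, matching the patterns in (19) for n = 4.

def hessian_cone(u, t):
    """Hessian of t - ||u||: -(I/r - u u'/r^3) in the u-block, zero t row/col."""
    n = len(u)
    r = sum(ui * ui for ui in u) ** 0.5
    H = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n):
        for j in range(n):
            H[i][j] = -((i == j) / r - u[i] * u[j] / r ** 3)
    return H

def hessian_ratio(u, t):
    """Hessian of ||u||^2/t - t: diagonal u-block plus a dense t row/column."""
    n = len(u)
    H = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n):
        H[i][i] = 2.0 / t
        H[i][n] = H[n][i] = -2.0 * u[i] / t ** 2
    H[n][n] = 2.0 * sum(ui * ui for ui in u) / t ** 3
    return H

def nnz(H):
    return sum(abs(v) > 1e-12 for row in H for v in row)

u, t = [1.0, 2.0, 3.0, 4.0], 2.0
cone_nnz = nnz(hessian_cone(u, t))    # dense 4x4 u-block: 16 entries
ratio_nnz = nnz(hessian_ratio(u, t))  # 4 diagonal + 2*4 coupling + 1 corner
```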


In the next chapter, we present results from numerical experience with these reformu-

lations. As noted, the variable perturbation and the ratio reformulation will perform quite

well in practice.


CHAPTER 7

Numerical Results: Second-Order Cone Programming

As noted in the previous chapter, Second-Order Cone Programming is an active area

of research, not only for the available theoretical foundations of Nesterov and Nemirovskii

[52] for producing efficient algorithms, but also for the plethora of application areas where

such problems arise. In this chapter, we will present some of the most popular application

areas and provide numerical testing results.

Many of these examples come from a survey paper by Lobo et al. [46]. Some of these

problems are not originally second-order cone programming problems, but they have been

reformulated as such. Perhaps not surprisingly, the original formulations of these problems

work better than the SOCP reformulations, and we will note those cases throughout the

chapter.

Also, for all the problems, we note how the different smoothing techniques work, focus-

ing on the variable perturbation and the ratio reformulation method. Our goal is to find a

reformulation that works well on both the SOCPs that need smoothing and those that do

not.

0.3. Using linear equality constraints in SOCPs. Many of the models that will

be presented in the following sections on various application areas have linear equality

constraints in the problem. There are three ways to remove these constraints to obtain an

SOCP in standard form as given by (16):


(1) Use the equality constraint with a scalar left-hand side as a definition and substi-

tute

(2) Express the linear equality constraint as a second-order cone constraint:

Ax = b

can be written as

‖Ax − b‖ ≤ 0.

(3) Split up the equality constraint into two linear inequalities by writing

Ax = b

as

Ax − b ≤ 0, and Ax − b ≥ 0.

However, all three of these approaches have drawbacks:

• The first two approaches may significantly increase the number of nonzeros in the

KKT matrix, making the problem much more time consuming to solve.

• The second approach ensures that we have a nonsmooth problem since Ax = b at

optimality.

• The third approach may lead to numerical algebra problems by producing a sin-

gular KKT matrix.

These are all serious drawbacks that may interfere with our ability to solve the given

problems. Since the AMPL modelling language allows us to express linear equalities (and

pretty much any other type of constraint we wish to add) in an SOCP, and all the solvers


can handle them, we have opted instead to leave the linear equalities in the problem. The

same thing is true of the non-SOCP formulations of the problems as well.

1. Antenna Array Weight Design

We are given n antennae evenly spaced on the y-axis in R2. These antennae receive

a signal and our aim is to determine the best way to combine the individual outputs of

the antennae to ensure certain characteristics in the aggregate output. The input signal

has a fixed wavelength, λ, but it can arrive from any direction θ. The possible values for

θ are divided into 2 sets: the sidelobe S where we want to attenuate the signal as much

as possible, and the mainlobe M where we try to obtain some desired characteristic. For

some angles in M , we may wish to have the output be exactly equal to a certain value and

for others we simply provide an upper bound. Here is the optimization problem:

minimize    s
subject to  |G(θ)|² ≤ s,    θ ∈ S
            |G(θ)|² ≤ u(θ)²,    θ ∈ M
            G(θ) = G_0(θ),    θ ∈ P
            G(θ) = Σ_{k=1}^{n} w_k exp( −i (2π/λ) y_k sin θ ),    θ ∈ S ∪ M.

Here, s is a scalar variable that corresponds to the upper limit on the output, G(θ), from the

sidelobe S. It is being minimized, consistent with the fact that we are trying to attenuate

the output from S. For the characteristics we are trying to preserve for the mainlobe M , u

provides upper bounds and G_0 provides values for a subset, denoted by P, of the angles in

M . The last constraint, of course, reminds us that G(θ) is an output due to some input—

this is an aggregate output consisting of the weighted sum of all the individual outputs.


As usual, i = √−1 and y_k is the position of the kth antenna on the y-axis. The weights,

w_k, are the variables whose values will determine the optimal way of combining the output

from the antennae to obtain the characteristics outlined in the objective and the other

constraints.

This model is a convex NLP with some concave quadratic constraint functions. We

have created an ampl model of it called antenna. The problem, however, is not an SOCP.

Lobo et al. [46] suggest making a change of variables first, by using t = √s. By the

monotonicity of the square root function, we can simply minimize t instead of s, and the

first set of constraints becomes

‖G(θ)‖² ≤ t².

Now, in order to write the problem as an SOCP, we can just write the first set of constraints

as

‖G(θ)‖ ≤ t

and the second set of constraints as

‖G(θ)‖ ≤ |u(θ)|.

The ampl model for this problem is in antenna socp.

Various other formulations for this problem were studied by Vanderbei and Coleman in

[16]. In these formulations, instead of minimizing the bound on it, the sidelobe output itself

is minimized while still being bounded above, but this time by a constant. The objective

function is expressed as a sum of the sidelobe outputs that will be minimized. There are

several ways to express this sum; for example, an ℓ1 norm can be used:

(20)    Σ_{θ∈S} |G(θ)|.


Note that this is called an ℓ1 norm because it is a sum of norms. Another possibility is to

use the square of an ℓ2 norm:

(21)    Σ_{θ∈S} |G(θ)|².

These formulations are given in antenna L1 and antenna L2, respectively. There is

the issue of nonsmoothness for the model using the ℓ1 norm, so we have also made a

model called antenna L1 vareps, which uses a variable perturbation of the norms in the

constraint functions.

Neither of these variations is in the form of an SOCP, but they can be formulated as

such. The ℓ1 objective function, (20), can be replaced with

Σ_{θ∈S} t(θ)

and a group of second-order cone constraints

|G(θ)| ≤ t(θ),    θ ∈ S.

We have created a model called antenna L1 socp for this problem. The issue of nonsmooth-

ness comes up here as well. Therefore, we have also created antenna L1 socp vareps and

antenna L1 socp ratio for the two smoothing techniques, variable perturbation and ratio

reformulation, discussed in the previous chapter.

The problem with the ℓ2 objective function (21) can also be reformulated by replacing

(21) with minimizing a variable t such that

√( Σ_{θ∈S} |G(θ)|² ) ≤ t.

We have created a model called antenna L2 socp for this problem.


We should also note that the ampl models presented here correspond to the nb set

from the DIMACS Implementation Challenge [18].

2. Grasping Force Optimization

Given a set of m robot fingers, we want to find the minimum amount of normal force

required to grasp an object in 3 dimensions. The normal force is exerted by each finger and

points directly into the object. The fingers are placed at p1, . . . , pm ∈ R3, and the force

applied at the ith contact point is given by Fi ∈ R3. We denote by vi ∈ R3 the inward

pointing unit normal vector at the ith contact point, so the normal component of the force

is given by v_i v_iᵀ F_i.

The problem is to minimize the maximum normal component of force. This minimiza-

tion is subject to force balance constraints, with the forces from the robot fingers balancing

a force applied externally at the center of mass. This external force is denoted by Fext. A

second constraint deals with torque balance, where the torque produced by the robot fin-

gers counteracts an externally applied torque, Text, about the origin. Finally, there is also

a constraint to make sure that the tangential component of force is smaller in magnitude

than the coefficient of friction, µ, times the normal component of force. This model is given

by

minimize    t
subject to  v_iᵀ F_i ≤ t,    i = 1, . . . , m,
            Σ_{i=1}^{m} F_i = −F_ext,
            Σ_{i=1}^{m} p_i × F_i = −T_ext,
            ‖(I − v_i v_iᵀ) F_i‖ ≤ µ v_iᵀ F_i,    i = 1, . . . , m.


This model fits the SOCP paradigm. We have implemented it in an ampl model called

grasp. The specific instance that we consider is a set of 1000 robot fingers supporting a

parabolic cone whose lower surface is given by

p(3) = p(1)² + p(2)².

The fingers are placed equidistant on a unit circle on the cone. In this instance, at least one

of the forces is zero at optimality, so the problem is nonsmooth. Thus, we have also created

a variable perturbed model, grasp vareps, and a model with the ratio reformulation,

grasp ratio.

3. FIR Filter Design

We now consider the minimax dB linear phase lowpass finite impulse response (FIR)

filter design problem as described in [46]:

minimize    t
subject to  1/t ≤ 2 Σ_{k=0}^{n/2−1} h_k cos( (k − (n−1)/2) ω ) ≤ t,    0 ≤ ω ≤ ω_p,
            −β ≤ 2 Σ_{k=0}^{n/2−1} h_k cos( (k − (n−1)/2) ω ) ≤ β,    ω_s ≤ ω ≤ π,
            t ≥ 1,

where n is an even integer, β is a small positive number, and 0 < ω_p < ω_s < π. The

coefficients h_k, k = 0, . . . , n/2 − 1, are the variables.

Since there is an infinite number of possible values for ω, there is an infinite number of

constraints. To remedy this problem, we discretize the interval from 0 to π and let ω take

on the discretized values.


The problem has only one nonlinearity, and it is the 1/t in the first set of constraints.

Nonetheless, the problem is convex and smooth because a function of the form u − 1/t is

concave and twice differentiable when t > 0. Since the third constraint says that t ≥ 1,

nonsmoothness at optimality is not an issue here.

We have created an ampl model called fir for the instance with n = 20, β = 0.01,

ω_p = 90°, ω_s = 120°, and the interval from 0° to 180° discretized at 1° increments. This

problem is not an SOCP due to the 1/t term, and Lobo et al. [46] suggest replacing the

term with a new variable u:

u ≤ 2 Σ_{k=0}^{n/2−1} h_k cos( (k − (n−1)/2) ω ) ≤ t,    0 ≤ ω ≤ ω_p,

and adding a new (second-order cone) constraint:

√( 4 + (u − t)² ) ≤ u + t,

which is equivalent to the requirement that 1/t ≤ u. This SOCP is given in the ampl model

fir socp. It should be noted that the Hessian of the SOCP model is much sparser than that of the

original formulation since we have replaced a group of nonlinear constraints with a group

of linear constraints and added one nonlinear constraint that only involves 2 variables.

This problem is discussed in great detail in [53, 11]. The interior-point approach to

solving the original formulation of the problem is studied in [70].

4. Portfolio Optimization

Given a collection of investments J, let µ be a Gaussian random vector whose component µ_j denotes the return on a $1 investment after one year's investment in j ∈ J, and let µ̄ denote its mean and Σ its covariance matrix. We want to find what fraction x_j of one's assets to invest in


each j ∈ J . The overall annual return, referred to as the reward, for the portfolio defined

by a specific choice of the xj’s is given by r =∑

j

µjxj, and it is a Gaussian random variable

with mean r =∑

j

µjxj and variance σr = xT Σx. The variance is also referred to as the

risk.

Suppose that we wish to maximize our expected reward while keeping the risk within

a certain bound. We can express this optimization problem as follows:

(22)   maximize   ∑_j µ̄_j x_j

       subject to ∑_{j,j'} x_j σ_{jj'} x_{j'} ≤ s_max

                  ∑_j x_j = 1

                  x_j ≥ 0,   for all j ∈ J.

Formulation (22) is a convex programming problem with a linear objective function.

Using real data, we created the ampl model optreward for problem (22).

It is very easy to convert this problem into an SOCP: we simply take the square root

of the quadratic constraints. This model is formulated in ampl as optreward socp.
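As an illustration of formulation (22), here is a minimal Python sketch with hypothetical data (three assets with made-up mean returns and covariance, not the real data behind optreward), using scipy's SLSQP method:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data: mean returns and a positive definite covariance matrix.
mu = np.array([0.05, 0.10, 0.15])
Sigma = np.array([[0.010, 0.002, 0.001],
                  [0.002, 0.040, 0.010],
                  [0.001, 0.010, 0.090]])
s_max = 0.02  # risk bound

res = minimize(
    lambda x: -mu @ x,                  # maximize the expected reward
    x0=np.full(3, 1 / 3),               # equal weights are feasible here
    method="SLSQP",
    bounds=[(0, 1)] * 3,                # x_j >= 0
    constraints=[
        {"type": "eq", "fun": lambda x: x.sum() - 1.0},          # fully invested
        {"type": "ineq", "fun": lambda x: s_max - x @ Sigma @ x} # risk <= s_max
    ],
)
x = res.x
print("weights:", np.round(x, 4), "reward:", round(mu @ x, 4))
```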

5. Truss Topology Design

In this problem, we are trying to build a structure that will be able to carry a given

external load (see [7]). We are given n linear elastic bars, or elements, connecting m nodes

and made from limited material with total volume v. The problem is to build the stiffest

structure to hold the load. The optimization variables, x_e, are the cross-sectional areas of the bars, each of which has length l_e.


Externally applied loads act at some or all of the nodes. Let f denote the vector of applied loads, where each load is a vector in R³. Associated with each element e is a rank-one positive semidefinite matrix K_e = a_e a_eᵀ called the element stiffness matrix. The vector a_e is very sparse, having only 6 nonzero entries for spatial trusses. The stiffness matrix for the structure built with element cross-sectional areas x_e is then given by

K(x) = ∑_e x_e K_e.

To maximize the stiffness of the structure under load f, we minimize the elastic energy fᵀK(x)⁻¹f, subject to constraints on the total volume of material available and the cross-sectional areas of the elements.

(23)   minimize   fᵀK(x)⁻¹f

       subject to ∑_e l_e x_e ≤ v

                  x_e ≥ 0,   for all e.

The model gives an NLP, which can be a very large and complicated problem due to the

matrix inverse, K(x)⁻¹, in the objective function. Even a problem with sparse K(x) may end up with a very dense K(x)⁻¹. Fortunately, there are formulations of (23) that are much

easier to solve. Problems 42 and 41 in [7] give linear and convex quadratic reformulations

of (23), respectively, and we have created the corresponding ampl models structure and

structure2. These models implement a specific problem instance called the Michell Truss

[50].
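The role of the matrix inverse in (23) can be seen in a few lines of code. The sketch below uses random stand-in data for the a_e and f (a toy example, not the Michell instance): it assembles K(x) = ∑_e x_e a_e a_eᵀ and evaluates the compliance via a linear solve rather than forming K(x)⁻¹ explicitly:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n_bars = 6, 12                      # degrees of freedom, number of elements
A = rng.standard_normal((m, n_bars))   # column e plays the role of a_e
f = rng.standard_normal(m)             # external load
x = np.ones(n_bars)                    # cross-sectional areas

# K(x) = sum_e x_e a_e a_e^T, assembled as A diag(x) A^T
K = A @ np.diag(x) @ A.T

# Elastic energy f^T K(x)^{-1} f, computed with a solve (never form K^{-1})
u = np.linalg.solve(K, f)
compliance = f @ u
print("compliance:", compliance)
```

Note that K is dense here even though each a_e could be sparse; this is the density issue the reformulations avoid.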


In [46], Lobo et al. suggest another reformulation to write the problem in the form of an SOCP as follows:

(24)   minimize   ∑_e t_e

       subject to ∑_e y_e a_e = f

                  y_e² ≤ t_e x_e,   t_e ≥ 0,   for all e,

                  ∑_e l_e x_e ≤ v,

                  x_e ≥ 0,   for all e.

This problem is not quite an SOCP yet, but all we need is to change the second constraint from

(25)   y_e² ≤ t_e x_e

to

(26)   (4y_e² + (t_e − x_e)²)^{1/2} ≤ t_e + x_e.

We have created an ampl model called structure socp for this formulation.

6. Equilibrium of a system of piecewise linear springs

We wish to find the shape of a hanging chain of springs in equilibrium. The chain consists of N such springs, connected at nodes, and each spring has negligible mass.

The springs buckle under compression but exert a restoring force proportional to their

elongations when under tension. There is a weight hanging from each node. The problem


is to minimize the potential energy of the system subject to given boundary conditions.

After some preliminary model reformulation (see [46]), the problem can be written as

minimize   ∑_j m_j g y_j + (k/2)‖t‖²

subject to ‖p_j − p_{j−1}‖ − l_0 ≤ t_j,   j = 1, …, N,

           p_0 = a,

           p_N = b,

           t ≥ 0.

Here, g is the acceleration due to gravity, k is the stiffness constant for the springs, l0 is

the rest length of each spring, mj is the mass of the j-th weight, pj = (xj, yj) denotes the

position vector of the j-th node, a is the position of one end point of the chain, b is the

position of the other end point, t = (t_1, …, t_N), and finally t_j is an upper bound on the elongation of the j-th spring. The unknowns are the p_j's and the t_j's.

In the ampl model springs, we chose N = 100, g = 9.8, k = 100, mj = 1, a = (0, 0),

b = (2,−1), and l0 = 2‖b − a‖/N . Although the constraints are either linear or second-

order cone constraints, the problem, as formulated, is not a second-order cone programming

problem since the objective function is quadratic. However, it is a general nonlinear pro-

gramming problem and therefore can be attacked by loqo. In fact, the objective is convex

quadratic and the nondifferentiability of the second-order cone constraints only occurs at

points where two adjacent nodes occupy exactly the same position in space, which should

not happen in practice.
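Since t_j only bounds the elongation of the j-th spring and the objective penalizes ‖t‖², at optimality t_j = max(‖p_j − p_{j−1}‖ − l_0, 0), so the t_j can be eliminated and the energy minimized directly over the node positions. A small unconstrained sketch of this idea (with N = 10 rather than the N = 100 of the springs model):

```python
import numpy as np
from scipy.optimize import minimize

N, g, k = 10, 9.8, 100.0
m_w = np.ones(N)                       # weight masses (m_j = 1)
a, b = np.array([0.0, 0.0]), np.array([2.0, -1.0])
l0 = 2 * np.linalg.norm(b - a) / N     # rest length

def energy(z):
    # interior nodes p_1..p_{N-1}; p_0 = a and p_N = b are fixed
    p = np.vstack([a, z.reshape(N - 1, 2), b])
    # elongation of each spring: t_j = max(||p_j - p_{j-1}|| - l0, 0)
    elong = np.maximum(np.linalg.norm(np.diff(p, axis=0), axis=1) - l0, 0.0)
    return g * (m_w[: N - 1] @ p[1:N, 1]) + 0.5 * k * (elong ** 2).sum()

z0 = np.linspace(a, b, N + 1)[1:N].ravel()   # straight-line initial guess
res = minimize(energy, z0)
print("equilibrium potential energy:", res.fun)
```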

To convert the problem to a second-order cone programming problem, Lobo et al. [46] suggest adding one new scalar variable, y, to replace ‖t‖² in the objective function and


then adding one extra second-order cone constraint

√((2‖t‖)² + (1 − y)²) ≤ 1 + y

(which is equivalent to the condition that y is an upper bound on ‖t‖²). For loqo, it is

unnecessary to linearize the objective function, but nevertheless, we created a model called

springs socp.

7. Euclidean Single Facility Location

Given m existing facilities, the Euclidean single facility location problem is to place a new facility so as to minimize the sum of weighted Euclidean distances from the new facility to the existing locations. This problem is usually expressed as an unconstrained optimization problem:

(27)   minimize ∑_{i=1}^{m} w_i ‖x − a_i‖,

where the data ai ∈ Rd, i = 1, . . . ,m are the locations of the existing facilities, wi ∈ R are

weights associated with each existing facility, and x ∈ Rd is the vector of variables repre-

senting the location of the new facility. In its original form, this is a convex optimization

problem, and can be solved quite easily in general.

However, because of the Euclidean norm in the objective function, it can also suffer

from nonsmoothness. This is the case when the new facility should be located in the

same location as an existing facility. A famous example of the problem with three existing locations is Fermat's problem: find the point that minimizes the sum of distances to the three vertices of a triangle. In the case that all of the angles in the triangle are less than 120°, this point is located inside the triangle, at the point that makes 120° angles with all the vertices, and it can be located quite easily. When one of


the angles in the triangle is 120° or larger, however, the point that minimizes the sum of distances is the vertex that corresponds to that angle. We have created both

of these cases in two ampl models: fermat with an acute triangle and fermat2 with an

obtuse triangle where the solution is at a vertex and is thus nonsmooth at optimality.
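For problems of the form (27), the classical Weiszfeld iteration repeatedly re-centers x at a distance-weighted average of the a_i. A sketch on an acute (equilateral) instance of Fermat's problem, where the minimizer is the centroid:

```python
import numpy as np

# Vertices of an equilateral triangle: an acute instance of Fermat's problem
a = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
w = np.ones(3)

x = np.array([0.4, 0.3])              # any starting point off the vertices
for _ in range(200):
    d = np.linalg.norm(a - x, axis=1)
    x = (w / d) @ a / (w / d).sum()   # Weiszfeld update

print("Fermat point ≈", x)            # centroid (0.5, sqrt(3)/6) here
```

The iteration breaks down exactly in the nonsmooth case, when an iterate lands on a vertex, which mirrors the difficulty discussed above for the obtuse instance.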

Let us return to the discussion of the general Euclidean single facility location problem

now. We have created an ampl model for this problem called esfl. This problem in

two dimensions has 1000 existing facility locations, but regardless of the number of such

locations, it has 2 variables and no constraints.

As Lobo et al. suggest [46], it is easy to convert this problem into an SOCP by introducing new variables t ∈ R^m:

minimize   ∑_{i=1}^{m} w_i t_i

subject to t_i − ‖x − a_i‖ ≥ 0,   i = 1, …, m.

This version of the Euclidean single facility location problem also suffers from nonsmooth-

ness problems. Moreover, it is very important to note the significant change in the size

of the problem as well: Our instance with 1000 existing facility locations now has 1002

variables and 1000 constraints, as compared to 2 and 0, respectively. This is a significant

change, and numerical experience with the ampl model for the SOCP, esfl socp, bears this out.

8. Euclidean Multiple Facility Location

Given m existing facilities, the Euclidean multiple facility location problem is to place

n new facilities so that the weighted sum of the distances between the existing and the

new facilities and among the new facilities is minimized. This is a generalization of the

Euclidean single facility location problem (27) and can be expressed as an unconstrained


optimization problem as follows:

minimize ∑_{i=1}^{m} ∑_{j=1}^{n} w_{ij} ‖x_j − a_i‖ + ∑_{j=1}^{n} ∑_{j'=1}^{j−1} v_{jj'} ‖x_j − x_{j'}‖,

where the data a_i ∈ R^d are the locations of the existing facilities, w_{ij} ∈ R are the weights associated with the distances between the existing and the new facilities, v_{jj'} ∈ R are the weights associated with the distances among the new facilities, and the variables x_j ∈ R^d represent the locations of the new facilities.

Just as in the single facility problem, this problem also suffers from nonsmoothness

when a new facility is to be located at the same point as an existing facility or another new

facility. The ampl model that we have created, emfl, implements such a situation. It is

in 2 dimensions, with 200 existing facilities and 25 new facilities.

Lobo et al. [46] suggest writing the multiple facility location problem as an SOCP, too. The resulting problem is as follows:

minimize   ∑_{i=1}^{m} ∑_{j=1}^{n} w_{ij} s_{ij} + ∑_{j=1}^{n} ∑_{j'=1}^{j−1} v_{jj'} t_{jj'}

subject to ‖x_j − a_i‖ ≤ s_{ij},   1 ≤ i ≤ m, 1 ≤ j ≤ n,

           ‖x_j − x_{j'}‖ ≤ t_{jj'},   1 ≤ j' < j ≤ n.

This problem, just like the original, is nonsmooth. Moreover, the SOCP variant of the

ampl model, emfl socp, has again grown in size from 50 variables and 0 constraints to

5350 variables and 5300 constraints. This growth is reflected in the runtimes as well.
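The size figures quoted above follow directly from m = 200, n = 25, and d = 2; a quick check:

```python
m, n, d = 200, 25, 2   # existing facilities, new facilities, dimension

nlp_vars, nlp_cons = n * d, 0                 # original unconstrained model
socp_vars = n * d + m * n + n * (n - 1) // 2  # add the s_ij and t_jj' variables
socp_cons = m * n + n * (n - 1) // 2          # one cone constraint per norm

print(nlp_vars, nlp_cons, "->", socp_vars, socp_cons)
```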

For more information on problems that contain sums of Euclidean norms, see [5, 44, 48, 54, 71].


9. Steiner Points

Given a set of points pi ∈ R2, i = 1, . . . , n, the Steiner tree problem is to find the planar

straight-line graph that spans all of these points and has the shortest total distance. To

construct this tree, one can use intermediate points, called Steiner points. In an optimal

solution of such a problem, the graph constructed is a spanning tree of the system with the

n original points and at most n− 2 Steiner points. The problem is NP-hard in this form,

but for a given topology, it is possible to compute the locations of the Steiner points and

construct the spanning tree with minimum total distance.

Here, we discuss one such topology, the full Steiner topology, with a given set of arcs in the graph. We define a full Steiner topology as any tree graph that contains the original

n points and all of the n − 2 Steiner points. In this topology, all of the original points

have degree 1 and all of the Steiner points have degree 3. We denote the original points as

p1, p2, . . . , pn and the Steiner points as pn+1, pn+2, . . . , p2n−2. The set of arcs is denoted by

A. The full Steiner topology problem is

(28)   minimize ∑_{(i,j)∈A} ‖p_i − p_j‖,

where the original points p1, . . . , pn are fixed whereas the Steiner points pn+1, . . . , p2n−2 are

the optimization variables. This problem was first studied in [37, 38, 61].
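For a fixed full Steiner topology, (28) is smooth away from coincident points and can be minimized directly. A small hypothetical instance (not the instance from [71]): the four corners of the unit square with two Steiner points, for which the optimal tree has length 1 + √3:

```python
import numpy as np
from scipy.optimize import minimize

# Terminals: corners of the unit square (degree 1 in a full Steiner topology)
p = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])

# Arcs of the fixed topology: Steiner point s1 joins p0 and p1, s2 joins p2
# and p3, and s1 is joined to s2 (each Steiner point has degree 3).
def length(z):
    s1, s2 = z[:2], z[2:]
    return (np.linalg.norm(s1 - p[0]) + np.linalg.norm(s1 - p[1])
            + np.linalg.norm(s2 - p[2]) + np.linalg.norm(s2 - p[3])
            + np.linalg.norm(s1 - s2))

z0 = np.array([0.25, 0.5, 0.75, 0.5])     # initial Steiner point guesses
res = minimize(length, z0, method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8})
print("tree length:", res.fun)            # approaches 1 + sqrt(3)
```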

We have created an ampl model of the instance given in [71]. Because some of the

Steiner points collapse onto each other, nonsmoothness is an issue here. This issue carries

over to the SOCP variant of (28) as well:

minimize   ∑_{(i,j)∈A} t_{ij}

subject to t_{ij} − ‖p_i − p_j‖ ≥ 0,   (i, j) ∈ A.


As in the Euclidean facility location problems, the size increase for the SOCP variant is

also a problem.

10. Minimal Surfaces

Given a simply connected compact domain D in R², with a piecewise continuous boundary ∂D, and a function γ defined on ∂D, consider surfaces on D that agree with γ on the boundary. The minimal surface problem is to find such a surface, u, that has the minimum area. This problem is commonly referred to in the literature as the soap bubble problem, since the film of soap that forms on a wire boundary always corresponds to the minimal surface.

The optimization problem is formulated as follows:

(29)   minimize   ∫∫_D √(1 + (∂u/∂x)² + (∂u/∂y)²) dx dy

       subject to u(x, y) = γ(x, y),   (x, y) ∈ ∂D.

In order to have a finite number of variables and constraints, the domain D is discretized

using a rectangular lattice grid and the derivatives are approximated using finite differences.

This gives a convex NLP for which we have created an ampl model called minsurf. The

particular instance that we consider is a square D with γ as concave parabolas on each side

of the square. The domain is discretized into a 32× 32 grid.
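The discretization can be sketched in a few lines. The code below is an illustration of the finite-difference objective, not the minsurf model itself: on the unit square with an assumed boundary parabola vanishing at the corners, it evaluates the discretized area of a starting surface that is zero in the interior:

```python
import numpy as np

n = 32
h = 1.0 / (n - 1)                 # grid spacing
xs = np.linspace(0.0, 1.0, n)

# Boundary data: a concave parabola, zero at the corners, on each side
gamma = 4 * xs * (1 - xs)
u = np.zeros((n, n))
u[0, :] = u[-1, :] = gamma        # bottom and top edges
u[:, 0] = u[:, -1] = gamma        # left and right edges

def area(u):
    # forward-difference approximation of  integral sqrt(1 + u_x^2 + u_y^2)
    ux = (u[1:, :-1] - u[:-1, :-1]) / h
    uy = (u[:-1, 1:] - u[:-1, :-1]) / h
    return (np.sqrt(1.0 + ux ** 2 + uy ** 2) * h * h).sum()

print("discretized surface area:", area(u))
```

Since √(1 + ‖∇u‖²) ≥ 1 pointwise, any discretized surface area is at least the area of D itself, which gives a quick sanity check on the objective.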

Lobo et al. [46] propose writing problem (29) as an SOCP. In fact, this can easily be done as follows:

minimize   ∫∫_D t(x, y) dx dy

subject to √(1 + (∂u/∂x)² + (∂u/∂y)²) ≤ t(x, y),   (x, y) ∈ D,

           u(x, y) = γ(x, y),   (x, y) ∈ ∂D,


and the same discretization as in (29) is applied. Since there is a constant term of 1 in

the norm, nonsmoothness is not a problem here. The problem is formulated in ampl as

minsurf socp.

11. Plastic Collapse

The plastic collapse problems and their formulations are given by Andersen and Christiansen in [4] and [3]. The problem is that of finding the maximal multiple of a fixed load

distribution that a rigid perfect plastic continuum can accommodate before collapsing. We

consider the discretized problem that is described in detail in [3]:

maximize   λ

subject to ∑_{ν=1}^{κ} B_ν x_ν − cλ = 0

           ‖Q x_ν‖ ≤ 1,   ν = 1, …, κ,

where λ ∈ R, the multiplier, and xν ∈ R3, the stress at node ν in the discretized problem,

are the optimization variables. The matrix Bν ∈ Rm×3 is the work rate for the stress field

in the plastic at node ν, and c ∈ Rm is the work rate of the external load. The second-order

cone constraint is derived from the yield condition at node ν, and

Q = ( 1/2   −1/2   0 )
    (  0      0    1 ).

The data we have used to test this model comes from problem nql in the DIMACS

implementation challenge [18], and it is for a reformulation of the above problem, as given

in [4]. Christiansen and Andersen have transformed the problem by using the following


substitutions:

y_ν = ( (1/2)((x_ν)_1 − (x_ν)_2), (x_ν)_3 )ᵀ,   ν = 1, …, κ,

y_e = ( (x_1)_2, …, (x_κ)_2, λ )ᵀ,

A_ν = ( 2(B_ν)_1, (B_ν)_3 ),

E = ( (B_1)_1 + (B_1)_2, …, (B_κ)_1 + (B_κ)_2, −c ),

where (B_ν)_i denotes the i-th column of B_ν and (x_ν)_i the i-th component of x_ν.

The problem then becomes

maximize   λ

subject to ∑_{ν=1}^{κ} A_ν y_ν + E y_e = 0

           ‖y_ν‖ ≤ 1,   ν = 1, …, κ.

The ampl implementation of this model is given in plastic.
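The substitutions above can be verified mechanically: for arbitrary data, ∑_ν A_ν y_ν + E y_e reproduces ∑_ν B_ν x_ν − cλ term by term. A quick check with random data, reading (B_ν)_i as the i-th column of B_ν:

```python
import numpy as np

rng = np.random.default_rng(1)
m, kappa = 5, 4
B = rng.standard_normal((kappa, m, 3))   # B_nu, with columns (B_nu)_1..(B_nu)_3
c = rng.standard_normal(m)
x = rng.standard_normal((kappa, 3))      # stresses x_nu
lam = rng.standard_normal()

# Left-hand side of the original equality constraint
lhs = sum(B[v] @ x[v] for v in range(kappa)) - c * lam

# Substitutions from the text
y = [np.array([0.5 * (x[v, 0] - x[v, 1]), x[v, 2]]) for v in range(kappa)]
y_e = np.append(x[:, 1], lam)
A = [np.column_stack([2 * B[v][:, 0], B[v][:, 2]]) for v in range(kappa)]
E = np.column_stack([B[v][:, 0] + B[v][:, 1] for v in range(kappa)] + [-c])

rhs = sum(A[v] @ y[v] for v in range(kappa)) + E @ y_e
assert np.allclose(lhs, rhs)
print("substitution verified")
```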

12. Results of numerical testing.

We have tested our models using some general purpose nonlinear solvers, loqo [64],

snopt [29], and knitro [15], and a state-of-the-art special purpose code for cone pro-

gramming, SeDuMi [62]. loqo is a general purpose interior-point method algorithm by

Vanderbei for nonlinear programming, and it is the algorithm that is the focus of this dis-

sertation. In our testing, we used loqo version 5.08. snopt is a sequential quadratic programming algorithm by Gill et al.; we used version 092000 of snopt. The last general purpose solver is knitro, by Nocedal, Byrd, et al. It is a trust-region algorithm that uses interior-point

methods. We used version 022201 of this algorithm. These three algorithms were called

from ampl, version 20000601.

We also present numerical results for experience with using a special purpose solver for

cone programming, SeDuMi. This solver was called from matlab, version 5.3.


The default options for loqo were taken to be 8 digits of agreement between the primal

and dual solutions and an infeasibility tolerance of 10⁻⁷. For some of the models, however,

it was not possible to reach such levels of accuracy. We will note those models and the

corresponding tuning parameters in the following discussion. We have also turned on the

primal ordering option for some of the models, since it allows for a better symbolic Cholesky

factorization. For the rest of the solvers, we have not used any tuning.

We should also note that the timing mechanism of the special purpose algorithm, Se-

DuMi, is different than that of the general purpose algorithms, in that it does not take

into account the preprocessing and symbolic Cholesky factorization routines in the total

time. For large problems, where these routines take significant amounts of time, this may

account for some of the runtime differences in the algorithms.

There are three goals in performing the numerical testing. First, for the cases where

the original model is not an SOCP but can be reformulated as such, we wish to ascer-

tain whether the original formulation or the SOCP version is more amenable to finding a

solution. Second, we want to compare the performance of general purpose algorithms to

that of a special purpose algorithm. Finally, we want to evaluate the two approaches to

smoothing an SOCP: the variable perturbation and the ratio reformulation. Our aim with

this is to find an approach to use instead of the SOCP at all times, whether the problem is

nonsmooth or not, so that we can have a single reliable way to solve these problems using

an interior-point algorithm. We will keep these three goals in mind in the discussion below.

The numerical results are presented in Table 1 and are discussed below. The times

reported in the table are in CPU seconds. The following abbreviations are used for con-

ciseness:


n number of variables

m number of constraints

(IL) the algorithm reached its iteration limit

(FA) the algorithm exited due to failure to find a sensible direction

(TR) knitro’s trust region radius got too small

(Time) the algorithm ran for 1000 CPU minutes without finding a solution

(ER) snopt reported an error in nonlinear evaluations

(n/a) the model was not implemented in matlab and

could not be tested with SeDuMi

(CI) snopt could not improve on the current solution

(NP) SeDuMi ran into numerical trouble

(UB) the algorithm concluded that the problem is unbounded

(PI) the algorithm concluded that the problem is primal infeasible

(DI) the algorithm concluded that the problem is dual infeasible

12.1. Antenna Array Design. As discussed, the Antenna Array Design problem can

be expressed in its standard form or as an SOCP. The model in standard form, antenna,

is solved faster by loqo than its SOCP version, antenna socp. The accuracy level of

both formulations is around 10⁻⁶. This is due to some of the norms being very small in

magnitude, and it is not possible for loqo to attain a more accurate solution. The SOCP

model is solved much faster by the special purpose algorithm, SeDuMi, than by loqo,

while the other two general purpose solvers perform even worse, with knitro reaching the

time limit without obtaining a solution and snopt taking a long time to reach the optimal


solution. We have also made a version of the SOCP model with a variable perturbation,

antenna socp vareps, but loqo solves this version much slower and can only attain an

infeasibility level of 10⁻⁴. The final value for the perturbation is on the order of 10⁻¹².

snopt, on the other hand, solves this problem with a comparable runtime to the SOCP

formulation, and knitro still reaches the time limit. The other SOCP reformulation is

antenna socp ratio. This version works much better, with a runtime for loqo that is

better than the original formulation and an accuracy level of 10⁻⁸. Neither knitro nor snopt can solve this problem.

The other variations of the Antenna Array Design problem are the models that perform

the sidelobe minimization in the objective function. The first of these models is antenna L1.

As discussed above, loqo runs into nonsmoothness difficulties with this model. In fact,

none of the general purpose algorithms can solve this problem to the accuracy levels re-

quired. loqo is able to attain a solution with only 3 digits of agreement and an infeasi-

bility level of 0.1. Therefore, we have also created a model called antenna L1 vareps for

the variable perturbation in the objective function. It turns out that a perturbation of

10⁻⁸ is sufficient for loqo to reach a solution with default accuracy levels for this problem.

However, snopt cannot find a dual feasible solution to the problem, and knitro reaches

the time limit.

The SOCP version of this problem is antenna L1 socp. It also suffers from nonsmooth-

ness. Once again, all of the general purpose solvers fail on this problem. loqo can find a

primal feasible solution that agrees with the dual to 8 digits, but it cannot achieve a dual

feasibility level better than 10⁻¹. The variable perturbed model, antenna L1 socp vareps, can be solved by loqo for a perturbation of 10⁻¹⁶. The runtime is better than that


of SeDuMi on the SOCP version. We have also made a ratio reformulation model,

antenna L1 socp ratio, which was solved by loqo slightly slower than the variable perturbed model. snopt failed on all versions of the ℓ1 models, and knitro again reached the time limit on both problems.

The second variation uses the square of the ℓ2 norm, so it is smooth. loqo can solve

this problem as given in the ampl model antenna L2 quite efficiently. Since this model

is not an SOCP, we have reformulated it as such in the model antenna L2 socp. This

reformulated model is solved easily by loqo as well. The runtime of loqo is comparable

to that of SeDuMi. For the SOCP model, we have also created antenna L2 socp vareps

and antenna L2 socp ratio. Both of these versions are solved by loqo, with the variable

perturbation model attaining a perturbation level of 10−8. Again, knitro reaches its

iteration or time limit on all of these problems. snopt can solve all of these models, but

does so very slowly.

For the Antenna Array Design problem, we address the three goals given above:

(1) Original Formulation vs. SOCP: For the antenna array problem and its ℓ1 norm variation, the original formulation worked better than the SOCP version. For the ℓ2 norm variation, the SOCP version outperformed the original slightly.

(2) General vs. Special Purpose Algorithm: For the antenna array problem, the special

purpose algorithm was much better than the general purpose algorithms. For

the ℓ1 norm variation, the nonsmoothness in the problem does not allow a fair

comparison, but the runtime of the general purpose algorithm loqo on the variable

perturbation model was much better than that of SeDuMi. For the ℓ2 norm variation,

loqo and SeDuMi were comparable.


(3) Reformulations: For the antenna array problem, the ratio reformulation outper-

formed the variable perturbation. For the ℓ1 and the ℓ2 norm variations, the

variable perturbation was slightly better.

12.2. Grasping Force Optimization. The Grasping Force Optimization problem is

an SOCP in its original form, so we have created the original model, grasp, the variable

perturbation model, grasp vareps, and the ratio reformulation model, grasp ratio. The

increase in the number of constraints in the ratio model is due to ampl. We were using a

defined variable for the right-hand side of the constraint, and, therefore, could not define

a simple bound on it. Instead, we needed to implement the nonnegativities as additional

constraints. The original model is nonsmooth, and, therefore, loqo cannot achieve an infeasibility level less than 10⁻². A perturbation of 10⁻⁸ allows loqo to solve this problem,

and the ratio reformulation also works. knitro cannot solve the problem in any form, and

snopt cannot even get close within 1000 CPU minutes. SeDuMi solves the problem in

its original formulation quite easily.

Since the original formulation of this problem is an SOCP, we can only address two of

the three questions raised above:

(1) General vs. Special Purpose Algorithm: The nonsmoothness in the problem does

not allow us to compare the algorithms on the same problem. SeDuMi is faster on

the original problem than loqo on the variable perturbed model, but the runtimes

are comparable on the ratio reformulation model.

(2) Reformulations: The ratio reformulation is better.

12.3. FIR Filter Design. The FIR Filter Design problem has been implemented in

the ampl model fir. loqo and knitro are able to solve this problem, but snopt is much


faster. The reason for the speed difference between snopt and loqo is that the constraints

are mostly linear, with a Hessian that only has 1 entry but a full Jacobian. This causes

both the symbolic and the numerical factorizations in loqo to be quite slow, resulting in

the time difference.

The model given in fir is a convex programming problem, but it is not an SOCP.

However, it can be reformulated as such, and this reformulation is implemented in fir socp.

Again, loqo and knitro can solve this problem, but snopt is much faster than loqo. The

same is true for the variable perturbed model fir socp vareps and the ratio reformulation

fir socp ratio. The special purpose algorithm SeDuMi exits because it fails to find a

sensible direction. In fact, SeDuMi can only solve small instances of this problem, and for

problems with more than 50 variables, it runs into numerical trouble.

For the FIR filter problem, we can address the three questions raised above:

(1) Original vs. SOCP Formulation: The two formulations are almost equivalent, with

the SOCP formulation being slightly slower. The difference between the models is

very small, which is why the runtimes are close as well.

(2) General vs. Special Purpose Algorithm: The general purpose algorithms can solve

the problem efficiently, even for a large number of variables, but SeDuMi runs

into numerical problems and quits.

(3) Reformulations: The ratio reformulation is better.

12.4. Portfolio Optimization. For the Portfolio Optimization problem, we start out

with a problem that has quadratic constraints. This model is given in optreward. The

particular instance used in this problem is from the S&P 500 index during the business

days of January 2000. loqo can solve this problem, but only with an infeasibility on the


order of 10⁻⁶. snopt and knitro also solve this problem easily, and faster than loqo.

Since this problem is not an SOCP in standard form, we have converted the problem to an

SOCP by taking the square-root of the quadratic constraints, as given in optreward socp.

loqo can solve this problem as well, but the infeasibility level is much higher now, at

10⁻³. snopt and knitro perform the same as before. SeDuMi does quite well on this

problem as well, solving it easily with a feasibility ratio of 1.0000 and 14 digits of agreement

between the primal and the dual objective functions. As usual, we have also created the

two SOCP variations, optreward socp vareps and optreward socp ratio. The variable

perturbation behaves like the SOCP formulation when using loqo and takes almost the

same number of iterations to obtain an answer with an infeasibility of 10⁻². Moreover, the perturbation necessary to get this result is 10⁻⁴, which is a significant amount. snopt

behaves the same as before, too, but the perturbation that it ends up with is four times

larger than loqo’s. knitro finds the same perturbation level, but it needs to work harder

than before to obtain this solution. We should note that even with these high perturbation

levels, all three solvers obtain the same optimum as the other models. The ratio reformu-

lation works quite well for all three solvers, especially for loqo as it quite easily attains

the optimal solution with default accuracy levels. It is interesting to note that when the

initial values of the slack variables, as given by the tuning parameter bndpush, are set to

1000, loqo is able to solve all of these problems in less than a second to the default levels

of accuracy.
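The square-root conversion mentioned above can be sketched numerically: a convex quadratic constraint x'Qx ≤ t² with t ≥ 0 defines the same set as the second-order cone constraint ||Rx|| ≤ t for any factorization Q = R'R. The data below (Q, x, t) are illustrative only, not the S&P 500 instance used in optreward.

```python
import numpy as np

# A convex quadratic constraint x'Qx <= t^2 (with Q positive semidefinite and
# t >= 0) defines the same set as the second-order cone constraint
# ||Rx|| <= t, where Q = R'R is any such factorization (here, Cholesky).
rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
Q = M.T @ M                        # a positive semidefinite matrix
R = np.linalg.cholesky(Q).T        # upper-triangular factor with Q = R'R

x = rng.standard_normal(3)
t = 2.0

quad_holds = x @ Q @ x <= t**2
cone_holds = np.linalg.norm(R @ x) <= t
assert quad_holds == cone_holds    # both constraints accept/reject x together
```

The square root is what makes the constraint a cone constraint, but it is also what can introduce nonsmoothness at the cone's vertex.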

We address the three questions raised above:

(1) Original vs. SOCP Formulation: The original formulation is better. It achieves a

more accurate solution in less time than the SOCP formulation.


(2) General vs. Special Purpose Algorithm: The special purpose algorithm is faster on

this problem than all of the general purpose algorithms. However, when bndpush is

initialized to 1000, there is no significant difference between loqo and SeDuMi’s

runtimes.

(3) Reformulations: The ratio reformulation is better. In fact, it is the only formu-

lation out of all four that achieves the default accuracy levels and does it in less

time than the others take to find less accurate solutions.

12.5. Truss Design. The original Truss Design problem is quite hard to solve due to

the appearance of a matrix inverse, which can result in the problem becoming quite dense.

Fortunately, it can also be expressed as a linear programming problem or as a convex

programming problem. These models are implemented in structure and structure2,

respectively. loqo solves the linear programming problem quite fast, but takes a long

time for the convex programming problem. For both of these problems, loqo can find a

good solution with default accuracy levels. The original problem can also be formulated as

an SOCP, and this formulation is given in the ampl model structure socp. The SOCP

is solved easily by SeDuMi. This formulation, however, is nonsmooth, so we have also

created structure socp vareps and structure socp ratio, the variable perturbation

and the ratio reformulation models, respectively. Because of the nonsmoothness, loqo

cannot obtain a solution with less than 10−3 dual infeasibility for structure socp, but

structure socp vareps gives an optimal solution with default accuracy for a perturbation

of 10−8. The ratio reformulation, on the other hand, cannot find a solution with less than

10−2 dual infeasibility.


knitro cannot solve any of these problems and goes to either iteration or time limit.

snopt can solve the linear and convex problems, but rather slowly, and then times out on

the SOCP version and its variants.

For the Truss Design problem, we can address the three questions raised above:

(1) Original vs. SOCP Formulation: The SOCP formulation is the worst out of the

three formulations given. The linear programming version of the truss design

problem solves to default accuracy levels very quickly, with the convex program-

ming formulation a distant second. The SOCP formulation is even slower, and

for loqo it cannot even obtain a solution with the same accuracy levels due to

nonsmoothness.

(2) General vs. Special Purpose Algorithm: On the SOCP formulation, SeDuMi

outperforms loqo. However, we should note again that the best way to solve this

problem is by using the linear programming version.

(3) Reformulations: The variable reformulation is better because it can find a solution

with default accuracy levels. When the same worse accuracy levels are applied to

both reformulations, the runtimes are about the same.

12.6. Equilibrium of a system of piecewise linear springs. For the Spring Equi-

librium problem, we have created an ampl model called springs for the case with 1000

chainlinks. This problem is convex and has a quadratic objective function with linear and

second-order cone constraints. It is easily solved by loqo. However, due to the quadratic

objective function, the problem does not fit the SOCP paradigm, but it can be formulated

as such. We have formulated the SOCP problem in springs socp. As always, we also

provide the variable perturbed model in springs socp vareps and the ratio reformulation


in springs socp ratio. The SOCP and variable perturbed models are solved quite slowly

by loqo, but it performs well on the ratio reformulation. SeDuMi runs into numerical

trouble and is unable to attain an optimal solution. knitro goes to iteration limit and

snopt times out on all of these models.

We can address the three questions raised above:

(1) Original vs. SOCP Formulation: The original formulation is much better. This

is because the quadratic term in the objective function must be transformed to obtain a model with a linear objective function and linear and second-order cone

constraints. The new second-order cone constraints contribute a large dense term

to the Hessian, which slows the solution considerably.

(2) General vs. Special Purpose Algorithm: The special purpose algorithm approaches

the optimal solution but runs into numerical problems and quits after 17.70 sec-

onds. This runtime is comparable to that of loqo’s on the original problem, but

the general purpose algorithm is the only one that can find a solution to default

levels of accuracy.

(3) Reformulations: The ratio reformulation is much better than the variable reformu-

lation. This is because of the newly introduced second-order cone constraint. As

stated before, the ratio reformulation outperforms an SOCP (such as the variable

perturbed model) due to Hessian sparsity when the size of the block is larger than

3, and the larger the block, the more advantageous the ratio model gets. In the

case of our Spring Equilibrium model, there is a block of size 1000, and the ratio

model is about 50 times faster than the variable perturbed model.


12.7. Euclidean Single Facility Location. We have created two types of examples

for the Euclidean single facility location problem. The first is Fermat’s problem on a

triangle. For this example, we have created the ampl models fermat and fermat2 for

the smooth and nonsmooth cases, respectively. We have created the variable perturbed

model fermat2 vareps to handle the nonsmoothness in fermat2. All three of the general

purpose solvers work well on the smooth problem, but they all fail on the nonsmooth

one. They can all solve the variable perturbed version of the nonsmooth problem with

perturbation levels of 10−12. Since these problems are not SOCPs, we have to reformulate

them as such. These models are given in fermat socp and fermat2 socp. We have also

created the variable perturbed problems fermat socp vareps and fermat2 socp vareps,

and the ratio reformulations fermat socp ratio and fermat2 socp ratio. All three of

the general purpose solvers can solve fermat and its variants quite efficiently, but loqo

and snopt cannot solve the nonsmooth SOCP. loqo also fails on the ratio reformulation

of the nonsmooth problem, unable to achieve more than 2 digits of agreement between the

primal and dual solutions. SeDuMi solves both SOCPs quickly.
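The perturbation idea behind the vareps models can be illustrated in a few lines: each Euclidean norm is replaced by the smooth approximation sqrt(||·||² + ε²), which is differentiable even where the unperturbed norm is not. The sketch below uses an illustrative triangle and a fixed ε, not the thesis's variable perturbation or test data.

```python
import numpy as np
from scipy.optimize import minimize

# Fermat's problem: find the point minimizing the sum of distances to the
# vertices of a triangle. The plain Euclidean norm is nonsmooth wherever the
# point coincides with a vertex; perturbing each norm to sqrt(||.||^2 + eps^2)
# makes the objective differentiable everywhere.
verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
eps = 1e-8

def smoothed_obj(x):
    d2 = ((x - verts) ** 2).sum(axis=1)   # squared distances to the vertices
    return np.sqrt(d2 + eps**2).sum()     # perturbed sum of distances

res = minimize(smoothed_obj, x0=np.array([0.3, 0.3]))
# By symmetry, the Fermat point of this triangle lies on the line y = x.
assert abs(res.x[0] - res.x[1]) < 1e-4
```

With ε this small, the perturbed optimum is indistinguishable from the true Fermat point at the accuracy levels discussed above.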

The second type of model that we have created is for the more general Euclidean single

facility location problem. We have the ampl model esfl to represent this problem. It is

solved efficiently by all three of the general purpose solvers. This problem is also not an

SOCP, but we can reformulate it as such. The ampl model esfl socp gives the Euclidean

single facility location problem in the form of an SOCP. We have also created the variable

perturbed model esfl socp vareps and the ratio reformulation esfl socp ratio. Again,

all three of the general purpose solvers can solve the SOCP and its variants, but knitro is


slow on the variable perturbed model and snopt is slow on the ratio reformulation. SeDuMi

solves the SOCP efficiently.

For the Euclidean single facility location problem, we can address the three questions

raised above:

(1) Original vs. SOCP Formulation: The original formulation is significantly better

than the SOCP formulation. Regardless of the number of existing facilities, the original formulation always has two variables and no constraints. In contrast, the SOCP has an added variable and constraint for each existing facility. Therefore, the difference between the runtimes is significant.

(2) General vs. Special Purpose Algorithm: The runtimes of loqo and SeDuMi are

comparable on the SOCP formulation, but the most efficient way to solve this

problem is to use a general purpose algorithm on the original problem.

(3) Reformulations: The two reformulations are equivalent.
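The size comparison in point (1) can be made concrete. The original formulation keeps two variables no matter how many existing facilities there are, while the SOCP adds a variable and a cone constraint per facility. The sketch below solves the original smooth formulation directly with a general-purpose optimizer; the facility locations are randomly generated for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Euclidean single facility location: place one new facility x in the plane to
# minimize the total distance to m existing facilities a_1, ..., a_m.
# The original NLP always has 2 variables and no constraints; the SOCP form
# needs an extra variable t_i and a cone constraint ||x - a_i|| <= t_i per
# existing facility.
rng = np.random.default_rng(1)
m = 50
a = rng.standard_normal((m, 2))

obj = lambda x: np.sqrt(((x - a) ** 2).sum(axis=1)).sum()
res = minimize(obj, x0=a.mean(axis=0))   # solve the original formulation

n_orig_vars, n_orig_cons = 2, 0          # independent of m
n_socp_vars, n_socp_cons = 2 + m, m      # grows with m
assert obj(res.x) <= obj(a.mean(axis=0))
```

The optimizer is computing the geometric median of the points, which is smooth as long as x does not coincide with one of the existing facilities.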

12.8. Euclidean Multiple Facility Location. We have also created an ampl model

for the Euclidean multiple facility location problem. This model is called emfl, and it

represents a nonsmooth problem. None of the three general purpose solvers can solve this

model. Because of the nonsmoothness of the original problem, we have also created a

variable perturbed version of our model, called emfl vareps. loqo and snopt can solve

this problem, both with a perturbation level of 10−16. knitro also solves this problem,

but it reaches a perturbation level of 10−14.

In standard form, the Euclidean multiple facility location problem is not an SOCP,

so we have formulated it as such in the ampl model emfl socp. Just like the original

problem, the SOCP model also suffers from nonsmoothness. None of the three general


purpose solvers can solve this problem, but SeDuMi can solve it very efficiently. We have

also created the variable perturbed problem emfl socp vareps and the ratio reformulation

model emfl socp ratio. loqo and snopt both solve the variable perturbed model, but

fail on the ratio reformulation. knitro goes to the iteration limit on all of the SOCP

variants.

For the Euclidean multiple facility location problem, we can address the three questions

raised above:

(1) Original vs. SOCP Formulation: The original formulation is far better than the SOCP

formulation. Like the single facility location problem, the size of the original

problem is much smaller than the SOCP.

(2) General vs. Special Purpose Algorithm: Due to the nonsmoothness, it is hard to

make this comparison. Nonetheless, the runtime difference between the general

purpose solver’s performance on the variable perturbation model and the special

purpose solver’s performance on the SOCP suggests that the special purpose solver

is more efficient on these problems. It should be noted, however, that the general

purpose solver is much faster on the original problem.

(3) Reformulations: The variable perturbation is better since the ratio reformulation

fails.

12.9. Steiner Points. For the Steiner points problem, we have created an ampl model

called steiner. In its original form, the Steiner points problem is nonsmooth because some

of the points collapse onto each other, so we have also created a variable perturbed version

of the problem in the ampl model steiner vareps. None of the three general purpose

solvers can solve the original problem, but they can all solve the variable perturbed model.


loqo and knitro obtain good perturbation levels of 10−16 and 10−14, respectively, whereas

snopt only gets to 10−4. The original problem is not an SOCP, but it can be

formulated as such. This reformulation is given in the ampl model steiner socp. Just like

the original problem, the SOCP formulation also suffers from nonsmoothness, and none of

the three general purpose solvers can solve this problem. SeDuMi can solve this problem

quickly. We have also created the variable perturbed SOCP in steiner socp vareps and

the ratio reformulation in steiner socp ratio. All of the general purpose solvers can

solve the variable perturbation model efficiently, with perturbation levels of 10−12. loqo

and knitro also perform well on the ratio reformulation model, but snopt cannot solve

that problem.

For the Steiner points problem, we can address the three questions raised above:

(1) Original vs. SOCP Formulation: The original formulation is better, even though

the small problem and the nonsmoothness of the Euclidean norm do not allow us

to see this difference reflected in the runtimes. However, the original formulation

is much smaller in size than the SOCP formulation. As the problem size grows,

this difference should become more pronounced.

(2) General vs. Special Purpose Algorithm: Due to the nonsmoothness, it is not pos-

sible to compare the two algorithms on the same problem, but the general purpose

algorithm on the variable perturbed model outperforms the special purpose algo-

rithm on the SOCP. One would expect, however, the special purpose algorithm to

become more efficient as the problem size increases, as was the case in the facility

location problems.

(3) Reformulations: The two reformulations are equivalent.


12.10. Minimal Surfaces. We have created an ampl model called minsurf for the

Minimal Surfaces problem. Since the problem is a convex NLP, loqo can solve it easily.

knitro and snopt can solve this problem as well, but they are much slower. In its original

form, the Minimal Surfaces problem is not an SOCP, but it can be formulated as such. This

formulation is given in the ampl model minsurf socp. All three of the general purpose

solvers can solve this problem, but SeDuMi is much faster than all of them. We have also

created the variable perturbed model minsurf socp vareps and the ratio reformulation

minsurf socp ratio. The general purpose solvers can solve these two problems, except

for knitro on the ratio reformulation, which goes to iteration limit. The perturbation

level is less than 10−12 for loqo and snopt, and 10−6 for knitro.

We can address the three questions raised above:

(1) Original vs. SOCP Formulation: The original formulation is better because it is

smaller and sparser.

(2) General vs. Special Purpose Algorithm: The special purpose algorithm does bet-

ter on the SOCP problem, but the general purpose algorithm solves the original

formulation faster.

(3) Reformulations: The ratio reformulation is sparser and faster than the variable

perturbed model.

12.11. Plastic Collapse. Finally, we have created an ampl model called plastic for

the Plastic Collapse problem. The data for it comes from the DIMACS test suite problem

nql30. loqo and knitro can solve this problem easily, but snopt is quite slow. SeDuMi,

on the other hand, runs into numerical problems around the original solution after 11.56

seconds. Since the original problem is an SOCP, we have also created a variable perturbed


model in plastic vareps and the ratio reformulation in plastic ratio. loqo can solve

the variable perturbed model for a perturbation of 10−14, but it is much slower than the original SOCP. The reason for this time difference is that the second-order cone constraint blocks are small: the addition of a variable to each block adds significantly more nonzeros to the Hessian. The ratio reformulation also leads to a denser Hessian,

although the primal ordering option in loqo is able to find a better ordering than it did

for the variable perturbed problem. knitro can solve the ratio reformulation problem quite

efficiently, but it fails on the variable perturbed model. snopt can solve both problems,

but it is very slow.

Since the original formulation of this problem is an SOCP, we can only address two of

the three questions raised above:

(1) General vs. Special Purpose Algorithm: The general purpose algorithm is better

on this problem. It attains an optimal solution with the default accuracy levels,

whereas the special purpose algorithm runs into numerical trouble.

(2) Reformulations: The ratio reformulation is better because the solver can find a

good ordering to solve this problem faster than the variable perturbed model.

In general, when a convex, non-SOCP, formulation of the problem exists, it is better

than the SOCP formulation. This difference is most pronounced in the facility location

problems, especially the Euclidean single facility location problem. The problem size of the

SOCP formulation continues to grow with the addition of more existing facilities, whereas

the original problem always has two variables and no constraints. Correspondingly, the

runtimes are significantly different as well.


Problem n m SOCP? LOQO KNITRO SNOPT SeDuMi

antenna 123 795 N 122.67 (IL) 1245.02 (n/a)

antenna socp 123 795 Y 142.10 (Time) 6407.91 64.4

antenna socp vareps 124 795 Y 189.40 (Time) 6327.38 (n/a)

antenna socp ratio 123 795 N 92.07 (Time) (IL) (n/a)

antenna L1 122 795 N (IL) (Time) (IL) (n/a)

antenna L1 vareps 123 795 N 274.02 (Time) (DI) (n/a)

antenna L1 socp 915 1855 Y (IL) (Time) (IL) 493.30

antenna L1 socp vareps 916 1588 Y 383.91 (Time) (Time) (n/a)

antenna L1 socp ratio 915 1588 N 399.35 (Time) (IL) (n/a)

antenna L2 122 840 N 89.83 (IL) 930.39 (n/a)

antenna L2 socp 123 841 Y 84.97 (Time) 2347.39 86.80

antenna L2 socp vareps 124 841 Y 86.49 (Time) 2889.21 (n/a)

antenna L2 socp ratio 123 841 N 97.16 (Time) 1515.40 (n/a)

grasp 3001 2006 Y (PI) (IL) (Time) 13.90

grasp vareps 3002 2006 Y 25.16 (IL) (Time) (n/a)

grasp ratio 3001 3006 N 16.40 (TR) (ER) (n/a)

fir 2501 243 N 72.74 271.84 18.67 (n/a)

fir socp 2502 244 Y 77.69 314.22 19.80 (FA)

fir socp vareps 2503 244 Y 90.73 384.59 23.27 (n/a)

fir socp ratio 2502 244 N 82.26 323.80 20.43 (n/a)

Table 1. Runtimes for models which can be formulated as SOCPs.


Problem n m SOCP? LOQO KNITRO SNOPT SeDuMi

optreward 520 22 N 2.62 0.93 1.09 (n/a)

optreward socp 520 22 Y 2.68 0.94 1.09 0.8

optreward socp vareps 521 22 Y 2.76 1.35 1.10 (n/a)

optreward socp ratio 520 22 N 1.48 0.92 1.09 (n/a)

structure 822 11224 N 9.35 29.99 332.25 (n/a)

structure2 822 5621 N 148.64 44.35 2782.75 (n/a)

structure socp 16881 6464 Y 345.73 (IL) (Time) 59.89

structure socp vareps 16882 6464 Y 318.29 (IL) (Time) (n/a)

structure socp ratio 16881 6464 N 301.94 (Time) (Time) (n/a)

springs 2998 1000 N 24.75 (IL) (Time) (n/a)

springs socp 2999 1001 Y 2657.33 (IL) (Time) (NP)

springs socp vareps 3000 1001 Y 7260.22 (IL) (Time) (n/a)

springs socp ratio 2999 1001 N 55.77 (IL) (Time) (n/a)

fermat 2 0 N 0.01 0.01 0.01 (n/a)

fermat socp 5 3 Y 0.01 0.02 0.01 0.29

fermat socp vareps 6 3 Y 0.02 0.03 0.01 (n/a)

fermat socp ratio 5 3 N 0.01 0.02 0.01 (n/a)

fermat2 2 0 N (IL) (TR) (CI) (n/a)

fermat2 vareps 3 0 N 0.01 0.04 0.01 (n/a)

fermat2 socp 5 3 Y (IL) 0.02 (CI) 0.31

fermat2 socp vareps 6 3 Y 0.02 0.03 0.01 (n/a)

fermat2 socp ratio 5 3 N (IL) 0.03 0.08 (n/a)


Problem n m SOCP? LOQO KNITRO SNOPT SeDuMi

esfl 2 0 N 0.35 0.37 0.13 (n/a)

esfl socp 5002 5000 Y 20.03 8.53 189.71 6.37

esfl socp vareps 5003 5000 Y 21.10 107.07 184.13 (n/a)

esfl socp ratio 5002 5000 N 20.94 21.06 7283.79 (n/a)

emfl 50 0 N (IL) (IL) (CI) (n/a)

emfl vareps 51 0 N 0.48 0.86 0.69 (n/a)

emfl socp 5350 5300 Y (IL) (IL) (CI) 13.24

emfl socp vareps 5351 5300 Y 58.06 (IL) 500.29 (n/a)

emfl socp ratio 5350 5300 N (IL) (IL) (UB) (n/a)

steiner 16 0 N (IL) (IL) (CI) (n/a)

steiner vareps 17 0 N 0.06 0.09 0.03 (n/a)

steiner socp 33 17 Y (IL) (TR) (CI) 0.37

steiner socp vareps 34 17 Y 0.05 0.17 0.19 (n/a)

steiner socp ratio 33 17 N 0.05 0.19 (CI) (n/a)

minsurf 961 0 N 0.87 13.99 186.27 (n/a)

minsurf socp 3009 2046 Y 9.90 76.76 5575.52 2.35

minsurf socp vareps 3010 2048 Y 14.53 132.84 6162.18 (n/a)

minsurf socp ratio 3009 2048 N 7.47 (IL) 9645.14 (n/a)

plastic 3601 3680 Y 35.31 46.78 2069.94 (NP)

plastic vareps 3602 3680 Y 121.69 (IL) 2239.33 (n/a)

plastic ratio 3601 3680 N 46.35 27.87 3080.22 (n/a)


On the SOCP formulations, the special purpose algorithm is better than the general

purpose algorithm, in general. Some of this difference is due to the sparsity structure of the

problems. On denser problems, loqo takes a long time to factor the reduced KKT matrix

and the iterations are slow because of the dense matrix. SeDuMi, on the other hand, does

not include the factorization time in reporting its runtime and uses the Goldfarb-Scheinberg

[32] splitting of dense columns. In general, however, loqo solves the original formulation

of the problem much faster than SeDuMi solves the SOCP problem.

There is no clear winner for the variable perturbation vs. ratio reformulation question.

In general, the variable perturbation seems to be more reliable; however, the ratio reformulation is faster on many problems. Therefore, the ratio reformulation should be used on

problems with large SOCP blocks. It has worked on all such problems and does so faster

than the variable perturbation and with a sparser Hessian. On the problems with small

blocks of size 2 or 3, the variable perturbation should be used.


CHAPTER 8

Extension to Semidefinite Programming.

This class of problems is especially important in a variety of engineering applications and

as relaxations of some NP-hard combinatorial problems. In standard form, a semidefinite

programming problem, or an SDP, is

(30)  minimize    b^T x
      subject to  C_i − ∑_{j=1}^{n} A_{ij} x_j ⪰ 0,   i = 1, . . . , m,

where x ∈ R^n, b ∈ R^n, and C_i ∈ R^{k_i × k_i} and A_{ij} ∈ R^{k_i × k_i}, i = 1, . . . , m, j = 1, . . . , n.

Letting

Z = C − ∑_{j=1}^{n} A_j x_j,

a constraint of the form

Z ⪰ 0

is called a (positive) semidefiniteness constraint, since the square matrix Z is constrained to be a symmetric, positive semidefinite matrix.

The SDP problem is a generalization of the linear programming problem, where the

nonnegativity constraints on vectors have been replaced by positive semidefiniteness con-

straints on matrices.



The form given by (30) is the dual form of the SDP. The primal problem is

(31)  minimize    ∑_{i=1}^{m} C_i • Y_i
      subject to  ∑_{i=1}^{m} A_{ij} • Y_i = −b_j,   j = 1, . . . , n,
                  Y_i ⪰ 0,   i = 1, . . . , m,

where A • B = Tr(A^T B). Note that since the positive semidefinite cone is self-dual, the

primal problem also has a positive semidefiniteness constraint. We generally prefer the

dual form over the primal because of sparsity. We now present some possible approaches

to writing an SDP as an NLP in standard form.

1. Characterizations of Semidefiniteness

There have been several other efforts to express SDPs as NLPs. In the early 1980s,

Fletcher’s study [25] of the educational testing problem as given in [24] resulted in various

characterizations for the normal cone and the set of feasible directions associated with a

semidefiniteness constraint. He then developed an SQP method to solve a certain class of

SDPs. His approach is similar to what we will be using, so more discussion of it will follow

later in the chapter.

Fletcher’s approach and ours mainly involve re-expressing the constraint in nonlinear form

to characterize the same feasible region. Another approach to expressing an SDP as an

NLP is to make a change of variables in the problem using a factorization of the semidefi-

nite matrix and eliminating the semidefiniteness constraint from the problem. Homer and

Peinado propose in [36] to use a factorization of the form V V^T for the change of variables. Recently, Burer et al. proposed in [14, 13] using the Cholesky factorization, LL^T, to transform a certain class of SDPs into NLPs. While these approaches work well for that


class of problems, the new quadratic equality constraints make the problem nonconvex, and it is not possible to apply them successfully to general SDPs due to multiple local optima. Nonetheless, the latter approach is efficient on that particular subset of SDPs. In fact, Burer et al. also propose in [57] using a low-rank factorization, which works even

better in practice.

For a complete survey of the state of the art in Semidefinite Programming as of 1996,

the reader is referred to Lewis and Overton [45].

As stated above, our approach to expressing an SDP as an NLP is to characterize the

feasible region of the semidefiniteness constraint using a set of nonlinear inequalities. We

present here three possible ways to do this:

1.1. Definition of Semidefiniteness—Semi-infinite LP. The most obvious char-

acterization is to simply use the definition of positive semidefiniteness:

(32)  ξ^T Z ξ ≥ 0   ∀ ξ ∈ R^n.

These constraints are linear inequalities. Using them would allow us to express an SDP

as a linear programming (LP) problem, which can be solved very efficiently using loqo

or any other solver that can handle LPs. However, there would be an uncountably infinite

number of constraints required to correctly characterize the feasible region. One thing that

can be done is to work with successive LP relaxations, each with a finite subset, say of size

2n^2, of constraints. After solving each LP relaxation, it is easy to generate a new ξ whose

constraint is violated by the optimal solution to the LP relaxation, add this to the finite

collection (perhaps deleting the most nonbinding of the current constraints at the same

time) and solve the new LP relaxation. Doing so would produce a sequence of LPs whose


solutions converge to the optimal solution to the SDP. This approach is promising, and it

will be a part of our future research.
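The cut-generation step described above can be sketched as follows: given a trial solution x of an LP relaxation, test Z(x) = C − ∑_j x_j A_j for semidefiniteness; if the smallest eigenvalue is negative, its eigenvector ξ yields a violated cut, since ξ'Z(x)ξ ≥ 0 is a linear inequality in x. The matrices and the trial point below are illustrative stand-ins.

```python
import numpy as np

# Given a trial x, check Z(x) = C - sum_j x_j A_j for positive
# semidefiniteness; the eigenvector of the most negative eigenvalue supplies
# the next linear cut for the LP relaxation.
rng = np.random.default_rng(0)
n, k = 3, 4
A = [(M + M.T) for M in rng.standard_normal((n, k, k))]   # symmetric A_j's
C = np.eye(k)

def Z(x):
    return C - sum(xj * Aj for xj, Aj in zip(x, A))

x = rng.standard_normal(n)            # stand-in for an LP-relaxation solution
w, V = np.linalg.eigh(Z(x))           # eigenvalues in ascending order
xi = V[:, 0]                          # eigenvector of the smallest eigenvalue
assert np.isclose(xi @ Z(x) @ xi, w[0])
if w[0] < 0:                          # Z(x) is not PSD at this x
    print("violated cut: add  xi' Z(x) xi >= 0  to the LP relaxation")
```

In a full cutting-plane loop, the new row would be appended to the LP constraint matrix and the relaxation re-solved until no negative eigenvalue remains.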

1.2. Eigenvalues—Nonsmooth NLP. Another possible way to characterize a pos-

itive semidefiniteness constraint on Z is to replace it with the condition that all of the

eigenvalues of Z be nonnegative. Let λj(Z) denote the j-th smallest eigenvalue of Z.

There are two ways to reformulate the semidefiniteness constraint on Z:

(1) Use a group of constraints that specify that each eigenvalue is nonnegative: λj(Z) ≥

0, j = 1, . . . , n.

(2) Use a single constraint that requires only the smallest eigenvalue to be nonnegative:

λ1(Z) ≥ 0.

These two approaches are equivalent. We provide here a small example illustrating how

they work. Consider the n = 2 case:

Z = [ z1  y
      y   z2 ].

In this case, the eigenvalues can be given explicitly:

λ1(Z) = ((z1 + z2) − √((z1 − z2)^2 + 4y^2))/2,
λ2(Z) = ((z1 + z2) + √((z1 − z2)^2 + 4y^2))/2.

Here, λ1(Z) is a strictly concave and λ2(Z) is a strictly convex function. Therefore, if

we considered using the reformulation (1) described above, by stipulating that all of the

eigenvalues be nonnegative, the reformulated problem would no longer be a convex NLP.

Reformulation (2), however, gives a convex NLP. A proof that λ1 is concave in general can

be found in [55]. Nevertheless, it is also easy to see that λ1(Z) is a nonsmooth function


when the argument of the square root vanishes, or when λ1(Z) = λ2(Z). Therefore, we are

not able to obtain a smooth, convex NLP from the eigenvalue reformulation of the SDP

constraint.
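The closed-form eigenvalues of the 2×2 case above can be checked against a numerical eigensolver; the point (z1, z2, y) below is an arbitrary illustrative choice.

```python
import numpy as np

# Check the explicit 2x2 eigenvalue formulas against numpy's eigensolver.
z1, z2, y = 3.0, 1.0, 0.7
Z = np.array([[z1, y], [y, z2]])

disc = np.sqrt((z1 - z2) ** 2 + 4 * y**2)
lam1 = ((z1 + z2) - disc) / 2      # smallest eigenvalue: concave, nonsmooth
lam2 = ((z1 + z2) + disc) / 2      # largest eigenvalue: convex

w = np.linalg.eigvalsh(Z)          # eigenvalues in ascending order
assert np.allclose(w, [lam1, lam2])
```

The nonsmoothness appears exactly where the discriminant vanishes, i.e. z1 = z2 and y = 0, which is where the two eigenvalues coincide.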

1.3. Factorization—Smooth Convex NLP. A third characterization of semidefi-

niteness uses a factorization of the Z matrix. For every positive semidefinite matrix Z,

there exists a lower triangular matrix L and a diagonal matrix D such that Z = LDLT . It

is well-known (see, e.g., [49]) that this factorization exists and D is unique on the domain

of positive semidefinite matrices (L is unique only when Z is positive definite).

Let dj denote the j-th diagonal element of D viewed as a function defined on the space

of symmetric positive semidefinite matrices. In fact, this factorization can be defined for

a larger domain, which is the set of symmetric matrices that have nonsingular principal submatrices. However, we will define dj(Z) = −∞ whenever Z is not positive semidefinite. Then, it is possible to show that each of the dj’s is concave everywhere and twice

continuously differentiable on the set of positive definite matrices.

If the nonlinear programming algorithm that is used to solve the SDP is initialized with

a positive definite matrix Z and is such that it preserves this property from one iteration

to the next, then the constraints

(33) dj(Z) ≥ 0, j = 1, 2, . . . , n,

can be used in place of Z ⪰ 0 to give a smooth convex NLP.

We will show in the next section that the dj’s are concave.
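The sign test behind the constraints (33) can be verified numerically with SciPy's LDL^T routine (which, on definite matrices, produces a genuinely diagonal D). The two matrices below are illustrative: the first is positive definite by Gershgorin's theorem, the second is obtained by shifting it past its largest eigenvalue.

```python
import numpy as np
from scipy.linalg import ldl

# For a definite matrix, Z = LDL' with a diagonal D, and Z is positive
# definite exactly when every diagonal element d_j of D is positive.
Zpd = np.array([[4.0, 1.0, 0.0],
                [1.0, 3.0, 1.0],
                [0.0, 1.0, 2.0]])   # positive definite (Gershgorin: eigs in [1, 5])
Znd = Zpd - 5.0 * np.eye(3)         # negative definite, so every d_j < 0

for Zmat, positive in [(Zpd, True), (Znd, False)]:
    _, D, _ = ldl(Zmat)
    d = np.diag(D)                  # the d_j's; D has no 2x2 blocks here
    assert ((d > 0).all()) == positive
```

For indefinite matrices, SciPy's Bunch–Kaufman pivoting may produce 2×2 blocks in D, which is one reason the thesis restricts attention to algorithms that keep Z positive definite across iterations.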



2. The Concavity of the dj’s.

In this section we show that each diagonal element of D in an LDL^T factorization of a positive definite matrix Z is concave in Z.

Let S^{n×n}_+ denote the set of symmetric positive semidefinite matrices and let S^{n×n}_{++} denote the subset of S^{n×n}_+ consisting of the positive definite matrices. The following results are well-known.

Theorem 1.

(1) The interior of S^{n×n}_+ is S^{n×n}_{++} (see, e.g., [9], p. 20).
(2) For every Z in S^{n×n}_+ there exists a unit triangular matrix L and a unique diagonal matrix D for which Z = LDL^T (see, e.g., [68]).

Fix j ∈ {1, 2, …, n} and let z, r, S denote the following blocks of Z, where z is the (j, j) entry, r is the column above it, and S is the leading (j−1) × (j−1) principal submatrix:

(34)    Z = [ S    r   ∗ ]
            [ r^T  z   ∗ ]
            [ ∗    ∗   ∗ ].

Theorem 2. For Z ∈ S^{n×n}_{++}, the matrix S is nonsingular and d_j(Z) = z − r^T S^{−1} r.

Proof. It is easy to check that every principal submatrix of a positive definite matrix is itself positive definite. Therefore S is positive definite and hence nonsingular. Now, factor Z into LDL^T and partition L and D as we did Z:

L = [ L_0   0   0 ]        D = [ D_0   0   0 ]
    [ u^T   1   0 ]            [ 0     d   0 ]
    [ ∗     ∗   ∗ ],           [ 0     0   ∗ ].

From Z = LDL^T, it is easy to check that

        S = L_0 D_0 L_0^T,
(35)    r = L_0 D_0 u,
(36)    z = d + u^T D_0 u.

From (35), we see that u = D_0^{−1} L_0^{−1} r. Substituting this expression into (36), we get

z = d + r^T L_0^{−T} D_0^{−1} L_0^{−1} r = d + r^T S^{−1} r,

so that d_j(Z) = d = z − r^T S^{−1} r.
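Theorem 2 is easy to check numerically. In this small sketch (our illustration, with the 2×2 system S s = r solved by Cramer's rule), the Schur complement z − r^T S^{−1} r agrees with the last pivot of the LDL^T factorization of a 3×3 positive definite matrix:

```python
# d_j(Z) = z - r^T S^{-1} r for j = n: S is the leading 2x2 block,
# r the off-diagonal part of the last column, z the (3,3) entry.
Z = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
S = [[Z[0][0], Z[0][1]], [Z[1][0], Z[1][1]]]
r = [Z[0][2], Z[1][2]]
z = Z[2][2]

# Solve S s = r by Cramer's rule (2x2 only).
det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
s = [(r[0] * S[1][1] - r[1] * S[0][1]) / det,
     (r[1] * S[0][0] - r[0] * S[1][0]) / det]
schur = z - (r[0] * s[0] + r[1] * s[1])

# The pivots of Z = LDL^T, computed step by step.
d1 = Z[0][0]
l10 = Z[1][0] / d1
l20 = Z[2][0] / d1
d2 = Z[1][1] - l10 ** 2 * d1
l21 = (Z[2][1] - l20 * l10 * d1) / d2
d3 = Z[2][2] - l20 ** 2 * d1 - l21 ** 2 * d2

print(abs(schur - d3) < 1e-12)  # → True
```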

Theorem 3. The function d : R × R^n × S^{n×n}_{++} → R defined by d(z, r, S) = z − r^T S^{−1} r is concave.

Proof. It suffices to show that f(r, S) = r^T S^{−1} r is convex in (r, S). To do so, we look at the first and second derivatives, which are easy to compute using the identity ∂S^{−1}/∂s_{ij} = −S^{−1} e_i e_j^T S^{−1}:

∂f/∂r_i = 2 (r^T S^{−1})_i,
∂²f/∂r_i ∂r_j = 2 S^{−1}_{ij},
∂f/∂s_{ij} = −(r^T S^{−1})_i (r^T S^{−1})_j,
∂²f/∂s_{ij} ∂s_{kl} = (r^T S^{−1})_k S^{−1}_{li} (r^T S^{−1})_j + (r^T S^{−1})_i S^{−1}_{jk} (r^T S^{−1})_l,
∂²f/∂r_i ∂s_{kl} = −(r^T S^{−1})_k S^{−1}_{li} − S^{−1}_{ik} (r^T S^{−1})_l.

Letting H denote the Hessian of f (with respect to each of the s_{ij}'s and the r_i's), we compute ξ^T H ξ using the above values, where ξ is a vector of the form

ξ = [ a_{11} a_{12} ⋯ a_{1n} ⋯ a_{n1} a_{n2} ⋯ a_{nn} b_1 b_2 ⋯ b_n ]^T.

Letting A = [a_{ij}] and b = [b_i], we get that

ξ^T H ξ = Σ_{i,j,k,l} a_{ij} ( r^T S^{−1} e_k e_l^T S^{−1} e_i e_j^T S^{−1} r + r^T S^{−1} e_i e_j^T S^{−1} e_k e_l^T S^{−1} r ) a_{kl}
        + Σ_{i,k,l} b_i ( −r^T S^{−1} e_k e_l^T S^{−1} e_i − e_i^T S^{−1} e_k e_l^T S^{−1} r ) a_{kl}
        + Σ_{i,j,l} a_{ij} ( −r^T S^{−1} e_i e_j^T S^{−1} e_l − e_l^T S^{−1} e_i e_j^T S^{−1} r ) b_l
        + Σ_{i,j} b_i ( e_j^T S^{−1} e_i + e_i^T S^{−1} e_j ) b_j
      = 2 ( r^T S^{−1} A S^{−1} A S^{−1} r − r^T S^{−1} A S^{−1} b − b^T S^{−1} A S^{−1} r + b^T S^{−1} b ).

Since Z is symmetric and positive definite, we can assume that A is symmetric too, and we get

ξ^T H ξ = 2 ( r^T S^{−1} A S^{−1/2} − b^T S^{−1/2} ) ( S^{−1/2} A S^{−1} r − S^{−1/2} b )
        = 2 ‖ S^{−1/2} ( A S^{−1} r − b ) ‖² ≥ 0.

Thus, H is positive semidefinite, and f is convex.

Remark. The expressions for the derivatives with respect to the diagonal elements of Z were also given by Fletcher in [25].

Since det(D) = det(LDL^T) = det(Z), it follows that −Σ_{j=1}^n log(d_j(Z)) is the usual self-concordant barrier function for SDP (see, e.g., [51]).
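This identity can be spot-checked: since L is unit triangular, det(Z) equals the product of the pivots, so −Σ_j log d_j(Z) = −log det(Z). A short Python check (ours) for a 3×3 positive definite matrix:

```python
import math

Z = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]

# Pivots of Z = LDL^T, computed step by step.
d1 = Z[0][0]
l10 = Z[1][0] / d1
l20 = Z[2][0] / d1
d2 = Z[1][1] - l10 ** 2 * d1
l21 = (Z[2][1] - l20 * l10 * d1) / d2
d3 = Z[2][2] - l20 ** 2 * d1 - l21 ** 2 * d2

# Determinant by cofactor expansion along the first row.
det = (Z[0][0] * (Z[1][1] * Z[2][2] - Z[1][2] * Z[2][1])
       - Z[0][1] * (Z[1][0] * Z[2][2] - Z[1][2] * Z[2][0])
       + Z[0][2] * (Z[1][0] * Z[2][1] - Z[1][1] * Z[2][0]))

barrier = -(math.log(d1) + math.log(d2) + math.log(d3))
print(abs(barrier + math.log(det)) < 1e-12)  # → True
```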

2.1. Step shortening. As stated above, the reformulated problem is convex and

smooth only in the set of positive definite matrices. Therefore, we need to start with

a strictly feasible initial solution and keep all of our iterates strictly feasible as well. Even

though it is hard to guarantee that a strictly feasible initial solution can be easily supplied in general nonlinear programming, it is actually fairly easy for our reformulation of the semidefiniteness constraints. All we need is to set Z equal to a diagonally dominant matrix whose diagonal entries are large enough to make it positive definite.

Once we start in the interior of the feasible region, we need to stay in that region for the

rest of our iterations. However, there is no guarantee that the algorithm will behave that

way. Since loqo is an infeasible interior-point algorithm, it is expected that the iterates

will leave the interior of the positive semidefinite cone eventually.


In order to see why the iterates become infeasible, let us observe how the infeasibility, denoted by ρ, behaves. As discussed before,

h_i(x) = w_i − ρ_i

for each constraint i. We want h_i(x) to start and remain strictly greater than 0, so it would suffice to show that ρ_i can start and remain nonpositive. In linear programming, ρ changes by a scaling factor from one iteration to the next, so if it starts nonpositive, it will stay nonpositive.

In our NLP, however, this does not hold. An algebraic rearrangement of the Newton equation associated with the i-th constraint yields

∇h_i(x)^T ∆x = ∆w_i + ρ_i.

Using the concavity of h_i, we bound the infeasibility at the next iterate, denoted ρ̃_i:

ρ̃_i = w_i + α ∆w_i − h_i(x + α ∆x)
    ≥ w_i + α ∆w_i − h_i(x) − α ∇h_i(x)^T ∆x
    = w_i − h_i(x) − α ( ∇h_i(x)^T ∆x − ∆w_i )
    = (1 − α) ρ_i.

Instead of ρ̃_i being equal to a scaling factor times ρ_i, it is only bounded below by that value. Therefore, even if ρ_i starts nonpositive, there is no guarantee that it will stay that way. In fact, computational experience shows that ρ_i does indeed go positive, even exceeding w_i to give a negative value for h_i.

There is something that can be done to prevent this, however. An analogy from the

rest of the algorithm is useful here. In our algorithm, we need to keep the slack variables


strictly positive at each iteration. This is exactly what we want for hi as well so that our

algorithm stays in the interior of the set of positive semidefinite matrices. We achieve the

preservation of the strict positivity of the slack variables by shortening our step, α, at each

iteration. Thus, we can use another step shortening procedure to do the same for the hi’s.

Since hi starts and remains positive, we can reset the slack variable wi to equal hi at each

iteration, also setting ρi = 0.
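A minimal Python sketch of this extra shortening loop (our toy illustration under hypothetical names, not the sdpsteplen routine of Appendix B): the steplength is halved until the trial matrix passes a positive definiteness test based on the LDL^T pivots.

```python
def is_positive_definite(Z):
    """LDL^T pivot test: symmetric Z is positive definite iff all pivots
    of Gaussian elimination (no pivoting needed) are strictly positive."""
    n = len(Z)
    A = [row[:] for row in Z]
    for j in range(n):
        if A[j][j] <= 0.0:
            return False
        for i in range(j + 1, n):
            m = A[i][j] / A[j][j]
            for k in range(j, n):
                A[i][k] -= m * A[j][k]
    return True

def shorten_step(Z, dZ, alpha, beta=0.5):
    """Cut alpha back until Z + alpha*dZ is still positive definite."""
    n = len(Z)
    while True:
        trial = [[Z[i][j] + alpha * dZ[i][j] for j in range(n)]
                 for i in range(n)]
        if is_positive_definite(trial):
            return alpha, trial
        alpha *= beta

# Z is well inside the cone, but the full step would leave it.
Z = [[2.0, 0.0], [0.0, 2.0]]
dZ = [[-3.0, 0.0], [0.0, 1.0]]
alpha, Znew = shorten_step(Z, dZ, 1.0)
print(alpha)  # → 0.5
```

After such a shortened step, the slack is reset to w_i = h_i so that ρ_i = 0, as described above.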

The extra step shortening allows us to start and remain in the set of positive definite

matrices. However, any time we shorten a step, we need to make sure that it does not

cause “jamming.” Jamming occurs when the iterates get stuck at a point and the steplength

keeps getting shortened so much that the algorithm cannot make any more progress toward

optimality. This phenomenon comes up in nonlinear programming sometimes, and we have

implemented in loqo a mechanism to “shift” the slack variables so that the algorithm

becomes unstuck and continues to make progress. However, we will show in the next

section that such a mechanism is not necessary for our semidefiniteness reformulation, and

in fact, we will not encounter the jamming phenomenon.

2.2. Jamming. A necessary anti-jamming condition is that each component of the

vector field of step directions is positive in a neighborhood of the set where the correspond-

ing component of the solution vector vanishes.

To state our result, we need to introduce some notation. In particular, it is convenient to let

Ĥ = H + λI.

We consider a point (x̄, w̄, ȳ) satisfying the following conditions:

(1) Nonnegativity: w̄ ≥ 0, ȳ ≥ 0.


(2) Strict complementarity: w̄ + ȳ > 0.

Of course, we are interested in a point where some of the w̄_i's vanish. Let U denote the set of constraint indices for which w̄_i vanishes and let B denote those for which it doesn't. Write A (and other matrices and vectors) in block form according to this partition:

A = [ B ]
    [ U ].

Matrix A and all other quantities are functions of the current point. We use the same letter with a bar over it to denote the value of these objects at the point (x̄, w̄, ȳ).

Theorem 2. If the point (x̄, w̄, ȳ) satisfies conditions (1) and (2), and Ū has full row rank, then ∆w has a continuous extension to this point and ∆w_U = µ Ȳ_U^{−1} e_U > 0 there.

Proof. The explicit formula for ∆w can be obtained by solving the KKT system given in Chapter 1, and it is as follows:

∆w = −A N^{−1} ∇f + µ A N^{−1} A^T W^{−1} e − (I − A N^{−1} A^T W^{−1} Y) ρ.

Thus, we see that

(37)    ∆w_U + ρ_U = −U N^{−1} ∇f(x) + U N^{−1} A^T W^{−1} (Yρ + µe).

To prove the theorem, we must analyze the limiting behavior of U N^{−1} and U N^{−1} A^T W^{−1}. Let

K = N − U^T W_U^{−1} Y_U U = Ĥ + B^T W_B^{−1} Y_B B.

Applying the Sherman–Morrison–Woodbury formula, we get

N^{−1} = (U^T W_U^{−1} Y_U U + K)^{−1}
       = K^{−1} − K^{−1} U^T (U K^{−1} U^T + W_U Y_U^{−1})^{−1} U K^{−1}.

Hence,

(38)    U N^{−1} = U K^{−1} − U K^{−1} U^T (U K^{−1} U^T + W_U Y_U^{−1})^{−1} U K^{−1}
               = W_U Y_U^{−1} (U K^{−1} U^T + W_U Y_U^{−1})^{−1} U K^{−1}.

From the definition of U, we have that W̄_U = 0. In addition, assumption (2) implies that Ȳ_U > 0, which then implies that Y_U^{−1} remains bounded in a neighborhood of (x̄, w̄, ȳ). Also, K̄ is positive definite since λ is chosen to make Ĥ positive definite. Hence, K̄ is nonsingular. The assumption that Ū has full row rank therefore implies that the following limit exists:

lim (U K^{−1} U^T + W_U Y_U^{−1})^{−1} U K^{−1} = (Ū K̄^{−1} Ū^T)^{−1} Ū K̄^{−1}.

Here and throughout this proof, all limits are understood to be taken as (x, w, y) approaches (x̄, w̄, ȳ). From the previous limit, we see that

lim U N^{−1} = 0.

It is easy to check that assumptions (1)–(2) imply that the terms multiplied by U N^{−1} on the first line of (37) remain bounded in the limit and therefore

(39)    lim −U N^{−1} ∇f(x) = 0.

Now, consider U N^{−1} A^T W^{−1}. Writing A and W in block form and using (38), we get

U N^{−1} A^T W^{−1} = W_U Y_U^{−1} (U K^{−1} U^T + W_U Y_U^{−1})^{−1} U K^{−1} [ B^T W_B^{−1}   U^T W_U^{−1} ].

The analysis of U N^{−1} in the previous paragraph applies to the first block of this block matrix and shows that it vanishes in the limit. The analysis of the second block is more delicate because of the W_U^{−1} factor:

W_U Y_U^{−1} (U K^{−1} U^T + W_U Y_U^{−1})^{−1} U K^{−1} U^T W_U^{−1}
    = W_U Y_U^{−1} ( I − (U K^{−1} U^T + W_U Y_U^{−1})^{−1} W_U Y_U^{−1} ) W_U^{−1}
    = ( I − W_U Y_U^{−1} (U K^{−1} U^T + W_U Y_U^{−1})^{−1} ) Y_U^{−1}
    = U K^{−1} U^T (U K^{−1} U^T + W_U Y_U^{−1})^{−1} Y_U^{−1}.

From this last expression, we see that the limiting value for the second block is just Ȳ_U^{−1}. Putting the two blocks together, we get that

lim U N^{−1} A^T W^{−1} = [ 0   Ȳ_U^{−1} ]

and hence that

(40)    lim U N^{−1} A^T W^{−1} (Yρ + µe) = ρ̄_U + µ Ȳ_U^{−1} e_U.

Combining (39) and (40), we see that

lim ∆w_U = µ Ȳ_U^{−1} e_U.

It follows from Theorem 2 that the interior-point algorithm will not jam provided that the sequence stays away from boundary points where complementary pairs of variables both vanish.

Therefore, we now have a reformulation of the SDP problem that is a smooth and convex NLP. With the minor modification of an extra step-shortening in our algorithm, we can use the general-purpose interior-point method to solve this problem without worrying about jamming. In the next chapter, we present our implementations of the reformulation in ampl and of the step-shortening in loqo. We also present results from numerical testing with loqo, our general-purpose interior-point algorithm for nonlinear programming.


CHAPTER 9

Numerical Results for Semidefinite Programming

In this chapter, we will present results from numerical testing with the reformulation

approach presented in the previous chapter. Our goal is to integrate SDP problems into

the context of smooth, convex NLPs. As always, we will formulate these problems using

ampl and solve them using a general purpose nonlinear programming solver. We start by

presenting the implementation of our reformulation in ampl.

1. The AMPL interface.

As discussed in the previous chapter, our approach to reformulating the SDP problem is to replace the semidefiniteness constraint of the form

Z ⪰ 0,

where Z is a symmetric n × n matrix, with n constraints of the form

d_j(Z) = z − r^T S^{−1} r ≥ 0,

where d_j is the j-th entry in the diagonal matrix D of the LDL^T factorization of Z, and z, r, and S are as described in (34). Because of the matrix inverse in the expression, it is not easy for the user to formulate a general ampl model for this problem. In fact, it is best to hide the details of the reformulation altogether from the user.


To this end, we have created a user-defined function in ampl called kth_diag. Using this function, the user can just specify constraints of the form

d_j(Z) ≥ 0,

and the definition of d_j(Z) is performed internally. We have created a generic ampl model

for this problem, called sdp.mod. This model is too general for most of our applications,

but it illustrates the use of the user-defined function very well. The ampl model and the

ampl user-defined function are given in Appendix B.

The user-defined function defines function, gradient and Hessian evaluations as de-

scribed in the previous chapter. Because this function depends on a certain order in which

the variables and constraints are stored, the presolve option in ampl has to be set to 0 to

turn off preprocessing.

Another important issue to address is the initialization of the matrix Z. In our algorithm, we need to initialize Z to be a positive definite matrix, and using a diagonally dominant matrix in our initialization suffices. Because we are using an intermediate variable in our ampl model to represent the positive semidefinite matrix, we can simply initialize it with the identity matrix.

2. Algorithm modification for step shortening.

As described in the previous chapter, the interior-point algorithm requires an extra

step shortening to keep the iterates strictly feasible in order to guarantee a smooth, convex

problem. To this end, we have implemented an option in loqo called sdp. When this

option is turned on, loqo calls a subroutine called sdpsteplen (given in Appendix B) each

time a new step direction is computed. This subroutine factors Z, the matrix at the new


point, to determine if it is still positive definite. If it is not, the steplength is repeatedly

shortened until we get to a new point that yields a positive definite matrix.

Also, if the sdp option is turned on, at the beginning of each iteration, we reset the

slack variables so that the infeasibility, ρ, is zero.

In the modification to the loqo algorithm, we need to be conscious of possible efficiency

improvements as well. One such improvement is that we use a right-looking variant of

the numerical factorization algorithm, which allows us to detect nonpositive entries in D

without having to complete the whole factorization.
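The early-detection idea can be sketched as follows (illustrative Python only, not loqo's actual right-looking code): the elimination sweep returns as soon as a nonpositive pivot appears, so an indefinite matrix is rejected after only a few columns.

```python
def ldl_early_abort(Z):
    """Right-looking elimination that stops at the first nonpositive
    pivot; returns (positive_definite, columns_eliminated)."""
    n = len(Z)
    A = [row[:] for row in Z]
    for j in range(n):
        if A[j][j] <= 0.0:
            return False, j  # no need to finish the factorization
        for i in range(j + 1, n):
            m = A[i][j] / A[j][j]
            for k in range(j, n):
                A[i][k] -= m * A[j][k]
    return True, n

ok, cols = ldl_early_abort([[1.0, 2.0, 0.0],
                            [2.0, 1.0, 0.0],
                            [0.0, 0.0, 5.0]])
print(ok, cols)  # → False 1
```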

Next, we discuss several SDP applications that we will use in numerical testing.

3. Applications.

Semidefinite programming problems arise frequently in engineering and as relaxations

of combinatorial optimization problems. We present some examples here.

3.1. Max-Cut Problem. We are given an undirected graph G = (V,E), where V is

the set of vertices and E is the set of edges. Each edge E_ij, which connects vertices i and j, has a weight w_ij associated with it. The maximum cut (max-cut) problem is to find a subset S ⊂ V such that the edges that run between S and S^c, the complement of S, have maximum total weight. The combinatorial optimization model associated with this problem is:

maximize    (1/2) Σ_{E_ij ∈ E} w_ij (1 − y_i y_j)
subject to  y_i = 1 if i ∈ S,  y_i = −1 if i ∉ S.


By eliminating the constant term in the objective function, we can rewrite the problem

as:

minimize    Σ_{E_ij ∈ E} w_ij y_i y_j
subject to  |y_i| = 1,  i ∈ V.

This problem can be expressed as a matrix optimization problem as follows:

minimize    (1/2) W • X
subject to  diag(X) = e
            rank(X) = 1
            X ⪰ 0,

where W = [w_ij], X is a |V| × |V| matrix, diag(X) ∈ R^{|V|} is the vector consisting of the diagonal elements of X, and e ∈ R^{|V|} is the vector of all ones. In the objective function, W • X = Σ_{i,j=1}^{|V|} w_ij x_ij.

The combinatorial optimization problem is NP-complete. In fact, it is NP-complete

even if all of the edge weights are equal to 1. The matrix optimization problem has a

constraint that the matrix X has a rank of one. Without this constraint, the problem

would be tractable. Therefore, various relaxations of the problem are used to obtain an upper bound on the optimal value. These relaxations are studied in great detail by

Goemans and Williamson in [31], including one that leads to a version of 3.1 without the

rank constraint.

One relaxation can be obtained by replacing each y_i with a unit vector v_i in R^{|V|}. The resulting problem becomes:

minimize    Σ_{E_ij ∈ E} w_ij v_i · v_j
subject to  ‖v_i‖ = 1,  i ∈ V,


where v_i · v_j is the dot product of v_i and v_j. These vectors can be grouped into an n × n matrix B, and the objective function can be written as

minimize    (1/2) W • (B^T B),

where W = [w_ij] as before.

We know that a matrix X is positive semidefinite if and only if there exists a matrix B such that X = B^T B. We can use this property to replace B^T B with X, so that the resulting objective function is linear. The resulting problem is

minimize    (1/2) W • X
subject to  diag(X) = e
            X ⪰ 0.

This new model does not have the rank constraint, and it is an SDP in primal form. It is known as the SDP relaxation of the max-cut problem. Since we are solving SDPs in dual form, we present the dual formulation as well:

minimize    Σ_{i ∈ V} γ_i
subject to  (1/2) W + Diag(γ) ⪰ 0,

where Diag(γ) is the diagonal matrix whose i-th entry is γ_i. This is the formulation that we will use in our numerical testing.
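As a toy illustration (our example, not part of the thesis test set), feasibility of a particular γ for this dual can be checked with the same LDL^T pivot test used for the reformulated constraints. Here the graph is a 4-cycle with unit weights, and γ is chosen by strict diagonal dominance, which guarantees feasibility:

```python
def is_positive_definite(Z):
    # LDL^T pivot test: all elimination pivots strictly positive.
    n = len(Z)
    A = [row[:] for row in Z]
    for j in range(n):
        if A[j][j] <= 0.0:
            return False
        for i in range(j + 1, n):
            m = A[i][j] / A[j][j]
            for k in range(j, n):
                A[i][k] -= m * A[j][k]
    return True

# Unit-weight 4-cycle: edges (0,1), (1,2), (2,3), (0,3).
n = 4
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
W = [[0.0] * n for _ in range(n)]
for i, j in edges:
    W[i][j] = W[j][i] = 1.0

# gamma_i > sum_j |w_ij| / 2 makes (1/2) W + Diag(gamma) strictly
# diagonally dominant, hence positive definite and dual feasible.
gamma = [sum(abs(wij) for wij in W[i]) / 2.0 + 1e-6 for i in range(n)]
M = [[W[i][j] / 2.0 + (gamma[i] if i == j else 0.0) for j in range(n)]
     for i in range(n)]
print(is_positive_definite(M))  # → True
```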

3.2. Min-Max Eigenvalue Problem. We are given an n×n symmetric matrix A(x),

which depends affinely on x ∈ Rk: A(x) = A0 +A1x1 + . . .+Akxk. Here, Ai are symmetric

n × n matrices. We wish to minimize the largest eigenvalue of A(x). This problem has

been studied by Overton [45].


We can formulate this problem as an SDP:

minimize    λ
subject to  λI − A(x) ⪰ 0,

where λ ∈ R and x ∈ R^k are the problem variables. Since we defined A(x) to be an affine function, this is an SDP in the standard dual form that we have considered so far.

3.3. Pattern Separation. Given two sets of points {x_1, …, x_K} and {y_1, …, y_L} in R^p, it may be possible to find a hyperplane to separate them. This hyperplane would be defined by the equation a^T x + b = 0, and a ∈ R^p and b ∈ R would have the properties that

a^T x_i + b ≤ 0, i = 1, …, K,   and   a^T y_j + b ≥ 0, j = 1, …, L.

Sometimes, however, it may not be possible to find such a hyperplane. One choice is to find a quadratic surface to divide the two sets of points. Such a surface would be defined by A ∈ R^{p×p}, b ∈ R^p, and c ∈ R such that

x_i^T A x_i + b^T x_i + c ≤ 0, i = 1, …, K,   and   y_j^T A y_j + b^T y_j + c ≥ 0, j = 1, …, L.

Both of the problems given above are linear feasibility problems. However, if we stipulate that the quadratic surface separating the two sets is an ellipsoid that contains all of the points {x_1, …, x_K} and none of {y_1, …, y_L}, then we need to add a positive semidefiniteness constraint on the matrix A.

An optimization problem in this context would be one that tries to find the most spherical ellipsoid to separate the two sets of points. Thus, the ratio of the larger semi-axis to the smaller one would be minimized. The ellipsoid is a sphere when this ratio is equal to 1. The problem, then, is

minimize    λ
subject to  x_i^T A x_i + b^T x_i + c ≤ 0,  i = 1, …, K
            y_j^T A y_j + b^T y_j + c ≥ 0,  j = 1, …, L
            λI ⪰ A ⪰ I.

The last constraint can be broken up and the problem can be reformulated as follows:

minimize    λ
subject to  x_i^T A x_i + b^T x_i + c ≤ 0,  i = 1, …, K
            y_j^T A y_j + b^T y_j + c ≥ 0,  j = 1, …, L
            λI − A ⪰ 0
            A − I ⪰ 0
            λ ≥ 1.

This is an SDP in dual form.

3.4. Statistics and the Educational Testing Problem. Let x ∈ R^p be a random vector with mean x̄ and covariance matrix Σ̄, and let y ∈ R^p be observed in a large number of samples such that y = x + n. Here, n is random noise with a zero mean and an unknown but diagonal covariance matrix D. We should also note that x and n are not correlated.

If we denote the covariance matrix of y by Σ, then Σ = Σ̄ + D. We do not know the value of Σ̄, but it lies in the set

{Σ − D : Σ − D ⪰ 0, D ⪰ 0, D diagonal}.

Using this set, we can evaluate bounds for linear functions of Σ̄.


In the Educational Testing Problem described in [24], Fletcher defines y as the scores of

a random student on a series of p tests. The test is considered to be reliable if the variance

of the total score of the random student is close to the variance of the total scores over the

whole population. In other words, one would like the ratio

e^T Σ̄ e / e^T Σ e

to be as close to 1 as possible. We can compute a lower bound for this ratio by solving the optimization problem:

minimize    Σ_{i=1}^p d_i
subject to  Σ − Diag(d) ⪰ 0
            d ≥ 0.

This is an SDP in the dual form.

3.5. Lovasz Theta Function. Let G = (V,E) be a simple, undirected graph. A

stable set S in V is a set of mutually nonadjacent vertices. An important NP-hard problem

in graph theory is to find the stability number of G, denoted by α(G), which is the size (or

cardinality) of the largest stable set of G.

Lovasz gave an upper bound for α(G) in [47]. This upper bound is the solution of the optimization problem known as the Lovasz SDP:

maximize    e^T X e
subject to  tr(X) = 1
            X_ij = 0,  ∀(i, j) ∈ E
            X ⪰ 0.


This problem is in primal form. Its dual is:

maximize    λ
subject to  −A − λI − Σ_{(i,j)∈E} B_ij v_ij ⪰ 0,

where A is the |V| × |V| matrix of all ones, and B_ij is the |V| × |V| matrix of all zeros, except for a 1 corresponding to the edge between vertices i and j.

We will use the dual form of this problem in our numerical testing.

3.6. Nuffield Economic Model. The Revelation Principle in economics states that there exist games with incomplete information where the players act simultaneously, and

the payoffs are such that it is in the best interest of the players to reveal the truth about

their private preferences rather than mimic someone else. Consider a company with two

products. The company would prefer that the prices on both products be higher, but the

demands of the consumers may go down when the prices go up. So what the company

needs to do is offer a high incentive for high prices and make sure that consumers who

would privately prefer high prices are sufficiently rewarded. Common incentives in the

marketplace today are additional products given away for free, better service, or better

store hours and locations for retail companies.

Let the incentive provided to the consumer from the company be denoted by v(x, y),

if the consumer spends x dollars on good 1 and y dollars on good 2. Let (x, y) ∈ Ω =

[a, a + 1] × [a, a + 1]. We would like to maximize the consumer surplus in a way that rewards higher prices with a higher incentive. We use the following objective


function:

maximize    (a+1) ∫_a^{a+1} v(a+1, y) dy + (a+1) ∫_a^{a+1} v(x, a+1) dx
            − a ∫_a^{a+1} v(a, y) dy − a ∫_a^{a+1} v(x, a) dx
            − 3 ∫_a^{a+1} ∫_a^{a+1} v(x, y) dx dy.

The constraints in the problem support the problem description by posing conditions on

the incentives. First of all, we assume that the partial derivatives of v are positive; that is,

incentive increases with price. Similarly, v is a convex function, as expressed by the positive semidefiniteness constraint ∇²v ⪰ 0. We also assume that v(a, a) = 0. This constraint is

called the individual rationality constraint, in that the consumer has no incentive to stay

at the price pair (a, a), but is free to participate in or leave the system.

There are two other constraints in this problem. The symmetry constraint v(x, y) =

v(y, x) means that, at an optimum, we cannot spend less money on one product to spend

more on the other in order to maximize our surplus. Also, we only allow for a total change

of 1 unit in the incentive v.

The optimization problem can thus be expressed as:

maximize    (a+1) ∫_a^{a+1} v(a+1, y) dy + (a+1) ∫_a^{a+1} v(x, a+1) dx
            − a ∫_a^{a+1} v(a, y) dy − a ∫_a^{a+1} v(x, a) dx
            − 3 ∫_a^{a+1} ∫_a^{a+1} v(x, y) dx dy
subject to  ∇_x v(x, y) ≥ 0,  ∇_y v(x, y) ≥ 0
            ∇² v(x, y) ⪰ 0
            v(a, a) = 0
            v(x, y) = v(y, x),  ∀(x, y) ∈ Ω
            ∇_x v(x, y) + ∇_y v(x, y) = 1.


To solve this problem, we have discretized Ω and used finite differences to evaluate the integrals. The convexity constraints were expressed as

∇²_{xx} v(x, y) ≥ 0   and   ∇²_{yy} v(x, y) − ( ∇²_{xy} v(x, y) )² / ∇²_{xx} v(x, y) ≥ 0,

which corresponds exactly to our reformulation of positive semidefiniteness constraints.
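For a 2×2 Hessian these two scalar constraints are precisely the pivots d_1 and d_2 of its LDL^T factorization, which is what ties the discretized model back to the reformulation of the previous chapter. A small check (ours):

```python
# H = [[hxx, hxy], [hxy, hyy]] has LDL^T pivots
#   d1 = hxx  and  d2 = hyy - hxy**2 / hxx,
# exactly the two scalar convexity constraints above.
hxx, hxy, hyy = 2.0, 0.5, 1.0
d1 = hxx
d2 = hyy - hxy ** 2 / hxx
print(d1 >= 0 and d2 >= 0)  # → True: this H is positive semidefinite
```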

3.7. Truss Design Problem. The original form of the truss design problem given

in Chapter 6 had the inverse of a matrix of variables in the objective function, thereby making it very hard to solve. As stated there, very efficient linear and convex reformulations of this problem exist, and an SOCP reformulation was also presented in the same chapter. The matrix inverse can also be eliminated using an SDP reformulation.

The problem in its original form was

(41)    minimize    f^T K(x)^{−1} f
        subject to  Σ_e l_e x_e ≤ v
                    x_e ≥ 0, for all e.

The problem variables are x. We can introduce a new variable t into the problem to bring the matrix inverse into a constraint:

(42)    minimize    t
        subject to  f^T K(x)^{−1} f ≤ t
                    Σ_e l_e x_e ≤ v
                    x_e ≥ 0, for all e.


Now this new constraint can be reformulated as well:

(43)    minimize    t
        subject to  [ K(x)  f ]
                    [ f^T   t ]  ⪰ 0
                    Σ_e l_e x_e ≤ v
                    x_e ≥ 0, for all e.

This is an SDP in dual form.
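The Schur-complement equivalence behind this step, namely that for positive definite K(x) the block matrix is positive semidefinite exactly when t ≥ f^T K(x)^{−1} f, can be spot-checked numerically. The following sketch (ours, with a diagonal K so that K^{−1}f is immediate) tests a t just above and just below the compliance value:

```python
def ldl_pivots(Z):
    """Pivots of Gaussian elimination on symmetric Z (stops early at a
    zero pivot, which does not occur in the cases tested below)."""
    n = len(Z)
    A = [row[:] for row in Z]
    piv = []
    for j in range(n):
        piv.append(A[j][j])
        if A[j][j] == 0.0:
            break
        for i in range(j + 1, n):
            m = A[i][j] / A[j][j]
            for k in range(j, n):
                A[i][k] -= m * A[j][k]
    return piv

K = [[2.0, 0.0], [0.0, 1.0]]
f = [2.0, 1.0]
s = [f[0] / K[0][0], f[1] / K[1][1]]     # K^{-1} f (K is diagonal)
compliance = f[0] * s[0] + f[1] * s[1]   # f^T K^{-1} f = 3.0

for t in (compliance + 0.5, compliance - 0.5):
    block = [[K[0][0], K[0][1], f[0]],
             [K[1][0], K[1][1], f[1]],
             [f[0],    f[1],    t]]
    print(t, all(p >= 0 for p in ldl_pivots(block)))
# → 3.5 True  (feasible for (43))
# → 2.5 False (violates t >= f^T K^{-1} f)
```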

4. Results of Numerical Testing.

Since our goal is to solve SDPs as smooth convex NLPs, we have tested our approach

using our interior-point solver loqo version 5.08. We have modified the solver to shorten

steps to stay strictly feasible and to reset the slacks at each iteration. loqo was called from ampl version 20000601. We have also implemented the user-defined function kth_diag for ampl to recognize reformulated SDP constraints.

The Hessian resulting from our reformulation of the problem is usually dense. Therefore, we do not expect loqo to perform well on the reformulated SDPs when the problem size is large; in fact, computing and storing the Hessian can become quite a challenge for loqo. On such problems, one would expect a first-order algorithm to perform better.

The default options for loqo were taken to be 8 digits of agreement between the

primal and dual solutions and an infeasibility tolerance of 10^{−7}. For some of the models,

however, it was not possible to reach such levels of accuracy. We will note those models

and the corresponding tuning parameters in the following discussion. We have also turned on the primal ordering option for all models, since it allows for a better symbolic Cholesky factorization.

Problem     Variables  Constraints  Iterations  Runtime
maxcut         135         135          44        63.30
minmaxeig       66          65          46         4.41
ellipsoid       26          38          21         0.05
educate         14          14          15         0.03
lovasz          62          65          26         2.21
nuffield      2381        7919          75        31.86

Table 1. Iteration counts and runtimes for semidefinite programming models from various application areas.

For the Max-Cut, Min-Max Eigenvalue, Pattern Separation, Educational Testing, and

Lovasz Theta function problems, we created ampl models with random data. All of these

models were solved easily by loqo to the default accuracy levels.

The truss topology design models from the SDPLib test suite [10] exhibited problems

when trying to achieve dual feasibility. For this reason, we have tuned the accuracy levels

for these problems. The column sf in Table 2 refers to the number of digits of agreement

between the primal and the dual solutions, and feas refers to the primal and dual feasibility

levels.


Problem  Variables  Constraints  iter    time   sf  feas
truss1       25         32         30     0.07   9  10^{−10}
truss2      389        464        192     8.33   5  10^{−5}
truss3      118        122        176     3.29   8  10^{−5}
truss4       49         56         39     0.15  12  10^{−8}
truss6     2024       2147       2867   531.97   4  10^{−4}
truss7      537        752       1500    73.14   3  10^{−4}

Table 2. Iteration counts and runtimes for small truss topology problems from the SDPLib test suite.


CHAPTER 10

Future Research Directions.

Even though interior-point methods have been a fruitful topic of research in the last two
decades, there is still much work to be done on algorithmic improvements and on extensions
to other problem classes. On the algorithmic side, two main questions remain open:

(1) How can we reliably detect infeasible and unbounded problems in interior-point

methods for nonlinear programming?

(2) How can we address the numerical algebra issues that arise from ill-conditioned or

badly-scaled problems?

The first question can be addressed using ideas from our filter method - merit function
hybrid #3. Since we consider improvements to the barrier objective and to feasibility
separately, we can identify iterates that fail to improve primal feasibility while the dual
variables approach infinity. This, of course, only identifies a primal infeasible-dual
unbounded problem, but dual infeasible-primal unbounded and primal infeasible-dual
infeasible problems may be identified in a similar fashion.

Our filter method - merit function hybrid #3 can also be helpful in addressing the second
question. For example, on a problem that is numerically unstable at or around the optimal
solution, we may run into too many steplength cutbacks because the method's acceptance
criteria cannot be satisfied in the Armijo sense. One can also observe cases when


a small step that slightly improves the primal barrier objective causes a large change in the

dual problem.
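To make "the Armijo sense" concrete, here is a minimal backtracking steplength sketch; the function name and constants are illustrative, not the actual loqo implementation.

```python
import numpy as np

# Illustrative Armijo backtracking (not loqo's actual code): accept a
# steplength only if it yields sufficient decrease of a merit value;
# each rejection is one "steplength cutback".
def armijo_steplength(merit, x, dx, slope, beta=0.5, c=1e-4, max_cutbacks=30):
    # slope: directional derivative of the merit function along dx (< 0)
    alpha, m0 = 1.0, merit(x)
    for _ in range(max_cutbacks):
        if merit(x + alpha * dx) <= m0 + c * alpha * slope:
            return alpha
        alpha *= beta                # steplength cutback
    return None                      # too many cutbacks: numerical trouble

merit = lambda x: float(x @ x)       # simple quadratic merit function
x = np.array([1.0, 1.0])
dx = -x                              # descent direction
slope = float(2.0 * x @ dx)          # directional derivative = -4
print(armijo_steplength(merit, x, dx, slope))  # 1.0: full step accepted
```

When the merit value cannot be decreased at any reasonable steplength, the cutback loop exhausts its budget, which is precisely the failure mode described above.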

There is also more work that needs to be done to apply interior-point method techniques

to classes of optimization problems that do not belong to the family of smooth nonlinear

programming problems. First, there are good theoretical results presented by Nesterov and

Nemirovskii [52] for the runtime complexity of the special purpose SDP algorithms. Our

approach, in general, is equivalent to theirs, but the use of slacks in our barrier objective

creates the need for a new complexity analysis for our reformulation. To see why the slacks

would make a difference, let us examine the approach of using the nonlinear functions
d_j(Z) directly in the barrier objective. The resulting barrier problem for SDP is:

    minimize  bᵀy − µ Σ_j log d_j(Z),

where Z = C − Σ_i A_i y_i. But Σ_j log d_j(Z) = log det Z, and so this barrier problem
matches precisely the one used in the special-purpose interior-point methods for SDP. For
a general NLP, the log-barrier problem associated with this approach is:

    minimize  f(x) − µ Σ_i log h_i(x).

The first-order optimality conditions then are:

    ∇f(x) − µ Σ_i ∇h_i(x) / h_i(x) = 0.

Applying Newton's method to the first-order optimality conditions, we get the following
system for ∆x:

(44)    ( ∇²f(x) − µ Σ_i [ h_i(x)∇²h_i(x) − ∇h_i(x)∇h_i(x)ᵀ ] / h_i(x)² ) ∆x
            = −∇f(x) + µ Σ_i ∇h_i(x) / h_i(x).
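As a small numerical sketch of system (44), the following computes one Newton direction of the slack-free log barrier for a toy problem of our own choosing (minimize x₁² + x₂² subject to x₁ + x₂ − 1 ≥ 0); in practice a line search must then keep h(x + α∆x) > 0.

```python
import numpy as np

# Toy problem (our illustration): minimize x1^2 + x2^2
# subject to h(x) = x1 + x2 - 1 >= 0, via the slack-free log barrier.
mu = 0.1
x = np.array([1.0, 1.0])             # strictly feasible start: h(x) = 1 > 0

gf = lambda x: 2.0 * x               # gradient of f
Hf = lambda x: 2.0 * np.eye(2)       # Hessian of f
h  = lambda x: x[0] + x[1] - 1.0     # single inequality constraint
gh = lambda x: np.array([1.0, 1.0])  # its gradient
Hh = lambda x: np.zeros((2, 2))      # its Hessian (constraint is linear)

# Assemble and solve system (44) for the Newton direction dx.
hx, g = h(x), gh(x)
M = Hf(x) - mu * (hx * Hh(x) - np.outer(g, g)) / hx**2
rhs = -gf(x) + mu * g / hx
dx = np.linalg.solve(M, rhs)
print(dx)                            # symmetric descent direction
```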


Now, consider what happens if we add slack variables before writing the barrier problem.
In this case, the barrier problem is

    minimize    f(x) − µ Σ_i log w_i
    subject to  h_i(x) − w_i = 0,   i ∈ I.

Using y_i's to denote the Lagrange multipliers, the first-order optimality conditions are:

    ∇f(x) − Σ_i y_i ∇h_i(x) = 0
    −µW⁻¹e + y = 0
    −h(x) + w = 0.

Applying Newton's method to these equations, we get the following KKT system:

(45)    ( ∇²f(x) − Σ_i y_i ∇²h_i(x) ) ∆x − Σ_i ∇h_i(x) ∆y_i = −∇f(x) + Σ_i y_i ∇h_i(x)
(46)    µW⁻²∆w + ∆y = µW⁻¹e − y
(47)    −∇h(x)∆x + ∆w = h(x) − w.

It can be shown that if w = h(x) and y_i w_i = µ for i ∈ I, then the step direction ∆x
defined by (45)–(47) coincides with the step direction defined by (44). Although neither
assumption would be expected to hold in general, we have modified the interior-point
method when we apply it to SDP in such a way that the first assumption, w = h(x), does

indeed hold. But there is no reason for the second one to be satisfied. Therefore, a new

complexity analysis for our approach needs to be conducted, and this analysis may shed

light on the effect of slack variables on the algorithm’s performance and allow us to come

up with a unified complexity analysis for loqo in general.
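The claimed coincidence of the two step directions can be checked numerically. The sketch below, on a toy one-constraint problem of our own choosing, imposes w = h(x) and yw = µ, then confirms that the ∆x components of the slack-form system agree with the slack-free direction.

```python
import numpy as np

# Toy problem (our illustration): minimize x1^2 + x2^2
# subject to h(x) = x1 + x2 - 1 >= 0.
mu, x = 0.1, np.array([1.0, 1.0])
gf, Hf = 2.0 * x, 2.0 * np.eye(2)                         # grad/Hessian of f
hx, gh, Hh = x[0] + x[1] - 1.0, np.array([1.0, 1.0]), np.zeros((2, 2))

w = hx        # first assumption: w = h(x)
y = mu / w    # second assumption: y * w = mu

# Slack-free direction from system (44).
M = Hf - mu * (hx * Hh - np.outer(gh, gh)) / hx**2
dx44 = np.linalg.solve(M, -gf + mu * gh / hx)

# Slack-form KKT system (45)-(47) in the unknowns (dx1, dx2, dw, dy).
K = np.zeros((4, 4))
K[:2, :2] = Hf - y * Hh              # (45): Hessian-of-Lagrangian block
K[:2, 3] = -gh                       # (45): -grad h times dy
K[2, 2], K[2, 3] = mu / w**2, 1.0    # (46)
K[3, :2], K[3, 2] = -gh, 1.0         # (47)
r = np.concatenate([-gf + y * gh, [mu / w - y], [hx - w]])
sol = np.linalg.solve(K, r)
print(np.allclose(sol[:2], dx44))    # True: the two directions coincide
```

Under the stated assumptions the right-hand sides of (46) and (47) vanish, so eliminating ∆w and ∆y reproduces (44) exactly, which is what the check verifies.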


We would also like to pursue the semi-infinite LP approach described earlier to reformulate
an SDP as an LP using the definition of a positive semidefinite matrix.

Another possible extension of interior-point methods for NLP is to use these algorithms

to solve nonlinear complementarity problems. There is some work already in this field by

Shanno and Simantiraki [60] and Jansen et al. [8]. With the improvements that have

been made to the interior-point algorithm, we would like to revisit and expand their work.


CHAPTER 11

Conclusions.

In the first part of this dissertation, we proposed three different ways to implement a

filter-based method to control steplengths in an interior-point algorithm. The first two used

the same approach as Fletcher and Leyffer, with the barrier objective and the objective

function. Theoretically, however, we were not able to guarantee that they would work

without a barrier parameter that was monotonically decreasing. To remedy this situation,

both variants were supplemented with a merit function that was used if the filter approach

failed. These two variants worked equally well in practice.

The third variant was simpler than Fletcher and Leyffer’s filter method, imposing filter-

like conditions only on the previous iteration. This approach replaced the merit function,

and, numerically, it outperformed the current version of the algorithm as well as the other

two filter-based variants.

In general, the numerical results show that filter-based algorithms are superior to using

solely a merit function in terms of efficiency. This is especially true on well-behaved large

problems.

In the second part of this dissertation, we discussed how to extend interior-point meth-

ods for nonlinear programming to two popular classes of problems: Second-Order Cone

Programming and Semidefinite Programming. The smoothing approaches to the Second-

Order Cone Programming (SOCP) problem turned out to be quite effective. Of the two

that were presented, the variable perturbation method was quite reliable, whereas the ratio


reformulation led to a much faster solution on problems with large SOCP blocks. The

reformulation that we present for Semidefinite Programming (SDP) problems allowed an

algorithm such as loqo to solve SDPs for the first time.

The important thing to note about these extensions is that our approaches are not

limited to just SOCPs or SDPs. Since we are reformulating specific constraints, a nonlinear

problem that contains any of these types of constraints can benefit from our approach.


Bibliography

[1] I. Adler, N.K. Karmarkar, M.G.C. Resende, and G. Veiga. An implementation of Karmarkar’s algo-

rithm for linear programming. Mathematical Programming, 44:297–335, 1989.

[2] E. Andersen and K. Andersen. The MOSEK optimization software. EKA Consulting ApS, Denmark.

[3] K.D. Andersen and E. Christiansen. Computation of collapse states with von Mises type yield condition. Technical Report No. 18, Institut for Matematik og Datalogi, Odense Universitet, Denmark, 1998.

[4] K.D. Andersen and E. Christiansen. Minimizing a sum of norms subject to linear equality constraints.

Computational Optimization and Applications, 11:65–79, 1998.

[5] K.D. Andersen, E. Christiansen, A.R. Conn, and M.L. Overton. An efficient primal-dual interior-point method for minimizing a sum of Euclidean norms. Technical report.

[6] L. Armijo. Minimization of functions having Lipschitz continuous first partial derivatives. Pacific J.

Mathematics, 16-1:1–3, 1966.

[7] M.P. Bendsøe, A. Ben-Tal, and J. Zowe. Optimization methods for truss geometry and topology design.

Structural Optimization, 7:141–159, 1994.

[8] B. Jansen, K. Roos, T. Terlaky, and A. Yoshise. Polynomiality of primal-dual affine scaling algorithms for nonlinear complementarity problems. Technical Report 95-83, Faculty of Technical Mathematics and Computer Science, Delft University of Technology, Delft, The Netherlands, September 1995.

[9] A. Berman and R.J. Plemmons. Nonnegative Matrices in the Mathematical Sciences. Academic Press,

1979.

[10] B. Borchers. SDPLIB 1.1, a library of semidefinite programming test problems. 1998.

[11] S. Boyd and C. Barratt. Linear Controller Design: Limits of Performance. Prentice Hall, Englewood

Cliffs, NJ, 1991.


[12] N. Brixius and S.J. Wright. Interior-point methods online. http://www-unix.mcs.anl.gov/otc/InteriorPoint.

[13] S. Burer, R.D.C. Monteiro, and Y. Zhang. Solving semidefinite programs via nonlinear programming, part II: Interior point methods for a subclass of SDPs. Technical report, TR99-17, Dept. of Computational and Applied Mathematics, Rice University, Houston TX, 1999.

[14] S. Burer, R.D.C. Monteiro, and Y. Zhang. Solving semidefinite programs via nonlinear programming, part I: Transformations and derivatives. Technical report, TR99-17, Dept. of Computational and Applied Mathematics, Rice University, Houston TX, 1999.

[15] R.H. Byrd, M.E. Hribar, and J. Nocedal. An interior point algorithm for large scale nonlinear pro-

gramming. SIAM J. Opt., 9(4):877–900, 1999.

[16] J. O. Coleman and R.J. Vanderbei. Random-process formulation of computationally efficient perfor-

mance measures for wideband arrays in the far field. The 1999 Midwest Symposium on Circuits and

Systems, August 1999.

[17] A.R. Conn, N. Gould, and Ph.L. Toint. Constrained and unconstrained testing environment.

http://www.dci.clrc.ac.uk/Activity.asp?CUTE.

[18] D. Johnson, G. Pataki, and F. Alizadeh. Seventh DIMACS implementation challenge: Semidefinite and related problems. http://dimacs.rutgers.edu/Challenges/Seventh.

[19] G.B. Dantzig. Linear Programming and Extensions. Princeton University Press, Princeton, NJ, 1963.

[20] E. D. Dolan and J. J. Moré. Benchmarking optimization software with performance profiles. Technical

report, Argonne National Laboratory, January 2001.

[21] E. D. Andersen, C. Roos, and T. Terlaky. On implementing a primal-dual interior-point method for conic quadratic optimization. Technical Report W-274, Helsinki School of Economics and Business Administration, December 2000.

[22] A. El-Bakry, R. Tapia, T. Tsuchiya, and Y. Zhang. On the formulation and theory of the Newton

interior-point method for nonlinear programming. J. of Optimization Theory and Appl., 89:507–541,

1996.


[23] A.V. Fiacco and G.P. McCormick. Nonlinear Programming: Sequential Unconstrained Minimization Techniques. Research Analysis Corporation, McLean, Virginia, 1968. Republished in 1990 by SIAM, Philadelphia.

[24] R. Fletcher. A nonlinear programming problem in statistics (educational testing). SIAM J. Sci. Stat.

Comput., 2:257–267, 1981.

[25] R. Fletcher. Semi-definite matrix constraints in optimization. SIAM J. Control and Optimization,

23(4):493–513, 1985.

[26] R. Fletcher and S. Leyffer. Nonlinear programming without a penalty function. Technical Report

NA/171, University of Dundee, Dept. of Mathematics, Dundee, Scotland, 1997.

[27] R. Fourer, D.M. Gay, and B.W. Kernighan. AMPL: A Modeling Language for Mathematical Program-

ming. Scientific Press, 1993.

[28] K. R. Frisch. The logarithmic potential method of convex programming. Technical report, University

Institute of Economics, Oslo, 1955.

[29] P.E. Gill, W. Murray, and M.A. Saunders. User’s guide for SNOPT 5.3: A Fortran package for large-

scale nonlinear programming. Technical report, Systems Optimization Laboratory, Stanford Univer-

sity, Stanford, CA, 1997.

[30] P.E. Gill, W. Murray, M.A. Saunders, J.A. Tomlin, and M.H. Wright. On projected newton methods

for linear programming and an equivalence to Karmarkar’s projective method. Mathematical Program-

ming, 36:183–209, 1986.

[31] M.X. Goemans and D.P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the ACM, 42:1115–1145, 1995.

[32] D. Goldfarb and K. Scheinberg. Stability and efficiency of matrix factorizations in interior-point meth-

ods. Conference Presentation, HPOPT IV Workshop, June 16-18, Rotterdam, The Netherlands, 1999.

[33] S.-P. Han. A globally convergent method for nonlinear programming. Journal of Optimization Theory

and Applications, 22:297–309, 1977.

[34] C. Helmberg. SBmethod: A C++ implementation of the spectral bundle method. Technical report, Konrad-Zuse-Zentrum fuer Informationstechnik Berlin, 2000.


[35] W. Hock and K. Schittkowski. Test examples for nonlinear programming codes. Lecture Notes in

Economics and Mathematical Systems 187. Springer Verlag, Heidelberg, 1981.

[36] S. Homer and R. Peinado. Design and performance of parallel and distributed approximation algo-

rithms for maxcut. Technical report, Boston University, 1995.

[37] F.K. Hwang. A linear time algorithm for full Steiner trees. Operations Research Letters, 4:235–237,

1986.

[38] F.K. Hwang and J.F. Weng. The shortest network under a given topology. J. of Algorithms, 13:468–

488, 1992.

[39] H.Y. Benson, D.F. Shanno, and R.J. Vanderbei. Interior-point methods for nonconvex nonlinear programming: Filter methods and merit functions. Technical Report ORFE 00-06, Department of Operations Research and Financial Engineering, Princeton University, 2000.

[40] N.K. Karmarkar. A new polynomial time algorithm for linear programming. Combinatorica, 4:373–395,

1984.

[41] L.G. Khachian. A polynomial algorithm in linear programming. Doklady Academiia Nauk SSSR,

244:191–194, 1979. In Russian. English Translation: Soviet Mathematics Doklady 20: 191-194.

[42] V. Klee and G.J. Minty. How good is the simplex algorithm? In O. Shisha, editor, Inequalities–III,

pages 159–175. Academic Press, New York, 1972.

[43] E. Kranich. Bibliography on interior point methods for mathematical programming.

http://liinwww.ira.uka.de/bibliography/Math/intbib.html.

[44] H.W. Kuhn. On a pair of dual nonlinear programs. In J. Abadie, editor, Nonlinear Programming,

pages 39–54. North-Holland, 1967.

[45] A.S. Lewis and M.L. Overton. Eigenvalue optimization. Acta Numerica, 5:149–190, 1996.

[46] M.S. Lobo, L. Vandenberghe, S. Boyd, and H. Lebret. Applications of second-order cone programming.

Technical report, Electrical Engineering Department, Stanford University, Stanford, CA 94305, 1998.

To appear in Linear Algebra and Applications special issue on linear algebra in control, signals and

imaging.


[47] L. Lovasz. On the Shannon capacity of a graph. IEEE Transactions on Information Theory, IT-25(1):1–7, January 1979.

[48] R.F. Love, J.G. Morris, and G.O Wesolowsky. Facilities Location: Models and Methods. North-Holland,

1988.

[49] R.S. Martin and J.H. Wilkinson. Symmetric decomposition of positive definite band matrices. Numer.

Math., 7:355–361, 1965.

[50] A.G.M. Michell. The limits of economy of material in frame structures. Phil. Mag., 8:589–597, 1904.

[51] Y.E. Nesterov and A.S. Nemirovsky. Self–concordant functions and polynomial–time methods in convex

programming. Central Economic and Mathematical Institute, USSR Academy of Science, Moscow,

USSR, 1989.

[52] Y.E. Nesterov and A.S. Nemirovsky. Interior Point Polynomial Methods in Convex Programming :

Theory and Algorithms. SIAM Publications, Philadelphia, 1993.

[53] A.V. Oppenheim and R.W. Schafer. Digital Signal Processing. Prentice Hall, Englewood Cliffs, NJ,

1970.

[54] M.L. Overton. A quadratically convergent method for minimizing a sum of Euclidean norms. Mathe-

matical Programming, 27:34–63, 1983.

[55] M.L. Overton and R.S. Womersley. Second derivatives for optimizing eigenvalues of symmetric matri-

ces. Technical Report 627, Computer Science Department, NYU, New York, 1993.

[56] S. Benson, Y. Ye, and X. Zhang. DSDP: A dual scaling algorithm for positive semidefinite programming.

[57] S. Burer and R.D.C. Monteiro. A nonlinear programming algorithm for solving semidefinite programs via low-rank factorization. Technical report, School of ISyE, Georgia Tech, Atlanta, GA, March 2001.

[58] Klaus Schittkowski. More Test Samples for Nonlinear Programming Codes. Springer Verlag, New York,

1987.

[59] D.F. Shanno and E.M. Simantiraki. Interior-point methods for linear and nonlinear programming.

In I.S. Duff and G.A. Watson, editors, The State of the Art in Numerical Analysis, pages 339–362.

Oxford University Press, New York, 1997.


[60] Evangelia M. Simantiraki and David F. Shanno. An infeasible-interior-point method for linear com-

plementarity problems. SIOPT, 7:620–640, 1997.

[61] W.D. Smith. How to find Steiner minimal trees in Euclidean d-space. Algorithmica, 7:137–177, 1992.

[62] Jos F. Sturm. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Technical report, McMaster University, 1998.

[63] R.J. Vanderbei. AMPL models. http://www.sor.princeton.edu/˜rvdb/ampl/nlmodels.

[64] R.J. Vanderbei. LOQO user’s manual. Technical Report SOR 92-5, Princeton University, 1992. revised

1995.

[65] R.J. Vanderbei, M.S. Meketon, and B.F. Freedman. A modification of Karmarkar’s linear programming

algorithm. Algorithmica, 1:395–407, 1986.

[66] R.J. Vanderbei and D.F. Shanno. An interior-point algorithm for nonconvex nonlinear programming.

Computational Optimization and Applications, 13:231–252, 1999.

[67] R.J. Vanderbei and H. Yurttan. Using LOQO to solve second-order cone programming problems.

Technical Report SOR-98-09, Statistics and Operations Research, Princeton University, 1998.

[68] J.H. Wilkinson and C. Reinsch. Handbook for Automatic Computation, volume II: Linear Algebra.

Springer-Verlag, Berlin-Heidelberg-New York, 1971.

[69] M. H. Wright. Ill–conditioning and computational error in interior–point methods for nonlinear pro-

gramming. SIAM Journal on Optimization, 9:84–111, 1999.

[70] S.-P. Wu, S. Boyd, and L. Vandenberghe. Magnitude filter design via spectral factorization and convex

optimization. Applied and Computational Control, Signals and Circuits, 1997. To appear.

[71] G. Xue and Y. Ye. Efficient algorithms for minimizing a sum of Euclidean norms with applications. SIAM J. Optimization, 7:1017–1036, 1997.


APPENDIX A

Numerical Results for Steplength Control.

In this chapter of the Appendix, we present comparative results for different steplength control methods on the CUTE test suite. The problems presented here are only those where some method solved the problem in a different number of iterations. They are organized by problem size, defined as the number of variables plus the number of constraints and denoted by m+n. Iteration counts and runtimes (in CPU seconds) are provided for each steplength control method. Here, LOQO is the merit function algorithm, FB is the filter with the barrier objective and a merit function, FO is the filter with the objective function and a merit function, and FP is the filter based only on the previous iteration.


              LOQO       FB         FO         FP
Problem  m+n  Iter Time  Iter Time  Iter Time  Iter Time
bqp1var  1    10 0.00    10 0.01    10 0.01    10 0.01
beale    2    10 0.01    10 0.01    10 0.01    10 0.01
box2     2    12 0.01    12 0.01    12 0.01    12 0.01
brkmcc   2    9 0.01     9 0.01     9 0.01     9 0.01
brownbs  2    35 0.02    35 0.03    35 0.03    35 0.02
camel6   2    11 0.01    11 0.01    11 0.01    11 0.01
cliff    2    33 0.03    33 0.02    33 0.02    33 0.03
cube     2    34 0.03    34 0.03    34 0.03    34 0.03
denschna 2    9 0.01     9 0.01     9 0.01     9 0.01
denschnb 2    10 0.01    10 0.00    10 0.01    10 0.01
denschnc 2    16 0.00    16 0.02    16 0.02    16 0.02
denschnf 2    12 0.01    12 0.02    12 0.01    12 0.01
djtl     2    23 0.04    23 0.04    23 0.03    21 0.03
expfit   2    11 0.01    11 0.01    11 0.01    11 0.01
hairy    2    61 0.05    61 0.06    61 0.06    61 0.06
himmelbb 2    13 0.01    13 0.02    13 0.02    13 0.01
himmelbg 2    8 0.01     8 0.01     8 0.01     8 0.01
himmelbh 2    10 0.01    10 0.01    10 0.01    10 0.01
himmelp1 2    13 0.01    13 0.01    13 0.02    13 0.01
hs001    2    33 0.03    33 0.03    33 0.03    33 0.02
hs002    2    19 0.02    21 0.01    21 0.02    21 0.02
hs003    2    11 0.01    11 0.01    11 0.01    11 0.01
hs004    2    8 0.01     8 0.01     8 0.01     8 0.01
hs005    2    10 0.01    10 0.00    10 0.01    10 0.01
hs3mod   2    12 0.01    12 0.01    12 0.01    12 0.01
humps    2    212 0.19   212 0.20   212 0.23   212 0.19
jensmp   2    14 0.01    14 0.01    14 0.01    14 0.01
loghairy 2    89 0.08    89 0.09    89 0.10    89 0.09
logros   2    72 0.05    312 0.29   312 0.29   294 0.26
maratosb 2    18 0.01    18 0.01    18 0.02    18 0.01
mexhat   2    8 0.01     8 0.01     8 0.01     8 0.01
recipe   2    8 0.01     8 0.01     8 0.01     8 0.01
rosenbr  2    26 0.02    26 0.02    26 0.03    26 0.02
s201     2    10 0.01    10 0.01    10 0.01    10 0.00
s202     2    13 0.01    13 0.02    13 0.01    13 0.02
s204     2    8 0.00     8 0.01     8 0.01     8 0.01
s205     2    13 0.01    13 0.01    13 0.01    13 0.02
s206     2    12 0.01    12 0.01    12 0.01    12 0.02
s207     2    12 0.01    12 0.01    12 0.01    12 0.01
s208     2    26 0.02    26 0.02    26 0.02    26 0.02

Table 1. Comparative results for different steplength control methods on the CUTE test suite.


              LOQO       FB         FO         FP
Problem  m+n  Iter Time  Iter Time  Iter Time  Iter Time
s209     2    88 0.07    88 0.07    87 0.08    88 0.07
s211     2    34 0.03    34 0.03    34 0.03    34 0.02
s212     2    11 0.01    11 0.01    11 0.01    11 0.01
s213     2    33 0.02    33 0.03    33 0.02    33 0.03
s214     2    61 0.10    61 0.10    61 0.10    61 0.10
s229     2    26 0.02    26 0.02    26 0.03    26 0.02
s274     2    9 0.01     9 0.01     9 0.01     9 0.01
s290     2    8 0.00     8 0.01     8 0.01     8 0.01
s307     2    14 0.02    14 0.02    14 0.01    14 0.01
s308     2    13 0.02    13 0.01    13 0.02    13 0.01
s309     2    13 0.01    13 0.01    13 0.01    13 0.01
s311     2    15 0.01    15 0.01    15 0.01    15 0.01
s312     2    26 0.02    26 0.02    26 0.02    26 0.02
s314     2    9 0.00     9 0.01     9 0.01     9 0.01
s328     2    25 0.02    22 0.02    22 0.02    22 0.02
s386     2    10 0.01    10 0.01    10 0.01    10 0.01
sim2bqp  2    14 0.01    14 0.01    14 0.01    14 0.01
simbqp   2    13 0.01    13 0.01    13 0.01    13 0.01
sineval  2    47 0.04    47 0.04    47 0.04    47 0.03
sisser   2    16 0.02    16 0.01    16 0.02    16 0.01
zangwil2 2    9 0.00     9 0.01     9 0.01     9 0.01
alsotame 3    11 0.01    11 0.01    11 0.01    11 0.01
bard     3    17 0.02    17 0.02    17 0.02    17 0.02
biggs3   3    13 0.01    13 0.01    13 0.01    13 0.01
box3     3    11 0.01    11 0.01    11 0.01    11 0.01
bt1      3    18 0.01    18 0.02    18 0.02    18 0.02
denschnd 3    37 0.02    37 0.03    37 0.04    37 0.03
denschne 3    14 0.01    14 0.02    14 0.01    14 0.02
engval2  3    22 0.02    22 0.02    22 0.02    22 0.02
extrasim 3    11 0.01    11 0.01    11 0.01    11 0.00
growth   3    79 0.09    82 0.09    82 0.09    81 0.09
growthls 3    78 0.08    78 0.09    78 0.09    78 0.09
gulf     3    27 0.08    27 0.08    27 0.09    27 0.07
hatfldd  3    25 0.03    25 0.03    25 0.03    25 0.03
hatflde  3    23 0.04    23 0.04    23 0.04    23 0.03
helix    3    14 0.02    14 0.01    14 0.02    14 0.02
himmelp2 3    17 0.02    19 0.01    17 0.02    19 0.02
hs006    3    11 0.01    11 0.01    11 0.01    11 0.01
hs007    3    14 0.02    14 0.02    14 0.01    14 0.02
hs009    3    10 0.01    10 0.01    10 0.01    10 0.01


              LOQO       FB         FO         FP
Problem  m+n  Iter Time  Iter Time  Iter Time  Iter Time
hs010    3    15 0.02    15 0.02    15 0.02    15 0.01
hs011    3    13 0.01    13 0.02    13 0.01    13 0.01
hs012    3    10 0.01    10 0.00    10 0.01    10 0.01
hs021    3    11 0.01    11 0.00    11 0.01    11 0.01
hs025    3    26 0.07    26 0.07    26 0.08    26 0.06
hs057    3    16 0.03    16 0.02    16 0.02    16 0.03
hs088    3    22 0.13    28 0.15    28 0.15    28 0.16
hs35mod  3    16 0.01    16 0.01    16 0.01    16 0.02
hubfit   3    12 0.01    12 0.01    12 0.01    12 0.01
lsqfit   3    12 0.01    12 0.01    12 0.01    12 0.01
maratos  3    8 0.00     8 0.00     8 0.01     8 0.01
pfit1    3    235 0.22   199 0.19   235 0.26   199 0.18
pfit1ls  3    235 0.22   199 0.19   235 0.26   199 0.16
pfit2    3    88 0.08    73 0.06    88 0.09    73 0.06
pfit2ls  3    88 0.08    73 0.06    88 0.09    73 0.06
pfit3    3    116 0.11   108 0.10   116 0.13   108 0.10
pfit3ls  3    116 0.10   108 0.10   116 0.13   108 0.10
pfit4    3    214 0.22   208 0.21   214 0.25   208 0.20
pfit4ls  3    214 0.21   208 0.19   214 0.24   208 0.19
s215     3    25 0.02    25 0.02    25 0.02    25 0.02
s216     3    17 0.01    18 0.02    18 0.01    18 0.02
s218     3    11 0.01    11 0.01    11 0.01    11 0.01
s220     3    9 0.01     9 0.01     9 0.01     9 0.01
s222     3    10 0.01    10 0.01    10 0.01    10 0.01
s233     3    13 0.01    13 0.01    13 0.02    13 0.01
s234     3    18 0.02    18 0.01    18 0.02    18 0.02
s239     3    15 0.01    15 0.01    19 0.02    15 0.01
s240     3    15 0.02    15 0.02    15 0.01    15 0.01
s242     3    25 0.04    25 0.02    30 0.03    25 0.02
s243     3    7 0.00     7 0.01     7 0.01     7 0.00
s244     3    16 0.01    16 0.01    16 0.02    16 0.01
s245     3    19 0.02    19 0.02    19 0.02    19 0.01
s246     3    15 0.02    15 0.01    15 0.01    15 0.02
s316     3    14 0.01    14 0.01    14 0.02    14 0.01
s317     3    14 0.01    14 0.01    14 0.01    14 0.02
s318     3    15 0.01    15 0.01    15 0.02    15 0.02
s319     3    15 0.02    15 0.02    15 0.01    15 0.01
s320     3    15 0.02    15 0.01    15 0.01    15 0.01
s321     3    16 0.01    16 0.02    16 0.02    16 0.01
s322     3    20 0.01    20 0.02    20 0.02    20 0.01


              LOQO       FB         FO         FP
Problem  m+n  Iter Time  Iter Time  Iter Time  Iter Time
s327     3    16 0.03    16 0.02    16 0.02    16 0.02
s330     3    13 0.02    13 0.01    13 0.01    13 0.01
s331     3    8 0.01     8 0.01     8 0.01     8 0.01
s334     3    17 0.01    17 0.02    17 0.02    17 0.02
tame     3    9 0.00     9 0.00     9 0.00     9 0.01
try-b    3    17 0.02    18 0.02    18 0.02    18 0.02
yfit     3    41 0.05    41 0.05    41 0.08    41 0.05
yfitu    3    42 0.04    42 0.04    42 0.07    42 0.04
aljazzaf 4    46 0.06    44 0.04    46 0.06    44 0.05
allinitu 4    10 0.01    10 0.01    10 0.01    10 0.01
booth    4    8 0.00     8 0.01     8 0.01     8 0.01
brownden 4    16 0.02    16 0.02    16 0.01    16 0.02
bt10     4    12 0.01    12 0.01    12 0.01    12 0.01
bt2      4    18 0.01    18 0.02    18 0.01    18 0.02
cluster  4    12 0.01    12 0.01    12 0.01    12 0.01
gottfr   4    10 0.01    10 0.01    17 0.02    10 0.02
hatflda  4    8 0.01     8 0.01     8 0.01     8 0.01
hatfldb  4    11 0.01    11 0.01    11 0.01    11 0.01
hatfldc  4    9 0.01     9 0.01     9 0.01     9 0.01
himmelba 4    0 0.00     0 0.00     0 0.00     0 0.00
himmelbc 4    10 0.01    10 0.01    10 0.01    10 0.01
himmelbf 4    23 0.02    23 0.02    23 0.02    23 0.02
himmelp3 4    16 0.01    16 0.02    16 0.02    16 0.01
hs008    4    9 0.01     9 0.01     9 0.01     9 0.01
hs014    4    11 0.01    11 0.01    11 0.01    11 0.01
hs016    4    18 0.02    18 0.01    18 0.02    18 0.01
hs017    4    27 0.03    30 0.02    30 0.03    29 0.02
hs018    4    18 0.02    18 0.01    18 0.01    15 0.01
hs019    4    17 0.01    17 0.02    17 0.02    17 0.02
hs022    4    9 0.01     9 0.01     9 0.01     9 0.01
hs024    4    13 0.01    13 0.01    13 0.01    13 0.02
hs026    4    15 0.03    15 0.01    15 0.02    15 0.01
hs027    4    55 0.12    17 0.02    17 0.01    17 0.01
hs028    4    8 0.00     8 0.01     8 0.01     8 0.01
hs029    4    10 0.01    10 0.01    10 0.01    10 0.01
hs030    4    9 0.01     9 0.01     9 0.01     9 0.01
hs031    4    13 0.01    17 0.01    13 0.01    17 0.01
hs035    4    10 0.01    10 0.01    10 0.01    10 0.01
hs036    4    16 0.01    16 0.01    16 0.01    16 0.02
hs037    4    11 0.01    11 0.01    11 0.01    11 0.01


              LOQO       FB         FO         FP
Problem  m+n  Iter Time  Iter Time  Iter Time  Iter Time
hs038    4    44 0.03    44 0.04    44 0.04    44 0.03
hs042    4    10 0.01    9 0.01     9 0.01     9 0.00
hs060    4    18 0.02    10 0.01    10 0.01    9 0.01
hs062    4    13 0.02    12 0.01    12 0.02    13 0.01
hs064    4    27 0.02    27 0.03    27 0.03    27 0.03
hs065    4    17 0.02    19 0.01    19 0.01    19 0.02
hs089    4    27 0.20    28 0.22    27 0.21    28 0.22
hypcir   4    9 0.01     9 0.01     9 0.01     9 0.01
kowosb   4    11 0.01    11 0.01    11 0.01    11 0.02
palmer1  4    16 0.02    19 0.02    16 0.02    59 0.07
palmer1b 4    39 0.05    39 0.05    39 0.06    39 0.07
palmer2  4    10 0.01    10 0.01    10 0.01    10 0.01
palmer2b 4    30 0.04    30 0.04    30 0.04    29 0.04
palmer3  4    13 0.01    13 0.01    13 0.02    13 0.02
palmer3b 4    21 0.02    21 0.02    21 0.02    21 0.02
palmer4  4    13 0.02    13 0.01    13 0.02    13 0.02
palmer4b 4    22 0.02    22 0.02    22 0.02    22 0.02
palmer5d 4    29 0.03    29 0.03    29 0.03    29 0.03
powellbs 4    17 0.02    17 0.02    17 0.02    17 0.02
powellsq 4    38 0.03    38 0.03    38 0.04    38 0.04
pspdoc   4    11 0.01    11 0.01    11 0.01    11 0.01
s217     4    17 0.01    17 0.01    17 0.01    17 0.01
s223     4    12 0.01    12 0.01    12 0.01    12 0.01
s224     4    12 0.01    12 0.01    12 0.02    12 0.01
s226     4    9 0.01     9 0.01     9 0.01     9 0.01
s227     4    9 0.01     9 0.01     9 0.01     9 0.01
s228     4    10 0.01    10 0.01    10 0.01    10 0.00
s230     4    9 0.01     9 0.01     9 0.01     9 0.01
s231     4    30 0.03    30 0.03    30 0.02    32 0.03
s232     4    12 0.02    12 0.01    12 0.01    12 0.01
s235     4    24 0.02    23 0.02    23 0.02    22 0.01
s236     4    21 0.02    17 0.02    17 0.02    17 0.02
s249     4    10 0.01    10 0.01    10 0.01    10 0.01
s250     4    16 0.01    16 0.02    16 0.01    16 0.02
s251     4    11 0.01    11 0.01    11 0.01    11 0.01
s252     4    21 0.01    20 0.01    20 0.02    20 0.02
s253     4    15 0.01    15 0.02    15 0.01    15 0.02
s255     4    8 0.01     8 0.00     8 0.01     8 0.00
s256     4    20 0.02    20 0.02    20 0.02    20 0.02
s257     4    37 0.04    37 0.03    37 0.03    37 0.03


              LOQO       FB         FO         FP
Problem  m+n  Iter Time  Iter Time  Iter Time  Iter Time
s258     4    47 0.04    47 0.04    47 0.04    47 0.04
s259     4    12 0.01    12 0.02    12 0.02    12 0.02
s260     4    47 0.04    47 0.05    47 0.04    47 0.04
s261     4    19 0.01    19 0.02    19 0.01    19 0.02
s275     4    9 0.00     9 0.01     9 0.01     9 0.01
s323     4    10 0.01    10 0.01    10 0.01    10 0.01
s324     4    17 0.01    15 0.01    15 0.01    15 0.01
s326     4    12 0.01    12 0.01    12 0.01    12 0.01
s337     4    11 0.01    11 0.01    11 0.01    11 0.01
s339     4    11 0.01    11 0.01    11 0.01    11 0.01
s341     4    11 0.01    11 0.01    11 0.01    11 0.01
s342     4    22 0.02    23 0.02    23 0.02    23 0.02
s344     4    10 0.01    10 0.01    10 0.01    10 0.01
s345     4    17 0.02    12 0.01    12 0.01    12 0.01
s348     4    302 0.76   35 0.06    35 0.06    35 0.06
s350     4    11 0.01    11 0.01    11 0.01    11 0.01
s351     4    22 0.02    22 0.03    22 0.02    22 0.02
s352     4    12 0.01    12 0.01    12 0.01    12 0.02
simpllpa 4    12 0.01    12 0.01    12 0.01    12 0.01
snake    4    55 0.08    26 0.03    26 0.02    21 0.02
supersim 4    11 0.01    11 0.01    11 0.01    11 0.01
twobars  4    10 0.01    10 0.01    10 0.01    10 0.01
zecevic2 4    11 0.01    11 0.01    11 0.01    11 0.01
zecevic3 4    12 0.01    12 0.01    12 0.01    12 0.01
zecevic4 4    15 0.02    15 0.01    15 0.01    15 0.02
zy2      4    13 0.01    13 0.02    13 0.01    13 0.01
aircrftb 5    16 0.02    16 0.02    16 0.02    16 0.02
biggs5   5    27 0.04    27 0.04    27 0.04    27 0.03
bt4      5    11 0.01    11 0.01    11 0.01    11 0.01
bt5      5    9 0.00     9 0.01     9 0.00     9 0.01
byrdsphr 5    12 0.02    12 0.01    12 0.01    12 0.02
genhumps 5    79 0.08    79 0.09    79 0.08    175 0.17
himmelp4 5    16 0.02    16 0.02    16 0.02    16 0.01
himmelp5 5    311 0.90   36 0.04    36 0.04    36 0.04
hong     5    20 0.01    20 0.02    20 0.02    20 0.02
hs020    5    24 0.02    24 0.02    24 0.02    24 0.03
hs032    5    23 0.02    26 0.02    24 0.02    23 0.02
hs033    5    12 0.01    11 0.01    11 0.01    11 0.01
hs034    5    16 0.02    15 0.01    15 0.01    14 0.01
hs041    5    16 0.01    16 0.01    16 0.02    16 0.01


                LOQO        FB          FO          FP
Problem   m+n   Iter Time   Iter Time   Iter Time   Iter Time
hs045 5 23 0.02 23 0.02 23 0.02 23 0.02
hs059 5 22 0.03 22 0.02 22 0.03 22 0.02
hs061 5 11 0.01 11 0.01 11 0.01 11 0.01
hs063 5 9 0.01 9 0.01 9 0.01 9 0.01
hs066 5 14 0.01 15 0.01 15 0.01 15 0.01
hs090 5 27 0.31 29 0.33 27 0.32 29 0.34
kiwcresc 5 15 0.02 14 0.02 14 0.02 14 0.02
lootsma 5 12 0.01 12 0.01 12 0.01 12 0.01
makela1 5 14 0.02 14 0.01 14 0.02 14 0.01
mifflin1 5 9 0.01 9 0.01 9 0.01 9 0.00
mifflin2 5 14 0.01 13 0.01 13 0.01 13 0.01
osbornea 5 19 0.03 17 0.02 19 0.03 17 0.02
polak1 5 14 0.02 14 0.01 14 0.02 14 0.02
polak5 5 71 0.10 37 0.04 37 0.06 37 0.05
s238 5 127 0.28 25 0.03 23 0.02 28 0.03
s247 5 58 0.09 22 0.02 22 0.02 23 0.02
s248 5 18 0.02 19 0.02 19 0.02 19 0.01
s254 5 19 0.01 19 0.02 19 0.02 19 0.01
s266 5 9 0.02 9 0.02 9 0.01 9 0.02
s267 5 40 0.05 40 0.05 40 0.06 40 0.06
s315 5 14 0.01 14 0.02 14 0.01 14 0.02
s325 5 11 0.01 11 0.01 11 0.01 11 0.01
s329 5 19 0.01 18 0.02 18 0.02 18 0.01
s335 5 27 0.04 39 0.06 44 0.04 50 0.05
s336 5 18 0.02 18 0.01 18 0.02 18 0.02
s338 5 19 0.01 19 0.01 19 0.02 19 0.02
s343 5 27 0.03 27 0.02 27 0.02 27 0.02
s346 5 27 0.02 27 0.03 27 0.03 27 0.03
s354 5 16 0.01 16 0.01 16 0.02 16 0.01
s358 5 29 0.04 29 0.05 29 0.04 24 0.04
simpllpb 5 13 0.01 13 0.01 13 0.01 13 0.01
stancmin 5 17 0.02 17 0.02 17 0.01 17 0.02
biggs6 6 45 0.06 45 0.06 45 0.06 45 0.06
bt13 6 26 0.02 24 0.02 24 0.03 24 0.02
bt9 6 15 0.02 15 0.01 15 0.02 15 0.02
cantilvr 6 16 0.01 16 0.01 16 0.01 16 0.01
cb2 6 11 0.01 11 0.01 11 0.02 11 0.01
cb3 6 11 0.01 11 0.01 11 0.01 11 0.01
chaconn1 6 11 0.01 11 0.01 11 0.01 11 0.01
chaconn2 6 11 0.01 11 0.01 11 0.01 11 0.01


                LOQO        FB          FO          FP
Problem   m+n   Iter Time   Iter Time   Iter Time   Iter Time

demymalo 6 17 0.02 15 0.01 15 0.02 15 0.01
gigomez1 6 17 0.02 17 0.02 17 0.01 17 0.02
hart6 6 25 0.03 25 0.03 25 0.03 25 0.02
hatfldf 6 10 0.01 10 0.01 10 0.01 10 0.01
himmelbe 6 0 0.00 0 0.00 0 0.00 0 0.00
hs039 6 15 0.01 15 0.02 15 0.01 15 0.01
hs071 6 13 0.01 13 0.01 13 0.01 13 0.01
hs072 6 23 0.02 23 0.02 23 0.02 23 0.02
hs091 6 29 0.47 29 0.45 29 0.46 29 0.45
makela2 6 12 0.01 12 0.01 12 0.01 12 0.01
palmer1a 6 46 0.06 46 0.06 46 0.06 46 0.06
palmer2a 6 95 0.11 95 0.11 95 0.12 93 0.12
palmer3a 6 88 0.12 88 0.12 88 0.13 99 0.16
palmer4a 6 71 0.10 71 0.11 71 0.10 65 0.09
palmer5c 6 36 0.04 36 0.04 36 0.05 36 0.04
palmer6a 6 163 0.17 163 0.17 163 0.17 159 0.17
palmer8a 6 53 0.06 53 0.06 53 0.06 82 0.08
polak4 6 13 0.01 13 0.01 13 0.01 13 0.01
s219 6 27 0.03 25 0.02 30 0.03 25 0.02
s265 6 12 0.02 12 0.01 12 0.01 12 0.02
s270 6 17 0.01 17 0.02 17 0.02 17 0.02
s271 6 11 0.01 11 0.01 11 0.01 11 0.01
s272 6 53 0.08 53 0.07 53 0.06 53 0.07
s273 6 17 0.02 17 0.01 17 0.01 17 0.02
s276 6 9 0.01 9 0.01 9 0.01 9 0.01
s294 6 24 0.02 24 0.02 24 0.03 24 0.02
s370 6 17 0.02 17 0.02 17 0.02 17 0.02
womflet 6 11 0.01 11 0.01 11 0.02 11 0.01
zangwil3 6 8 0.01 8 0.00 8 0.01 8 0.01
bt6 7 12 0.01 12 0.02 12 0.01 12 0.01
hs023 7 18 0.02 18 0.01 18 0.01 18 0.02
hs040 7 9 0.01 9 0.01 9 0.01 9 0.01
hs043 7 11 0.01 11 0.01 11 0.01 11 0.01
hs046 7 20 0.02 20 0.02 20 0.02 20 0.02
hs048 7 8 0.00 8 0.01 8 0.01 8 0.01
hs049 7 24 0.02 24 0.02 24 0.02 24 0.02
hs054 7 12 0.01 12 0.01 12 0.02 12 0.01
hs073 7 21 0.02 20 0.02 19 0.01 19 0.02
hs076 7 11 0.01 11 0.01 11 0.01 11 0.01
hs077 7 13 0.01 13 0.01 13 0.01 13 0.01


                LOQO        FB          FO          FP
Problem   m+n   Iter Time   Iter Time   Iter Time   Iter Time

hs092 7 23 0.51 23 0.51 23 0.52 23 0.51
minmaxrb 7 30 0.04 14 0.01 14 0.02 13 0.02
palmer1d 7 138 0.16 138 0.15 138 0.18 126 0.13
s225 7 19 0.01 17 0.02 18 0.02 17 0.01
s264 7 11 0.01 11 0.01 11 0.01 11 0.01
s353 7 23 0.03 13 0.01 13 0.02 13 0.01
s360 7 30 0.03 28 0.03 28 0.04 28 0.04
arglinc 8 11 0.01 11 0.01 11 0.01 11 0.02
bt11 8 14 0.01 12 0.01 12 0.01 12 0.01
bt12 8 13 0.01 13 0.01 13 0.02 13 0.02
bt3 8 12 0.01 12 0.02 12 0.01 12 0.01
bt7 8 24 0.02 19 0.01 19 0.02 19 0.01
congigmz 8 34 0.03 33 0.03 30 0.02 33 0.03
fletcher 8 14 0.01 14 0.01 14 0.01 14 0.01
heart8ls 8 55 0.11 55 0.10 55 0.14 55 0.10
hs047 8 21 0.02 21 0.02 22 0.03 21 0.02
hs050 8 16 0.02 16 0.01 16 0.01 16 0.01
hs051 8 8 0.00 8 0.01 8 0.01 8 0.00
hs052 8 8 0.01 8 0.00 8 0.00 8 0.01
hs053 8 11 0.01 11 0.01 11 0.01 11 0.01
hs074 8 18 0.02 18 0.02 18 0.02 16 0.01
hs075 8 19 0.01 19 0.02 19 0.02 18 0.02
hs078 8 9 0.01 9 0.01 9 0.01 9 0.01
hs079 8 9 0.01 9 0.01 9 0.01 9 0.01
hs080 8 9 0.01 9 0.00 9 0.01 9 0.01
hs081 8 16 0.01 16 0.02 16 0.01 16 0.01
hs083 8 16 0.02 16 0.01 16 0.02 13 0.02
hs084 8 23 0.03 22 0.02 22 0.03 21 0.03
hs093 8 13 0.01 13 0.02 13 0.01 12 0.01
hs105 8 17 0.62 18 0.56 18 0.59 18 0.56
hs21mod 8 22 0.02 21 0.02 21 0.02 20 0.02
matrix2 8 29 0.03 29 0.03 29 0.02 26 0.03
mwright 8 13 0.01 12 0.01 12 0.01 12 0.01
oslbqp 8 19 0.01 19 0.01 19 0.02 19 0.02
palmer1c 8 43 0.06 43 0.05 43 0.06 43 0.06
palmer1e 8 103 0.15 103 0.16 103 0.19 103 0.16
palmer2c 8 48 0.06 48 0.07 48 0.06 43 0.06
palmer2e 8 95 0.14 95 0.14 95 0.16 95 0.14
palmer3c 8 44 0.06 44 0.07 44 0.07 41 0.05
palmer3e 8 143 0.22 143 0.22 143 0.25 140 0.21


                LOQO        FB          FO          FP
Problem   m+n   Iter Time   Iter Time   Iter Time   Iter Time

palmer4c 8 41 0.06 41 0.06 41 0.06 40 0.06
palmer4e 8 88 0.14 88 0.14 88 0.16 88 0.14
palmer5e 8 38 0.06 38 0.06 40 0.11 37 0.06
palmer6c 8 155 0.14 155 0.14 155 0.17 155 0.14
palmer6e 8 134 0.15 134 0.15 105 0.15 134 0.16
palmer7c 8 31 0.03 31 0.03 31 0.03 31 0.03
palmer8c 8 46 0.05 46 0.06 46 0.05 46 0.06
palmer8e 8 64 0.07 64 0.08 67 0.11 64 0.08
s203 8 16 0.01 10 0.01 10 0.01 9 0.01
s262 8 13 0.01 13 0.01 13 0.01 13 0.01
s263 8 19 0.02 19 0.02 19 0.01 19 0.02
s269 8 11 0.01 11 0.01 11 0.01 11 0.01
s277 8 13 0.01 13 0.00 13 0.01 13 0.01
s368 8 19 0.03 10 0.01 11 0.02 9 0.01
csfi1 9 21 0.02 19 0.02 18 0.02 16 0.01
csfi2 9 23 0.02 20 0.02 23 0.03 17 0.02
hs100lnp 9 10 0.01 10 0.01 10 0.01 12 0.02
hs44new 9 21 0.02 21 0.01 21 0.02 21 0.02
lsnnodoc 9 21 0.02 21 0.02 21 0.02 21 0.02
madsen 9 24 0.02 24 0.02 24 0.03 24 0.02
nonmsqrt 9 238 0.25 261 0.28 238 0.28 261 0.28
polak6 9 28 0.03 27 0.03 27 0.03 27 0.03
rosenmmx 9 15 0.01 15 0.02 15 0.01 15 0.01
s356 9 17 0.02 17 0.02 17 0.02 17 0.02
s371 9 18 0.03 18 0.02 18 0.02 18 0.03
aircrfta 10 7 0.01 7 0.01 7 0.01 7 0.00
arglinb 10 12 0.02 12 0.01 12 0.02 12 0.01
brownal 10 12 0.01 12 0.02 12 0.01 12 0.01
dixon3dq 10 11 0.01 11 0.01 11 0.02 11 0.01
extrosnb 10 9 0.01 9 0.01 9 0.01 9 0.01
hilberta 10 10 0.01 10 0.01 10 0.01 10 0.01
hs044 10 12 0.01 16 0.01 16 0.01 16 0.02
hs095 10 15 0.02 16 0.02 16 0.01 16 0.02
hs096 10 16 0.02 15 0.02 25 0.04 19 0.02
hs097 10 31 0.03 31 0.04 31 0.04 18 0.02
hs110 10 11 0.01 11 0.01 11 0.01 11 0.01
hs268 10 27 0.03 27 0.03 27 0.03 27 0.02
s268 10 27 0.02 27 0.03 27 0.03 27 0.03
s281 10 73 0.14 73 0.14 73 0.15 73 0.15
s282 10 65 0.06 64 0.06 65 0.06 64 0.06


                LOQO        FB          FO          FP
Problem   m+n   Iter Time   Iter Time   Iter Time   Iter Time

s283 10 43 0.04 43 0.04 43 0.05 43 0.05
s291 10 9 0.01 9 0.01 9 0.01 9 0.01
s295 10 30 0.03 30 0.03 30 0.03 30 0.03
biggsc4 11 21 0.02 21 0.02 21 0.02 21 0.02
dipigri 11 11 0.01 11 0.02 11 0.01 11 0.01
hatfldh 11 16 0.01 16 0.02 16 0.01 16 0.02
hs056 11 14 0.01 13 0.02 13 0.01 13 0.02
hs086 11 13 0.01 13 0.02 13 0.02 13 0.02
hs100 11 11 0.03 11 0.01 11 0.01 11 0.01
hs100mod 11 15 0.01 15 0.01 15 0.02 15 0.02
osborneb 11 28 0.13 28 0.13 28 0.13 28 0.13
s379 11 28 0.13 28 0.12 28 0.13 28 0.13
avgasa 12 13 0.01 13 0.02 13 0.01 13 0.02
avgasb 12 12 0.01 12 0.01 12 0.01 12 0.01
heart6 12 341 1.06 191 0.59 341 1.08 81 0.17
hs055 12 11 0.01 11 0.02 11 0.01 11 0.01
qudlin 12 25 0.02 25 0.02 25 0.02 25 0.03
s278 12 13 0.01 13 0.01 13 0.02 13 0.01
s365 12 42 0.05 38 0.05 37 0.05 37 0.04
s367 12 34 0.03 34 0.03 34 0.03 34 0.04
synthes1 12 16 0.02 17 0.02 17 0.02 17 0.02
hs101 13 145 0.42 73 0.18 62 0.14 37 0.07
hs102 13 98 0.27 95 0.28 55 0.13 55 0.13
hs103 13 42 0.10 23 0.04 65 0.15 48 0.10
hs111 13 15 0.02 15 0.03 15 0.02 15 0.02
hs111lnp 13 19 0.03 19 0.03 19 0.03 19 0.04
hs112 13 19 0.02 18 0.03 17 0.02 17 0.02
polak2 13 24 0.03 24 0.02 24 0.03 24 0.03
portfl1 13 19 0.03 17 0.03 17 0.03 17 0.03
portfl2 13 20 0.03 17 0.03 17 0.03 17 0.03
portfl3 13 18 0.03 17 0.02 17 0.03 17 0.03
portfl4 13 18 0.03 16 0.03 16 0.03 16 0.02
portfl6 13 19 0.03 15 0.02 16 0.03 16 0.03
s241 13 14 0.02 15 0.02 16 0.03 14 0.01
s377 13 131 0.20 131 0.22 131 0.20 131 0.20
s378 13 19 0.03 19 0.03 19 0.03 19 0.03
cresc4 14 41 0.05 56 0.07 56 0.08 56 0.07
hs104 14 14 0.01 14 0.02 14 0.03 14 0.02
hs106 14 61 0.09 23 0.03 34 0.05 25 0.02
s369 14 17 0.02 16 0.02 17 0.02 17 0.01


                LOQO        FB          FO          FP
Problem   m+n   Iter Time   Iter Time   Iter Time   Iter Time

dixchlng 15 27 0.03 27 0.03 27 0.03 27 0.03
s380 15 237 0.44 237 0.48 237 0.44 237 0.47
s383 15 33 0.03 33 0.04 33 0.03 35 0.03
haifas 16 12 0.01 12 0.01 12 0.02 12 0.02
heart8 16 27 0.05 25 0.04 27 0.05 26 0.04
odfits 16 12 0.02 10 0.01 10 0.01 15 0.01
s279 16 14 0.01 14 0.01 14 0.01 14 0.02
s296 16 39 0.04 39 0.04 39 0.04 39 0.04
hs087 17 24 0.02 24 0.02 24 0.02 24 0.03
s381 17 15 0.01 15 0.02 15 0.01 15 0.01
s382 17 17 0.03 16 0.01 16 0.02 16 0.02
argauss 18 6 0.01 6 0.01 6 0.01 6 0.01
coolhans 18 24 0.03 26 0.03 26 0.03 26 0.03
genhs28 18 10 0.01 10 0.02 10 0.01 10 0.01
hs113 18 16 0.02 16 0.02 16 0.01 16 0.01
pentagon 18 27 0.03 34 0.04 34 0.04 34 0.04
res 18 12 0.01 12 0.01 12 0.01 12 0.01
hs109 19 77 0.17 33 0.05 33 0.05 33 0.05
lotschd 19 20 0.03 20 0.02 20 0.03 19 0.02
s359 19 17 0.01 17 0.01 17 0.01 17 0.01
s375 19 22 0.04 19 0.02 19 0.02 19 0.03
hs117 20 19 0.02 19 0.01 19 0.02 19 0.02
s280 20 16 0.02 16 0.01 16 0.02 16 0.01
s286 20 26 0.02 26 0.03 26 0.03 26 0.02
s287 20 47 0.06 47 0.06 47 0.06 47 0.06
s288 20 20 0.01 20 0.02 20 0.02 20 0.02
s300 20 13 0.02 13 0.01 13 0.02 13 0.01
s303 20 16 0.01 16 0.03 16 0.02 16 0.02
sineali 20 14 0.02 14 0.02 14 0.02 14 0.02
hs114 21 27 0.04 25 0.03 23 0.02 24 0.03
s366 21 34 0.05 123 0.17 123 0.17 47 0.05
s372 21 35 0.05 32 0.04 47 0.06 32 0.04
s394 21 19 0.03 19 0.02 19 0.03 19 0.03
hs108 22 19 0.02 20 0.02 20 0.03 20 0.03
polak3 22 22 0.04 22 0.04 22 0.04 22 0.04
hs119 24 32 0.06 28 0.05 29 0.05 29 0.04
minmaxbd 25 32 0.04 34 0.04 34 0.04 31 0.04
s284 25 27 0.04 27 0.04 27 0.05 27 0.04
s285 25 15 0.03 15 0.03 15 0.03 15 0.03
s384 25 15 0.03 15 0.03 15 0.02 15 0.02


                LOQO        FB          FO          FP
Problem   m+n   Iter Time   Iter Time   Iter Time   Iter Time
s385 25 19 0.04 19 0.03 19 0.03 19 0.04
expfita 26 23 0.02 23 0.03 23 0.02 23 0.03
s387 26 21 0.04 21 0.04 21 0.03 21 0.04
fccu 27 15 0.02 15 0.02 15 0.02 15 0.02
hs116 28 71 0.09 86 0.12 86 0.13 74 0.12
rk23 28 13 0.02 13 0.01 13 0.01 13 0.02
3pk 30 19 0.03 19 0.04 22 0.04 19 0.04
s289 30 10 0.02 10 0.01 10 0.02 10 0.02
s292 30 9 0.01 9 0.01 9 0.01 9 0.01
s297 30 67 0.09 67 0.09 67 0.08 67 0.09
s388 30 18 0.02 18 0.03 18 0.04 18 0.03
s389 30 19 0.04 19 0.03 19 0.03 19 0.03
watson 31 18 0.07 18 0.07 18 0.07 18 0.07
hs118 32 15 0.01 15 0.02 15 0.02 15 0.02
hs099 33 24 0.03 22 0.03 22 0.03 22 0.03
orthregb 33 18 0.05 10 0.02 10 0.02 10 0.02
degenlpa 34 29 0.03 29 0.03 29 0.03 29 0.03
degenlpb 35 30 0.03 30 0.03 30 0.03 30 0.03
minsurf 36 13 0.02 13 0.03 13 0.03 13 0.03
himmelbk 38 18 0.06 21 0.07 21 0.07 21 0.07
s357 39 12 0.21 12 0.20 12 0.20 12 0.20
makela3 41 18 0.02 18 0.03 18 0.02 18 0.02
catena 43 27 0.05 27 0.04 27 0.04 27 0.04
hs085 43 199 0.78 30 0.07 32 0.08 30 0.07
eigmaxc 44 14 0.02 14 0.03 14 0.02 14 0.02
eigminc 44 13 0.04 13 0.03 13 0.03 13 0.03
eigencco 45 25 0.11 18 0.06 18 0.06 18 0.07
bqpgabim 46 15 0.03 15 0.04 15 0.04 14 0.03
optcntrl 48 42 0.07 37 0.06 42 0.08 37 0.06
hs99exp 49 314 0.62 313 0.65 313 0.64 313 0.63
bqpgasim 50 15 0.04 15 0.04 15 0.04 14 0.03
chebyqad 50 55 32.41 73 43.23 73 41.42 73 40.41
chnrosnb 50 50 0.09 50 0.09 50 0.10 50 0.09
errinros 50 50 0.09 50 0.10 50 0.09 50 0.10
hatfldg 50 16 0.03 16 0.02 16 0.03 16 0.03
hilbertb 50 10 0.10 10 0.10 10 0.10 10 0.10
s293 50 9 0.01 9 0.01 9 0.02 9 0.02
s298 50 98 0.17 98 0.17 98 0.17 98 0.17
s301 50 17 0.03 17 0.03 17 0.03 17 0.03
s304 50 21 0.07 21 0.07 21 0.08 21 0.07


                LOQO        FB          FO          FP
Problem   m+n   Iter Time   Iter Time   Iter Time   Iter Time
tointqor 50 11 0.02 11 0.02 11 0.02 11 0.02
deconvu 51 137 1.21 137 1.19 137 1.18 137 1.19
disc2 51 47 0.10 29 0.06 29 0.06 29 0.06
s395 51 22 0.07 22 0.07 22 0.06 22 0.07
ssnlbeam 51 57 0.11 64 0.13 64 0.13 64 0.13
s392 55 29 0.04 29 0.04 29 0.04 29 0.05
himmelbj 57 114 0.32 114 0.32 114 0.32 167 0.74
optprloc 59 25 0.04 23 0.04 23 0.05 23 0.04
makela4 61 12 0.02 12 0.01 12 0.02 12 0.01
loadbal 62 24 0.04 23 0.04 23 0.04 23 0.04
avion2 64 68 0.20 78 0.22 78 0.22 78 0.21
dallass 73 56 0.16 43 0.10 45 0.09 49 0.11
gridnetg 78 13 0.04 12 0.03 12 0.04 12 0.04
coshfun 81 29 0.07 25 0.06 25 0.06 21 0.05
dnieper 81 25 0.09 25 0.09 25 0.09 25 0.09
prodpl0 89 22 0.04 23 0.04 23 0.04 23 0.04
prodpl1 89 20 0.04 20 0.04 20 0.03 20 0.03
hadamals 90 692 20.72 692 21.39 692 21.07 692 20.71
model 92 15 0.02 15 0.03 15 0.03 15 0.03
gridneth 97 12 0.04 12 0.04 12 0.03 12 0.04
gridneti 97 15 0.04 15 0.05 15 0.05 16 0.06
discs 99 389 3.02 62 0.30 126 0.75 59 0.26
arglina 100 9 1.60 9 1.52 9 1.51 9 1.33
fletchcr 100 51 0.13 51 0.15 51 0.14 51 0.15
harkerp2 100 30 0.64 30 0.64 30 0.64 30 0.64
mancino 100 19 4.52 19 4.25 19 4.33 19 3.88
penalty2 100 22 0.25 22 0.25 22 0.26 22 0.25
s299 100 174 0.47 174 0.50 174 0.50 174 0.51
s302 100 19 0.04 19 0.05 19 0.05 19 0.05
s305 100 25 0.27 25 0.26 25 0.27 25 0.27
s368cute 100 10 3.08 10 2.96 10 3.05 10 3.13
vardim 100 34 0.35 34 0.34 34 0.35 34 0.35
goffin 101 13 0.11 13 0.11 13 0.10 13 0.11
s332a 102 30 0.10 30 0.11 30 0.10 30 0.11
linspanh 104 12 0.02 12 0.02 12 0.02 12 0.02
spanhyd 104 42 0.27 63 0.47 63 0.45 32 0.21
expfitb 106 31 0.08 31 0.08 31 0.08 31 0.08
zigzag 108 28 0.06 29 0.08 29 0.07 29 0.08
eigena 110 24 0.68 24 0.62 24 0.61 24 0.61
eigenals 110 34 3.09 34 3.09 34 3.08 34 3.06


                LOQO        FB          FO          FP
Problem   m+n   Iter Time   Iter Time   Iter Time   Iter Time

eigenb 110 84 2.45 84 2.42 84 2.41 84 2.38
eigenbls 110 83 6.57 83 5.94 83 5.94 83 5.88
himmelbi 112 26 0.06 26 0.06 26 0.07 26 0.06
batch 115 62 0.12 62 0.13 62 0.12 62 0.12
core1 115 81 0.22 48 0.14 48 0.14 48 0.14
explin 120 23 0.04 23 0.05 23 0.05 23 0.05
explin2 120 24 0.05 24 0.05 24 0.05 24 0.05
expquad 120 25 0.08 25 0.09 25 0.08 25 0.09
qrtquad 120 26 0.10 26 0.09 26 0.10 26 0.09
optmass 121 17 0.07 15 0.06 15 0.06 15 0.05
airport 126 20 0.22 20 0.22 20 0.22 20 0.22
qr3dls 155 52 4.34 47 3.64 47 3.61 48 4.10
eigena2 165 30 0.60 15 0.28 15 0.28 15 0.28
eigenaco 165 21 0.69 21 0.71 21 0.69 21 0.70
eigenb2 165 52 1.66 52 1.66 52 1.64 52 1.63
lakes 168 281 1.02 281 1.07 281 1.03 281 1.02
swopf 173 23 0.09 20 0.08 17 0.07 20 0.08
smbank 181 27 0.12 27 0.12 27 0.11 27 0.12
optctrl3 198 36 0.34 36 0.35 36 0.32 36 0.33
optctrl6 198 36 0.33 36 0.35 36 0.33 36 0.33
argtrig 200 7 0.57 7 0.52 7 0.54 7 0.53
chandheq 200 20 2.29 20 2.18 20 2.10 20 2.01
integreq 200 8 0.93 8 0.77 8 0.75 8 0.72
eigmaxa 202 48 0.24 44 0.23 44 0.22 44 0.23
eigmaxb 202 20 0.17 16 0.13 16 0.13 17 0.11
eigmina 202 45 0.20 44 0.21 44 0.21 46 0.23
eigminb 202 11 0.09 11 0.09 11 0.08 11 0.09
grouping 225 15 0.13 15 0.12 15 0.12 15 0.13
haifam 235 26 0.29 24 0.26 26 0.29 26 0.29
madsschj 239 24 1.08 24 1.08 24 1.06 24 1.07
sseblin 264 11 0.04 11 0.05 11 0.04 11 0.04
qpcboei2 268 103 0.53 101 0.55 101 0.52 97 0.49
qpnboei2 268 175 1.75 182 1.92 182 1.85 199 2.18
dallasm 283 77 0.48 70 0.44 70 0.42 70 0.43
ssebnln 288 54 0.23 66 0.36 66 0.33 66 0.34
reading3 304 51 0.69 50 0.66 50 0.64 50 0.64
hadamard 321 12 0.14 15 0.15 15 0.15 15 0.15
hanging 468 17 0.27 17 0.27 17 0.27 17 0.26
probpenl 500 17 24.21 17 24.34 17 24.42 17 24.04
pt 503 16 0.10 16 0.11 16 0.10 16 0.10


                LOQO        FB          FO          FP
Problem   m+n   Iter Time   Iter Time   Iter Time   Iter Time
expfitc 506 46 0.45 46 0.45 46 0.44 46 0.45
steenbra 540 22 3.32 22 3.34 22 3.33 22 3.32
steenbrb 576 142 40.83 188 48.57 165 45.27 165 45.35
dittert 591 137 21.54 137 22.05 137 21.49 137 21.58
lch 601 26 45.63 26 45.74 26 45.29 26 44.74
flosp2hh 650 37 7.21 37 6.91 37 6.85 37 6.72
flosp2hl 650 12 1.55 12 1.32 12 1.33 12 1.28
flosp2hm 650 12 1.00 12 0.82 12 0.84 12 0.81
flosp2th 650 13 1.09 13 1.00 13 0.95 13 0.95
flosp2tl 650 13 0.96 13 0.91 13 0.90 13 0.88
flosp2tm 650 13 1.11 13 0.95 13 0.95 13 0.92
qpcboei1 660 76 1.00 72 1.01 72 0.93 72 0.95
catenary 662 38 0.70 39 0.74 39 0.70 39 0.70
eigenc2 693 48 74.48 48 74.93 48 74.95 48 72.16
qpcstair 741 308 6.51 308 7.28 308 6.24 309 6.19
qpnstair 741 405 18.06 350 15.80 348 14.63 348 14.53
gpp 748 19 7.30 19 6.26 19 6.08 19 5.90
orthrega 773 63 2.84 61 2.67 61 2.65 61 2.55
britgas 810 15 0.76 15 0.76 15 0.76 15 0.74
bdqrtic 1000 13 0.64 13 0.51 13 0.47 13 0.49
biggsb1 1000 36 0.66 29 0.46 29 0.43 29 0.44
chainwoo 1000 60 1.67 60 1.70 60 1.62 60 1.67
chenhark 1000 18 0.26 18 0.27 18 0.26 18 0.26
penalty1 1000 56 336.65 56 343.39 54 321.26 56 326.01
pentdi 1000 21 0.54 21 0.53 21 0.52 20 0.48
power 1000 13 0.20 13 0.20 13 0.17 13 0.19
scon1dls 1000 292 15.36 292 16.21 324 18.70 292 15.41
sensors 1000 40 1809.38 40 1813.52 73 3522.62 40 1809.16
bratu1d 1001 14 0.52 14 0.64 14 0.51 14 0.49
gilbert 1001 36 410.80 35 363.06 35 351.14 35 351.04
oet1 1005 16 0.23 16 0.24 16 0.24 16 0.24
oet2 1005 200 4.08 178 3.49 187 3.64 178 3.38
oet3 1006 17 0.27 17 0.29 17 0.29 17 0.29
ksip 1020 47 1.91 47 1.86 47 1.82 47 1.81
fminsurf 1024 55 403.29 55 367.72 55 353.79 55 344.01
msqrtals 1024 33 835.85 33 867.88 33 820.83 33 800.83
msqrtbls 1024 26 752.34 26 773.65 26 747.00 26 741.91
gouldqp2 1048 26 0.42 26 0.44 26 0.40 26 0.44
gouldqp3 1048 17 0.31 17 0.32 17 0.30 17 0.30
dtoc1nd 1225 31 9.32 22 4.97 22 4.75 23 4.63


                LOQO        FB          FO          FP
Problem   m+n   Iter Time   Iter Time   Iter Time   Iter Time
ncvxqp4 1250 399 188.69 398 190.30 398 188.28 398 184.97
sawpath 1371 83 2.98 90 3.49 84 3.05 90 3.20
dallasl 1435 40 1.27 58 1.83 58 1.76 58 1.74
cvxqp1 1500 29 6.90 29 6.99 29 6.86 29 6.74
mosarqp2 1500 22 0.91 22 0.92 22 0.89 22 0.89
ncvxqp1 1500 422 358.86 420 355.11 420 347.42 420 347.25
ncvxqp2 1500 217 182.88 207 170.21 207 167.91 207 165.88
ncvxqp3 1500 78 56.04 78 57.38 78 55.86 78 54.58
ncvxqp7 1750 459 451.32 459 453.52 459 446.70 459 446.14
ncvxqp8 1750 189 176.02 189 174.61 189 175.13 189 280.31
cbratu2d 1764 8 1.05 8 1.06 8 1.01 8 1.00
optcdeg2 1997 74 3.82 75 4.01 74 3.68 71 3.44
optcdeg3 1997 55 2.92 55 3.00 54 2.78 56 2.79
sinrosnb 1999 6 0.46 6 0.48 6 0.46 6 0.47
edensch 2000 11 0.79 11 0.61 11 0.56 11 0.58
powell20 2000 20 0.59 20 0.55 20 0.52 20 0.51
semicon2 2000 50 4.53 49 4.28 50 4.22 49 4.05
cbratu3d 2048 8 3.57 8 3.48 8 3.39 8 3.29
msqrta 2048 15 269.22 15 252.78 15 245.72 15 237.73
msqrtb 2048 16 291.77 16 279.02 16 267.76 16 261.61
dtoc1na 2475 11 7.53 11 6.82 11 6.77 11 6.37
dtoc1nb 2475 10 7.11 10 6.19 10 6.16 10 5.63
dtoc1nc 2475 11 8.80 16 11.43 16 11.43 16 10.44
clnlbeam 2499 107 12.12 105 10.12 105 9.80 105 9.61
bigbank 2587 36 2.13 36 1.93 36 1.79 36 1.81
dixmaana 3000 12 1.03 12 0.94 12 0.92 12 0.91
dixmaanb 3000 13 2.18 13 1.99 13 1.92 13 1.81
dixmaanc 3000 14 2.34 14 2.36 14 2.16 14 2.02
dixmaand 3000 15 2.56 15 2.36 15 2.20 15 2.08
dixmaane 3000 18 1.63 18 1.61 18 1.52 18 1.52
dixmaanf 3000 18 3.56 18 3.29 18 3.14 18 2.93
dixmaang 3000 20 3.70 20 3.51 20 3.33 20 3.17
dixmaanh 3000 22 5.22 22 4.23 22 4.04 22 3.83
dixmaani 3000 18 1.78 18 1.67 18 1.55 18 1.57
dixmaanj 3000 22 4.94 22 4.25 22 3.89 22 3.73
dixmaank 3000 24 5.52 24 4.49 24 4.19 24 4.03
dixmaanl 3000 24 5.81 24 4.55 24 4.18 24 4.08
bloweya 3004 76 5.19 77 5.42 74 4.71 74 4.82
bloweyb 3004 135 8.82 134 9.11 134 8.23 134 8.38
blockqp1 3006 18 1.25 18 1.08 18 1.00 18 1.00


                LOQO        FB          FO          FP
Problem   m+n   Iter Time   Iter Time   Iter Time   Iter Time
blockqp2 3006 12 0.81 12 0.76 12 0.73 12 0.72
blockqp3 3006 33 2.15 32 1.92 32 1.80 32 1.76
blockqp4 3006 18 1.16 18 1.15 18 1.06 18 1.06
blockqp5 3006 33 2.12 34 2.01 34 1.84 34 1.83
mosarqp1 3200 18 1.07 18 1.06 18 1.01 18 1.03
yao 3999 214 12.92 207 13.68 207 12.19 194 16.56
aug3d 4873 10 6.42 10 6.34 10 6.34 10 6.09
aug3dc 4873 10 6.53 10 6.45 10 6.43 10 6.17
aug3dcqp 4873 23 3.19 24 3.31 22 2.76 22 2.65
aug3dqp 4873 27 3.33 26 3.23 27 3.15 25 2.84
clplatea 4970 10 3.07 10 3.11 10 2.89 9 2.43
clplateb 4970 12 3.70 12 3.66 12 3.64 12 3.29
clplatec 4970 9 2.96 9 2.74 9 2.64 9 2.47
arwhead 5000 23 4.89 14 1.76 13 1.57 14 1.63
bdexp 5000 33 4.87 33 4.22 33 3.88 33 3.88
brybnd 5000 15 8.18 15 8.38 15 7.96 15 7.51
cragglvy 5000 17 2.82 17 2.87 17 2.78 17 2.70
dqdrtic 5000 12 1.40 12 1.19 12 1.09 12 1.10
dqrtic 5000 64 6.47 64 5.40 64 5.00 64 4.91
engval1 5000 16 2.11 16 2.18 12 1.21 15 1.86
freuroth 5000 21 8.39 14 3.15 14 2.87 14 2.88
morebv 5000 8 1.37 8 1.19 8 1.23 8 1.16
sipow2 5002 16 1.63 16 1.61 16 1.54 16 1.57
sipow2m 5002 16 1.59 16 1.60 16 1.54 16 1.57
orthrdm2 6003 402 308.16 418 316.77 391 282.24 195 155.50
gridnetd 6589 27 10.52 28 8.26 28 7.95 27 6.67
bratu3d 6750 11 69.21 11 68.32 11 67.53 11 65.80
bratu2d 9800 10 20.60 10 19.81 10 19.68 10 17.24
bratu2dt 9800 12 24.15 12 25.13 12 24.18 12 24.13
porous1 9800 15 33.52 15 33.59 15 33.25 15 29.67
porous2 9800 13 29.19 13 29.12 13 28.58 13 25.84
dtoc2 9990 17 28.35 16 26.90 16 25.89 15 20.13
nondia 9999 13 4.97 13 4.60 13 4.49 13 4.38
artif 10000 24 36.03 24 34.98 24 34.62 24 29.53
bdvalue 10000 9 13.08 9 12.71 9 12.31 9 10.91
broydnbd 10000 11 20.47 11 20.20 11 20.31 11 19.02
cosine 10000 13 4.05 13 3.96 13 3.72 13 3.66
curly10 10000 18 16.60 18 14.93 18 14.64 18 13.94
curly20 10000 18 29.56 18 29.29 18 28.06 18 27.72
curly30 10000 18 47.25 18 46.63 18 45.72 18 44.81


                LOQO        FB          FO          FP
Problem   m+n   Iter Time   Iter Time   Iter Time   Iter Time
cvxbqp1 10000 18 40.24 17 37.71 17 38.26 17 37.60
liarwhd 10000 22 8.96 22 8.14 22 7.81 22 7.27
nondquar 10000 24 6.46 24 6.83 24 6.57 24 6.49
nonscomp 10000 33 9.59 33 9.04 32 8.18 33 8.00
quartc 10000 69 12.84 69 12.67 69 11.81 69 11.47
scosine 10000 80 28.04 80 27.56 80 25.97 80 25.93
scurly10 10000 103 114.56 103 106.47 103 106.27 103 101.55
scurly20 10000 94 180.96 94 181.49 94 175.43 94 170.85
scurly30 10000 88 271.08 88 268.50 88 263.75 88 256.64
sinquad 10000 70 56.80 70 56.12 70 54.33 83 55.33
srosenbr 10000 28 6.87 27 5.25 27 4.82 27 4.85
svanberg 10000 20 39.50 20 38.48 20 38.37 20 34.91
tridia 10000 12 2.12 12 2.24 12 1.94 12 2.04
woods 10000 48 13.96 48 12.35 48 11.76 48 11.10
hues-mod 10002 280 60.40 229 41.85 230 40.85 229 37.84
huestis 10002 63 12.74 98 23.19 98 22.97 34 8.12
sipow1 10002 17 8.11 17 4.86 17 4.70 17 4.77
sipow1m 10002 16 8.39 16 4.69 16 4.58 16 4.70
sipow3 10002 19 8.22 19 5.93 19 5.72 19 5.69
tfi2 10003 18 6.57 18 6.61 18 6.42 18 6.52
sipow4 10004 19 7.27 19 7.23 19 7.02 19 7.04
gridnetc 11408 38 18.19 35 15.53 35 14.61 34 13.61
gridnete 11409 18 16.87 15 12.19 15 11.83 14 9.46
gridnetf 11409 35 31.70 33 24.82 33 23.35 32 20.54
cvxqp2 12500 25 1548.13 25 1668.78 25 1511.51 25 1379.50
brainpc0 13805 21 68.46 21 71.92 21 69.77 21 69.58
brainpc1 13805 34 108.45 33 110.70 33 107.00 50 162.92
brainpc4 13805 27 86.40 25 83.90 26 85.89 25 82.18
dtoc5 14997 19 56.40 18 53.27 18 53.48 17 40.88
dtoc6 15000 26 80.52 23 66.56 23 65.71 23 56.75
hager1 15000 10 2.50 10 2.67 10 2.71 10 2.58
hager2 15000 10 4.02 10 4.25 10 4.20 10 4.27
hager3 15000 14 9.51 14 10.38 14 9.24 12 7.12
hager4 15000 17 7.04 17 7.91 17 7.72 17 7.38
sreadin3 15000 30 95.37 25 76.55 27 83.47 25 80.75
reading1 15001 225 764.21 33 102.74 38 117.17 33 103.12
orthregc 15005 23 111.15 17 73.93 17 74.08 16 65.75
lminsurf 15129 118 341.18 118 337.70 118 336.82 118 337.00
gridneta 15688 28 10.17 27 9.19 27 8.31 26 7.18
corkscrw 15997 31 26.33 37 30.54 37 29.54 37 27.13


                LOQO        FB          FO          FP
Problem   m+n   Iter Time   Iter Time   Iter Time   Iter Time

broydn3d 20000 11 59.67 11 60.15 11 60.54 11 50.70
liswet1 20002 17 5.81 15 5.46 15 5.04 15 5.00
liswet10 20002 14 5.03 10 3.49 10 3.28 10 3.47
liswet11 20002 15 6.03 10 3.64 10 3.35 10 3.38
liswet12 20002 15 6.07 11 3.98 11 3.63 11 3.68
liswet2 20002 12 4.14 10 3.67 10 3.43 10 3.45
liswet3 20002 38 13.60 30 10.21 30 9.35 30 9.11
liswet4 20002 45 16.36 45 15.43 45 14.59 45 13.78
liswet5 20002 33 11.02 33 11.43 35 11.05 35 10.48
liswet6 20002 47 15.65 41 13.94 41 12.84 41 12.26
liswet7 20002 10 3.32 10 3.56 10 3.36 10 3.39
liswet8 20002 10 3.37 10 3.63 10 3.41 10 3.36
liswet9 20002 10 3.41 10 3.50 10 3.33 10 3.36
gridnetb 20008 19 22.48 16 15.83 16 15.19 15 13.07
dtoc1l 24975 20 27.25 11 11.06 11 10.75 11 10.40
dtoc3 24993 45 28.48 44 30.25 44 28.32 43 21.88
dtoc4 24993 19 88.20 19 84.82 19 88.70 18 72.57
reading2 25001 24 12.22 24 11.53 24 10.76 24 10.46
ubh1 29997 35 27.16 35 28.64 35 26.90 35 27.64
sosqp2 30001 18 16.83 18 17.40 18 16.99 18 17.21
trainf 30002 81 478.06 77 464.47 81 508.74 76 468.31
aug2d 30188 16 28.87 16 27.33 16 26.01 16 24.97
aug2dqp 30188 27 24.06 27 23.04 27 21.66 27 20.99
aug2dc 30196 16 30.76 16 28.32 16 27.46 16 25.93
aug2dcqp 30196 27 22.40 27 22.56 27 21.57 27 20.39
ubh5 33997 272 802.35 272 802.89 272 798.07 272 794.05
mccormck 50000 10 18.01 10 18.20 10 17.42 10 17.01


APPENDIX B

Solving SDPs using AMPL.

1. The SDP model.

function kth_diag;

param K;
param N{1..K};
param M;
param b{1..M} default 0;

set C_index{k in 1..K} within {1..N[k], 1..N[k]};
set A_index{k in 1..K, m in 1..M} within {1..N[k], 1..N[k]};

param C{k in 1..K, (i,j) in C_index[k]};
param A{k in 1..K, m in 1..M, (i,j) in A_index[k,m]};

param eps := 1e-6;

var X{k in 1..K, i in 1..N[k], j in i..N[k]} :=
    if (i == j) then 1 else 0;
var y{1..M} := 1;

maximize f:
    sum {i in 1..M} y[i]*b[i];

subject to lin_cons1{k in 1..K, (i,j) in C_index[k]}:
    C[k,i,j] -
    sum {m in 1..M: (i,j) in A_index[k,m]} A[k,m,i,j]*y[m]
    = X[k,i,j];

subject to lin_cons2{k in 1..K, i in 1..N[k],
        j in i..N[k]: !((i,j) in C_index[k])}:
    -sum {m in 1..M: (i,j) in A_index[k,m]} A[k,m,i,j]*y[m]
    = X[k,i,j];

subject to sdp_cons{k in 1..K, kk in 1..N[k]}:
    kth_diag({i in 1..N[k], j in 1..N[k]: j >= i}
        X[k,i,j], kk, N[k]) >= eps;

option presolve 0;
option loqo_options "sigfig=4 convex sdp iterlim=200
    verbose=2 timing=1 primal";

data sdp.dat;

solve;


2. The AMPL function definition.

static real

kth_diag(register arglist *al)

/* kth diagonal element of D in LDL^T */

/* must be called in order: 1,2,...,n */

real z, diagval;

real *d, *h, *ra;

int *at, i, ii, j, jj, k, m, n, N;

static int k0=-1;

char *se, *sym;

AmplExports *ae = al->AE;

static real **A=NULL, Ident, **A0=NULL;

static int convex_flag=1;

FILE *f1;

N = al->n;

if (N <= 2) return 0;

n = (int)al->ra[N-1];

k0 = (int)al->ra[N-2];

k0--;

if (k0 == 0)

if ((f1 = fopen("sdpblocks.loqo", "r")) != NULL)

fclose(f1);

f1 = fopen("sdpblocks.loqo", "a");

fprintf(f1, "%d \n", n);

fclose(f1);

if (n*(n+1)/2 != N-2)

fprintf(Stderr,

"should be kth_diag(i in 1..n, j in 1..n: j>=i A[i,j],k,n)\n");

fflush(Stderr);

at = al->at;

ra = al->ra;

d = (real *)al->derivs;

h = al->hes;

if (A == NULL)

REALLOC( A, n, real *);

A[0] = NULL;

else

REALLOC( A, n, real *);

Page 165: INTERIOR-POINT METHODS FOR NONLINEAR, SECOND-ORDER …hvb22/thesis12.pdf · Interior-point methods have been a re-emerging field in optimization since the mid-1980s. We will present

2. THE AMPL FUNCTION DEFINITION. 155

REALLOC( A[0], 2*n*n, real);

for (i=0, k=0; i<n; k+=2*n, i++) A[i] = A[0]+k;

if (A0 == NULL)

REALLOC( A0, n, real *);

A0[0] = NULL;

else

REALLOC( A0, n, real *);

REALLOC( A0[0], n*n, real);

for (i=0, k=0; i<n; k+=n, i++) A0[i] = A0[0]+k;

if (k0 == 0)

k = 0;

for(i=0; i<n; i++)

for (j=i; j<n; j++)

jj = at[k];

if (jj >= 0)

A[i][j] = ra[jj];

else

A[i][j] = z = strtod(sym = al->sa[-(jj+1)], &se);

if (*se)

fprintf(Stderr,

"kth_diag treating arg %d = \"%s\" as %.g\n",

i+1, sym, z);

fflush(Stderr);

A[j][i] = A[i][j];

k++;

for(j=n; j<2*n; j++)

A[i][j] = 0;

A[i][n+i] = 1;

for (i=0; i<n; i++)

for (j=0; j<n; j++)

A0[i][j] = A[i][j];

if (d)

m = 0;

for (i=0; i<n; i++)

Page 166: INTERIOR-POINT METHODS FOR NONLINEAR, SECOND-ORDER …hvb22/thesis12.pdf · Interior-point methods have been a re-emerging field in optimization since the mid-1980s. We will present

2. THE AMPL FUNCTION DEFINITION. 156

for (j=i; j<n; j++)

if (i<k0 && j<k0)

d[m] = A[i][k0]*A[j][k0];

if (j!=i) d[m] *= 2;

else

if (i<k0 && j==k0)

d[m] = -2*A[i][k0];

else

if (i==k0 && j==k0)

d[m] = 1;

else

d[m] = 0;

m++;

    if (h) {
        m = 0;
        for (ii=0; ii<n; ii++)
        for (jj=ii; jj<n; jj++)
        for (i=0; i<=ii; i++)
        for (j=i; j<=(i==ii?jj:n-1); j++) {
            if (i<k0 && j<k0 && ii<k0 && jj<k0) {
                if (i!=j && ii!=jj)
                    h[m] = - A[ii][k0]*A[jj][n+i ]*A[j ][k0]
                           - A[i ][k0]*A[j ][n+ii]*A[jj][k0]
                           - A[ii][k0]*A[jj][n+j ]*A[i ][k0]
                           - A[j ][k0]*A[i ][n+ii]*A[jj][k0]
                           - A[jj][k0]*A[ii][n+i ]*A[j ][k0]
                           - A[i ][k0]*A[j ][n+jj]*A[ii][k0]
                           - A[jj][k0]*A[ii][n+j ]*A[i ][k0]
                           - A[j ][k0]*A[i ][n+jj]*A[ii][k0];
                else if (i!=j && ii==jj)
                    h[m] = - A[ii][k0]*A[jj][n+i ]*A[j ][k0]
                           - A[i ][k0]*A[j ][n+ii]*A[jj][k0]
                           - A[ii][k0]*A[jj][n+j ]*A[i ][k0]
                           - A[j ][k0]*A[i ][n+ii]*A[jj][k0];
                else if (i==j && ii!=jj)
                    h[m] = - A[ii][k0]*A[jj][n+i ]*A[j ][k0]
                           - A[i ][k0]*A[j ][n+ii]*A[jj][k0]
                           - A[jj][k0]*A[ii][n+i ]*A[j ][k0]
                           - A[i ][k0]*A[j ][n+jj]*A[ii][k0];
                else if (i==j && ii==jj)
                    h[m] = - A[ii][k0]*A[jj][n+i ]*A[j ][k0]
                           - A[i ][k0]*A[j ][n+ii]*A[jj][k0];
            }
            else if (i<k0 && j<k0 && ii<k0 && jj==k0) {
                h[m] = A[i][k0]*A[j][n+ii]
                     + A[j][k0]*A[ii][n+i];
                if (i!=j) h[m] *= 2;
            }
            else if (i<k0 && j==k0 && ii<k0 && jj<k0) {
                h[m] = A[ii][k0]*A[jj][n+i]
                     + A[jj][k0]*A[i][n+ii];
                if (ii!=jj) h[m] *= 2;
            }
            else if (i<k0 && j==k0 && ii<k0 && jj==k0)
                h[m] = - A[ii][n+i ] - A[i ][n+ii];
            else
                h[m] = 0;
            m++;
        }
    }

    for (i=0; i<k0; i++) {
        for (j=k0; j<n+k0; j++)
            A[k0][j] -= A[k0][i]*A[i][j];
        A[k0][i] = 0;
    }
    diagval = A[k0][k0];
    for (j=k0; j<=n+k0; j++)
        A[k0][j] /= diagval;
    for (i=0; i<k0; i++) {
        for (j=k0+1; j<=n+k0; j++)
            A[i][j] -= A[i][k0]*A[k0][j];
        A[i][k0] = 0;
    }
    if (diagval < -1.0e-12) convex_flag = 0;
    if (k0 == n-1) {
        if (convex_flag == 0)
            fprintf(Stderr, "matrix X not positive semidefinite\n");
        convex_flag = 1;
    }
    return diagval;
}


3. Step-shortening in LOQO.

static double sdpsteplen(
    LOQO   *vlp,
    double *x,
    double *dx,
    double alphap
)
{
    int i, j, k, n, N, s, curx;
    static double **A = NULL;
    double Aij, Ajj;
    FILE *f1;
    LOQO *lp = (LOQO *) vlp;

    if (lp->sdpblock == NULL || lp->sdpnum == 0) {
        if ((f1 = fopen("sdpblocks.loqo", "r")) != NULL) {
            while (!feof(f1)) {
                fscanf(f1, "%d", &i);
                lp->sdpnum++;
            }
            lp->sdpnum--;
            REALLOC(lp->sdpblock, lp->sdpnum, int);
            fclose(f1);
            f1 = fopen("sdpblocks.loqo", "r");
            j = 0;
            fscanf(f1, "%d", &i);
            while (!feof(f1)) {
                lp->sdpblock[j] = i;
                fscanf(f1, "%d", &i);
                j++;
            }
            fclose(f1);
            system("rm -f sdpblocks.loqo");
        }
        else
            fprintf(stderr, "not an SDP \n");
    }

    curx = 0;
    for (s=0; s<lp->sdpnum; s++) {
        n = lp->sdpblock[s];
        if (A == NULL) {
            REALLOC( A, n, double *);
            A[0] = NULL;
        }
        else
            REALLOC( A, n, double *);


        REALLOC( A[0], n*n, double);
        for (i=0, k=0; i<n; k+=n, i++) A[i] = A[0]+k;
    start:
        k = curx;
        for (i=0; i<n; i++)
            for (j=i; j<n; j++) {
                A[i][j] = x[k] + dx[k]/alphap;
                A[j][i] = x[k] + dx[k]/alphap;
                k++;
            }
        for (j=0; j<n; j++) {
            Ajj = A[j][j];
            if (Ajj < -1e-12) {
                alphap *= 2;
                goto start;
            }
            for (i=j+1; i<n; i++) {
                Aij = A[i][j];
                for (k=j+1; k<n; k++)
                    A[i][k] -= Aij*A[j][k]/Ajj;
            }
            for (i=j+1; i<n; i++) {
                A[i][j] /= Ajj;
                A[j][i] /= Ajj;
            }
        }
        curx += n*(n+1)/2;
    }
    FREE(A);
    return alphap;
}

