EE 227C: Convex Optimization and Approximation

A Platform for Solving Constraint Satisfaction Problems via Semidefinite Programming

Authors:

Riley Murray, Paul Anderson

Instructor:

Benjamin Recht

University of California, Berkeley

May 12, 2016


Abstract

A Constraint Satisfaction Problem (CSP) is a discrete optimization problem consisting of a set of variables (which take on values in a finite domain) and a set of constraints (indicator functions defined on a subset of variables of some specified cardinality). The objective in a CSP is to assign each variable a value in the domain so that the largest number of constraints is satisfied.

CSP's subsume a wide range of fundamental combinatorial optimization problems, including Max k-SAT, graph coloring (approximately defined), and Unique Games.

Significant effort has been dedicated to developing approximation algorithms for CSP's. The most general of these algorithms involve solving a canonical SDP relaxation of the CSP (called "Basic SDP" [Rag08]) and use extremely sophisticated post-processing of SDP vectors to determine an assignment of variables. Unfortunately, as was shown in [DMS15], these algorithms require impossibly powerful machines.

In this work, we deploy approximation algorithms rooted in theory but driven by practical considerations to approximate CSP's with the machines of today and the near future.


Contents

1  Introduction
   1.1  Our Contributions

2  CSP's for Ramsey Theory

3  An Overview of Basic SDP

4  Rounding Schemes for Basic SDP (principled and heuristic)
   4.1  The Variable Folding Method
   4.2  SDP Vector Clustering

5  Practical Considerations in Solving Large SDP's

6  Experimental Results
   6.1  Graph Coloring
   6.2  SAT and the Pigeonhole Principle

7  Conclusion

References


1 Introduction

CSP's are concerned with a set of variables, V, taking values in a finite domain D, with consideration to a set of constraints C. We emphasize that while most optimization literature considers "constraints" inviolable, this is not the case for CSP's. In fact, the objective of a CSP is to satisfy as many constraints as possible, and an optimal solution may well satisfy only a small portion of these constraints. We say that a CSP is satisfiable if there exists an assignment of variables for which all constraints are satisfied. Inviolable constraints do exist within the CSP framework, but those constraints are only that each v ∈ V takes a value in D. This established, we discuss the constraints of a CSP in more detail.

Let Ω_D^k be the set of all functions on k or fewer variables (each taking values in D) with range {0, 1}. A constraint C_i for a CSP over variable set V is any function in Ω_D^k defined on some S ⊂ V with |S| ≤ k. We refer to S as the scope of the constraint, and to |S| as the arity of the constraint. The maximum arity of all constraints in a CSP is considered a fixed parameter (in the fixed-parameter tractability sense). In view of this, we simplify exposition by assuming that all constraints are of arity k.¹

While the generality of this framework is useful, it can appear a bit opaque. Figure 1 aims to consolidate these structures to make more clear how a CSP instance is constructed. To bring us back to still more familiar territory, we can divide CSP's into well known classes of problems by drawing the constraint functions from some Γ ⊂ Ω_D^k. Table 1 presents such classes of problems for different Γ, D, and k.

Figure 1: A diagram showing how different objects in the CSP framework come together to define a CSP instance C. Objects at the tail of an arrow play a defining role for the objects at the head of the same arrow.

k    D                Γ                                   Problem
2    {0, 1}           ≠                                   Max-Cut
3    {0, 1}           All disjunctions on ≤ 3 literals    Max 3-SAT
10   {0, 1}           ¬[All-Equal]                        Finding Ramsey(5,5)
2    {0, 1, ..., q-1} ≠                                   Graph Coloring

Table 1: How the CSP framework can be restricted to result in a variety of heavily studied problem classes.

¹ Our software package for CSP approximation does not make this assumption.
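To make the framework concrete, here is a minimal sketch of how a tiny instance from Table 1 (Max-Cut on a triangle, i.e. k = 2, D = {0, 1}, Γ = {≠}) could be encoded and evaluated in plain Matlab. The field names (vars, domain, constraints, scope, relation) are illustrative only and are not the API of our software package.

```matlab
% A minimal sketch (not our package's actual API): Max-Cut on a triangle,
% phrased as a CSP with k = 2, D = {0,1}, and the not-equal relation.
csp.vars   = 1:3;              % one variable per vertex
csp.domain = [0 1];            % each vertex is assigned to one of two sides
edges      = [1 2; 2 3; 1 3];

csp.constraints = struct('scope', {}, 'relation', {});
for e = 1:size(edges, 1)
    csp.constraints(e).scope = edges(e, :);
    % relation: an indicator applied to the scoped variables' values
    csp.constraints(e).relation = @(vals) double(vals(1) ~= vals(2));
end

% Evaluate a candidate assignment: the fraction of satisfied constraints.
assignment = [0 1 1];          % values for variables 1, 2, 3
sat = 0;
for e = 1:numel(csp.constraints)
    c   = csp.constraints(e);
    sat = sat + c.relation(assignment(c.scope));
end
fprintf('Fraction of constraints satisfied: %.2f\n', sat / numel(csp.constraints));
```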


Significant effort has been dedicated to developing approximation algorithms for CSP's. The most general of these algorithms involve solving a canonical SDP relaxation of the CSP called "Basic SDP" [Rag08]. This report is concerned with constructing, solving, and rounding solutions for the Basic SDP relaxation of a CSP. Along the way, we present new links between hardness-of-approximation (in the computer algorithms sense) and Ramsey Theory.

1.1 Our Contributions

Our primary contribution is the development of a Matlab-based software package for working with CSP's. Our codes deal with the Basic SDP relaxation for a CSP, rounding of the associated SDP solutions, and solving small-scale CSP's exactly via integer programming. The software package has the following capabilities.

1. Given a Matlab CSP object, we can efficiently construct SDP input parameters in the format used by SDPT3 (a core routine in CVX and Yalmip) and SDPNAL+ (a cutting-edge solver capable of solving SDP's with > 10 million linear equality constraints).

2. Given a Matlab CSP object, we can solve it to arbitrary accuracy with a MIP formulation via the Gurobi Optimization solver.

3. Given the solution to Basic SDP for a given CSP, we can return an assignment of variables for the CSP based on various heuristic adaptations of published (but impractical) algorithms.

The assignment heuristics used are SDP vector clustering and an approximation of the Variable Folding Method [RS09]. We tested this software package on two types of CSP's: graph coloring and Max 3-SAT. Our approximation of the Variable Folding Method is found to perform best, closely approximating or matching the optimal solution while reducing the number of variables in the folded CSP by over 80% in large problems.

Our software package makes use of the Matlab CSP abstraction of [DMS15]. All code for this project can be found at https://github.com/rileyjmurray/ee227cProject.

In addition to these contributions, we address a variety of considerations for semidefinite programming at scale (particularly memory management and linearly independent constraint qualification). Lastly, we demonstrate how hardness-of-approximation relates to Ramsey Theory.

2 CSP’s for Ramsey Theory

Ramsey theory is the study of combinatorial objects in which a certain amount of order must occur as the scale of the object becomes large [Wei16b]. The most well-known topic in Ramsey theory is that of Ramsey numbers. Ramsey numbers deal with questions of the following form:

for fixed n and m, what is the smallest α such that for any 2-coloring of the edges of K_α, there necessarily exists a clique of size n that is monochromatic in the first color, or a clique of size m that is monochromatic in the second color?


This "smallest α" is referred to as "Ramsey of n, m", or simply R(n, m). When n = m, we write R(n). Although it has been proven that for any n, m there exists a finite R(n, m) satisfying the requirements above, the exact values of R(n, m) are largely unknown. For example, R(5) is only known to lie in [43, 49], and this has remained unchanged for the past 20 years [Wei16a].

We now turn to formulating a sequence of CSP's for which optimal solutions would determine R(5), and for which approximate solutions may yield new bounds on the same. Suppose we are interested in testing whether R(5) ≤ 48. Construct the complete graph on 48 vertices, K48, and for each edge in K48 define a variable (in the CSP sense) taking values in {red, blue}. For every induced subgraph on 5 vertices, define a constraint whose scope is the 5-choose-2 edges in this subgraph. Set the relation for this constraint as the not-all-equal operator. Stated in these terms, R(5) ≤ 48 if and only if the optimal solution to this CSP satisfies less than 100% of all constraints (i.e., if the CSP is "not satisfiable").

More generally, one can construct a CSP which asks whether R(n) ≤ L by identifying edge colors of K_L with variables in our CSP, and by identifying the edges of induced subgraphs on n vertices as scopes for our constraints. In all cases the relation on these variables is the not-all-equal operator, and R(n) ≤ L if and only if this CSP is not satisfiable.
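As an illustration (a sketch only, not the code in our package), the following builds the variables, scopes, and relation for the CSP asking whether R(n) ≤ L. We use the toy case n = 3, L = 6 so that it runs instantly; since R(3) = 6, the resulting CSP is unsatisfiable. The helper names (edgeIdx, scopes, notAllEqual) are our own for this example.

```matlab
% Sketch: the CSP asking whether R(n) <= L. Toy sizes n = 3, L = 6 here;
% the R(5) <= 48 construction is identical but has ~1.7 million constraints.
n = 3;  L = 6;

% One CSP variable per edge of K_L, indexed lexicographically.
edges   = nchoosek(1:L, 2);                     % all C(L,2) edges
edgeIdx = containers.Map('KeyType', 'char', 'ValueType', 'double');
for e = 1:size(edges, 1)
    edgeIdx(sprintf('%d-%d', edges(e, 1), edges(e, 2))) = e;
end

% One not-all-equal constraint per induced subgraph on n vertices; its
% scope is the C(n,2) edge variables inside that subgraph.
subsets = nchoosek(1:L, n);
scopes  = zeros(size(subsets, 1), nchoosek(n, 2));
for s = 1:size(subsets, 1)
    pairs = nchoosek(subsets(s, :), 2);
    for p = 1:size(pairs, 1)
        scopes(s, p) = edgeIdx(sprintf('%d-%d', pairs(p, 1), pairs(p, 2)));
    end
end

% Relation applied to a row of edge colors in {0,1} (red/blue).
notAllEqual = @(vals) double(any(vals ~= vals(1)));

% R(n) <= L iff no 2-coloring of the edges satisfies every constraint,
% i.e. iff this CSP is not satisfiable.
```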

Of course, our use of CSP's is posed as a decision problem in a way that could also be modeled with 3-SAT by introduction of appropriate helper variables. Our insight is that while SAT solvers try to find exact solutions, algorithms in the CSP literature are concerned with approximate solutions, and approximate solutions can be used to bound optimal solutions. In particular, if the Basic SDP relaxation for a Ramsey CSP has optimal objective less than 1, then that CSP cannot be satisfiable.

Since the integrality gap of Basic SDP is quite possibly the smallest possible² [Rag08], it is reasonable to use SDP relaxations for CSP's in the manner of one-sided hypothesis tests on Ramsey number bounds.

This is where Ramsey theory and hardness-of-approximation intersect: determining the smallest possible size of an object (i.e., K_α for some α) so that a property of interest is guaranteed to hold represents an increasingly challenging algorithmic task as the proportion of satisfied constraints approaches but does not reach 1.

3 An Overview of Basic SDP

Basic SDP operates on a principle of consistency across local assignments. A local assignment for a constraint C_i is a mapping L_i from S_i to D; there are |D|^k such local assignments for each constraint. A collection of consistent local assignments is any set of mappings L such that for all L_i, L_j ∈ L with v ∈ S_i ∩ S_j, we have L_i(v) = L_j(v).

Basic SDP introduces one linear variable for each local assignment of each constraint (y_i[L]), and establishes coupling constraints between these variables. The coupling constraints are with respect to a matrix variable (which has a probabilistic interpretation as a matrix of second moments of indicator variables; see (4) below). Using this interpretation, Basic SDP optimizes over a set of probability distributions.

² In that there may not exist a mathematical programming relaxation of any subclass of CSP's for which the largest distance between a relaxation's objective and the true optimal objective is smaller than that of Basic SDP.


For a CSP C = (V,D,C) including |C| = m constraints, the Basic SDP of C is as follows.

max_{y ≥ 0, X ⪰ 0}   (1/m) · ∑_{i : C_i ∈ C}  ∑_{L ∈ L_i}  R_i(L) · y_i[L]

s.t.   ∑_{L ∈ L_i} y_i[L] = 1                                        ∀ i : C_i ∈ C                                            (1)

       ∑_{L ∈ L_i : L(v) = ℓ, L(v') = ℓ'} y_i[L] = X_(v,ℓ),(v',ℓ')    ∀ (v, ℓ, v', ℓ') : ∃ C_i = (S_i, f_i) with v, v' ∈ S_i   (2)

       0 ≤ X_(v,ℓ),(v',ℓ') ≤ 1[v ≠ v' or ℓ = ℓ']                                                                              (3)

Here L_i denotes the set of all local assignments of constraint C_i, R_i(L) ∈ {0, 1} indicates whether L satisfies C_i, and 1[·] in (3) is the indicator of its argument (so X_(v,ℓ),(v',ℓ') is forced to 0 when v = v' and ℓ ≠ ℓ').

The probabilistic interpretation arises from the following identifications.

X_(v,ℓ),(v',ℓ') = E[ I_v(ℓ) · I_{v'}(ℓ') ] = P_{L ∼ y_i}[ L(v) = ℓ, L(v') = ℓ' ]    ∀ i : v, v' ∈ S_i    (4)

We note that while Basic SDP has a probabilistic interpretation, this interpretation only serves to motivate the formulation, and is not used in the rounding schemes that follow (either disciplined rounding schemes, or more ad-hoc schemes).
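To make the objects in (1)-(4) concrete, the sketch below enumerates the local assignments of a single arity-3 constraint and evaluates its contribution ∑_{L ∈ L_i} R_i(L) y_i[L] to the objective. The scope, the clause, and the choice of y_i are illustrative assumptions, not data from our experiments.

```matlab
% Sketch: local assignments and objective contribution for one constraint.
D     = [0 1];                  % the domain
q     = numel(D);
k     = 3;                      % arity of this constraint
scope = [4 7 9];                % S_i: the k variables it touches (illustrative)

% Enumerate all q^k local assignments, one per row.
grids = cell(1, k);
[grids{:}] = ndgrid(D);
locals = zeros(q^k, k);
for j = 1:k
    locals(:, j) = grids{j}(:);
end

% R_i: the constraint's indicator, here the 3-SAT clause (x4 OR ~x7 OR x9).
Ri = @(L) double(L(1) == 1 || L(2) == 0 || L(3) == 1);

% y_i: a distribution over local assignments, as required by constraint (1).
yi = ones(q^k, 1) / q^k;        % the uniform distribution, for illustration

contribution = 0;
for r = 1:q^k
    contribution = contribution + Ri(locals(r, :)) * yi(r);
end
fprintf('sum_L R_i(L) y_i[L] = %.4f\n', contribution);   % 7/8 for this clause
```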

4 Rounding Schemes for Basic SDP (principled and heuristic)

Rounding a solution to Basic SDP involves performing a Cholesky (or LDL) factorization of the SDP matrix X → MMᵀ, followed by carefully designed operations on the SDP vectors given by the rows of M. The two most prominent rounding schemes for Basic SDP are the Variable Folding Method³ [RS09] and a peculiar algorithm introduced by Raghavendra [Rag08] and analyzed under the name "UGDFS" by Dwivedi et al. [DMS15]. Both of these rounding schemes technically have polynomial runtime, but the constant factors in their runtime complexity are so large that the authors state the constant, but not the polynomial.

As shown in Dwivedi et al. [DMS15], there is no hope of implementing UGDFS on computers of today or the foreseeable future. The Variable Folding Method, on the other hand, can be simplified while retaining its core idea. Our software package implements a simplified version of the Variable Folding Method on the recommendation of Prasad Raghavendra.

4.1 The Variable Folding Method

The Variable Folding Method (VFM) is a rounding scheme for Basic SDP introduced in [RS09]. Given the SDP vectors in the matrix M (from the Cholesky decomposition of X), VFM projects the vectors onto a random subspace of dimension β. Once the SDP vectors are projected onto this random subspace, they are classified over an ε-net of the unit ball in R^β.⁴ Variables whose associated SDP vectors are classified in identical ways are merged, and a new CSP is defined by this variable-merging process. The new CSP is called a "folding" of the original CSP and needs to be solved by an exact algorithm. VFM completes by "unfolding" the optimal assignment of variables for the folded CSP into an assignment of variables for the original CSP. VFM is polynomial in that the folded CSP has a bounded number of variables, but this bound is far too large to be useful with today's computer systems [DMS15].

³ Sometimes called "Rounding by Miniatures".
⁴ An ε-net is a discretization of a compact subset of a metric space such that for any point in the subset, there exists a discrete representative within ε distance under the associated metric.

In our implementation of VFM, we push the folding to an extreme by projecting onto an extremely small dimension (β = 2), and choosing ε for our ε-net in a generous way. For the phase requiring an exact solution of the folded CSP, we wrote a generic function that accepts arbitrary Matlab CSP objects and returns optimal solutions via the Gurobi Optimization integer programming solver [GO15].
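A sketch of this simplified rounding step is below. The function name and the use of an angular grid as the ε-net on the unit circle are our own illustrative choices under β = 2; they convey the shape of the computation rather than the exact discretization in our code.

```matlab
% Sketch of the simplified Variable Folding Method projection (beta = 2).
% Input:  X       - the N-by-N PSD matrix returned by the SDP solver,
%                   rows/columns indexed by (variable, value) pairs
%         epsStep - angular resolution of the eps-net, e.g. pi/8
% Output: labels  - an integer label per SDP vector; variables whose SDP
%                   vectors receive identical labels are merged ("folded")
function labels = fold_labels(X, epsStep)
    N = size(X, 1);

    % SDP vectors from X = M*M'. An eigendecomposition with clipping is
    % used so that small negative eigenvalues (solver noise) cause no trouble.
    [V, E] = eig((X + X') / 2);
    M = V * diag(sqrt(max(diag(E), 0)));     % rows of M are the SDP vectors

    % Project onto a random 2-dimensional subspace and normalize.
    P     = randn(N, 2);
    Y     = M * P;
    norms = max(sqrt(sum(Y.^2, 2)), 1e-12);
    Y     = bsxfun(@rdivide, Y, norms);

    % Classify each projected vector by its nearest bucket on an angular
    % grid of the unit circle (a simple eps-net in two dimensions).
    theta  = atan2(Y(:, 2), Y(:, 1));        % angle in (-pi, pi]
    labels = floor((theta + pi) / epsStep);
end
```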

4.2 SDP Vector Clustering

VFM has three computationally expensive operations: (1) constructing and solving Basic SDP, (2) constructing an ε-net of the unit ball in R^β for β ≫ 2, and (3) solving a resulting CSP exactly.

While we are interested in approaches that solve Basic SDP, there are natural alternatives to the second and third of these steps. We propose the following procedure as an ad-hoc rounding scheme for Basic SDP.

1. Project SDP vectors onto a random subspace of any dimension ≥ |D|.

2. Assemble all SDP vectors associated with a CSP variable (there are |D| such vectors) into a single vector.

3. Cluster the resulting vectors into |D| equivalence classes.

4. Map equivalence classes to domain values: arbitrarily for symmetric CSP's (e.g. graph coloring), or by testing all |D|! class-to-value mappings for asymmetric CSP's (e.g. 3-SAT).

While the last of these steps is expensive for problems with large domain size, the CSP framework is intentionally used for classes of problems where the domain is small (often binary).
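The sketch below traces steps 1-3 of this procedure. It assumes a lookup table varRows with one row per CSP variable listing which rows of M hold that variable's |D| SDP vectors, and it uses kmeans from Matlab's Statistics and Machine Learning Toolbox; both are assumptions for this example, not fixed choices of our package.

```matlab
% Sketch of SDP vector clustering (steps 1-3 above).
% Assumed inputs:
%   M       - matrix of SDP vectors (one per row), from X = M*M'
%   varRows - n-by-d matrix; varRows(v, :) lists the rows of M holding the
%             d = |D| SDP vectors associated with CSP variable v
d = size(varRows, 2);
n = size(varRows, 1);

% 1. Project the SDP vectors onto a random subspace of dimension >= |D|.
P = randn(size(M, 2), d);
Z = M * P;

% 2. Assemble each variable's |D| projected vectors into a single vector.
F = zeros(n, d * d);
for v = 1:n
    F(v, :) = reshape(Z(varRows(v, :), :)', 1, []);
end

% 3. Cluster the variables into |D| equivalence classes.
classOf = kmeans(F, d);                 % classOf(v) is in {1, ..., d}

% 4. For symmetric CSP's (e.g. graph coloring) read the class labels off as
%    values directly; for asymmetric CSP's, loop over perms(1:d) and keep
%    the class-to-value mapping that satisfies the most constraints.
```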

5 Practical Considerations in Solving Large SDP’s

Because Basic SDP is quite large even for moderate CSP's, standard solvers based on interior-point methods (e.g. SDPT3 or Mosek's SDP solver) are not viable for our purpose. Recently, a group from the National University of Singapore has developed a hybrid Newton Conjugate Gradient and ADMM solver for SDP's with bound constraints [YST15, ZST10]. Their solver, SDPNAL+, uses Matlab as the organizing language and C for numerically intensive operations (via Mex files). SDPNAL+ is capable of high levels of shared-memory parallelism (e.g. effective use of more than 8 cores in our experiments) and has been shown to be capable of solving SDP's with a matrix variable on the order of 5,000 × 5,000 and over 10 million linear equality constraints.


SDPNAL+ is not supported by any general purpose modeling language. As a result, we needed to build all constraint matrices directly. Since constraint matrices of the types we consider have millions of rows and columns, severe memory bottlenecks (requiring > 50 GB RAM) are faced in SDP construction even with diligent usage of sparse matrix storage formats. With Matlab's Code Profiler, we identified and removed these memory bottlenecks. Our code can construct a CSP's Basic SDP relaxation even with a large matrix variable (in excess of 3,000 × 3,000) using under 500 MB of RAM.

Our biggest runtime bottleneck for SDP construction came in creating linear operators for the matrix variable. In the symbolic statement of a standard form SDP for SDPNAL+, the linear operators are defined as matrices. In the code for SDPNAL+, the linear operators must be stored as vectorized matrices. We found that author-provided C code written specifically for vectorizing matrix-represented linear operators was too slow for our formulation. Ultimately, we bypassed this bottleneck by constructing the linear operators directly in vector form.
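The sketch below shows the general technique: assemble every constraint ⟨A_i, X⟩ = b_i directly as a column of one sparse matrix over a vectorization of the lower triangle of X, never forming a dense A_i. We use an svec-style ordering with off-diagonal entries scaled by √2, which is the convention of SDPT3-style solvers; the exact ordering and scaling expected by SDPNAL+ should be checked against its documentation, and the constraint data (cons, b) here are placeholders.

```matlab
% Sketch: vectorized assembly of constraints <A_i, X> = b_i via sparse
% triplets, with an svec-style ordering of the lower triangle of X and
% off-diagonal entries scaled by sqrt(2). Placeholder data throughout.
nX      = 3000;                          % side length of the matrix variable
svecLen = nX * (nX + 1) / 2;

% Position of entry (r, c), r >= c, in column-major lower-triangle order.
svecIdx = @(r, c) (c - 1) * nX - (c - 1) .* c / 2 + r;

% Each cell lists the nonzeros [row, col, value] of a symmetric A_i
% (lower triangle only). In our code these come from constraints (1)-(2).
cons = { [1 1 1; 2 1 0.5], [5 5 1] };    % two toy constraints
b    = [1; 1];                           % right-hand sides (unused here)

I = []; J = []; V = [];
for i = 1:numel(cons)
    T = cons{i};
    r = max(T(:, 1), T(:, 2));
    c = min(T(:, 1), T(:, 2));
    w = T(:, 3) .* (1 + (sqrt(2) - 1) * (r ~= c));   % scale off-diagonals
    I = [I; repmat(i, size(T, 1), 1)];               %#ok<AGROW>
    J = [J; svecIdx(r, c)];                          %#ok<AGROW>
    V = [V; w];                                      %#ok<AGROW>
end

% Columns of At are the vectorized A_i's; no dense nX-by-nX matrix is built.
At = sparse(J, I, V, svecLen, numel(cons));
```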

In our experiments, SDPNAL+ is capable of solving these SDP's with matrix variables on the order of several thousand rows and columns (and hundreds of thousands of linear constraints) in minutes. A set of SDP instances with time-to-solve is given below (with CSP's corresponding to 3-SAT statements of the Pigeonhole Principle).

Figure 2: Time to solve Basic SDP relaxations of 3-SAT statements of the Pigeonhole Principle. The machine used for these instances had 64 GB RAM and dual Xeon processors (8 cores per socket) at 2.2 GHz.

Our greatest difficulty with SDPNAL+ lay in linearly independent constraint qualification. While any well-posed SDP can have linearly dependent constraints removed without loss of generality, it is difficult to determine a priori a subset of linearly independent constraints for a CSP C's Basic SDP relaxation. On the other hand, once an instance is built, the constraint matrices are too large for standard constraint reduction techniques (which involve computing Cholesky or QR factorizations of the entire constraint matrix). Thus while for small problems we have implemented constraint reduction techniques that make SDPNAL+ robust, large problems (such as those for bounding R(5)) are not solvable at this time (even on our machine with 64 GB RAM).
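For the small instances where constraint reduction is feasible, the idea is just a rank-revealing (column-pivoted) QR factorization of the transposed constraint matrix; the sketch below assumes the matrix is small enough to densify, and the function name and tolerance are illustrative.

```matlab
% Sketch: keep a maximal linearly independent subset of the rows of the
% constraint matrix A (and of b), viable only when full(A') fits in memory.
function [Ared, bred, keep] = reduce_constraints(A, b, tol)
    if nargin < 3, tol = 1e-9; end
    % Column-pivoted QR of A' ranks the rows of A by linear independence.
    [~, R, E] = qr(full(A)', 0);          % A'(:, E) = Q * R
    d    = abs(diag(R));
    rk   = sum(d > tol * max(d));         % numerical rank
    keep = sort(E(1:rk));                 % indices of independent rows of A
    Ared = A(keep, :);
    bred = b(keep);
end
```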

6 Experimental Results

We conducted empirical testing on two different CSP's to evaluate and compare the performance of SDP vector clustering and the Variable Folding Method. We also performed sensitivity testing on the Variable Folding Method with respect to the parameter ε, which determines the spacing of the ε-net, and to the domain size.

6.1 Graph Coloring

The first CSP is the "Americas problem," a graph coloring problem (i.e., Max-Cut on a larger domain) where the graph is a map of North and South America, with the countries rendered as nodes and borders as edges. The CSP contains 24 variables and 38 constraints, which is a small enough size that it can be solved exactly using Gurobi. We solved this problem with domains of 2, 3, and 4 colors in three different ways: exactly, using SDP vector clustering, and using the Variable Folding Method. The results are shown in Figure 3.

For these graph coloring problems, we are interested in how much the Variable Folding Method is able to reduce the size of the problem. In Figure 3(a), we can see that the reduction in variables depends on both domain size and ε, with domain size having the larger impact. The number of variables cannot be reduced at all when the domain size is 4, and ε has little effect for a domain size of 3. In the best case, with domain size 2 and a large ε, the number of variables can be reduced by over 40%. Next we are interested in the solution quality with SDP vector clustering and the Variable Folding Method, relative to each other and to the optimal solution. These plots are also contained in Figure 3. Note that 2- and 3-colorings of this graph are not satisfiable.

We observe that the Variable Folding Method is closest to the optimal solution in all three plots. The difference is 4% in 3(b), and the Variable Folding Method attains the optimal solution in 3(c) and 3(d). The best result achieved by SDP vector clustering, in 3(d), is a 25% difference.


Figure 3: Evaluation of SDP Vector Clustering and the Variable Folding Method on the Americas graph coloring problem. (a) Reduction in variables with the Variable Folding Method vs. ε and domain size; (b) solution quality for 2-coloring the Americas; (c) solution quality for 3-coloring the Americas; (d) solution quality for 4-coloring the Americas.

6.2 SAT and the Pigeonhole Principle

The second CSP is a 3-SAT formulation of the Pigeonhole Principle. These CSP's have the objective of placing items in boxes without having more than one item in the same box. Whenever the number of items is greater than the number of boxes, these problems make for interesting candidates for CSP approximation. The reasoning is that for a sufficiently large gap between items and boxes, even an SDP relaxation should be able to tell that not all constraints can be satisfied. We limited ourselves to instances that were small enough to solve exactly. The largest instance presented here (11 items in 9 boxes) has 572 CSP constraints, 165 CSP variables, and an arity of 2 to 3 (depending on the CSP constraint), resulting in nearly 5,000 integer variables in Gurobi. We note that, more generally, the integer program used to solve a CSP (on n variables and m constraints over domain D with arity k) has m · |D|^k + n · |D| integer variables; counting every constraint of the largest instance at arity 3 gives 572 · 2³ + 165 · 2 = 4,906, consistent with the figure above.

The figures are similar to those presented for the Americas problem. In Figure 4(a), only ε varies, not the domain. We see that the Variable Folding Method can eliminate more variables in the folded CSP the larger the problem gets, with a reduction of over 80% possible with large ε for 10 items in 8 boxes and 11 items in 9 boxes. Even with small ε, the number of variables is reduced by 50% in the largest problems. In the solution quality plots, we see that the Variable Folding Method achieves or is very close to the optimal solution, with a slight degradation in solution quality as ε increases. SDP vector clustering is not close to the optimal solution, although the approximation error improves from about 30% in 4(c) to 15% in 5(b) as the problem size grows.

Figure 4: Evaluation of SDP Vector Clustering and the Variable Folding Method on the Pigeonhole Problem (1 of 2). (a) VFM variable reduction vs. ε and |D|; (b) solution quality for 4 items in 2 boxes; (c) solution quality for 6 items in 4 boxes; (d) solution quality for 8 items in 6 boxes.


Figure 5: Evaluation of SDP Vector Clustering and the Variable Folding Method on the Pigeonhole Problem (2 of 2). (a) Solution quality for 10 items in 8 boxes; (b) solution quality for 11 items in 9 boxes.

7 Conclusion

In this work, we developed a Matlab-based software package for solving CSP's. This package takes a CSP, computes the Basic SDP relaxation, and feeds the inputs to SDPNAL+, a powerful solver that can handle large-scale problems. Once the SDP solution is found, we need to convert it to an assignment of variables for the CSP. This is done using two heuristics: SDP vector clustering and an approximation of the Variable Folding Method. We used an approximation of the Variable Folding Method because the published method, while polynomial, has an impossibly large runtime in practice. We avoided this problem by projecting the SDP vectors onto R², which greatly reduced the number of variables in the folded CSP. We then tested this software package on small graph coloring and pigeonhole problems, comparing the assignment heuristics to the exact solution. The Variable Folding Method, even with our approximation, proved to be very close to the optimal solution. SDP vector clustering, on the other hand, showed a fairly large approximation error in the two problems.

The two problems used to evaluate SDP vector clustering and the Variable Folding Method are favorable cases for SDP vector clustering because there are no constraints that reference specific values in the domain. In graph coloring, we simply want adjacent nodes to be different colors, and in the pigeonhole problem we want each box to contain no more than one item. If, for example, we wanted one node to be a particular color, or one item to go in a specific box, then the assignment would affect the CSP objective. In the problems we tested, only the clustering process affects the CSP objective, not the assignment of values. If the assignment matters, then it is necessary to test all |D|! possible assignments, and the clustering method would be slower as a result.

We encountered various issues related to scaling in this work. Because SDPNAL+ is not supported by any general purpose modeling language, we had to construct the Basic SDP relaxation ourselves. In large CSP's, the constraint matrices have millions of rows and columns, which created a large memory bottleneck. We identified and removed these issues using Matlab's Code Profiler. Another issue is that the Variable Folding Method depends on solving the folded problem exactly. Using the published version of this method, which projects onto a random subspace in R^β, we would end up with folded problems that are too large to solve exactly. Our approximation of projecting onto R² instead helped to keep the folded problems small enough to solve exactly.

While developing our software, we identified two small bugs in the API for SDPNAL+ (no bugs relating to algorithmic correctness). Our communications with Dr. Kim-Chuan Toh (corresponding author for SDPNAL+) indicated that these issues and others will be resolved in time, as SDPNAL+ is still very much under development. This bodes well for our project, since it suggests that our CSP API will allow those interested in developing rounding schemes for Basic SDP ready access to a solver with expanding capabilities.

References

[DMS15] Raaz Dwivedi, Riley Murray, and Quico Spaen. An introduction to constraint satisfaction problems: Convex relaxations and approximation algorithms. Technical report, University of California, Berkeley, 2015.

[GO15] Gurobi Optimization, Inc. Gurobi Optimizer Reference Manual, 2015.

[Rag08] Prasad Raghavendra. Optimal algorithms and inapproximability results for every CSP? In Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing, pages 245-254. ACM, 2008.

[RS09] Prasad Raghavendra and David Steurer. How to round any CSP. In Proceedings of the 50th Annual IEEE Symposium on Foundations of Computer Science (FOCS '09), pages 586-594. IEEE, 2009.

[Wei16a] Eric W. Weisstein. Ramsey Number. From MathWorld, a Wolfram Web Resource, 2016 (accessed May 10, 2016).

[Wei16b] Eric W. Weisstein. Ramsey Theory. From MathWorld, a Wolfram Web Resource, 2016 (accessed May 10, 2016).

[YST15] Liuqin Yang, Defeng Sun, and Kim-Chuan Toh. SDPNAL+: A majorized semismooth Newton-CG augmented Lagrangian method for semidefinite programming with nonnegative constraints. Mathematical Programming Computation, 7(3):331-366, 2015.

[ZST10] Xin-Yuan Zhao, Defeng Sun, and Kim-Chuan Toh. A Newton-CG augmented Lagrangian method for semidefinite programming. SIAM Journal on Optimization, 20(4):1737-1765, 2010.
