
Near-Optimal Algorithms for Unique Games

Moses Charikar∗  Konstantin Makarychev†  Yury Makarychev‡

Princeton University

Abstract

Unique games are constraint satisfaction problems that can be viewed as a generalization of Max-Cut to a larger domain size. The Unique Games Conjecture states that it is hard to distinguish between instances of unique games where almost all constraints are satisfiable and those where almost none are satisfiable. It has been shown to imply a number of inapproximability results for fundamental problems that seem difficult to obtain by more standard complexity assumptions. Thus, proving or refuting this conjecture is an important goal. We present significantly improved approximation algorithms for unique games. For instances with domain size k where the optimal solution satisfies a 1 − ε fraction of all constraints, our algorithms satisfy roughly k^{−ε/(2−ε)} and 1 − O(√(ε log k)) fraction of all constraints. Our algorithms are based on rounding a natural semidefinite programming relaxation for the problem, and their performance almost matches the integrality gap of this relaxation. Our results are near optimal if the Unique Games Conjecture is true, i.e., any improvement (beyond low-order terms) would refute the conjecture.

1 Introduction

Given a set of linear equations over Z_p with two variables per equation, consider the problem of finding an assignment to the variables that satisfies as many equations (constraints) as possible. If there is an assignment to the variables which satisfies all the constraints, it is easy to find such an assignment. On the other hand, if there is an assignment that satisfies almost all constraints (but not all), it seems quite difficult to find a good satisfying assignment. This is the essence of the Unique Games Conjecture of Khot [10].

One distinguishing feature of the above problem on linear equations is that every constraint corresponds to a bijection between the values of the associated variables. For every possible value of one variable, there is a unique value of the second variable that satisfies the constraint. Unique games are systems of constraints, a generalization of the linear equations discussed above, that have this uniqueness property (first considered by Feige and Lovász [6]).

∗ [email protected]. Supported by NSF ITR grant CCR-0205594, NSF CAREER award CCR-0237113, MSPA-MCS award 0528414, and an Alfred P. Sloan Fellowship.
† [email protected]. Supported by a Gordon Wu fellowship. Part of this work was done at Microsoft Research.
‡ [email protected]. Supported by a Gordon Wu fellowship. Part of this work was done at Microsoft Research.


Definition 1.1 (Unique Game). A unique game consists of a constraint graph G = (V, E), a set of variables x_u (for all vertices u), and a set of permutations π_uv on [k] = {1, …, k} (for all edges (u, v)). Each permutation π_uv defines the constraint π_uv(x_u) = x_v. The goal is to assign a value from the set [k] to each variable x_u so as to maximize the number of satisfied constraints.
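To make the definition concrete, here is a small Python sketch (our illustration, not part of the paper) of a unique game instance together with a routine that counts satisfied constraints; the three-vertex instance and its permutations are hypothetical.

import numpy as np

# Hypothetical 3-vertex instance with k = 3 labels; the arrays encode the
# permutations: edge (u, v, pi) imposes the constraint x_v = pi[x_u].
k = 3
edges = [
    (0, 1, np.array([1, 2, 0])),
    (1, 2, np.array([0, 2, 1])),
]

def satisfied(assignment, edges):
    # Count constraints pi_uv(x_u) = x_v satisfied by the assignment.
    return sum(1 for (u, v, pi) in edges if pi[assignment[u]] == assignment[v])

print(satisfied({0: 0, 1: 1, 2: 2}, edges))   # both constraints hold -> 2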

As in the setting of linear equations, instances of unique games where all constraints are satisfiable are easy to handle. Given an instance where a 1 − ε fraction of constraints is satisfiable, the Unique Games Conjecture (UGC) of Khot [10] says that it is hard to satisfy even a δ fraction of the constraints. More formally, the conjecture is the following.

Conjecture 1 (Unique Games Conjecture [10]). For any constants ε, δ > 0 and any k > k(ε, δ), it is NP-hard to distinguish between instances of unique games with domain size k where a 1 − ε fraction of constraints is satisfiable and those where only a δ fraction of constraints is satisfiable.

This conjecture has attracted a lot of recent attention since it has been shown to imply hardness of approximation results for several important problems that seem difficult to obtain by standard complexity assumptions: MaxCut [11, 15], Min 2CNF Deletion [3, 10], MultiCut and Sparsest Cut [3, 14], Vertex Cover [13], and coloring 3-colorable graphs [5] (based on a variant of the UGC).

Note that a random assignment satisfies a 1/k fraction of the constraints in a unique game. Andersson, Engebretsen, and Håstad [2] considered semidefinite program (SDP) based algorithms for systems of linear equations mod p (with two variables per equation) and gave an algorithm that performs (very slightly) better than a random assignment. The first approximation algorithm for general unique games was given by Khot [10]; it satisfies a 1 − O(k²ε^{1/5}√(log(1/ε))) fraction of all constraints if a 1 − ε fraction of all constraints is satisfiable. Recently, Trevisan [17] developed an algorithm that satisfies a 1 − O(∛(ε log n)) fraction of all constraints (this can be improved to 1 − O(√(ε log n)) [9]), and Gupta and Talwar [9] developed an algorithm that satisfies a 1 − O(ε log n) fraction of all constraints. The result of [9] is based on rounding an LP relaxation of the problem, while the previous results use SDP relaxations of unique games.

There are very few results that show hardness of unique games. Feige and Reichman [7] showed that for every positive ε there is a c s.t. it is NP-hard to distinguish whether a c fraction of all constraints is satisfiable, or only an εc fraction is satisfiable.

Our Results. We present two new approximation algorithms for unique games. We state our guarantees for instances where a 1 − ε fraction of constraints is satisfiable. The first algorithm satisfies

    Ω( min(1, 1/√(ε log k)) · (1 − ε)² · (k/√log k)^{−ε/(2−ε)} )   (1)

fraction of all constraints. The second algorithm satisfies a 1 − O(√(ε log k)) fraction of all constraints and has a better guarantee for ε = O(1/log k). We apply the same techniques to d-to-1 games as well.

In order to understand the complexity-theoretic implications of our results, it is useful to keep in mind that inapproximability reductions from unique games typically use the "Long Code", which increases the size of the instance by a factor of 2^k. Thus, such applications of unique games usually have domain size k = O(log n). In Figure 1, we summarize known algorithmic guarantees for unique games. In order to compare these different guarantees in the context of hardness applications (i.e. k = O(log n)), we compare the range of values of ε (as a function of k) for which each of these algorithms beats the performance of a random assignment.


    Algorithm          Guarantee for OPT = 1 − ε           Threshold ε
    Khot [10]          1 − O(k²ε^{1/5}√(log(1/ε)))         O(1/k^{10})
    Trevisan [17]      1 − O(∛(ε log n))                   O(1/k)
    Gupta–Talwar [9]   1 − O(ε log n)                      O(1/k)
    This paper         ≈ Ω(k^{−ε/(2−ε)})                   const < 1
    This paper         1 − O(√(ε log k))                   O(1/log k)

Figure 1: Summary of results. The guarantee represents the fraction of constraints satisfied for instances where OPT = 1 − ε. The threshold represents the range of values of ε for which the algorithm beats a random assignment (computed for k = O(log n)).

Our results show limitations on the hardness bounds achievable using the UGC and stronger versions of it. Chawla, Krauthgamer, Kumar, Rabani, and Sivakumar [3] proposed a strengthened form of the UGC, conjecturing that it holds for k = log n and ε = δ = 1/(log n)^{Ω(1)}. This was used to obtain an Ω(log log n) hardness for sparsest cut. Our results refute this strengthened conjecture.¹

The performance of our algorithms is naturally constrained by the integrality gap of the SDP relaxation, i.e. the smallest possible value of an integral solution for an instance with an SDP solution of value (1 − ε)|E|. Khot and Vishnoi [14] constructed a gap instance for the semidefinite relaxation² of the Unique Games Problem where the SDP satisfies a (1 − ε) fraction of constraints, but the optimal solution can satisfy at most an O(k^{−ε/9}) fraction (one may show that their analysis can yield O(k^{−ε/4+o(ε)})). This shows that our results are almost optimal for the standard semidefinite program.

¹ An updated version of [3] proposes a different strengthened form of the UGC, which is still plausible given our algorithms. They use a modified analysis to account for the asymmetry in ε and δ to obtain an Ω(√(log log n)) hardness for sparsest cut based on this.
² We use a slightly stronger SDP than they used, but their integrality gap construction works for our SDP as well.

After we unofficially announced our results, Khot, Kindler, Mossel and O'Donnell [12] showed that a reduction in an earlier version of their paper [11], together with the techniques in the recently proved Majority is Stablest result of Mossel, O'Donnell, and Oleszkiewicz [15], gives lower bounds for unique games that almost match the upper bounds we obtain. They establish the following hardness results (in fact, for the special case of linear equations mod p):

Theorem 1.2 ([12], Corollary 13). The Unique Games Conjecture implies that for every fixed ε > 0 and all k > k(ε), it is NP-hard to distinguish between instances of unique games with domain size k where at least a 1 − ε fraction of constraints is satisfiable and those where only a 1/k^{ε/(2−ε)} fraction of constraints is satisfiable.

Theorem 1.3 ([12], Corollary 14). The Unique Games Conjecture implies that for every fixed ε > 0 and all k > k(ε), it is NP-hard to distinguish between instances of unique games with domain size k where at least a 1 − ε fraction of constraints is satisfiable and those where only a 1 − √(2/π)·√(ε log k) + o(1) fraction of constraints is satisfiable.

Thus, our bounds are near optimal if the UGC is true: even a slight improvement of the bounds 1/k^{ε/(2−ε)} or 1 − O(√(ε log k)) (beyond low-order terms) would disprove the Unique Games Conjecture!

Our algorithms are based on rounding an SDP relaxation for unique games. The goal is to assign a value in [k] to every variable u. The SDP solution gives a collection of vectors u_i for every variable u, one for every value i ∈ [k]. Given a constraint π_uv on u and v, the vectors u_i and v_{π_uv(i)} are close. In contrast to the algorithms of Trevisan [17] and Gupta and Talwar [9], our rounding algorithms ignore the constraint graph entirely. We interpret the SDP solution as a probability distribution on assignments of values to variables, and the goal of our rounding algorithm is to pick an assignment to variables by sampling from this distribution such that the values of variables connected by constraints are strongly correlated. The rough idea is to pick a random vector and examine the projections of this vector on the u_i, picking a value i for u for which u_i has a large projection. (In fact, this is exactly the algorithm of Khot [10].) We have to modify this basic idea to obtain our results, since the u_i's could have different lengths and other complications arise. Instead of picking one random vector, we pick several Gaussian random vectors. Our first algorithm (suitable for large ε) picks a small set of candidate assignments for each variable and chooses randomly amongst them (independently for every variable). It is interesting to note that such a multiple assignment is often encountered in algorithms implicit in hardness reductions involving label cover. In contrast to previous results, this algorithm has a non-trivial guarantee even for very large ε. As ε approaches 1 (i.e. for instances where the optimal solution satisfies only a small fraction of the constraints), the performance guarantee approaches that of a random assignment. Our second algorithm (suitable for small ε) carefully picks a single assignment so that almost all constraints are satisfied. The performance guarantee of this algorithm generalizes that obtained by Goemans and Williamson [8] for k = 2. Note that a unique game with domain size k = 2 where a 1 − ε fraction of constraints is satisfiable is equivalent to an instance of Max-Cut where the optimal solution cuts a 1 − ε fraction of all edges. For such instances, the random hyperplane rounding algorithm of [8] gives a solution of value 1 − O(√ε), and our guarantee can be viewed as a generalization of this to larger k.

The reader might wonder about the confluence of our bounds and the lower bounds obtained by Khot et al. [12]. In fact, both arise from the analysis of the same quantity: given two unit vectors with dot product 1 − ε, conditioned on the event that one has projection Θ(√log k) on a random Gaussian vector, what is the probability that the other has a large projection as well? This question arises naturally in the analysis of our rounding algorithms. On the other hand, the bounds obtained by Khot et al. [12] depend on the noise stability of certain functions. Via the results of [15], this is bounded by the answer to the above question.

In Section 2, we describe the semidefinite relaxation for unique games. In Sections 3 and 4, we present and analyze our approximation algorithms. In Section 5, we apply our results to d-to-1 games. We defer some of the technical details of our analysis to the Appendix.

Recently, Chlamtac, Makarychev and Makarychev [4] have combined our approach with techniques of metric embeddings. Their approximation algorithm for unique games satisfies a 1 − O(ε√(log n log k)) fraction of all constraints. This generalizes the result of Agarwal, Charikar, Makarychev, and Makarychev [1] for the Min UnCut Problem (i.e. the case k = 2). Note that their approximation guarantee is not comparable with ours.

2 Semidefinite Relaxation

First we reduce a unique game to an integer program. For each vertex u we introduce k indicator variables u_i ∈ {0, 1} (i ∈ [k]) for the events x_u = i. For every u, the intended solution has u_i = 1 for exactly one i. The constraint π_uv(x_u) = x_v can be restated in the following form:

    for all i:  u_i = v_{π_uv(i)}.


The unique game instance is equivalent to the following integer quadratic program:

    minimize   (1/2) Σ_{(u,v)∈E} Σ_{i=1}^k |u_i − v_{π_uv(i)}|²

    subject to ∀u ∈ V, ∀i ∈ [k]:            u_i ∈ {0, 1}
               ∀u ∈ V, ∀i, j ∈ [k], i ≠ j:  u_i · u_j = 0
               ∀u ∈ V:                       Σ_{i=1}^k u_i² = 1

Note that the objective function measures the number of unsatisfied constraints. The contribution of (u, v) ∈ E to the objective function is equal to 0 if the constraint π_uv is satisfied, and 1 otherwise. The last two equations say that exactly one u_i is equal to 1.

We now replace each integer variable u_i with a vector variable and get a semidefinite program (SDP):

    minimize   (1/2) Σ_{(u,v)∈E} Σ_{i=1}^k |u_i − v_{π_uv(i)}|²

    subject to
      ∀u ∈ V, ∀i, j ∈ [k], i ≠ j:   〈u_i, u_j〉 = 0                         (2)
      ∀u ∈ V:                        Σ_{i=1}^k |u_i|² = 1                    (3)
      ∀(u, v) ∈ E, ∀i, j ∈ [k]:      〈u_i, v_j〉 ≥ 0                         (4)
      ∀(u, v) ∈ E, ∀i ∈ [k]:         0 ≤ 〈u_i, v_{π_uv(i)}〉 ≤ |u_i|²        (5)

The last two constraints are triangle inequality constraints³ for the squared Euclidean distance: inequality (4) is equivalent to |u_i − 0|² + |v_j − 0|² ≥ |u_i − v_j|², and the right side of inequality (5) is equivalent to |u_i − v_{π_uv(i)}|² + |u_i − 0|² ≥ |v_{π_uv(i)} − 0|². A very important constraint is that for i ≠ j the vectors u_i and u_j are orthogonal. This SDP was studied by Khot [10] and by Trevisan [17].

Here is an intuitive interpretation of the vector solution: think of the elements of the set [k] as states of the vertices. If u_i = 1, the vertex is in state i. In the vector case, each vertex is in a mixed state, and the probability that x_u = i is equal to |u_i|². The inner product 〈u_i, v_j〉 can be thought of as the joint probability that x_u = i and x_v = j. The directions of the vectors determine whether two states are correlated: if the angle between u_i and v_j is small, it is likely that both events "u is in state i" and "v is in state j" occur simultaneously. In some sense, later we will treat the lengths and the directions of the vectors separately.

³ We will use constraint (4) only in the second algorithm.
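For concreteness, here is a minimal Python sketch of this relaxation (our illustration, not the paper's code), using the cvxpy library and optimizing over the Gram matrix of the n·k vectors. It assumes an SDP-capable solver such as SCS is installed and reuses the hypothetical (u, v, pi) edge encoding from the sketch after Definition 1.1; the vectors u_i can be recovered from the returned Gram matrix by an eigendecomposition.

import cvxpy as cp

def unique_game_sdp(n, k, edges):
    # Gram matrix of the n*k vectors; entry [idx(u,i), idx(v,j)] = <u_i, v_j>.
    N = n * k
    idx = lambda u, i: u * k + i
    X = cp.Variable((N, N), PSD=True)
    cons = []
    for u in range(n):
        for i in range(k):
            for j in range(i + 1, k):
                cons.append(X[idx(u, i), idx(u, j)] == 0)               # constraint (2)
        cons.append(sum(X[idx(u, i), idx(u, i)] for i in range(k)) == 1)  # constraint (3)
    obj = 0
    for (u, v, pi) in edges:
        for i in range(k):
            for j in range(k):
                cons.append(X[idx(u, i), idx(v, j)] >= 0)               # constraint (4)
            a, b = idx(u, i), idx(v, pi[i])
            cons.append(X[a, b] <= X[a, a])                             # constraint (5)
            obj += X[a, a] + X[b, b] - 2 * X[a, b]    # |u_i - v_{pi(i)}|^2
    cp.Problem(cp.Minimize(obj / 2), cons).solve()
    return X.value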

3 Rounding Algorithm

We first describe a high-level idea for the first algorithm. Pick a random Gaussian vector g (with standard normal independent components). For every vertex u, add to the set S_u those vectors u_i whose inner product with g is above some threshold τ; we choose the threshold τ in such a way that the set S_u contains only one element in expectation. Then pick a random state from S_u and assign it to the vertex u (if S_u is empty, do not assign any state to u). What is the probability that the algorithm satisfies a constraint between vertices u and v? Loosely speaking, this probability is equal to

    E[ |S_u ∩ π_uv(S_v)| / (|S_u|·|S_v|) ] ≈ E[ |S_u ∩ π_uv(S_v)| ].

Assume for a moment that the SDP solution is symmetric: the lengths of all vectors u_i are the same, and the squared Euclidean distance between every u_i and v_{π_uv(i)} is equal to 2ε. (In fact, these constraints can be added to the SDP in the special case of systems of linear equations of the form x_i − x_j = c_ij (mod p).) Since we want the expected size of S_u to be 1, we pick the threshold τ such that the probability that 〈g, u_i〉 ≥ τ equals 1/k. The random variables 〈g, √k·u_i〉 and 〈g, √k·v_{π_uv(i)}〉 are standard normal random variables with covariance 1 − ε (note that we multiplied the inner products by a normalization factor of √k). For such random variables, if the probability of the event 〈g, √k·u_i〉 ≥ t ≡ √k·τ equals 1/k, then, roughly speaking, the probability of the event {〈g, √k·u_i〉 ≥ t and 〈g, √k·v_{π_uv(i)}〉 ≥ t} equals k^{−ε/2} · 1/k. Thus the expected size of the intersection of the sets S_u and π_uv(S_v) is approximately k^{−ε/2}.

Unfortunately this no longer works if the lengths of the vectors are different. The main problem is that if, say, u_1 is two times longer than u_2, then Pr(u_1 ∈ S_u) is much larger than Pr(u_2 ∈ S_u). One possible solution is to normalize all vectors first. In order to take the original lengths of the vectors into account, we repeat the procedure of adding vectors to the sets S_u many times, but each vector u_i has a chance to be selected into the set S_u only in the first s_{u,i} trials, where s_{u,i} is an integer proportional to the original squared Euclidean length of u_i.

We now formally present a rounding algorithm for the SDP described in the previous section. In Appendix D, we describe an alternate approach to rounding the SDP.

Theorem 3.1. There is a polynomial time algorithm that finds an assignment of variables which satisfies

    Ω( min(1, 1/√(ε log k)) · (1 − ε)² · (k/√log k)^{−ε/(2−ε)} )

fraction of all constraints if the optimal solution satisfies a (1 − ε) fraction of all constraints.

Rounding Algorithm 1
Input: A solution of the SDP, with objective value ε·|E|.
Output: An assignment of variables x_u.

1. Define ū_i = u_i/|u_i| if u_i ≠ 0, and ū_i = 0 otherwise.
   Note that the vectors ū_1, …, ū_k are orthogonal unit vectors (except for those vectors that are equal to zero).

2. Pick random independent Gaussian vectors g_1, …, g_k with independent components distributed as N(0, 1).

3. For each vertex u:
   (a) Set s_{u_i} = ⌈|u_i|²·k⌉.
   (b) For each i, project s_{u_i} vectors g_1, …, g_{s_{u_i}} onto ū_i:

           ξ_{u_i,s} = 〈g_s, ū_i〉,   1 ≤ s ≤ s_{u_i}.

       Note that ξ_{u_1,1}, ξ_{u_1,2}, …, ξ_{u_1,s_{u_1}}, …, ξ_{u_k,1}, …, ξ_{u_k,s_{u_k}} are independent standard normal random variables (since ū_i and ū_j are orthogonal for i ≠ j, their projections onto a random Gaussian vector are independent). The number of random variables corresponding to each u_i is proportional to |u_i|².
   (c) Fix a threshold t s.t. Pr(ξ ≥ t) = 1/k, where ξ ∼ N(0, 1) (i.e. t is the (1 − 1/k)-quantile of the standard normal distribution; note that t = Θ(√log k)).
   (d) Pick the ξ_{u_i,s}'s that are larger than the threshold t:

           S_u = {(i, s) : ξ_{u_i,s} ≥ t}.

   (e) Pick a pair (i, s) from S_u uniformly at random and assign x_u = i.

If the set S_u is empty, do not assign any value to the vertex: this means that all the constraints containing the vertex are counted as unsatisfied.
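The following Python sketch (our illustration of the steps above, with the SDP vectors supplied as a dict mapping each vertex to a k × D array) implements Rounding Algorithm 1; the threshold t is computed with scipy's normal quantile function.

import numpy as np
from scipy.stats import norm

def round_ug_1(vectors, k, rng=np.random.default_rng()):
    # vectors[u] is a (k x D) array whose rows are the SDP vectors u_1..u_k.
    D = next(iter(vectors.values())).shape[1]
    g = rng.standard_normal((k, D))          # Gaussian vectors g_1, ..., g_k
    t = norm.ppf(1.0 - 1.0 / k)              # (1 - 1/k)-quantile; t = Theta(sqrt(log k))
    assignment = {}                          # vertices with empty S_u get no label
    for u, U in vectors.items():
        S = []
        for i in range(k):
            ni2 = float(U[i] @ U[i])         # |u_i|^2
            if ni2 == 0.0:
                continue
            ubar = U[i] / np.sqrt(ni2)       # normalized vector
            s_ui = int(np.ceil(ni2 * k))     # u_i participates in s_ui trials
            xi = g[:s_ui] @ ubar             # projections xi_{u_i,s}
            S.extend((i, s) for s in range(s_ui) if xi[s] >= t)
        if S:
            assignment[u] = S[rng.integers(len(S))][0]
    return assignment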

We introduce some notation.

Definition 3.2. Define the distance between two vertices u and v as

    ε_uv = (1/2) Σ_{i=1}^k |u_i − v_{π_uv(i)}|²,

and let

    ε^i_uv = (1/2) |u_i − v_{π_uv(i)}|².

If u_i and v_{π_uv(i)} are nonzero vectors and α_i is the angle between them, then ε^i_uv = 1 − cos α_i. For consistency, if one of the vectors is equal to zero we set ε^i_uv = 1 and α_i = π/2.

Lemma 3.3. For every edge (u, v), state i in [k], and s ≤ min(s_{u_i}, s_{v_{π_uv(i)}}), the probability that the algorithm picks (i, s) for the vertex u and (π_uv(i), s) for v at step 3(e) is

    Ω( min(1, 1/√(ε^i_uv log k)) · (1/√log k) · (√log k / k)^{2/(2−ε^i_uv)} ).   (6)

Proof. First let us observe that ξ_{u_i,s} and ξ_{v_{π_uv(i)},s} are standard normal random variables with covariance cos α_i = 1 − ε^i_uv. As we will see later (Lemma B.1), the probability that ξ_{u_i,s} ≥ t and ξ_{v_{π_uv(i)},s} ≥ t is equal to (6).

Note that the expected number of elements in S_u is equal to (s_{u_1} + … + s_{u_k})/k, which is at most 2. Moreover, as we prove in the Appendix (Lemma B.3), the conditional expected number of elements in S_u given the event {ξ_{u_i,s} ≥ t and ξ_{v_{π_uv(i)},s} ≥ t} is also a constant. Thus by Markov's inequality the following event happens with probability (6): the sets S_u and S_v contain the pairs (i, s) and (π_uv(i), s) respectively, and the sizes of these sets are bounded by a constant. The lemma follows.

7

Page 8: Near-Optimal Algorithms for Unique Games

Definition 3.4. For brevity, denote (√log k / k)^{2/(2−x)} by f_k(x).

Remark 3.1. It is instructive to consider the case when the SDP solution is uniform in the following sense:

1. The lengths of all vectors u_i are the same and equal to 1/√k.

2. All ε^i_uv are equal to ε.

In this case all s_{u_i} are equal to 1, and thus the probability that a constraint is satisfied is k times the probability (6), which is equal, up to a logarithmic factor, to k^{−ε/(2−ε)}. Multiplying this probability by the number of edges, we get that the expected number of satisfied constraints is k^{−ε/(2−ε)}|E|.

In the general case, however, we need to do some extra work to average the probabilities among all states i and edges (u, v).

Recall that we interpret |u_i|² as the probability that the vertex u is in state i. Suppose now that the constraint between u and v is satisfied; what is the conditional probability that u is in state i and v is in state π_uv(i)? Roughly speaking, it should be equal to (|u_i|² + |v_{π_uv(i)}|²)/2. This motivates the following definition.

Definition 3.5. Define a measure µ_uv on the set [k]:

    µ_uv(T) = Σ_{i∈T} (|u_i|² + |v_{π_uv(i)}|²)/2,   where T ⊂ [k].

Note that µ_uv([k]) = 1. This follows from constraint (3).

The following lemma shows why this measure is useful.

Lemma 3.6. For every edge (u, v) the following statements hold.

1. The average value of ε^i_uv w.r.t. the measure µ_uv is less than or equal to ε_uv:

       Σ_{i=1}^k µ_uv(i) ε^i_uv ≤ ε_uv.

2. For every i,

       min(s_{u_i}, s_{v_{π_uv(i)}}) ≥ (1 − ε^i_uv)² µ_uv(i) k.

Proof. 1. Indeed,

    Σ_{i=1}^k µ_uv(i)·ε^i_uv = Σ_{i=1}^k [ |u_i|² + |v_{π_uv(i)}|² − (|u_i|² + |v_{π_uv(i)}|²)·cos α_i ] / 2
                             ≤ Σ_{i=1}^k [ |u_i|² + |v_{π_uv(i)}|² − 2·|u_i|·|v_{π_uv(i)}|·cos α_i ] / 2
                             = Σ_{i=1}^k |u_i − v_{π_uv(i)}|² / 2 = ε_uv.

Note that here we used the fact that 〈u_i, v_{π_uv(i)}〉 ≥ 0.

2. Without loss of generality assume that |u_i| ≤ |v_{π_uv(i)}|, and hence min(s_{u_i}, s_{v_{π_uv(i)}}) = s_{u_i}. By the triangle inequality constraint (5) in the SDP, |v_{π_uv(i)}| cos α_i ≤ |u_i|. Thus

    (1 − ε^i_uv)² µ_uv(i) = cos² α_i · (|u_i|² + |v_{π_uv(i)}|²)/2 ≤ |u_i|² ≤ s_{u_i}/k.

Lemma 3.7. For every edge (u, v), the probability that an assignment found by the algorithm satisfies the constraint π_uv(x_u) = x_v is

    Ω( (k/√log k) · min(1, 1/√(ε_uv log k)) · f_k(ε_uv) ).   (7)

Proof. Denote the desired probability by P_uv. It is equal to the sum of the probabilities obtained in Lemma 3.3 over all i in [k] and s ≤ min(s_{u_i}, s_{v_{π_uv(i)}}). In other words,

    P_uv = Ω( Σ_{i=1}^k min(s_{u_i}, s_{v_{π_uv(i)}}) · (1/√log k) · min(1, 1/√(ε^i_uv log k)) · f_k(ε^i_uv) ).

Replacing min(s_{u_i}, s_{v_{π_uv(i)}}) with (1 − ε^i_uv)² µ_uv(i)·k, we get

    P_uv = Ω( (k/√log k) · Σ_{i=1}^k µ_uv(i) · min(1, 1/√(ε^i_uv log k)) · (1 − ε^i_uv)² f_k(ε^i_uv) ).

Consider the set M = {i ∈ [k] : ε^i_uv ≤ 2ε_uv}. For i in M the term √(ε^i_uv log k) is bounded from above by √(2ε_uv log k). Thus

    P_uv = Ω( (k/√log k) · min(1, 1/√(ε_uv log k)) · Σ_{i∈M} µ_uv(i) (1 − ε^i_uv)² f_k(ε^i_uv) ).

The function (1 − x)² f_k(x) is convex on [0, 1] (see Lemma B.4). The average value of ε^i_uv among i in M (w.r.t. the measure µ_uv) is at most the average value of ε^i_uv among all i, which in turn is at most ε_uv according to Lemma 3.6. Finally, by Markov's inequality, µ_uv(M) ≥ 1/2. Thus by Jensen's inequality

    P_uv = Ω( (k/√log k) · min(1, 1/√(ε_uv log k)) · µ_uv(M) (1 − ε_uv)² · f_k(ε_uv) )
         = Ω( (k/√log k) · min(1, 1/√(ε_uv log k)) · (1 − ε_uv)² · f_k(ε_uv) ).

This finishes the proof.

We are now in a position to prove the main theorem.

Theorem 3.1 (restated). There is a polynomial time algorithm that finds an assignment of variables which satisfies

    Ω( min(1, 1/√(ε log k)) · (1 − ε)² · (k/√log k)^{−ε/(2−ε)} )

fraction of all constraints if the optimal solution satisfies a (1 − ε) fraction of all constraints.

Proof. Let us restrict our attention to the subset of edges E′ = {(u, v) ∈ E : ε_uv ≤ 2ε}; by Markov's inequality, |E′| ≥ |E|/2. For (u, v) in E′, since ε_uv log k ≤ 2ε log k, we have

    P_uv = Ω( (k/√log k) · min(1, 1/√(ε log k)) · (1 − ε_uv)² · f_k(ε_uv) ).

Summing this probability over all edges (u, v) in E′ and using the convexity of the function (1 − x)² f_k(x), we get the statement of the theorem.

4 Almost Satisfiable Instances

Suppose that ε is O(1/log k). In the previous section we saw that in this case the algorithm finds an assignment of variables satisfying a constant fraction of constraints. But can we do better? In this section we show how to find an assignment satisfying a 1 − O(√(ε log k)) fraction of constraints.

The main issue we need to take care of is to guarantee that the algorithm always picks only one element in the set S_u (otherwise we lose a constant factor). This can be done by selecting the ξ_{u_i,s} largest in absolute value (at step 3(d)). We will also change the way we set s_{u_i}.

Denote by [x]_r the function that rounds x up or down depending on whether the fractional part of x is greater or less than r. Note that if r is a random variable uniformly distributed in the interval [0, 1], then the expected value of [x]_r is equal to x.

Rounding Algorithm 2
Input: A solution of the SDP, with objective value ε·|E|.
Output: An assignment of variables x_u.

1. Pick a number r in the interval [0, 1] uniformly at random.

2. Pick random independent Gaussian vectors g_1, …, g_{2k} with independent components distributed as N(0, 1).

3. For each vertex u:
   (a) Set s_{u_i} = [2k·|u_i|²]_r.
   (b) For each i, project s_{u_i} vectors g_1, …, g_{s_{u_i}} onto ū_i:

           ξ_{u_i,s} = 〈g_s, ū_i〉,   1 ≤ s ≤ s_{u_i}.

   (c) Select the ξ_{u_i,s} with the largest absolute value, where i ∈ [k] and s ≤ s_{u_i}. Assign x_u = i.
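A Python sketch of Rounding Algorithm 2 (our illustration, with the same input conventions as the sketch of Algorithm 1) follows; note that the randomized-rounding offset r is shared by all vertices, and that every vertex now receives a label whenever some u_i is nonzero.

import numpy as np

def round_r(x, r):
    # [x]_r: round up when frac(x) > r, down otherwise; E_r[[x]_r] = x.
    f = np.floor(x)
    return int(f) + (1 if x - f > r else 0)

def round_ug_2(vectors, k, rng=np.random.default_rng()):
    D = next(iter(vectors.values())).shape[1]
    r = rng.uniform()                        # one offset r shared by all vertices
    g = rng.standard_normal((2 * k, D))      # Gaussian vectors g_1, ..., g_{2k}
    assignment = {}
    for u, U in vectors.items():
        best_i, best_val = None, -1.0
        for i in range(k):
            ni2 = float(U[i] @ U[i])         # |u_i|^2
            if ni2 == 0.0:
                continue
            s_ui = round_r(2 * k * ni2, r)
            if s_ui == 0:
                continue
            xi = g[:s_ui] @ (U[i] / np.sqrt(ni2))
            m = float(np.max(np.abs(xi)))    # largest |xi_{u_i,s}| for this i
            if m > best_val:
                best_i, best_val = i, m
        assignment[u] = best_i               # label of the overall largest |xi|
    return assignment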

We first elaborate on the difference between the choice of s_{u_i} in the algorithm above and that in Algorithm 1 presented earlier. Consider a constraint π_uv(x_u) = x_v. The projection ξ_{u_i,s} generated by ū_i and the projection ξ_{v_{π_uv(i)},s} generated by v̄_{π_uv(i)} are considered to be matched. On the other hand, a projection ξ_{u_i,s} such that the corresponding ξ_{v_{π_uv(i)},s} does not exist (or vice versa) is considered to be unmatched. Unmatched projections arise when s_{u_i} ≠ s_{v_{π_uv(i)}}, and the fraction of such projections limits the probability of satisfying the constraint. Recall that in Algorithm 1 we set s_{u_i} = ⌈|u_i|²·k⌉. Even if u_i and v_{π_uv(i)} are infinitesimally close, it may turn out that s_{u_i} and s_{v_{π_uv(i)}} differ by 1, yielding an unmatched projection. As a result, some constraints that are almost satisfied by the SDP solution (i.e. ε_uv is close to 0) could be satisfied with low probability by the first rounding algorithm. In Algorithm 2, we set s_{u_i} = [2k·|u_i|²]_r. This serves two purposes: first, E_r[|s_{u_i} − s_{v_{π_uv(i)}}|] can be bounded by 2k·|u_i − v_{π_uv(i)}|², giving a small number of unmatched projections in expectation; second, the number of matched projections is always at least k/2. These two properties are established in Lemma 4.3 and ensure that the expected fraction of unmatched projections is small.

Our analysis of Rounding Algorithm 2 is based on the following theorem.

Theorem 4.1. Let ξ_1, …, ξ_m and η_1, …, η_m be two sequences of standard normal random variables. Suppose that the random variables in each of the sequences are independent, the covariance of every ξ_i and η_j is nonnegative, and the average covariance of ξ_i and η_i is at least 1 − ε:

    (cov(ξ_1, η_1) + … + cov(ξ_m, η_m)) / m ≥ 1 − ε.

Then the probability that the r.v. largest in absolute value in the first sequence has the same index as the r.v. largest in absolute value in the second sequence is 1 − O(√(ε log m)).

We informally sketch the proof; see Appendix C for the complete proof. It is instructive to consider the case when cov(ξ_i, η_i) = 1 − ε for all i. Assume that the first variable ξ_1 is the largest in absolute value among ξ_1, …, ξ_m and its absolute value is a positive number t. Note that the typical value of t is approximately √(2 log m − log log m) (i.e. t is the (1 − 1/m)-quantile of N(0, 1)). We want to show that η_1 is the largest in absolute value among η_1, …, η_m with probability 1 − O(√(ε log m)), or in other words, that the probability that any (fixed) η_i is larger than η_1 is O(√(ε log m)/m). Let us compute this probability for η_2.

Since cov(η_1, ξ_1) = 1 − ε and cov(ξ_2, η_2) = 1 − ε, the random variable η_1 is equal to (1 − ε)ξ_1 + ζ_1, and η_2 is equal to (1 − ε)ξ_2 + ζ_2, where ζ_1 and ζ_2 are normal random variables with variance, roughly speaking, 2ε. We need to estimate the probability of the event

    {η_2 ≥ η_1} = {(1 − ε)ξ_2 + ζ_2 ≥ (1 − ε)ξ_1 + ζ_1} = {(1 − ε)ξ_2 + ζ_2 − ζ_1 ≥ (1 − ε)t}

conditional on ξ_1 = t and ξ_2 ≤ t. For typical t this probability is almost equal to the probability of the event

    {ξ_2 + ζ ≥ t and ξ_2 ≤ t} = {t − ζ ≤ ξ_2 ≤ t},   (8)

where ζ = ζ_2 − ζ_1. Since the variance of the random variable ζ is O(ε), we can think of ζ as being O(√ε). The density of ξ_2 on the interval [t − ζ, t] is approximately (1/√2π)·e^{−t²/2} ≈ O(√log m / m) (for typical t). Thus probability (8) is O(√(ε log m)/m). This finishes our informal "proof".
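Theorem 4.1 is easy to probe empirically. The following Monte Carlo sketch (our illustration) generates two sequences with cov(ξ_i, η_i) = 1 − ε and estimates the probability that the indices of the largest absolute values agree:

import numpy as np

def agreement_prob(m, eps, trials=20000, rng=np.random.default_rng(0)):
    # Sequences with cov(xi_i, eta_i) = 1 - eps and independence within each.
    hits = 0
    c = 1.0 - eps
    for _ in range(trials):
        xi = rng.standard_normal(m)
        eta = c * xi + np.sqrt(1.0 - c * c) * rng.standard_normal(m)
        hits += int(np.argmax(np.abs(xi)) == np.argmax(np.abs(eta)))
    return hits / trials

# e.g. agreement_prob(100, 0.01) should be roughly 1 - c' * sqrt(0.01 * log 100).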

Now we are ready to prove the main lemma.

Lemma 4.2. The probability that the algorithm finds an assignment of variables satisfying the constraint π_uv(x_u) = x_v is 1 − O(√(ε_uv log k)).

Proof. If ε_uv ≥ 1/8 the statement of the lemma follows trivially, so we assume that ε_uv ≤ 1/8. Let

    M  = {(i, s) : i ∈ [k] and s ≤ min(s_{u_i}, s_{v_{π_uv(i)}})};
    Mc = {(i, s) : i ∈ [k] and min(s_{u_i}, s_{v_{π_uv(i)}}) < s ≤ max(s_{u_i}, s_{v_{π_uv(i)}})}.

The set M contains those pairs (i, s) for which both ξ_{u_i,s} and ξ_{v_{π_uv(i)},s} are defined (i.e. the matched projections); the set Mc contains those pairs for which only one of the variables ξ_{u_i,s} and ξ_{v_{π_uv(i)},s} is defined (i.e. the unmatched projections). We will need the following lemmas.

Lemma 4.3. 1. The expected size of Mc is at most 4ε_uv·k:  E[|Mc|] ≤ 4ε_uv·k.

2. The set M always contains at least k/2 elements: |M| ≥ k/2.

Proof. 1. First we find the expected value of |s_{u_i} − s_{v_{π_uv(i)}}| for a fixed i. This value is equal to

    E_r[ |[2k·|u_i|²]_r − [2k·|v_{π_uv(i)}|²]_r| ] = 2k·| |u_i|² − |v_{π_uv(i)}|² |.

Now by the triangle inequality constraint (5),

    2k·| |u_i|² − |v_{π_uv(i)}|² | ≤ 2k·|u_i − v_{π_uv(i)}|².

Summing over all i in [k] we finish the proof.

2. Observe that

    min(s_{u_i}, s_{v_{π_uv(i)}}) ≥ 2k·min(|u_i|², |v_{π_uv(i)}|²) − 1

and

    min(|u_i|², |v_{π_uv(i)}|²) ≥ |u_i|² − | |u_i|² − |v_{π_uv(i)}|² | ≥ |u_i|² − |u_i − v_{π_uv(i)}|².

Summing over all i we get

    |M| = Σ_{i∈[k]} min(s_{u_i}, s_{v_{π_uv(i)}}) ≥ Σ_{i∈[k]} ( 2k·|u_i|² − 2k·|u_i − v_{π_uv(i)}|² − 1 ) ≥ 2k − 4kε_uv − k ≥ k/2.

Lemma 4.4. The following inequality holds:

    E[ (1/|M|) Σ_{(i,s)∈M} ε^i_uv ] ≤ 4ε_uv.

Proof. Recall that M always contains at least k/2 elements, so 1/|M| ≤ 2/k. The expected value of min(s_{u_i}, s_{v_{π_uv(i)}}) is equal to 2k·min(|u_i|², |v_{π_uv(i)}|²), which is less than or equal to 2k·µ_uv(i). Thus we have

    E_r[ (1/|M|) Σ_{(i,s)∈M} ε^i_uv ] = E_r[ (1/|M|) Σ_{i=1}^k min(s_{u_i}, s_{v_{π_uv(i)}})·ε^i_uv ]
        ≤ (2/k) Σ_{i=1}^k 2k·µ_uv(i)·ε^i_uv ≤ 4 Σ_{i=1}^k µ_uv(i)·ε^i_uv ≤ 4ε_uv.

Proof of Lemma 4.2 (continued). Applying Theorem 4.1 to the sequences ξ_{u_i,s} ((i, s) ∈ M) and ξ_{v_{π_uv(i)},s} ((i, s) ∈ M), we get that for a given r the probability that the r.v. largest in absolute value in the first sequence and the r.v. largest in absolute value in the second sequence have the same index (i, s) is

    1 − O( √( log|M| · (1/|M|) Σ_{(i,s)∈M} ε^i_uv ) ).

Now by Lemma 4.4, and by the concavity of the function √x, we have

    E_r[ 1 − O( √( (log|M| / |M|) Σ_{(i,s)∈M} ε^i_uv ) ) ] ≥ 1 − O(√(ε_uv log k)).

The probability that there is a larger ξ_{u_i,s} or ξ_{v_{π_uv(i)},s} in Mc is at most

    E_r[ |Mc| / |M| ] ≤ 4ε_uv·k / (k/2) = 8ε_uv.

Using the union bound, we get that the probability of satisfying the constraint π_uv(x_u) = x_v is

    1 − O(√(ε_uv log k)) − 8ε_uv = 1 − O(√(ε_uv log k)).

Theorem 4.5. There is a polynomial time algorithm that finds an assignment of variables which satisfies a 1 − O(√(ε log k)) fraction of all constraints if the optimal solution satisfies a (1 − ε) fraction of all constraints.

Proof. Summing the probabilities obtained in Lemma 4.2 over all edges (u, v) and using the concavity of the function √x, we get that the expected number of satisfied constraints is (1 − O(√(ε log k)))|E|.

5 d-to-1 Games

In this section we extend our results to d-to-1 games.

Definition 5.1. We say that Π ⊂ [k] × [k] is a d-to-1 predicate if for every i there are at most d different values j such that (i, j) ∈ Π, and for every j there is at most one i such that (i, j) ∈ Π.

Definition 5.2 (d-to-1 Games). We are given a directed constraint graph G = (V, E), a set of variables x_u (for all vertices u), and d-to-1 predicates Π_uv ⊂ [k] × [k] for all edges (u, v). Our goal is to assign a value from the set [k] to each variable x_u so that the maximum number of the constraints (x_u, x_v) ∈ Π_uv is satisfied.

Note that even if all constraints of a d-to-1 game are satisfiable, it is hard to find an assignment of variables satisfying all constraints. We will show how to satisfy

    Ω( (1/√log k) · (1 − ε)^4 · (k/√log k)^{−(√d−1+ε)/(√d+1−ε)} )

fraction of all constraints (the multiplicative constant in the Ω notation depends on d). Notice that this value can be obtained by replacing ε in formula (1) with ε′ = 1 − (1 − ε)/√d (and changing (1 − ε)² to (1 − ε)^4).

Even though we do not require that for a constraint Π_uv each i in [k] belongs to some pair (i, j) ∈ Π_uv, let us assume that for each i there exists j s.t. (i, j) ∈ Π_uv, and for each j there exists i s.t. (i, j) ∈ Π_uv. As we will see later, this assumption is not important.

In order to write a relaxation for d-to-1 games, introduce the following notation:

    w^i_uv = Σ_{j : (i,j)∈Π_uv} v_j.

The SDP is as follows:

    minimize   (1/2) Σ_{(u,v)∈E} Σ_{i=1}^k |u_i − w^i_uv|²

    subject to
      ∀u ∈ V, ∀i, j ∈ [k], i ≠ j:   〈u_i, u_j〉 = 0                                (9)
      ∀u ∈ V:                        Σ_{i=1}^k |u_i|² = 1                           (10)
      ∀(u, v) ∈ E, ∀i, j ∈ [k]:      〈u_i, v_j〉 ≥ 0                                (11)
      ∀(u, v) ∈ E, ∀i ∈ [k]:         0 ≤ 〈u_i, w^i_uv〉 ≤ min(|u_i|², |w^i_uv|²)    (12)

An important observation is that |w^1_uv|² + … + |w^k_uv|² = 1; here we use the fact that for a fixed edge (u, v) each v_j is a summand in one and only one w^i_uv.
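For illustration, here is a short numpy sketch (ours, not the paper's) of how the vectors w^i_uv are assembled from an SDP solution, with the predicate encoded as a hypothetical list of (i, j) pairs:

import numpy as np

def w_vectors(V, Pi):
    # V is a (k x D) array of the vectors v_j; Pi is the d-to-1 predicate
    # encoded as a list of (i, j) pairs.
    W = np.zeros_like(V)
    for (i, j) in Pi:
        W[i] += V[j]        # w^i_uv = sum of v_j over j with (i, j) in Pi
    return W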

We use Algorithm 1 for rounding a vector solution. For the analysis we will need to change some notation:

    w̄^i_uv = w^i_uv/|w^i_uv| if w^i_uv ≠ 0, and 0 otherwise;
    ε^i_uv  = |ū_i − w̄^i_uv|² / 2;
    ε^i_uv′ = 1 − (1 − ε^i_uv)/√d;
    µ_uv(i) = (|u_i|² + |w^i_uv|²)/2.

The following lemma explains why we get the new dependency on ε.

Lemma 5.3. For every edge (u, v) and state i there exists j′ s.t. (i, j′) ∈ Π_uv and |ū_i − v̄_{j′}|²/2 ≤ ε^i_uv′.

Proof. Let u′_i be the projection of the vector ū_i onto the linear span of the vectors v̄_j (where (i, j) ∈ Π_uv). Let α_i be the angle between ū_i and w̄^i_uv, and let β_i be the angle between ū_i and u′_i. Clearly, |u′_i| = cos β_i ≥ cos α_i = 1 − ε^i_uv. Since all v̄_j ((i, j) ∈ Π_uv) are orthogonal unit vectors, there exists v̄_{j′} s.t. 〈v̄_{j′}, u′_i〉 ≥ |u′_i|/√d. Hence 〈v̄_{j′}, ū_i〉 = 〈v̄_{j′}, u′_i〉 ≥ (1 − ε^i_uv)/√d.

For every edge (u, v) and state i, find j′ as in the previous lemma and define a function⁴ π_uv(i) = j′. Then replace every constraint (x_u, x_v) ∈ Π_uv with the stronger constraint π_uv(x_u) = x_v. Now we can apply the original analysis of Algorithm 1 to the new problem. In the proof we need to substitute ε^i_uv′ for ε^i_uv, 1 − (1 − ε_uv)/√d for ε_uv, and 1 − (1 − ε)/√d for ε. The only missing step is the following lemma.

Lemma 3.6′. For every edge (u, v) the following statements hold.

1. The average value of ε^i_uv w.r.t. the measure µ_uv is less than or equal to ε_uv.

2. The average value of ε^i_uv′ w.r.t. the measure µ_uv is less than or equal to 1 − (1 − ε_uv)/√d.

3. min(s_{u_i}, s_{v_{π_uv(i)}}) ≥ d·(1 − ε^i_uv′)^4 µ_uv(i)·k.

⁴ The function π_uv is not necessarily a permutation.

Proof. Let α_i be the angle between ū_i and w̄^i_uv, and let α′_i be the angle between ū_i and v̄_{π_uv(i)}.

1. Indeed,

    Σ_{i=1}^k µ_uv(i)·ε^i_uv = Σ_{i=1}^k [ |u_i|² + |w^i_uv|² − (|u_i|² + |w^i_uv|²)·cos α_i ] / 2
                             ≤ Σ_{i=1}^k [ |u_i|² + |w^i_uv|² − 2·|u_i|·|w^i_uv|·cos α_i ] / 2
                             = Σ_{i=1}^k |u_i − w^i_uv|² / 2 = ε_uv.

2. This follows from part 1 and the definition of ε^i_uv′.

3. By the triangle inequality constraint, |w^i_uv| cos α_i ≤ |u_i|. Thus

    (1 − ε^i_uv)² µ_uv(i) = cos² α_i · (|u_i|² + |w^i_uv|²)/2 ≤ |u_i|².

Similarly, |v_{π_uv(i)}| cos α′_i ≤ |u_i| and

    (1 − ε^i_uv′)²|u_i|² ≤ cos² α′_i · |u_i|² ≤ |v_{π_uv(i)}|².

Combining these two inequalities and noting that (1 − ε^i_uv′) = (1 − ε^i_uv)/√d, we get

    d·(1 − ε^i_uv′)^4 µ_uv(i) ≤ (1 − ε^i_uv′)²|u_i|² ≤ |v_{π_uv(i)}|².

The lemma follows.

We now address the issue that for some edges (u, v) and states j there may not exist an i s.t. (i, j) ∈ Π_uv. We call such a j a state of degree 0. The key observation is that in our algorithms we may enforce additional constraints like x_u = i or x_u ≠ i by setting u_i = 1 or u_i = 0 respectively. Thus we can add extra states and enforce that the vertices are not in these states. Then we add pairs (i, j) where i is a new state and j is a state of degree 0 (or vice versa). Alternatively, we can rewrite the objective function by adding an extra term:

    minimize   (1/2) Σ_{(u,v)∈E} ( Σ_{i=1}^k |u_i − w^i_uv|² + |w^0_uv|² ),

where w^0_uv is the sum of the v_j over j of degree 0.

Acknowledgements

We would like to thank Sanjeev Arora, Uri Feige, Johan Håstad, Muli Safra and Luca Trevisan for valuable discussions, and Eden Chlamtac for his comments. The second and third authors thank Microsoft Research for their hospitality.

References

[1] A. Agarwal, M. Charikar, K. Makarychev, and Y. Makarychev. O(√log n) approximation algorithms for Min UnCut, Min 2CNF Deletion, and directed cut problems. In Proceedings of the 37th Annual ACM Symposium on Theory of Computing, pp. 573–581, 2005.

[2] G. Andersson, L. Engebretsen, and J. Håstad. A new way to use semidefinite programming with applications to linear equations mod p. Journal of Algorithms, vol. 39, pp. 162–204, 2001.

[3] S. Chawla, R. Krauthgamer, R. Kumar, Y. Rabani, and D. Sivakumar. On the hardness of approximating multicut and sparsest-cut. In Proceedings of the 20th IEEE Conference on Computational Complexity, pp. 144–153, 2005.

[4] E. Chlamtac, K. Makarychev, and Y. Makarychev. How to play any Unique Game. Manuscript, February 2006.

[5] I. Dinur, E. Mossel, and O. Regev. Conditional hardness for approximate coloring. ECCC Technical Report TR05-039, 2005.

[6] U. Feige and L. Lovász. Two-prover one-round proof systems: their power and their problems. In Proceedings of the 24th ACM Symposium on Theory of Computing, pp. 733–741, 1992.

[7] U. Feige and D. Reichman. On systems of linear equations with two variables per equation. In Proceedings of the 7th International Workshop on Approximation Algorithms for Combinatorial Optimization, vol. 3122 of Lecture Notes in Computer Science, pp. 117–127, 2004.

[8] M. Goemans and D. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the ACM, vol. 42, no. 6, pp. 1115–1145, Nov. 1995.

[9] A. Gupta and K. Talwar. Approximating Unique Games. In Proceedings of the 17th ACM-SIAM Symposium on Discrete Algorithms, pp. 99–106, 2006.

[10] S. Khot. On the power of unique 2-prover 1-round games. In Proceedings of the 34th ACM Symposium on Theory of Computing, pp. 767–775, 2002.

[11] S. Khot, G. Kindler, E. Mossel, and R. O'Donnell. Optimal inapproximability results for MAX-CUT and other two-variable CSPs? In Proceedings of the 45th IEEE Symposium on Foundations of Computer Science, pp. 146–154, 2004.

[12] S. Khot, G. Kindler, E. Mossel, and R. O'Donnell. Optimal inapproximability results for MAX-CUT and other 2-variable CSPs? ECCC Report TR05-101, 2005.

[13] S. Khot and O. Regev. Vertex cover might be hard to approximate to within 2 − ε. In Proceedings of the 18th IEEE Conference on Computational Complexity, pp. 379–386, 2003.

[14] S. Khot and N. Vishnoi. The unique games conjecture, integrality gap for cut problems and the embeddability of negative type metrics into ℓ1. In Proceedings of the 46th IEEE Symposium on Foundations of Computer Science, pp. 53–62, 2005.

[15] E. Mossel, R. O'Donnell, and K. Oleszkiewicz. Noise stability of functions with low influences: invariance and optimality. In Proceedings of the 46th IEEE Symposium on Foundations of Computer Science, pp. 21–40, 2005.

[16] Z. Šidák. Rectangular confidence regions for the means of multivariate normal distributions. Journal of the American Statistical Association, vol. 62, no. 318, pp. 626–633, Jun. 1967.

[17] L. Trevisan. Approximation algorithms for unique games. In Proceedings of the 46th IEEE Symposium on Foundations of Computer Science, pp. 197–205, 2005.

A Properties of Normal Distribution

For completeness we present some standard results used in the paper. Denote by Φ(t) the probability that a standard normal random variable is greater than t ∈ ℝ; in other words,

    Φ(t) ≡ 1 − Φ_{0,1}(t) = Φ_{0,1}(−t),

where Φ_{0,1} is the standard normal distribution function.

Lemma A.1. 1. For every t > 0,

    t/(√(2π)(t² + 1)) · e^{−t²/2} < Φ(t) < 1/(√(2π)·t) · e^{−t²/2}.

2. There exist positive constants c₁, C₁, c₂, C₂ such that for all 0 < p < 1/3, t ≥ 0, and ρ ≥ 1 the following inequalities hold:

    c₁/(√(2π)(t + 1)) · e^{−t²/2} ≤ Φ(t) ≤ C₁/(√(2π)(t + 1)) · e^{−t²/2};
    c₂·√(log(1/p)) ≤ Φ^{−1}(p) ≤ C₂·√(log(1/p)).

3. There exists a positive constant C₃ s.t. for every 0 < δ ≤ 2 and t ≥ 1/δ the following inequality holds:

    Φ(δt + 1/(δt)) ≥ C₃·(t·Φ(t))^{δ²} · t^{−1}.

Proof. 1. First notice that

    Φ(t) = (1/√2π) ∫_t^∞ e^{−x²/2} dx = (1/√2π) [ −e^{−x²/2}/x |_t^∞ − ∫_t^∞ e^{−x²/2}/x² dx ]
         = (1/(√2π·t)) e^{−t²/2} − (1/√2π) ∫_t^∞ e^{−x²/2}/x² dx.

Thus

    Φ(t) < (1/(√2π·t)) e^{−t²/2}.

On the other hand,

    (1/√2π) ∫_t^∞ e^{−x²/2}/x² dx < (1/(√2π·t²)) ∫_t^∞ e^{−x²/2} dx = Φ(t)/t².

Hence

    Φ(t) > (1/(√2π·t)) e^{−t²/2} − Φ(t)/t²,

and

    Φ(t) > (t/(√2π(t² + 1))) e^{−t²/2}.

2. This trivially follows from part 1.

3. Using part 2 we get

    Φ(δt + 1/(δt)) ≥ C·(1 + δt + 1/(δt))^{−1} · e^{−(δt + 1/(δt))²/2} ≥ C′·(δt + 1)^{−1} · e^{−δ²t²/2}
                   ≥ C″·( e^{−t²/2}/(t + 1) )^{δ²} · t^{δ²} · t^{−1} ≥ C‴·(t·Φ(t))^{δ²} · t^{−1}.
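These bounds are easy to check numerically. The following sketch (our addition) verifies part 1 of Lemma A.1 at a few sample points, using scipy's norm.sf for the tail Φ:

import numpy as np
from scipy.stats import norm

# Lemma A.1(1): for t > 0,
#   t/(sqrt(2 pi)(t^2+1)) e^{-t^2/2}  <  Phi(t)  <  1/(sqrt(2 pi) t) e^{-t^2/2},
# where Phi(t) = Pr(N(0,1) >= t) is the Gaussian tail (scipy's norm.sf).
for t in [0.5, 1.0, 2.0, 4.0]:
    lo = t / (np.sqrt(2 * np.pi) * (t ** 2 + 1)) * np.exp(-t ** 2 / 2)
    hi = 1 / (np.sqrt(2 * np.pi) * t) * np.exp(-t ** 2 / 2)
    assert lo < norm.sf(t) < hi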


We will use the following result of Z. Šidák [16]:

Theorem A.2 (Šidák). Let ξ_1, …, ξ_k be normal random variables with mean zero. Then for any positive t_1, …, t_k,

    Pr(|ξ_1| ≤ t_1, |ξ_2| ≤ t_2, …, |ξ_k| ≤ t_k) ≥ Pr(|ξ_1| ≤ t_1) · Pr(|ξ_2| ≤ t_2, …, |ξ_k| ≤ t_k).

Note that these random variables do not have to be independent.

Corollary A.3. Let ξ_1, …, ξ_k be normal random variables with mean zero. Then for any positive t_1, …, t_k,

    Pr(ξ_1 ≥ t_1 | |ξ_2| ≤ t_2, …, |ξ_k| ≤ t_k) ≤ Pr(ξ_1 ≥ t_1).

Proof. By Theorem A.2,

    Pr(|ξ_1| ≤ t_1 | |ξ_2| ≤ t_2, …, |ξ_k| ≤ t_k) ≥ Pr(|ξ_1| ≤ t_1).

Thus

    Pr(ξ_1 ≥ t_1 | |ξ_2| ≤ t_2, …, |ξ_k| ≤ t_k) = 1/2 − (1/2)·Pr(|ξ_1| ≤ t_1 | |ξ_2| ≤ t_2, …, |ξ_k| ≤ t_k)
        ≤ 1/2 − (1/2)·Pr(|ξ_1| ≤ t_1) = Pr(ξ_1 ≥ t_1).

B Analysis of Algorithm 1

In this section we will prove some technical lemmas we used in the analysis of the first algorithm.

Lemma B.1. Let ξ and η be correlated standard normal random variables, 0 < ε < 1, t ≥ 1. If cov(ξ, η) ≥ 1 − ε, then

    Pr(ξ ≥ t and η ≥ t) ≥ C · min(1, (√ε·t)^{−1}) · t^{−1} · (t·Φ(t))^{2/(2−ε)}   (13)

for some positive constant C.

Proof. Let us represent ξ and η as follows:

    ξ = σX + √(1 − σ²)·Y;   η = σX − √(1 − σ²)·Y,

where

    σ² = Var[(ξ + η)/2];   X = (ξ + η)/(2σ);   Y = (ξ − η)/(2√(1 − σ²)).

Note that X and Y are independent standard normal random variables, and

    σ² = Var[(ξ + η)/2] = (1/4)·[2 + 2 cov(ξ, η)] ≥ 1 − ε/2.   (14)

Notice that 1/2 ≤ σ² ≤ 1. We now estimate the probability (13) as follows:

    Pr(ξ ≥ t and η ≥ t) = Pr(σX ≥ t + √(1 − σ²)·|Y|)
        ≥ Pr(X ≥ t/σ + σ/t) · Pr(|Y| ≤ σ²/(√(1 − σ²)·t)).

By Lemma A.1(3) we get

    Pr(ξ ≥ t and η ≥ t) ≥ C·( t^{−1}·(t·Φ(t))^{1/σ²} )·min(1, σ²/(√(1 − σ²)·t))
        ≥ C′·min((√ε·t)^{−1}, 1)·t^{−1}·(t·Φ(t))^{2/(2−ε)}.

Corollary B.2. Let ξ and η be standard normal random variables with covariance greater than or equal to 1 − ε, and let Φ(t) = 1/k. Then

    Pr(ξ ≥ t and η ≥ t) ≥ Ω( min(1, 1/√(ε log k)) · (1/√log k) · (k/√log k)^{−2/(2−ε)} ).

Lemma B.3. Let ξ, η, ε, k and t be as in Corollary B.2, and let ξ_1, …, ξ_m be i.i.d. standard normal random variables with m ≤ 2k. Then

    E[ Σ_{i=1}^m I_{ξ_i≥t} | ξ ≥ t and η ≥ t ] = O(1),

where I_{ξ_i≥t} is the indicator of the event ξ_i ≥ t.

Proof. Let X and Y be as in the proof of Lemma B.1. Put α_i = cov(X, ξ_i) and express each ξ_i as ξ_i = α_i X + √(1 − α_i²)·Z_i. By Bessel's inequality, α_1² + … + α_m² ≤ 1 (since the random variables ξ_i are orthogonal). We now estimate

    Pr(ξ_i ≥ t | σX ≥ t + √(1 − σ²)·|Y|)
      = Pr(ξ_i ≥ t | σX ≥ t + √(1 − σ²)·|Y| and X ≤ 4t) · Pr(X ≤ 4t | σX ≥ t + √(1 − σ²)·|Y|)
      + Pr(ξ_i ≥ t | σX ≥ t + √(1 − σ²)·|Y| and X > 4t) · Pr(X > 4t | σX ≥ t + √(1 − σ²)·|Y|).

Notice that

    Pr(ξ_i ≥ t | σX ≥ t + √(1 − σ²)·|Y| and X < 4t)
      = Pr(α_i X + √(1 − α_i²)·Z_i ≥ t | σX ≥ t + √(1 − σ²)·|Y| and X < 4t)
      = ∫_{t/σ}^{4t} Pr(α_i x + √(1 − α_i²)·Z_i ≥ t | σx ≥ t + √(1 − σ²)·|Y|) dF(x)
      ≤ max_{x∈[t/σ, 4t]} Pr(√(1 − α_i²)·Z_i ≥ t − α_i x | √(1 − σ²)·|Y| ≤ σx − t)
      ≤ max_{x∈[t/σ, 4t]} Pr(√(1 − α_i²)·Z_i ≥ t − α_i x)   (by Corollary A.3)
      ≤ Pr(Z_i ≥ (1 − 4α_i)t).

It suffices to prove that

    Σ_{i=1}^m Pr(Z_i ≥ (1 − 4α_i)t) = Σ_{i=1}^m Φ((1 − 4α_i)t) = O(1).

Fix a sufficiently large constant c. The number of α_i that are greater than 1/c is O(1). The number of α_i such that 1/log k ≤ α_i ≤ 1/c is O(log² k), and for them Φ((1 − 4α_i)t) = O(k^{−1/2}) (since c is a sufficiently large constant). Finally, if α_i < 1/log k, then Φ((1 − 4α_i)t) = O(k^{−1}). This finishes the proof.

Lemma B.4. The function (1 − x)² f_k(x) is convex on the interval [0, 1].

Proof. Let m = k/√log k, so that f_k(x) = m^{−2/(2−x)}. Compute the second derivative of f_k:

    f_k″(x) = ( m^{−2/(2−x)} )″ = −2 log m · ( m^{−2/(2−x)} / (2 − x)² )′
            = 4 log m · ( m^{−2/(2−x)} / (2 − x)³ ) · ( log m/(2 − x) − 1 ).

Now ((1 − x)²·f_k(x))″ = (1 − x)²·f_k″(x) − 4(1 − x)·f_k′(x) + 2f_k(x). Observe that f_k(x) is always positive, and f_k′(x) is always negative. Therefore, if f_k″(x) is positive, we are done: ((1 − x)²·f_k(x))″ ≥ 0. Otherwise, we have

    ((1 − x)²·f_k(x))″ = (1 − x)²·f_k″(x) − 4(1 − x)·f_k′(x) + 2f_k(x) ≥ f_k″(x) + 2f_k(x)
        ≥ 4 log m · m^{−2/(2−x)} · ( log m/2 − 1 ) + 2m^{−2/(2−x)} = 2m^{−2/(2−x)}·(log m − 1)² ≥ 0.

C Analysis of Algorithm 2

In this section, we present the formal proof of Theorem 4.1. We will follow the informal outline of the proof sketched in the main text. We start with estimating probability (8).

Lemma C.1. Let ξ and ζ be two independent normal random variables with variances 1 and σ² respectively (0 < σ < 1). Then for every positive t,

    Pr(ξ ≤ t and ξ + ζ ≥ t) = O( σ·e^{(σt+1)²/2} · e^{−t²/2} ).

Remark C.1. In the "typical" case e^{(σt+1)²/2} is a constant.

Proof. We have

    Pr(ξ ≤ t and ξ + ζ ≥ t) = ∫_0^∞ Pr(ξ ≤ t and ξ + x ≥ t) dF_ζ(x)
      = (1/(√2π·σ)) ∫_0^∞ Pr(ξ ≤ t and ξ + x ≥ t) e^{−x²/(2σ²)} dx
      = (1/√2π) ∫_0^∞ Pr(ξ ≤ t and ξ + σy ≥ t) e^{−y²/2} dy
      = (1/√2π) ∫_0^{t/σ} Pr(t − σy ≤ ξ ≤ t) e^{−y²/2} dy + (1/√2π) ∫_{t/σ}^∞ Pr(t − σy ≤ ξ ≤ t) e^{−y²/2} dy.

Let us bound the first integral. Since the density of the random variable ξ on the interval (t − σy, t) is at most (1/√2π)·e^{−(t−σy)²/2} and y ≤ e^y, we have

    Pr(t − σy ≤ ξ ≤ t) ≤ σy·(1/√2π)·e^{−(t−σy)²/2} ≤ (σ/√2π)·e^{−t²/2}·e^{(σt+1)y}.

Therefore,

    (1/√2π) ∫_0^{t/σ} Pr(t − σy ≤ ξ ≤ t) e^{−y²/2} dy ≤ σ·e^{−t²/2} ∫_0^{t/σ} e^{(σt+1)y}·e^{−y²/2} dy
      ≤ σ·e^{−t²/2} ∫_{−∞}^∞ e^{−(y−(σt+1))²/2}·e^{(σt+1)²/2} dy = O( σ·e^{−t²/2}·e^{(σt+1)²/2} ).

We now upper bound the second integral. If t ≥ 1, then

    (1/√2π) ∫_{t/σ}^∞ Pr(t − σy ≤ ξ ≤ t) e^{−y²/2} dy ≤ (1/√2π) ∫_{t/σ}^∞ e^{−y²/2} dy = Φ(t/σ)
      = O( e^{−t²/(2σ²)} / (t/σ + 1) ) = O( σ·e^{−t²/2} / (t + σ) ) = O( σ·e^{−t²/2} ).

If t ≤ 1, then

    (1/√2π) ∫_{t/σ}^∞ Pr(t − σy ≤ ξ ≤ t) e^{−y²/2} dy ≤ (1/√2π) ∫_0^∞ σy·e^{−y²/2} dy = O(σ) = O( σ·e^{−t²/2} ).

The desired inequality follows from the upper bounds on the first and second integrals.

We need a slight generalization of the lemma.

Corollary C.2. Let ξ and ζ be two independent normal random variables with variances 1 and σ² respectively (0 < σ < 1). Then for every t ≥ 0 and 0 ≤ ε < 1,

    Pr(ξ + ζ ≥ (1 − ε)t | |ξ| ≤ t) = O( (σ + εt)·c(ε, σ, t)·e^{−t²/2} / (1 − 2Φ(t)) ),

where

    c(ε, σ, t) = e^{(σt+1)²/2 + εt²}.

Remark C.2. As in the previous lemma, in the "typical" case c(ε, σ, t) is a constant.

Proof. First note that

    Pr(ξ + ζ ≥ (1 − ε)t | |ξ| ≤ t) ≤ Pr(ξ + ζ ≥ (1 − ε)t and ξ ≤ t) / Pr(|ξ| ≤ t)
      = Pr(ξ + ζ ≥ (1 − ε)t and ξ ≤ t) / (1 − 2Φ(t)).

Now,

    Pr(ξ + ζ ≥ (1 − ε)t and ξ ≤ t) ≤ Pr(ξ + ζ ≥ t and ξ ≤ t) + Pr((1 − ε)t ≤ ξ + ζ ≤ t).

By Lemma C.1, the first probability is bounded as follows:

    Pr(ξ + ζ ≥ t and ξ ≤ t) ≤ O( σ·e^{(σt+1)²/2}·e^{−t²/2} ).

Since Var[ξ + ζ] ≤ 1 + σ², the second probability is at most

    Pr((1 − ε)t ≤ ξ + ζ ≤ t) ≤ εt·e^{−((1−ε)t)²/(2(1+σ²))} ≤ εt·e^{(2ε+σ²)t²/2}·e^{−t²/2};

here we used the following inequality:

    (1 − ε)²t² / (2(1 + σ²)) = (1 − ε)²(1 − σ²)t² / (2(1 − σ⁴)) ≥ (1 − 2ε − σ²)t²/2 ≥ t²/2 − (2ε + σ²)t²/2.

The corollary follows.

In the following lemma we formally define the random variables ζ1 and ζ2.

Lemma C.3. Let ξ_1, ξ_2, η_1 and η_2 be standard normal random variables such that ξ_1 and ξ_2 are independent, η_1 and η_2 are independent, and

• cov(ξ_1, η_1) ≥ 1 − ε ≥ 0 and cov(ξ_2, η_2) ≥ 1 − ε ≥ 0 (for some positive ε);

• cov(ξ_1, η_2) ≥ 0 and cov(ξ_2, η_1) ≥ 0.

Then there exist normal random variables ζ_1 and ζ_2, independent of ξ_1 and ξ_2, with variance at most 2ε, such that

    |η_1| − |η_2| ≥ (1 − 4ε)|ξ_1| − (1 + 3ε)|ξ_2| − |ζ_1| − |ζ_2|.

Proof. Express η_1 as a linear combination of ξ_1, ξ_2, and a normal r.v. ζ_1 independent of ξ_1 and ξ_2:

    η_1 = α_1ξ_1 + β_1ξ_2 + ζ_1;

similarly,

    η_2 = α_2ξ_1 + β_2ξ_2 + ζ_2.

Note that α_1 = cov(η_1, ξ_1) ≥ 1 − ε and β_1 = cov(η_1, ξ_2) ≥ 0. Thus

    Var[ζ_1] ≤ Var[η_1] − α_1² ≤ 1 − (1 − ε)² ≤ 2ε.

Similarly, α_2 ≥ 0, β_2 ≥ 1 − ε, and Var[ζ_2] ≤ 2ε. Since η_1 and η_2 are independent, we have

    α_1α_2 + β_1β_2 + cov(ζ_1, ζ_2) = cov(η_1, η_2) = 0.

Therefore (note that cov(ζ_1, ζ_2) ≤ 0, α_1α_2 ≥ 0, β_1β_2 ≥ 0),

    α_2 = (−β_1β_2 − cov(ζ_1, ζ_2)) / α_1 ≤ √(Var[ζ_1]·Var[ζ_2]) / (1 − ε) ≤ 2ε/(1 − ε).

Taking into account that α_2 ≤ 1, we get α_2 ≤ min(1, 2ε/(1 − ε)) ≤ 3ε. Similarly, β_1 ≤ 3ε. Finally, we have

    |η_1| − |η_2| ≥ (α_1 − α_2)|ξ_1| − (β_1 + β_2)|ξ_2| − |ζ_1| − |ζ_2|
                 ≥ (1 − 4ε)|ξ_1| − (1 + 3ε)|ξ_2| − |ζ_1| − |ζ_2|.

In what follows we assume that ξ_1 is the r.v. largest in absolute value among ξ_1, …, ξ_m and its absolute value is t. For convenience we define three events:

    A_t = {|ξ_i| ≤ t for all 3 ≤ i ≤ m};
    E_t = A_t ∩ {|ξ_1| = t and |ξ_2| ≤ t};
    E = {|ξ_1| ≥ |ξ_i| for all i} = ∪_{t≥0} E_t.

Now we are ready to combine Corollary C.2 and Lemma C.3.

Lemma C.4. Let ξ_1, …, ξ_m and η_1, …, η_m be two sequences of standard normal random variables. Suppose that

1. the random variables in each of the sequences are independent,

2. the covariance of every ξ_i and η_j is nonnegative,

3. cov(ξ_1, η_1) ≥ 1 − ε and cov(ξ_2, η_2) ≥ 1 − ε, where ε ≤ 1/7.

Then

    Pr(|η_1| ≤ |η_2| | E_t) = O( (√ε + εt)·e^{−t²/2}·c(7ε, √(8ε), t) / (1 − 2Φ(t)) ),   (15)

where c(ε, σ, t) is from Corollary C.2.

Proof. By Lemma C.3, we have

    |η_1| − |η_2| ≥ (1 − 4ε)|ξ_1| − (1 + 3ε)|ξ_2| − |ζ_1| − |ζ_2|.

Therefore,

    Pr(|η_1| ≤ |η_2| | E_t) ≤ Pr((1 + 3ε)|ξ_2| + |ζ_1| + |ζ_2| ≥ (1 − 4ε)|ξ_1| | E_t)
      ≤ Pr(|ξ_2| + |ζ_1| + |ζ_2| ≥ (1 − 7ε)t | E_t)
      ≤ Σ_{s,s_1,s_2∈{±1}} Pr(sξ_2 + s_1ζ_1 + s_2ζ_2 ≥ (1 − 7ε)t | E_t).

Let us fix signs s, s_1, s_2 ∈ {±1} and denote ξ = sξ_2, ζ = s_1ζ_1 + s_2ζ_2. Then we need to show that

    Pr(ξ + ζ ≥ (1 − 7ε)t | E_t) = O( (√ε + εt)·e^{−t²/2}·c(7ε, √(8ε), t) / (1 − 2Φ(t)) ).

Observe that the random variables ξ, ζ and the event A_t are independent of ξ_1; thus

    Pr(ξ + ζ ≥ (1 − 7ε)t | E_t)
      = Pr(ξ + ζ ≥ (1 − 7ε)t | A_t and |ξ_1| = t and |ξ| ≤ t)
      = Pr(ξ + ζ ≥ (1 − 7ε)t | A_t and |ξ| ≤ t)
      = Pr(ζ ≥ (1 − 7ε)t − ξ | A_t and |ξ| ≤ t).

Since ξ and A_t are independent, for every fixed value of ξ we can apply Corollary A.3. Thus

    Pr(ζ ≥ (1 − 7ε)t − ξ | A_t and |ξ| ≤ t) ≤ Pr(ζ ≥ (1 − 7ε)t − ξ | |ξ| ≤ t)
      = Pr(ξ + ζ ≥ (1 − 7ε)t | |ξ| ≤ t).

Finally, by Corollary C.2 (where σ² = Var[ζ] ≤ 8ε),

    Pr(ξ + ζ ≥ (1 − 7ε)t | |ξ| ≤ t) = O( (√ε + εt)·e^{−t²/2}·c(7ε, √(8ε), t) / (1 − 2Φ(t)) ).

Corollary C.5. Under the assumptions of Lemma C.4:

1. if εt² ≤ 1, then

       Pr(|η_1| ≤ |η_2| | E_t) = O( √ε·(t + 1)·Φ(t) / (1 − 2Φ(t)) );

2. if t > 1, then

       Pr(|η_1| ≤ |η_2| | E_t) = O(√ε).

Proof. 1. If εt² ≤ 1, then εt ≤ √ε and

    c(7ε, √(8ε), t) = e^{(√(8ε)·t+1)²/2 + 7εt²} = O(1).

Notice that

    (√ε + εt)·e^{−t²/2} / (1 − 2Φ(t)) = O( (√ε + εt)·(t + 1)·Φ(t) / (1 − 2Φ(t)) ),

since

    Φ(t) = Θ( e^{−t²/2} / (t + 1) ).

2. If ε > 1/32 the statement holds trivially, so assume that ε ≤ 1/32. Then

    (√(8ε)·t + 1)²/2 + 7εt² ≤ 3t²/8 + O(t).

Thus t·e^{−t²/2}·c(7ε, √(8ε), t) is upper bounded by some absolute constant. Since t ≥ 1, the denominator 1 − 2Φ(t) of expression (15) is bounded away from 0.

We now give a bound on the “typical” absolute value of the largest random variable.

Lemma C.6. The following inequality holds:

    Pr(|ξ_1| ≥ 2√log m | E) ≤ 1/m.

Proof. Note that the probability of the event E is 1/m, since all the random variables ξ_1, …, ξ_m are equally likely to be the largest in absolute value. Thus we have

    Pr(|ξ_1| ≥ 2√log m | E) ≤ Pr(|ξ_1| ≥ 2√log m) / Pr(E) ≤ (1/m²) / (1/m) = 1/m.

Lemma C.7. Let ξ_1, …, ξ_m and η_1, …, η_m be two sequences of standard normal random variables as in Theorem 4.1. Assume that cov(ξ_1, η_1) ≥ 1 − ε and cov(ξ_2, η_2) ≥ 1 − ε, where ε < min(1/(4 log m), 1/7). Then

    Pr(|η_1| ≤ |η_2| | E) = O( √(ε log m) / m ).

Proof. Write the desired probability as follows:

    Pr(|η_1| ≤ |η_2| | E) = Pr(|η_1| ≤ |η_2| and |ξ_1| ≤ 2√log m | E)
                          + Pr(|η_1| ≤ |η_2| and |ξ_1| ≥ 2√log m | E).

First consider the case |ξ_1| ≤ 2√log m. Denote by dF_{|ξ_1|} the density of |ξ_1| conditional on E. Then

    Pr(|η_1| ≤ |η_2| and |ξ_1| ≤ 2√log m | E) = ∫_0^{2√log m} Pr(|η_1| ≤ |η_2| | E and |ξ_1| = t) dF_{|ξ_1|}(t)
      = ∫_0^{2√log m} Pr(|η_1| ≤ |η_2| | E_t) dF_{|ξ_1|}(t).

Now by Corollary C.5,

    ∫_0^{2√log m} Pr(|η_1| ≤ |η_2| | E_t) dF_{|ξ_1|}(t) = ∫_0^{2√log m} O( 2√(ε log m)·Φ(t) / (1 − 2Φ(t)) ) dF_{|ξ_1|}(t).

Let us change the variable to x = 1 − 2Φ(t). What is the probability density function of 1 − 2Φ(|ξ_1|) given E? For each i the r.v. 1 − 2Φ(|ξ_i|) is uniformly distributed on the interval [0, 1]. Now |ξ_i| > |ξ_j| if and only if 1 − 2Φ(|ξ_i|) > 1 − 2Φ(|ξ_j|); therefore, given E, 1 − 2Φ(|ξ_1|) is distributed as the maximum of m independent uniform random variables on [0, 1]. Its density function is (x^m)′ = m·x^{m−1} (for x ∈ [0, 1]). We have

    ∫_0^{2√log m} ( 2√(ε log m)·Φ(t) / (1 − 2Φ(t)) ) dF_{|ξ_1|}(t) ≤ ∫_0^∞ ( 2√(ε log m)·Φ(t) / (1 − 2Φ(t)) ) dF_{|ξ_1|}(t)
      = ∫_0^1 ( 2√(ε log m)·((1 − x)/2) / x )·m·x^{m−1} dx = m·√(ε log m) ∫_0^1 (1 − x)·x^{m−2} dx
      = m·√(ε log m)·( 1/(m − 1) − 1/m ) = √(ε log m)/(m − 1).

Now consider the case |ξ_1| ≥ 2√log m. By Corollary C.5,

    Pr(|η_1| ≤ |η_2| | E and |ξ_1| ≥ 2√log m) = O(√ε).

By Lemma C.6,

    Pr(|ξ_1| ≥ 2√log m | E) ≤ 1/m.

This concludes the proof.

Now we will prove a lemma, which differs from Theorem 4.1 only by one additional condition (4).

Lemma C.8. Let ξ_1, …, ξ_m and η_1, …, η_m be two sequences of standard normal random variables. Let ε_i = 1 − cov(ξ_i, η_i). Suppose that

1. the random variables in each of the sequences are independent,

2. the covariance of every ξ_i and η_j is nonnegative,

3. (1/m) Σ_{i=1}^m ε_i = ε,

4. ε_i ≤ min(1/(4 log m), 1/7).

Then the probability that the r.v. largest in absolute value in the first sequence has the same index as the r.v. largest in absolute value in the second sequence is 1 − O(√(ε log m)).

Proof. By Lemma C.7,

    Pr(|η_1| ≤ |η_2| | |ξ_1| ≥ max_{j≥2}|ξ_j|) = O( (√log m / m)·√(max(ε_1, ε_2)) ).

Applying the union bound, we get

    Pr(|η_1| ≤ max_{i≥2}|η_i| | |ξ_1| ≥ max_{j≥2}|ξ_j|) = O( (√log m / m) Σ_{i=2}^m √(max(ε_1, ε_i)) )
      = O( (√log m / m)·( m√ε_1 + Σ_{i=1}^m √ε_i ) )
      ≤ O( √log m·(√ε_1 + √ε) )   (by Jensen's inequality).

Since the probability that |ξ_i| = max_j |ξ_j| equals 1/m for each i, the probability that the r.v. largest in absolute value among the ξ_i and the r.v. largest in absolute value among the η_j have different indices is at most

    O( (1/m) Σ_{i=1}^m √log m·(√ε_i + √ε) ) ≤ O( √log m·(√ε + √ε) ) = O( √(ε log m) ).

Proof of Theorem 4.1. Denote ε_i = 1 − cov(ξ_i, η_i). Then ε_1 + … + ε_m ≤ mε. We may assume that ε < min(1/(4 log m), 1/7); otherwise the theorem follows trivially.

Consider the set I = {i : ε_i < min(1/(4 log m), 1/7)}. Since ε < min(1/(4 log m), 1/7), the set I is not empty. Applying Lemma C.8 to the random variables {ξ_i}_{i∈I} and {η_i}_{i∈I}, we conclude that the r.v. largest in absolute value among {ξ_i}_{i∈I} has the same index as the r.v. largest in absolute value among {η_i}_{i∈I} with probability

    1 − O( √( log|I| · (1/|I|) Σ_{i∈I} ε_i ) ) = 1 − O(√(ε log m)).

Since each ξ_i is the largest r.v. in absolute value among ξ_1, …, ξ_m with probability 1/m, the probability that the largest r.v. among ξ_1, …, ξ_m does not belong to {ξ_i}_{i∈I} is (m − |I|)/m. Similarly, the probability that the largest r.v. among η_1, …, η_m does not belong to {η_i}_{i∈I} is (m − |I|)/m. Therefore, by the union bound, the probability that the r.v. largest in absolute value among the ξ_i and the r.v. largest in absolute value among the η_j have the same index is at least

    1 − O(√(ε log m)) − 2(m − |I|)/m.   (16)

We now upper bound the last term. By Markov's inequality,

    2(m − |I|)/m ≤ 2·ε / min(1/(4 log m), 1/7) ≤ 2(4 log m + 7)·ε = O(ε log m) = O(√(ε log m)).

(Here we use that ε log m < 1.) Plugging this bound into (16), we get that the desired probability is 1 − O(√(ε log m)). This finishes the proof.

28

Page 29: Near-Optimal Algorithms for Unique Games

D An Alternate Approach

We would like to present an alternative version of Algorithm 1. It demonstrates another approach to taking into account the lengths of the vectors u_i: we can choose a separate threshold t_{u_i} for each u_i. This algorithm achieves the same approximation ratio. The analysis uses similar ideas, and we omit it here.

Input: A solution of the SDP, with objective value ε·|E|.
Output: An assignment of variables x_u.

1. Define ū_i = u_i/|u_i| if u_i ≠ 0, and ū_i = 0 otherwise.

2. Pick a random Gaussian vector g with independent components distributed as N(0, 1).

3. For each vertex u:
   (a) For each i, project the vector g onto ū_i:

           ξ_{u_i} = 〈g, ū_i〉.

   (b) Fix a threshold t_{u_i} s.t. Pr(ξ_{u_i} ≥ t_{u_i}) = |u_i|² (i.e. t_{u_i} is the (1 − |u_i|²)-quantile of the standard normal distribution). Note that t_{u_i} depends on u_i.
   (c) Pick the ξ_{u_i}'s that are larger than the threshold t_{u_i}:

           S_u = {i : ξ_{u_i} ≥ t_{u_i}}.

   (d) Pick a state i from S_u uniformly at random and assign x_u = i.
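A Python sketch of this variant (our illustration, with the same conventions as the earlier rounding sketches): a single Gaussian vector g is shared by all vertices, and each ū_i gets its own threshold satisfying Pr(ξ ≥ t_{u_i}) = |u_i|².

import numpy as np
from scipy.stats import norm

def round_ug_alt(vectors, k, rng=np.random.default_rng()):
    # One Gaussian vector; per-vector thresholds t_ui with Pr(xi >= t_ui) = |u_i|^2.
    D = next(iter(vectors.values())).shape[1]
    g = rng.standard_normal(D)
    assignment = {}
    for u, U in vectors.items():
        S = []
        for i in range(k):
            ni2 = float(U[i] @ U[i])
            if ni2 == 0.0:
                continue
            xi = (U[i] / np.sqrt(ni2)) @ g
            if xi >= norm.ppf(1.0 - ni2):    # t_ui, the (1 - |u_i|^2)-quantile
                S.append(i)
        if S:
            assignment[u] = S[rng.integers(len(S))]
    return assignment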
