Introduction to Optimization

Global Optimization

Marc Toussaint, U Stuttgart
2016-12-21
Page 1

Introduction to Optimization

Global Optimization

Marc Toussaint, U Stuttgart

Page 2

Global Optimization

• Is there an optimal way to optimize (in the Blackbox case)?

• Is there a way to find the global optimum instead of only local?


Page 3

Core references

• Jones, D., M. Schonlau, & W. Welch (1998). Efficient global optimization of expensive black-box functions. Journal of Global Optimization 13, 455–492.

• Jones, D. R. (2001). A taxonomy of global optimization methods based on response surfaces. Journal of Global Optimization 21, 345–383.

• Poland, J. (2004). Explicit local models: Towards optimal optimization algorithms. Technical Report No. IDSIA-09-04.


Page 4

More up-to-date – very nice GP-UCB introduction


Page 5

Outline

• Play a game

• Multi-armed bandits & Upper Confidence Bound (UCB)

• Optimization as infinite bandits; GPs as response surfaces

• Standard criteria:
– Upper Confidence Bound (UCB)
– Maximal Probability of Improvement (MPI)
– Expected Improvement (EI)


Page 6

Multi-armed bandits

• There are n machines.
Each machine has an average reward f_i – but you don’t know the f_i’s.

What do you do?


Page 7

Multi-armed bandits

• Let a_t ∈ {1, …, n} be the choice of machine at time t.
Let y_t ∈ {0, 1} be the outcome, with mean 〈y_t〉 = f_{a_t}.

• A policy or strategy maps all the history to a new action:

π : [(a_1, y_1), (a_2, y_2), …, (a_{t−1}, y_{t−1})] ↦ a_t

• Example objectives: find a policy π that achieves

max 〈∑_{t=1}^T y_t〉    or    max 〈y_T〉

or other variants.
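The setting and the 〈∑_t y_t〉 objective can be made concrete with a small Monte Carlo simulation. This is an illustrative sketch, not from the slides; all names (pull, run_policy, greedy_policy) and the reward probabilities are assumptions:

```python
import random

def pull(f_i):
    """Bernoulli bandit: outcome y in {0, 1} with mean f_i."""
    return 1 if random.random() < f_i else 0

def run_policy(policy, f, T):
    """Play T rounds; return the history and the total reward sum_t y_t."""
    history, total = [], 0
    for t in range(T):
        a = policy(history)
        y = pull(f[a])
        history.append((a, y))
        total += y
    return history, total

def random_policy(history):
    return random.randrange(3)

def greedy_policy(history):
    """For the 3-arm example: play each arm once, then always the arm
    with the highest empirical mean (pure exploitation)."""
    n, s = [0, 0, 0], [0, 0, 0]
    for a, y in history:
        n[a] += 1
        s[a] += y
    for a in range(3):
        if n[a] == 0:
            return a
    return max(range(3), key=lambda a: s[a] / n[a])

random.seed(0)
f = [0.2, 0.5, 0.8]          # the unknown f_i's, hidden from the policies
for name, pi in [("random", random_policy), ("greedy", greedy_policy)]:
    totals = [run_policy(pi, f, 100)[1] for _ in range(200)]
    print(name, sum(totals) / len(totals))   # Monte Carlo estimate of <sum_t y_t>
```

The greedy policy beats the random one on average, but it can latch onto a suboptimal arm after a lucky first pull, which is exactly the exploration problem discussed on the next slides.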

Page 8

Exploration vs. Exploitation

• Such kinds of problems appear in many contexts (Global Optimization, AI, Reinforcement Learning, etc.)

• In simple domains (standard MDPs), actions influence the (external) world state → actions navigate through the state space

In learning domains, actions influence your knowledge → actions navigate through state and belief space

In multi-armed bandits, the bandits usually do not have an internal state variable – they are the same every round.


Page 9

Exploration vs. Exploitation

• The “knowledge” can be represented as the full history

h_t = [(a_1, y_1), (a_2, y_2), …, (a_{t−1}, y_{t−1})]

or, in Bayesian thinking, as a belief

b_t = P(X | h_t) = P(h_t | X) P(X) / P(h_t)

where X is all the (unknown) properties of the world

• In the multi-armed bandit case: X = (f_1, …, f_n)

b_t = P(X | h_t) = ∏_i N(f_i | y_{i,t}, σ_{i,t})    (if bandits are Gaussian)
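For Gaussian bandits this belief factorizes over arms, and each factor has a closed-form update. A minimal sketch, assuming a Gaussian prior on each f_i and known observation noise; the function name and the prior/noise values are illustrative choices, not from the slides:

```python
def gaussian_arm_posterior(ys, prior_mean=0.0, prior_var=100.0, noise_var=1.0):
    """Posterior N(f_i | mean, var) for one arm's unknown mean f_i,
    given observations ys, a Gaussian prior, and known observation noise.
    Precisions (inverse variances) add; the mean is a precision-weighted sum."""
    precision = 1.0 / prior_var + len(ys) / noise_var
    var = 1.0 / precision
    mean = var * (prior_mean / prior_var + sum(ys) / noise_var)
    return mean, var

# Belief b_t = prod_i N(f_i | mean_i, var_i): one independent posterior per arm.
history = [(0, 1.2), (1, 0.3), (0, 0.8), (0, 1.0)]
for i in range(2):
    ys = [y for a, y in history if a == i]
    mean, var = gaussian_arm_posterior(ys)
    print(f"arm {i}: mean={mean:.3f}, var={var:.3f}")
```

Arms that have been played more often get a smaller posterior variance, which is exactly the quantity σ_{i,t} that the UCB criterion on the following slides exploits.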


Page 10

Navigating through Belief Space

[Figure: belief-space trajectory – starting from the prior b_0, each action a_t and outcome y_t updates the belief b_{t−1} → b_t about the world X]

– Maximizing for 〈y_3〉 requires having a “good” b_2
– Actions a_1 and a_2 should be planned to achieve the best possible b_2
– Action a_3 then greedily chooses the machine with highest y_{i,2}

• Exploration: Choose the next action a_t to min 〈H(b_t)〉
• Exploitation: Choose the next action a_t to max 〈y_t〉
• Maximizing for 〈y_T〉 (or similar) requires exploration and exploitation

Such policies can in principle be computed → POMDPs (or Lai & Robbins)

But in the following we discuss more efficient 1-step criteria

Page 11

Upper Confidence Bound (UCB) selection

1: Initialization: Play each machine once
2: repeat
3:   Play the machine i that maximizes ŷ_i + √(2 ln n / n_i)
4: until

ŷ_i is the average reward of machine i so far
n_i is how often machine i has been played so far
n = ∑_i n_i is the number of rounds so far

(The ln n makes this work also for non-Gaussian bandits, e.g. heavy-tailed.)

See lane.compbio.cmu.edu/courses/slides_ucb.pdf for a summary of Auer et al.
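The UCB1 rule above translates almost line for line into code. A sketch under the slide's assumptions (the names sample and ucb1 are illustrative):

```python
import math
import random

def ucb1(sample, n_arms, rounds):
    """UCB1: play each arm once, then the arm maximizing
    y_i + sqrt(2 ln n / n_i), where y_i is the empirical mean of arm i."""
    counts = [0] * n_arms        # n_i: plays of arm i so far
    sums = [0.0] * n_arms        # running reward sums per arm
    for t in range(rounds):
        if t < n_arms:
            i = t                # initialization: play each machine once
        else:
            n = sum(counts)
            i = max(range(n_arms),
                    key=lambda j: sums[j] / counts[j]
                                  + math.sqrt(2 * math.log(n) / counts[j]))
        counts[i] += 1
        sums[i] += sample(i)
    return counts, sums

random.seed(0)
f = [0.2, 0.5, 0.8]
counts, sums = ucb1(lambda i: 1.0 if random.random() < f[i] else 0.0, 3, 1000)
print(counts)   # the best arm should receive most of the plays
```

The √(2 ln n / n_i) bonus shrinks for well-explored arms, so suboptimal arms are still revisited occasionally but logarithmically rarely.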


Page 12

UCB algorithms

• UCB algorithms determine a confidence interval such that

ŷ_i − σ_i < f_i < ŷ_i + σ_i

with high probability.
UCB chooses the arm with the highest upper bound of this confidence interval.
There is strong theory on the efficiency of this method in comparison to the optimal policy.

• UCB methods are also used for planning:
Upper Confidence Bounds for Trees (UCT)


Page 13

How exactly is this related to global optimization?


Page 14

Global Optimization = infinite bandits

• In global optimization, f(x) defines a “reward” for every x ∈ R^n

– Instead of a finite number of actions a_t, we now have x_t

• Optimal Optimization could be defined as: find a π that achieves

min 〈∑_{t=1}^T f(x_t)〉    or    min 〈f(x_T)〉

• In principle we know what an optimal optimization algorithm would have to do – it is just computationally infeasible (in general)


Page 15

Gaussian Processes as belief

• Assume we have a history

h_t = [(x_1, y_1), (x_2, y_2), …, (x_{t−1}, y_{t−1})]

• Gaussian Processes are a Machine Learning method that
– provides a mean estimate f̂(x) (response surface)
– provides a variance estimate σ²(x) ↔ confidence intervals

• Caveat: One needs to make assumptions about the kernel (e.g., how smooth the function is)
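A from-scratch sketch of the GP posterior for scalar inputs, assuming a zero-mean prior and a squared-exponential kernel; the names (rbf, solve, gp_posterior) and the length-scale and noise values are illustrative choices:

```python
import math

def rbf(a, b, length=1.0):
    """Squared-exponential kernel: how smooth we assume f to be."""
    return math.exp(-0.5 * ((a - b) / length) ** 2)

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting
    (fine for the tiny kernel matrices used here)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(X, y, x_star, noise=1e-6, length=1.0):
    """Posterior mean f(x*) and variance sigma^2(x*) of a zero-mean GP."""
    K = [[rbf(xi, xj, length) + (noise if i == j else 0.0)
          for j, xj in enumerate(X)] for i, xi in enumerate(X)]
    k_star = [rbf(xi, x_star, length) for xi in X]
    alpha = solve(K, y)                      # K^{-1} y
    mean = sum(ks * a for ks, a in zip(k_star, alpha))
    v = solve(K, k_star)                     # K^{-1} k*
    var = rbf(x_star, x_star, length) - sum(ks * vi for ks, vi in zip(k_star, v))
    return mean, var

X, y = [0.0, 1.0, 2.0], [0.0, 1.0, 0.0]
m, v = gp_posterior(X, y, 1.0)
print(round(m, 3), round(v, 6))  # at a data point: mean ~ y, variance ~ 0
```

Away from the data the mean falls back to the prior and the variance grows back to the kernel's prior variance, which is what the 1-step criteria on the next slide exploit.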

15/20

Page 16

1-step criteria based on GPs

• Maximize Probability of Improvement (MPI)

x_t = argmax_x ∫_{−∞}^{y*} N(y | f̂(x), σ(x)) dy

• Maximize Expected Improvement (EI)

x_t = argmax_x ∫_{−∞}^{y*} N(y | f̂(x), σ(x)) (y* − y) dy

• Maximize UCB

x_t = argmax_x f̂(x) + β_t σ(x)

[Often, β_t = 1 is chosen. UCB theory allows for better choices. See Srinivas et al.]
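For a Gaussian predictive belief, the MPI and EI integrals have closed forms in terms of the standard normal pdf φ and cdf Φ, with z = (y* − f̂(x))/σ(x): MPI = Φ(z) and EI = (y* − f̂(x)) Φ(z) + σ(x) φ(z). A sketch (function names are illustrative):

```python
import math

def phi(z):
    """Standard normal pdf."""
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def Phi(z):
    """Standard normal cdf, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

def mpi(mean, sigma, y_best):
    """P(y < y*) = integral_{-inf}^{y*} N(y | mean, sigma) dy."""
    return Phi((y_best - mean) / sigma)

def ei(mean, sigma, y_best):
    """E[max(y* - y, 0)]: closed form of the EI integral."""
    z = (y_best - mean) / sigma
    return (y_best - mean) * Phi(z) + sigma * phi(z)

def ucb(mean, sigma, beta=1.0):
    return mean + beta * sigma

# a candidate with lower predicted mean and some uncertainty scores higher
print(round(mpi(0.5, 0.2, 0.6), 3), round(ei(0.5, 0.2, 0.6), 3))
```

All three criteria trade off the mean estimate against the uncertainty: EI and MPI via the incumbent y*, UCB via the explicit bonus β_t σ(x).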


Page 20

Global Optimization

• Given data, we compute a belief over f(x)

• The belief expresses a mean estimate f̂(x) and confidence σ(x)
– Use Gaussian Processes or other Bayesian ML methods.

• Optimal Optimization would imply planning in belief space

• Efficient Global Optimization uses 1-step criteria:
– Upper Confidence Bound (UCB)
– Maximal Probability of Improvement (MPI)
– Expected Improvement (EI)

• Global Optimization with gradient information → Gaussian Processes with derivative observations


