
Lecture 4: Q-learning (table)
exploit & exploration and discounted future reward

Reinforcement Learning with TensorFlow & OpenAI Gym
Sung Kim <[email protected]>

Dummy Q-learning algorithm

Learning Q (s, a)?

Learning Q(s, a) Table: one success!

[Figure: grid world with Q-values of 1 marked along the successful path to the goal]
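
The dummy update used to fill this table can be sketched in a few lines. This is a minimal sketch, not the lab code: the 4x4 state/action sizes, the example transition, and the NumPy Q[state, action] layout are assumptions for illustration.

import numpy as np

n_states, n_actions = 16, 4                 # assumed 4x4 grid with 4 actions
Q = np.zeros((n_states, n_actions))         # Q-table starts at all zeros

# one observed transition (made-up indices): taking action a in state s
# reaches the goal state s_next and earns reward r = 1
s, a, r, s_next = 14, 2, 1.0, 15
Q[s, a] = r + np.max(Q[s_next, :])          # dummy update: Q(s, a) <- r + max_a' Q(s', a')
print(Q[s])                                 # [0. 0. 1. 0.]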

Exploit VS Exploration


http://home.deib.polimi.it/restelli/MyWebSite/pdf/rl5.pdf

Exploit VS Exploration

0.0 0.0 0.0 0.0 0.0

Exploit (weekday) VS Exploration (weekend)

0.5 0.6 0.3 0.2 0.5

Exploit VS Exploration: E-greedy

e = 0.1
if random.random() < e:
    a = env.action_space.sample()     # explore: take a random action
else:
    a = np.argmax(Q[s, :])            # exploit: take the best known action

Exploit VS Exploration: decaying E-greedy

for i in range(1000):
    e = 0.1 / (i + 1)                 # epsilon decays as training progresses
    if random.random() < e:
        a = env.action_space.sample() # explore: take a random action
    else:
        a = np.argmax(Q[s, :])        # exploit: take the best known action
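
A self-contained sketch of the same selection rule with concrete numbers; the Q row here is made up for illustration, not taken from the lecture's table.

import random
import numpy as np

q_row = np.array([0.5, 0.6, 0.3, 0.2, 0.5])     # Q(s, :) for one state (made-up values)
for i in range(5):
    e = 0.1 / (i + 1)                           # epsilon decays: 0.1, 0.05, 0.033, ...
    if random.random() < e:
        a = random.randrange(len(q_row))        # explore: any action
    else:
        a = int(np.argmax(q_row))               # exploit: action 1 (value 0.6)
    print(i, round(e, 3), a)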

Exploit VS Exploration: add random noise

0.5 0.6 0.3 0.2 0.5

Exploit VS Exploration: add random noise

0.5 0.6 0.3 0.2 0.5

a = np.argmax(Q[s, :] + random_values)      # add random noise to the Q row, then pick the best

e.g., argmax([0.5, 0.6, 0.3, 0.2, 0.5] + [0.1, 0.2, 0.7, 0.3, 0.1])
    = argmax([0.6, 0.8, 1.0, 0.5, 0.6]) = 2     (element-wise sum: the noise makes action 2 win instead of action 1)

Exploit VS Exploration: add random noise

0.5 0.6 0.3 0.2 0.5

for i in range(1000):
    a = np.argmax(Q[s, :] + np.random.randn(env.action_space.n) / (i + 1))   # the noise shrinks over time
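
A runnable sketch of the noise trick with made-up numbers (the Q row and the number of trials are illustrative assumptions):

import numpy as np

q_row = np.array([0.5, 0.6, 0.3, 0.2, 0.5])         # Q(s, :) for one state
for i in range(5):
    noise = np.random.randn(len(q_row)) / (i + 1)   # noise shrinks as i grows
    a = int(np.argmax(q_row + noise))               # early on, noise can make any action win
    print(i, a)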

Exploit VS Exploration


Dummy Q-learning algorithm

Discounted future reward

[Figure: grid world with Q-values of 1 marked along paths to the goal]

Learning Q (s, a) with discounted reward

Discounted future reward

Learning Q (s, a) with discounted reward

Discounted reward (γ = 0.9)

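
With γ = 0.9, a reward of 1 at the goal is worth 0.9 one step earlier, 0.81 two steps earlier, and so on, so shorter paths end up with larger values. A tiny sketch of that arithmetic:

gamma = 0.9
value = 1.0                              # reward received at the goal
for steps_back in range(5):
    print(steps_back, round(value, 4))   # 1.0, 0.9, 0.81, 0.729, 0.6561
    value *= gamma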

Q-learning algorithm
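
A sketch of the full table-based algorithm, combining decaying exploration noise with the discounted update. This is a minimal illustration rather than the lab code; it assumes the older OpenAI Gym API (reset() returning a state, step() returning four values) and the built-in FrozenLake-v0 environment, which is slippery by default, so learning is noisier than in the deterministic grid shown on the slides.

import gym
import numpy as np

env = gym.make("FrozenLake-v0")                      # 4x4 grid world
Q = np.zeros([env.observation_space.n, env.action_space.n])
gamma = 0.9                                          # discount factor from the slides

for i in range(2000):
    s = env.reset()
    done = False
    while not done:
        # exploration: add random noise that decays with the episode index
        a = np.argmax(Q[s, :] + np.random.randn(env.action_space.n) / (i + 1))
        s_next, r, done, _ = env.step(a)
        # discounted update: Q(s, a) <- r + gamma * max_a' Q(s', a')
        Q[s, a] = r + gamma * np.max(Q[s_next, :])
        s = s_next

print(Q)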

Q-Table Policy

Q(s, a)

(1) state, s

(2) action, a

(3) quality (reward) for the given action
(e.g., LEFT: 0.5, RIGHT: 0.1, UP: 0.0, DOWN: 0.8)
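
Once the table is learned, the policy is just a lookup: in the current state, take the action with the largest Q value. A minimal sketch reusing the example values above:

import numpy as np

q_row = np.array([0.5, 0.1, 0.0, 0.8])      # Q(s, :) ordered LEFT, RIGHT, UP, DOWN
actions = ["LEFT", "RIGHT", "UP", "DOWN"]
print(actions[int(np.argmax(q_row))])       # DOWN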

Convergence

• In deterministic worlds

• In finite states

Machine Learning, Tom Mitchell, 1997

Next

Lab: Q-learning Table

