Lecture 10: Deep Reinforcement Learning (Textbook Chapter 8: Deep Reinforcement Learning)
Transcript
Page 1: Lecture 10: Deep Reinforcement Learning (Textbook Chapter 8: Deep Reinforcement Learning)

<Introduction to Machine Learning>, 2019 lecture, Department of Computer Science and Engineering, Seoul National University

장병탁 (Byoung-Tak Zhang)

Textbook: 장교수의 딥러닝 (Professor Zhang's Deep Learning), 홍릉과학출판사, 2017.

Biointelligence Laboratory, School of Computer Science and Engineering, Seoul National University

Version 20171109/20191030

© 2017, 장교수의 딥러닝, SNU CSE Biointelligence Lab., http://bi.snu.ac.kr

Page 2: Table of Contents (목차)

8.1 Reinforcement Learning and the MDP Problem ................. 3
8.2 MC Learning and TD Learning ................................ 8
8.3 SARSA and Q-Learning Algorithms ........................... 11
8.4 Deep Q-Network (DQN) ...................................... 14
8.5 Applications of Deep RL: AlphaGo .......................... 17
Summary ....................................................... 19

Page 3: 8.1 Reinforcement Learning and the MDP Problem (1/5)

Introduction
- Supervised learning: $y = f(x)$
- Unsupervised learning: $x \sim p(x)$ or $x = f(x)$
- Reinforcement learning (RL): $p(a|s)$

Characteristics of reinforcement learning (RL):
- The problem of finding a policy $p(a|s)$ that maximizes the reward $r$ over the agent's states $s$ and actions $a$
- An agent (e.g., a robot) that has a goal and interacts with its environment
- A sequential decision-making / action-control problem
- Takes future rewards into account; delayed reward

Page 4: 8.1 Reinforcement Learning and the MDP Problem (2/5)

[Figure-only slide: illustration accompanying the introduction to RL and MDPs]

Page 5: 8.1 Reinforcement Learning and the MDP Problem (3/5)

Markov Decision Process (MDP)
- Markov Decision Process $= \langle S, A, P, R, \gamma \rangle$
  - $S$: states of the agent
  - $A$: actions of the agent
  - $P$: state transition probability, $P_{ss'}^{a} = P(S_{t+1} = s' \mid S_t = s, A_t = a)$
  - $R$: reward, $R_s^a = \mathbb{E}[R_{t+1} \mid S_t = s, A_t = a]$
  - $\gamma$: discount factor
- Markov property: $P(S_{t+1} \mid S_t) = P(S_{t+1} \mid S_1, \ldots, S_t)$
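To make the tuple concrete, here is a minimal sketch of one way to encode $\langle S, A, P, R, \gamma \rangle$ in Python; the two-state toy MDP and all of its numbers are illustrative assumptions, not an example from the lecture.

```python
# A toy encoding of an MDP <S, A, P, R, gamma>; states, actions, and
# numbers are made up for illustration.
S = ["s0", "s1"]                        # states of the agent
A = ["left", "right"]                   # actions of the agent
P = {("s0", "right"): {"s1": 1.0},      # P_ss'^a = P(S_{t+1}=s' | S_t=s, A_t=a)
     ("s0", "left"):  {"s0": 1.0},
     ("s1", "right"): {"s1": 1.0},
     ("s1", "left"):  {"s0": 1.0}}
R = {("s0", "right"): 1.0, ("s0", "left"): 0.0,   # R_s^a = E[R_{t+1} | s, a]
     ("s1", "right"): 2.0, ("s1", "left"): 0.0}
gamma = 0.9                             # discount factor
```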

Page 6: 8.1 Reinforcement Learning and the MDP Problem (4/5)

- Reinforcement learning (RL) = an approximate solution to the MDP
- Return $G$: discounted accumulated reward
  $G_t = R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \cdots = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1} = R_{t+1} + \gamma G_{t+1}$
- Policy $\pi$ (policy of the agent):
  $\pi(a \mid s) = P(A_t = a \mid S_t = s)$
- Value function $V$ (= long-term return):
  $V^\pi(s) = \mathbb{E}_\pi[G_t \mid S_t = s] = \mathbb{E}_\pi\!\left[\sum_{k=0}^{\infty} \gamma^k R_{t+k+1} \mid S_t = s\right]$
  $Q^\pi(s, a) = \mathbb{E}_\pi[G_t \mid S_t = s, A_t = a] = \mathbb{E}_\pi\!\left[\sum_{k=0}^{\infty} \gamma^k R_{t+k+1} \mid S_t = s, A_t = a\right]$
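As a quick worked example of the recursion $G_t = R_{t+1} + \gamma G_{t+1}$, the following sketch (with made-up rewards) computes the returns of a finished episode backwards:

```python
# A minimal sketch: compute G_t for every step of a sampled reward sequence
# by sweeping backwards, using G_t = R_{t+1} + gamma * G_{t+1}.
def discounted_returns(rewards, gamma=0.9):
    returns = [0.0] * len(rewards)
    g = 0.0
    for t in reversed(range(len(rewards))):
        g = rewards[t] + gamma * g    # G_t = R_{t+1} + gamma * G_{t+1}
        returns[t] = g
    return returns

print(discounted_returns([1.0, 0.0, 2.0]))  # approx [2.62, 1.8, 2.0]
```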

Page 7: 8.1 Reinforcement Learning and the MDP Problem (5/5)

- Optimal policy: find the policy that maximizes the value function
  $\pi^* = \arg\max_\pi V^\pi(s)$, \quad $\pi^* = \arg\max_\pi Q^\pi(s, a)$
- Bellman optimality equation:
  $V^*(s) = \max_a \left[ R_s^a + \gamma \sum_{s' \in S} P_{ss'}^a V^*(s') \right]$
  $Q^*(s, a) = R_s^a + \gamma \sum_{s' \in S} P_{ss'}^a \max_{a'} Q^*(s', a')$
- Dynamic programming (DP)
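A minimal value-iteration sketch of the Bellman optimality equation, reusing the same style of toy MDP encoding as above (all values are illustrative):

```python
# Value iteration on a made-up two-state MDP; repeatedly applies
# V*(s) = max_a [ R_s^a + gamma * sum_s' P_ss'^a V*(s') ].
S, A, gamma = ["s0", "s1"], ["left", "right"], 0.9
P = {("s0", "left"): {"s0": 1.0}, ("s0", "right"): {"s1": 1.0},
     ("s1", "left"): {"s0": 1.0}, ("s1", "right"): {"s1": 1.0}}
R = {("s0", "left"): 0.0, ("s0", "right"): 1.0,
     ("s1", "left"): 0.0, ("s1", "right"): 2.0}

V = {s: 0.0 for s in S}
for _ in range(100):   # fixed-point iteration of the Bellman optimality backup
    V = {s: max(R[s, a] + gamma * sum(p * V[s2] for s2, p in P[s, a].items())
                for a in A)
         for s in S}
print(V)  # converges toward V*(s1) = 2 / (1 - 0.9) = 20, V*(s0) = 19
```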

Page 8: 8.2 MC Learning and TD Learning (1/3)

Policy Iteration
- The learning paradigm of reinforcement learning
- Evaluation: learn $Q^\pi(s, a)$ from the current policy $\pi(s)$
- Improvement: learn $\pi(s)$ from the current value function $Q^\pi(s, a)$
- Example of improvement: greedy improvement (see the sketch below)
  $\pi_{k+1}(s) = \arg\max_a Q^{\pi_k}(s, a)$
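A one-function sketch of greedy improvement over a tabular Q, with made-up Q values:

```python
# Greedy policy improvement: pi_{k+1}(s) = argmax_a Q^{pi_k}(s, a).
Q = {("s0", "left"): 0.1, ("s0", "right"): 0.7,
     ("s1", "left"): 0.4, ("s1", "right"): 0.2}
actions = ["left", "right"]

def greedy_policy(Q, s):
    return max(actions, key=lambda a: Q[s, a])

print(greedy_policy(Q, "s0"))  # 'right'
```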

Page 9: 8.2 MC Learning and TD Learning (2/3)

Methods for RL: Monte-Carlo RL (MC learning)
- Approximates the value function by sampling episode returns $G$ (MC approximation), instead of computing it with DP
- Learning is possible only after an episode ends
- Hard to use when episodes are long or infinite

$G_t = R_{t+1} + \gamma R_{t+2} + \cdots$ given $S_t = s, A_t = a$
$q_\pi(s, a) \approx \operatorname{mean}\!\left(G^{(1)}(s, a),\, G^{(2)}(s, a),\, \ldots\right)$, the empirical mean of returns sampled with $S_t = s, A_t = a$
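A minimal every-visit MC sketch under these definitions: $q(s, a)$ is maintained as the incremental mean of sampled returns (the episode shown is a made-up stand-in for one generated by the current policy):

```python
# Every-visit Monte-Carlo estimation of q(s, a) as a running mean of returns.
Q, N, gamma = {}, {}, 0.9

def mc_update(episode):
    """episode: list of (state, action, reward) collected until termination."""
    g = 0.0
    for s, a, r in reversed(episode):
        g = r + gamma * g                  # return following (s, a)
        N[s, a] = N.get((s, a), 0) + 1
        q = Q.get((s, a), 0.0)
        Q[s, a] = q + (g - q) / N[s, a]    # incremental mean of sampled returns

mc_update([("s0", "right", 1.0), ("s1", "right", 2.0)])
print(Q)  # {('s1', 'right'): 2.0, ('s0', 'right'): 2.8}
```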

Page 10: 8.2 MC Learning and TD Learning (3/3)

Methods for RL: Temporal-Difference RL (TD learning)
- Uses bootstrapping, so learning is possible before an episode ends
- More efficient than the MC method
- Model-based vs. model-free RL: DP is model-based (requires the transition probabilities), whereas MC and TD are model-free (no transition probabilities needed)
- Offline vs. online learning:
  - MC: offline (collects samples, then updates)
  - TD: online (updates immediately)

$V^\pi(s) = \mathbb{E}_\pi[R_{t+1} + \gamma V(S_{t+1}) \mid S_t = s]$
$Q^\pi(s, a) = \mathbb{E}_\pi[R_{t+1} + \gamma Q(S_{t+1}, A_{t+1}) \mid S_t = s, A_t = a]$
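A minimal tabular TD(0) sketch of the bootstrapped update; the transition sample and the step size $\alpha = 0.1$ are illustrative:

```python
# Tabular TD(0): update V(s) toward the bootstrapped target R + gamma * V(s').
V, gamma, alpha = {}, 0.9, 0.1

def td0_update(s, r, s2):
    v = V.get(s, 0.0)
    target = r + gamma * V.get(s2, 0.0)   # no need to wait for episode end
    V[s] = v + alpha * (target - v)

td0_update("s0", 1.0, "s1")
print(V)  # {'s0': 0.1}
```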

Page 11: 8.3 SARSA and Q-Learning Algorithms (1/3)

Value-Based RL: bootstrapping value-based RL
- Learns only the value function
- Actions are selected with a greedy policy: $\pi_{k+1}(s) = \arg\max_a Q^{\pi_k}(s, a)$
- The representative algorithms are SARSA and Q-learning
- There are tabular methods and methods that use function approximation

Page 12: 8.3 SARSA and Q-Learning Algorithms (2/3)

Value-Based RL: SARSA
- An on-policy algorithm: the policy that collects experience is the current policy

Algorithm 1: the SARSA learning algorithm

    Initialize Q(s, a) arbitrarily
    For episode = 1, ..., n do
        Initialize s
        Choose a from s using a policy derived from Q (e.g., ε-greedy)
        For step t = 1, ..., T do
            Take action a; observe reward r and next state s' = s_{t+1}
            Choose a' = a_{t+1} from s' using a policy derived from Q (e.g., ε-greedy)
            Q(s, a) ← Q(s, a) + α [r + γ Q(s', a') − Q(s, a)]
            s ← s'; a ← a'
        End For
    End For

Update rule:
$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ r_{t+1} + \gamma\, Q(s_{t+1}, a_{t+1}) - Q(s_t, a_t) \right]$
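A runnable tabular SARSA sketch under the algorithm above; `env` is a hypothetical environment object with `reset()` and `step(action)`, not an API from the lecture:

```python
import random

# Tabular SARSA; env.reset() -> state, env.step(a) -> (next_state, reward, done).
def sarsa(env, actions, episodes=500, alpha=0.1, gamma=0.9, eps=0.1):
    Q = {}
    def q(s, a): return Q.get((s, a), 0.0)
    def eps_greedy(s):
        if random.random() < eps:
            return random.choice(actions)          # explore
        return max(actions, key=lambda a: q(s, a)) # exploit

    for _ in range(episodes):
        s = env.reset()
        a = eps_greedy(s)
        done = False
        while not done:
            s2, r, done = env.step(a)
            a2 = eps_greedy(s2)
            # on-policy target: bootstraps from the action a2 actually chosen
            Q[s, a] = q(s, a) + alpha * (r + gamma * q(s2, a2) - q(s, a))
            s, a = s2, a2
    return Q
```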

Page 13: 8.3 SARSA and Q-Learning Algorithms (3/3)

Value-Based RL: Q-learning
- An off-policy algorithm: the policy that collects experience differs from the current policy

Algorithm 2: the Q-learning algorithm

    Initialize Q(s, a) arbitrarily
    For episode = 1, ..., n do
        Initialize s
        For step t = 1, ..., T do
            Choose a from s using a policy derived from Q (e.g., ε-greedy)
            Take action a; observe reward r and next state s' = s_{t+1}
            Q(s, a) ← Q(s, a) + α [r + γ max_{a'} Q(s', a') − Q(s, a)]
            s ← s'
        End For
    End For

Update rule:
$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ r_{t+1} + \gamma \max_a Q(s_{t+1}, a) - Q(s_t, a_t) \right]$
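A sketch of just the Q-learning inner-loop update, to highlight the off-policy difference from SARSA (the bootstrap uses the max over next actions, not the action actually taken):

```python
# One Q-learning update on a tabular Q; values are illustrative.
def q_learning_update(Q, s, a, r, s2, actions, alpha=0.1, gamma=0.9):
    best_next = max(Q.get((s2, a2), 0.0) for a2 in actions)  # off-policy max
    q = Q.get((s, a), 0.0)
    Q[s, a] = q + alpha * (r + gamma * best_next - q)

Q = {}
q_learning_update(Q, "s0", "right", 1.0, "s1", ["left", "right"])
print(Q)  # {('s0', 'right'): 0.1}
```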

Page 14: 8.4 Deep Q-Network (DQN) (1/3)

- The Q-learning update rule:
  $Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ r_{t+1} + \gamma \max_a Q(s_{t+1}, a) - Q(s_t, a_t) \right]$
- DQN minimizes the corresponding error function with a deep neural network and the backpropagation algorithm:
  $L = \frac{1}{2} \left[ r + \gamma \max_{a'} Q(s', a') - Q(s, a) \right]^2$

[Figure: a network that maps State to one Q output per action: Q(State, Action=0), Q(State, Action=1), ..., Q(State, Action=K)]
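A minimal PyTorch sketch of this loss, assuming a network that maps a state batch to one Q value per action; the layer sizes are illustrative, and a real DQN would bootstrap from the target network introduced on the next page:

```python
import torch
import torch.nn as nn

# Toy Q-network: 4-dimensional state in, 3 actions out (illustrative sizes).
q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 3))

def dqn_loss(batch, gamma=0.99):
    s, a, r, s2, done = batch                                # mini-batch tensors
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)     # Q(s, a)
    with torch.no_grad():                                    # target held fixed
        target = r + gamma * (1 - done) * q_net(s2).max(dim=1).values
    return 0.5 * (target - q_sa).pow(2).mean()  # L = 1/2 [r + γ max Q' - Q]^2
```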

Page 15: 8.4 Deep Q-Network (DQN) (2/3)

- Target Network Trick
  - Early in training, Q(s', a') is inaccurate and changes rapidly → degrades learning performance
  - Q(s', a') is instead computed by a separate network (the target network) that has the same architecture as the DQN but whose weights do not change during training
  - The target network's weights are periodically copied from the DQN
- Replay Memory Trick
  - Changing data distribution: the distribution of incoming data shifts with the agent's behavior (e.g., after training on some mini-batch the policy changes to always go left → data for the cases where the agent does not go left can no longer be collected, making learning impossible)
  - Solved by storing (state, action, reward, next state) tuples in a buffer and building mini-batches by random sampling from it (see the sketch below)
- Reward Clipping Trick
  - Reward magnitudes differ across domains, so the magnitudes of Q values differ too
  - When the variance of Q-value magnitudes is very large, neural-network training is difficult
  - Clipping rewards to the range [−1, +1] stabilizes learning
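A minimal sketch combining the replay-memory and reward-clipping tricks (capacity and batch size are illustrative choices); the comment at the end notes the usual way the target network is synchronized:

```python
import random
from collections import deque

# Replay memory with reward clipping; sizes are illustrative.
class ReplayMemory:
    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)   # oldest transitions fall off

    def push(self, s, a, r, s2, done):
        r = max(-1.0, min(1.0, r))             # clip reward to [-1, +1]
        self.buffer.append((s, a, r, s2, done))

    def sample(self, batch_size=32):
        # random sampling decorrelates the batch from the current policy
        return random.sample(self.buffer, batch_size)

# Target-network trick (sketch): keep a frozen copy and sync it periodically,
# e.g. target_net.load_state_dict(q_net.state_dict())  # every C steps
```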

Page 16: 8.4 Deep Q-Network (DQN) (3/3)

- Evaluated on Atari 2600 video games
- Outperforms human players on more than half of the games
- A large improvement over previous (linear) approaches
- Fails on some games (sparse rewards; complex multi-step strategies required)

Paper: Mnih, Volodymyr, et al. "Human-level control through deep reinforcement learning." Nature 518.7540 (2015): 529-533.

Video: https://www.youtube.com/watch?v=TmPfTpjtdgg

Open source: https://github.com/gliese581gg/DQN_tensorflow

Page 17: 8.5 Applications of Deep RL: AlphaGo (1/2)

AlphaGo
- Combines reinforcement learning (A3C) with the Monte-Carlo tree search (MCTS) algorithm
  (A3C = Asynchronous Advantage Actor-Critic, MCTS = Monte-Carlo Tree Search)
- Demonstrated, through learning, performance surpassing humans in Go, whose search space is enormous
- The policy network (actor) and the value network (critic) are trained separately
- The policy network is first trained by supervised learning on professional players' game records, then refined with reinforcement learning

Page 18: 8.5 Applications of Deep RL: AlphaGo (2/2)

Monte-Carlo Tree Search
- Selection: $a_t = \arg\max_a \left( Q(s_t, a) + u(s_t, a) \right)$, where $u(s_t, a) \propto \dfrac{P(s_t, a)}{1 + N(s_t, a)}$
- Expansion: when a leaf node is reached, new leaf nodes are created via the policy network
- Evaluation: $V(s_L) = (1 - \lambda)\, v_\theta(s_L) + \lambda\, z_L$
- Backup: $N(s, a) = \sum_i \mathbf{1}(s, a, i)$, \quad $Q(s, a) = \dfrac{1}{N(s, a)} \sum_i \mathbf{1}(s, a, i)\, V(s_L^i)$

where $z_L$ = the reward obtained from the rollout simulation, and $N(s, a)$ = the visit count of the state-action pair.

Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., ... & Dieleman, S. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587), 484-489.
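A small sketch of the selection rule, with the exploration constant c_puct and all statistics as assumed, illustrative values:

```python
# MCTS selection: a_t = argmax_a (Q(s_t, a) + u(s_t, a)), where
# u(s, a) is proportional to P(s, a) / (1 + N(s, a)).
def select_action(actions, Q, N, P, c_puct=5.0):
    def score(a):
        u = c_puct * P[a] / (1 + N[a])   # prior-weighted exploration bonus
        return Q[a] + u
    return max(actions, key=score)

actions = ["d4", "q16"]
print(select_action(actions,
                    Q={"d4": 0.5, "q16": 0.4},
                    N={"d4": 10, "q16": 1},
                    P={"d4": 0.3, "q16": 0.7}))  # 'q16' (rarely visited, high prior)
```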

Page 19: Summary (요약)

- Reinforcement learning and the MDP problem
  - An MDP consists of states, actions, rewards, transitions, and a discount factor; sequential decision making
  - Reinforcement learning is a machine-learning method that approximately solves MDP problems
  - The agent learns to select actions so as to maximize the expected reward
- SARSA and Q-learning
  - Approximate the value function and select the action that maximizes it
  - SARSA, Q-learning
  - DQN: approximates the Q values with a deep neural network
- Applications of deep reinforcement learning
  - AlphaGo, a Go-playing AI
  - RL combined with Monte-Carlo tree search
  - Deep learning used for the policy network and the value network

© 2017, 장교수의 딥러닝, SNU CSE Biointelligence Lab., http://bi.snu.ac.kr

Page 20: Questions

- Compared with supervised and unsupervised learning, what is different about reinforcement learning?
- Define the Markov decision problem (MDP) and describe various methods for solving it.
- Explain the concepts of the Bellman optimality equation and dynamic programming.
- Explain reinforcement learning as a method for solving MDP problems.
- Describe the various strategies of reinforcement learning and explain the differences between them.
- Explain the difference between model-based and model-free RL.
- Explain the difference between offline and online RL.
- Explain the difference between on-policy and off-policy RL.
- Explain the MC learning method (the Monte-Carlo-based dynamic-programming approach).
- Describe the TD learning method and explain how it differs from the MC method.
- Explain the TD learning, SARSA, and Q-learning algorithms and describe their differences.
- Explain the key ideas of deep reinforcement learning (DQN) and of systems such as AlphaGo.

