
Direct Gradient-Based Reinforcement Learning: II. Gradient Ascent Algorithms and Experiments

Jonathan Baxter
Research School of Information Sciences and Engineering
Australian National University
[email protected]

Lex Weaver
Department of Computer Science
Australian National University
[email protected]

Peter Bartlett
Research School of Information Sciences and Engineering
Australian National University
[email protected]

September 20, 1999

Abstract

In [2] we introduced GPOMDP, an algorithm for computing arbitrarily accurate approximations to the performance gradient of parameterized partially observable Markov decision processes (POMDPs). The algorithm's chief advantages are that it requires only a single sample path of the underlying Markov chain, it uses only one free parameter β ∈ [0, 1) which has a natural interpretation in terms of a bias-variance trade-off, and it requires no knowledge of the underlying state. In addition, the algorithm can be applied to infinite state, control and observation spaces. In this paper we present CONJGRAD, a conjugate-gradient ascent algorithm that uses GPOMDP as a subroutine to estimate the gradient direction. CONJGRAD uses a novel line-search routine that relies solely on gradient estimates and hence is robust to noise in the performance estimates. OLPOMDP, an on-line gradient ascent algorithm based on GPOMDP, is also presented.

The chief theoretical advantage of this gradient-based approach over value-function-based approaches to reinforcement learning is that it guarantees improvement in the performance of the policy at every step. To show that this advantage is real, we give experimental results in which CONJGRAD was used to optimize a simple three-state Markov chain controlled by a linear function, a two-dimensional "puck" controlled by a neural network, a call admission queueing problem, and a variation of the classical "mountain-car" task. In all cases the algorithm rapidly found optimal or near-optimal solutions.

1 Introduction

Function approximation is necessary to avoid the curse of dimensionality associated with large-scale dynamic programming and reinforcement learning problems. The dominant paradigm is to use the function to approximate the state (or state and action) values. Most algorithms then seek to minimize some form of error between the approximate value function and the true value function, usually by simulation (see [13] and [4] for comprehensive overviews). While there have been a multitude of empirical successes for this approach (see e.g. [10, 14, 15, 3, 18, 11] to name but a few), it lacks any fundamental theoretical guarantees on the performance of the policy generated by the approximate value function (see [2, Section 1] for further discussion).

Motivated by these difficulties, in [2] we introduced GPOMDP, a new algorithm for computing arbitrarily accurate approximations to the performance gradient of parameterized partially observable Markov decision processes (POMDPs). Our algorithm is essentially an extension of Williams' REINFORCE algorithm [17] and similar more recent algorithms [7, 5, 9, 8].

More specifically, suppose θ ∈ R^K are the parameters controlling the POMDP. For example, θ could be the parameters of an approximate neural-network value function that generates a stochastic policy by some form of randomized look-ahead, or θ could be the parameters of an approximate Q-function used to stochastically select controls¹. Let η(θ) denote the average reward of the POMDP with parameter setting θ. GPOMDP computes an approximation ∇_β η(θ) to ∇η(θ) based on a single continuous sample path of the underlying Markov chain. The accuracy of the approximation is controlled by the parameter β ∈ [0, 1). It was proved in [2, Theorem 3] that

    ∇η(θ) = lim_{β→1} ∇_β η(θ).

The trade-off preventing us choosing β arbitrarily close to 1 is that the variance of GPOMDP's estimates of ∇_β η(θ) increases with β. However, on the bright side, [2, Theorem 4] showed that the approximation error is proportional to (1 − β)/(1 − β|λ₂|), where λ₂ is the subdominant eigenvalue of the Markov chain underlying the POMDP. Thus for "rapidly mixing" POMDPs (for which |λ₂| is significantly less than 1), estimates of the performance gradient with acceptable bias and variance can be obtained. Provided ∇_β η(θ) is a sufficiently accurate approximation of ∇η(θ) (in fact, ∇_β η(θ) need only be within 90° of ∇η(θ)), adjustments to the parameters θ of the form θ ← θ + γ ∇_β η(θ), for small step-size γ, will guarantee improvement in the average reward η(θ). In this case, gradient-based optimization algorithms using ∇_β η(θ) as their gradient estimate will be guaranteed to improve the average reward η(θ) on each step. Except in the case of table-lookup, most value-function based approaches to reinforcement learning cannot make this guarantee. See [16] for some analysis in the case of TD(λ) and a demonstration of performance degradation during the course of training a neural network backgammon player.

¹Stochastic policies are not strictly necessary in our framework, but the policy must be "differentiable" in the sense that the gradient ∇μ exists.

In this paper we present CONJGRAD, a conjugate-gradient ascent algorithm that uses the estimates of ∇_β η(θ) provided by GPOMDP. Critical to the successful operation of CONJGRAD is a novel line-search subroutine that reduces noise by relying solely upon gradient estimates. We also present OLPOMDP, an on-line variant of our algorithm that updates the parameters at every time step. OLPOMDP is similar to algorithms proposed in [7] and [9].

The two algorithms are applied to a variety of problems, beginning with a simple 3-state Markov decision process (MDP) controlled by a linear function, for which the true gradient can be exactly computed. We show rapid convergence of the gradient estimates ∇_β η(θ) to the true gradient, in this case over a large range of values of β. With this simple system we are able to illustrate vividly the bias/variance trade-off associated with the selection of β. We then use CONJGRAD and OLPOMDP to find a good policy for the MDP. CONJGRAD reliably finds a near-optimal policy in less than 100 iterations of the Markov chain, an order of magnitude faster than OLPOMDP.

Next we demonstrate the effectiveness of CONJGRAD in training a neural-network controller to control a "puck" in a two-dimensional world. The task in this case is to reliably navigate the puck from any starting configuration to an arbitrary target location in the minimum time, while only applying discrete forces in the x and y directions.

In the third experiment, we use CONJGRAD to train a controller for the call admission queueing problem treated in [8]. In this case CONJGRAD finds near-optimal solutions within about 2000 iterations of the underlying queue.

In the fourth and final experiment, CONJGRAD is used to train a switched neural-network controller for a two-dimensional variation on the classical "mountain-car" task [13, Example 8.2].

The rest of this paper is organized as follows. In Section 2 we introduce the definitions needed to understand GPOMDP. In Section 3 we describe CONJGRAD, the gradient-based line-search subroutine, and OLPOMDP. In Section 4 we present our experimental results.

2 The GPOMDP algorithm

A partially observable Markov decision process (POMDP) consists of a state space S, an observation space Y and a control space U. For each state i ∈ S there is a deterministic reward r(i). Although the results in [2] only guarantee convergence of GPOMDP in the case of finite S (but rather arbitrary Y and U), the algorithm can be applied regardless of the nature of S, so we do not restrict the cardinality of S, Y or U.

Consider first the case of discrete S, Y and U. Each control u ∈ U determines a stochastic matrix P(u) = [p_ij(u)] giving the transition probability from state i to state j (i, j ∈ S). For each state i ∈ S, an observation y ∈ Y is generated independently according to a probability distribution ν(i) over observations in Y. We denote the probability of y by ν_y(i). A randomized policy is simply a function μ mapping observations into probability distributions over the controls U. That is, for each observation y ∈ Y, μ(y) is a distribution over the controls in U. Denote the probability under μ of control u given observation y by μ_u(y).

For continuous S, Y and U, p_ij(u) becomes a kernel giving the probability density of transitions from i to j, ν(i) becomes a probability density function on Y with ν_y(i) the density at y, and μ(y) becomes a probability density function on U with μ_u(y) the density at u.

To each randomized policy μ there corresponds a Markov chain in which state transitions are generated by first selecting an observation y in state i according to the distribution ν(i), then selecting a control u according to the distribution μ(y), and finally generating a transition to state j according to the probability p_ij(u).

At present we are only dealing with a fixed POMDP. To parameterize the POMDP we parameterize the policies, so that μ now becomes a function μ(θ, y) of a set of parameters θ ∈ R^K, as well as of the observation y. The Markov chain corresponding to θ has state transition matrix P(θ) = [p_ij(θ)] given by

    p_ij(θ) = Σ_{y∈Y} Σ_{u∈U} ν_y(i) μ_u(θ, y) p_ij(u).    (1)
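As a concrete illustration of (1), the short sketch below builds the chain's transition matrix from the observation distributions ν, the parameterized policy μ(θ, ·) and the per-control transition matrices P(u). It is a minimal numpy sketch under assumed array layouts (nu[i, y], mu[y, u] already evaluated at the current θ, and P[u, i, j]); these names are illustrative only and do not appear in the paper.

import numpy as np

def chain_transition_matrix(nu, mu, P):
    """Equation (1): p_ij(theta) = sum_y sum_u nu_y(i) mu_u(theta, y) p_ij(u).

    nu : array (S, Y)    -- nu[i, y] = probability of observation y in state i
    mu : array (Y, U)    -- mu[y, u] = probability of control u given observation y
                            (evaluated at the current parameters theta)
    P  : array (U, S, S) -- P[u, i, j] = transition probability under control u
    """
    # Probability of issuing control u from state i: sum_y nu[i, y] mu[y, u]
    control_given_state = nu @ mu                      # shape (S, U)
    # Mix the per-control transition matrices accordingly
    return np.einsum('iu,uij->ij', control_given_state, P)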

The following technical assumptions are required for the operation of GPOMDP.

Assumption 1. The derivatives ∂μ_u(θ, y)/∂θ_k exist for all u ∈ U, y ∈ Y and θ ∈ R^K.

Assumption 2. The ratios |∂μ_u(θ, y)/∂θ_k| / μ_u(θ, y) are uniformly bounded by B < ∞ for all u ∈ U, y ∈ Y and θ ∈ R^K.

Assumption 3. The magnitudes of the rewards, |r(i)|, are uniformly bounded by R < ∞ for all states i.

Assumption 4. Each P(θ), θ ∈ R^K, has a unique stationary distribution π(θ).

The average reward η(θ) is simply the expected reward under the stationary distribution π(θ):

    η(θ) = Σ_{i∈S} π_i(θ) r(i).    (2)

Because of Assumption 4, for any starting state i, η(θ) is also equal to the expected long-term average of the reward,

    η(θ) = lim_{T→∞} E[ (1/T) Σ_{t=0}^{T−1} r(i_t) | i_0 = i ],

where the expectation is over sequences of states i_0, ..., i_{T−1} of the Markov chain specified by P(θ).

GPOMDP ([2, Algorithm 2], reproduced here as Algorithm 1) is an algorithm for computing an approximation Δ_T to ∇_β η(θ). In [2, Theorem 7] we proved

    lim_{T→∞} Δ_T = ∇_β η(θ),

where ∇_β η(θ) (β ∈ [0, 1)) is an approximation to ∇η(θ) satisfying

    ∇η(θ) = lim_{β→1} ∇_β η(θ)

[2, Theorem 3]. Note that GPOMDP relies only upon a single sample path from the POMDP. Also, it does not require knowledge of the transition probability matrix P, nor of the observation process ν; it only requires knowledge of the randomized policy μ.

Algorithm 1 GPOMDP(β, T) [2, Algorithm 2].

1: Given:
   - β ∈ [0, 1).
   - T > 0.
   - Parameters θ ∈ R^K.
   - Randomized policy μ(θ, ·) satisfying Assumptions 1 and 2.
   - POMDP with rewards satisfying Assumption 3, and which when controlled by μ(θ, ·) generates stochastic matrices P(θ) satisfying Assumption 4.
   - Arbitrary (unknown) starting state i_0.
2: Set z_0 = 0 and Δ_0 = 0 (z_0, Δ_0 ∈ R^K).
3: for t = 0 to T − 1 do
4:   Observe y_t (generated according to ν(i_t)).
5:   Generate control u_t according to μ(θ, y_t).
6:   Observe r(i_{t+1}) (where the next state i_{t+1} is generated according to p_{i_t i_{t+1}}(u_t)).
7:   Set z_{t+1} = β z_t + ∇μ_{u_t}(θ, y_t) / μ_{u_t}(θ, y_t).
8:   Set Δ_{t+1} = Δ_t + r(i_{t+1}) z_{t+1}.
9: end for
10: Δ_T ← Δ_T / T
11: return Δ_T
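For readers who prefer code to pseudocode, the following is a minimal Python sketch of Algorithm 1. The environment and policy interfaces (observe, step, sample, grad_log) are hypothetical names introduced here for illustration and are not part of the paper; note that ∇μ_u / μ_u is just ∇ log μ_u.

import numpy as np

def gpomdp(env, policy, theta, beta, T):
    """Algorithm 1 (GPOMDP): estimate grad_beta eta(theta) from one sample path.

    Assumed interface (hypothetical):
      env.observe()                -> observation y_t of the current (hidden) state
      env.step(u)                  -> reward r(i_{t+1}) after applying control u
      policy.sample(theta, y)      -> control u drawn from mu(theta, y)
      policy.grad_log(theta, y, u) -> grad_theta log mu_u(theta, y)  (= grad mu_u / mu_u)
    """
    z = np.zeros_like(theta)       # eligibility trace z_t
    delta = np.zeros_like(theta)   # running sum Delta_t
    for t in range(T):
        y = env.observe()
        u = policy.sample(theta, y)
        r = env.step(u)                              # reward at the next state
        z = beta * z + policy.grad_log(theta, y, u)  # discounted trace of score vectors
        delta += r * z
    return delta / T               # Delta_T -> grad_beta eta(theta) as T -> infinity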

We cannot set β arbitrarily close to 1 in GPOMDP, since the variance of the estimate Δ_T increases with increasing β. Thus β has a natural interpretation in terms of a bias-variance trade-off: small values of β give lower variance in the estimates Δ_T, but higher bias in that Δ_T may be far from ∇η(θ), whereas values of β close to 1 yield small bias but correspondingly larger variance. This bias/variance trade-off is vividly illustrated in the experiments of Section 4.

3 Stochastic gradient ascent algorithms

In this section we introduce two algorithms: CONJGRAD, a variant of the Polak-Ribière conjugate gradient algorithm (see e.g. [6, Section 5.5.2]), and OLPOMDP, a fully on-line algorithm that updates the parameters θ at each iteration of the POMDP.

3.1 The CONJGRAD algorithm

CONJGRAD, described in Algorithm 2, is a version of the Polak-Ribière conjugate-gradient algorithm that is designed to operate using only noisy (and possibly biased) estimates of the gradient of the objective function (for example, the estimates Δ_T provided by GPOMDP). The novel feature of CONJGRAD is GSEARCH, a line-search subroutine that uses only gradient information to find the local maximum in the search direction. The use of gradient information ensures GSEARCH is robust to noise in the performance estimates. Both CONJGRAD and GSEARCH can be applied to any stochastic optimization problem for which noisy (and possibly biased) gradient estimates are available.

The argument ε_0 to CONJGRAD provides an initial step-size for GSEARCH. When ‖∇(θ)‖² falls below the argument ε, CONJGRAD terminates.

3.2 The GSEARCH algorithm

The key to the successful operation of CONJGRAD is the line-search algorithm GSEARCH (Algorithm 3). GSEARCH uses only gradient information to bracket the maximum in the search direction θ*, and then quadratic interpolation to jump to the maximum.

We found the use of gradients to bracket the maximum far more robust than the use of function values. To bracket the maximum using function values, three points θ_1, θ_2, θ_3, all lying in the direction θ* from θ, must be found such that η(θ_1) < η(θ_2) and η(θ_3) < η(θ_2). Thus, we need to estimate sign(η(θ_1) − η(θ_2)) (and sign(η(θ_3) − η(θ_2))). If we only have access to noisy estimates of η(θ) (for example, estimates obtained by simulation), then regardless of the magnitude of the variance of η(θ), the variance of sign(η(θ_1) − η(θ_2)) approaches 1 (the maximum possible) as θ_1 approaches θ_2. Thus, to reliably bracket the maximum using noisy estimates of η(θ), we need to be able to reduce the variance of the estimates when θ_1 and θ_2 are close. In our case this means running the simulation from which the estimates are derived for longer and longer periods of time.

An alternative approach to bracketing the maximum in the direction θ* from θ is to find two points θ_1 and θ_2 in that direction such that ∇(θ_1) · θ* > 0 and ∇(θ_2) · θ* < 0. The maximum must then lie between θ_1 and θ_2. The advantage of this approach is that even if the estimates ∇(θ) are noisy, the variance of sign(∇(θ_1) · θ*) (and of sign(∇(θ_2) · θ*)) is independent of the distance between θ_1 and θ_2, and in particular does not grow as the two points approach one another. The disadvantage is that it is not possible to detect extreme overshooting of the maximum using only gradient estimates. However, with careful control of the line search we did not find this to be a problem.


Algorithm 2 CONJGRAD(∇, θ, ε_0, ε)

1: Given:
   - ∇ : R^K → R^K, a (possibly noisy and biased) estimate of the gradient of the objective function to be maximized.
   - Starting parameters θ ∈ R^K (set to the maximum on return).
   - Initial step size ε_0 > 0.
   - Gradient resolution ε.
2: g = h = ∇(θ)
3: while ‖g‖² ≥ ε do
4:   GSEARCH(∇, θ, h, ε_0, ε)
5:   Δ = ∇(θ)
6:   γ = (Δ − g) · Δ / ‖g‖²
7:   h = Δ + γh
8:   if h · Δ < 0 then
9:     h = Δ
10:  end if
11:  g = Δ
12: end while
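A compact Python rendering of Algorithm 2 follows. Here grad_est stands for the noisy gradient oracle ∇ (for example, a call to GPOMDP) and gsearch for the line search of Algorithm 3; both names are placeholders introduced for this sketch.

import numpy as np

def conjgrad(grad_est, theta, eps0, eps, gsearch):
    """Algorithm 2: Polak-Ribiere conjugate gradient driven by noisy gradient estimates.

    grad_est(theta) -> estimated gradient of the objective at theta
    gsearch(grad_est, theta, h, eps0, eps) -> theta moved to the local maximum
                                              along direction h (Algorithm 3)
    """
    g = grad_est(theta)
    h = g.copy()
    while g @ g >= eps:
        theta = gsearch(grad_est, theta, h, eps0, eps)
        delta = grad_est(theta)
        gamma = (delta - g) @ delta / (g @ g)   # Polak-Ribiere coefficient
        h = delta + gamma * h
        if h @ delta < 0:                       # not an ascent direction: restart
            h = delta
        g = delta
    return theta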

In Algorithm 3, lines 5–27 bracket the maximum by finding a parameter setting θ⁻ = θ_0 + s⁻θ* such that ∇(θ⁻) · θ* > −ε, and a second parameter setting θ⁺ = θ_0 + s⁺θ* such that ∇(θ⁺) · θ* < ε. The reason for ε rather than 0 in these expressions is to provide some robustness against errors in the estimates ∇(θ). It also prevents the algorithm "stepping to ∞" if there is no local maximum in the direction θ*. Note that we use the same ε as used in CONJGRAD to determine when to terminate due to a small gradient (line 3 of CONJGRAD).

Provided that the signs of the gradients at the bracketing points θ⁻ and θ⁺ show that the maximum of the quadratic defined by these points lies between them, line 29 will jump to the maximum. Otherwise the algorithm simply jumps to the midpoint between θ⁻ and θ⁺.

3.3 OLPOMDP: updating the parameters at every time step

CONJGRAD operates by iteratively choosing "uphill" directions and then searching for a local maximum in the chosen direction. If the ∇ argument to CONJGRAD is GPOMDP, the optimization will involve many iterations of the underlying POMDP between parameter updates.


Algorithm 3 GSEARCH(∇, θ_0, θ*, ε_0, ε)

1: Given:
   - ∇ : R^K → R^K, a (possibly noisy and biased) estimate of the gradient of the objective function.
   - Starting parameters θ_0 ∈ R^K (set to the maximum on return).
   - Search direction θ* ∈ R^K with ∇(θ_0) · θ* > 0.
   - Initial step size ε_0 > 0.
   - Inner product resolution ε ≥ 0.
2: s = ε_0
3: θ = θ_0 + sθ*
4: Δ = ∇(θ)
5: if Δ · θ* < 0 then
6:   Step back to bracket the maximum:
7:   repeat
8:     s⁺ = s
9:     p⁺ = Δ · θ*
10:    s = s/2
11:    θ = θ_0 + sθ*
12:    Δ = ∇(θ)
13:  until Δ · θ* > −ε
14:  s⁻ = s
15:  p⁻ = Δ · θ*
16: else
17:  Step forward to bracket the maximum:
18:  repeat
19:    s⁻ = s
20:    p⁻ = Δ · θ*
21:    s = 2s
22:    θ = θ_0 + sθ*
23:    Δ = ∇(θ)
24:  until Δ · θ* < ε
25:  s⁺ = s
26:  p⁺ = Δ · θ*
27: end if
28: if p⁻ > 0 and p⁺ < 0 then
29:   s = s⁻ + p⁻(s⁺ − s⁻)/(p⁻ − p⁺)
30: else
31:   s = (s⁻ + s⁺)/2
32: end if
33: θ_0 = θ_0 + sθ*
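A minimal Python sketch of Algorithm 3, in the same style as the earlier sketches: it brackets the maximum along the search direction using only the sign of estimated directional derivatives, then interpolates. The interface is the same hypothetical grad_est oracle as above.

def gsearch(grad_est, theta0, direction, s0, eps):
    """Algorithm 3: gradient-only line search along `direction` from theta0.

    Assumes grad_est(theta0) @ direction > 0 (an ascent direction)."""
    s = s0
    delta = grad_est(theta0 + s * direction)
    if delta @ direction < 0:
        # Step back until the directional derivative is no longer clearly negative.
        while True:
            s_plus, p_plus = s, delta @ direction
            s = s / 2.0
            delta = grad_est(theta0 + s * direction)
            if delta @ direction > -eps:
                break
        s_minus, p_minus = s, delta @ direction
    else:
        # Step forward until the directional derivative is no longer clearly positive.
        while True:
            s_minus, p_minus = s, delta @ direction
            s = 2.0 * s
            delta = grad_est(theta0 + s * direction)
            if delta @ direction < eps:
                break
        s_plus, p_plus = s, delta @ direction
    if p_minus > 0 and p_plus < 0:
        # Zero crossing of the (linearly interpolated) directional derivative,
        # i.e. the maximum of the interpolating quadratic.
        s = s_minus + p_minus * (s_plus - s_minus) / (p_minus - p_plus)
    else:
        s = 0.5 * (s_minus + s_plus)
    return theta0 + s * direction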


An alternative approach, similar in spirit to algorithms described in [7, 9, 8], is to adjust the parameter vector at every iteration of the underlying POMDP. Algorithm 4, OLPOMDP, presents one such algorithm along these lines. We are currently working on a convergence proof for this algorithm.

Algorithm 4 OLPOMDP(β, T, θ_0)

1: Given:
   - β ∈ [0, 1).
   - T > 0.
   - Initial parameter values θ_0 ∈ R^K.
   - Randomized parameterized policies {μ(θ, ·) : θ ∈ R^K} satisfying Assumptions 1 and 2.
   - POMDP with rewards satisfying Assumption 3, and which when controlled by μ(θ, ·) generates stochastic matrices P(θ) satisfying Assumption 4.
   - Step sizes γ_t, t = 0, 1, 2, ..., satisfying Σ γ_t = ∞ and Σ γ_t² < ∞.
   - Arbitrary (unknown) starting state i_0.
2: Set z_0 = 0 (z_0 ∈ R^K).
3: for t = 0 to T − 1 do
4:   Observe y_t (generated according to ν(i_t)).
5:   Generate control u_t according to μ(θ_t, y_t).
6:   Observe r(i_{t+1}) (where the next state i_{t+1} is generated according to p_{i_t i_{t+1}}(u_t)).
7:   Set z_{t+1} = β z_t + ∇μ_{u_t}(θ_t, y_t) / μ_{u_t}(θ_t, y_t).
8:   Set θ_{t+1} = θ_t + γ_t r(i_{t+1}) z_{t+1}.
9: end for
10: return θ_T
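A sketch of Algorithm 4 using the same hypothetical environment/policy interface as the GPOMDP sketch given after Algorithm 1; gammas is any step-size sequence satisfying the conditions listed above.

import numpy as np

def olpomdp(env, policy, theta0, beta, T, gammas):
    """Algorithm 4: fully on-line gradient ascent; theta is updated at every time step."""
    theta = np.array(theta0, dtype=float)
    z = np.zeros_like(theta)
    for t in range(T):
        y = env.observe()
        u = policy.sample(theta, y)
        r = env.step(u)
        z = beta * z + policy.grad_log(theta, y, u)
        theta = theta + gammas[t] * r * z      # immediate parameter update
    return theta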

4 Experiments

In this section we present several sets of experimental results. Throughout this section, where we refer to CONJGRAD we mean CONJGRAD with GPOMDP as its ∇ argument.

In the first set of experiments, we consider a system in which a controller is used to select actions for a 3-state Markov decision process (MDP). For this system we are able to compute the true gradient exactly using the matrix equation

    ∇η(θ) = π′(θ) ∇P(θ) [I − P(θ) + eπ′(θ)]⁻¹ r,    (3)


Origin   Action   Destination state probabilities
state             A      B      C
A        a1       0.0    0.8    0.2
A        a2       0.0    0.2    0.8
B        a1       0.8    0.0    0.2
B        a2       0.2    0.0    0.8
C        a1       0.0    0.8    0.2
C        a2       0.0    0.2    0.8

Table 1: Transition probabilities of the three-state MDP.

where P(θ) is the transition matrix of the underlying Markov chain with the controller's parameters set to θ, π′(θ) is the stationary distribution corresponding to P(θ) (written as a row vector), eπ′(θ) is the matrix in which each row is the stationary distribution, and r is the (column) vector of rewards (see [2, Section 2.1] for a derivation of (3)). Hence we can compare the estimates Δ_T generated by GPOMDP with the true gradient ∇η(θ), both as a function of the number of iterations T and as a function of the discount parameter β. We also optimize the performance of the controller using the on-line algorithm OLPOMDP, and using CONJGRAD. CONJGRAD reliably converges to a near-optimal policy within around 100 iterations of the MDP, while the on-line method requires approximately 1000 iterations. This should be contrasted with training a linear value function for this system using TD(1) [12], which can be shown to converge to a value function whose one-step lookahead policy is suboptimal [16].
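The matrix formula (3) is straightforward to evaluate for a small chain. The sketch below is a minimal numpy implementation under the assumption that P(θ), its parameter-wise derivatives and the reward vector have already been formed; the function and argument names are illustrative.

import numpy as np

def stationary_distribution(P):
    """Left eigenvector of P with eigenvalue 1, normalized to sum to 1."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

def exact_grad_eta(P, dP, r):
    """Equation (3): grad eta = pi' dP [I - P + e pi']^{-1} r, one parameter at a time.

    P  : (n, n) transition matrix P(theta)
    dP : (K, n, n) array of partial derivatives dP/dtheta_k
    r  : (n,) reward vector
    """
    n = P.shape[0]
    pi = stationary_distribution(P)
    A = np.eye(n) - P + np.outer(np.ones(n), pi)   # I - P + e pi'
    x = np.linalg.solve(A, r)                      # [I - P + e pi']^{-1} r
    return np.array([pi @ (dPk @ x) for dPk in dP])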

In the second set of experiments, we consider a simple "puck-world" problem in which a small puck must be navigated around a two-dimensional world by applying thrust in the x and y directions. We train a one-hidden-layer neural-network controller for the puck using CONJGRAD. Again the controller reliably converges to near-optimality.

In the third set of experiments we use CONJGRAD to optimize the admission thresholds for the call-admission problem considered in [8].

In the final set of experiments we use CONJGRAD to train a switched neural-network controller for a two-dimensional variant of the "mountain-car" task [13, Example 8.2].

4.1 A three-state MDP

In this section we consider a three-state MDP, in each state of which there is a choice of two actions, a1 and a2. Table 1 shows the transition probabilities as a function of the states and actions. Each state i has an associated two-dimensional feature vector φ(i) = (φ1(i), φ2(i)) and reward r(i), which are detailed in Table 2. Clearly, the optimal policy is to always select the action that leads to state C with the highest probability, which from Table 1 means always selecting action a2.

This rather odd choice of feature vectors for the states ensures that a value function linear in those features and trained using TD(1), while observing the optimal policy, will itself implement a suboptimal one-step greedy lookahead policy (see [16] for a proof).


State            A     B     C
Reward r(i)      0     0     1
Feature φ1(i)    …     …     …
Feature φ2(i)    …     …     …

Table 2: Three-state rewards and features.

Thus, in contrast to the gradient-based approach, for this system TD(1) training of a linear value function is guaranteed to produce a worse policy if it starts out observing the optimal policy.

��� � "$�training a linear value function is guaranteed to produce a worse policy if it starts outobserving the optimal policy.

4.1.1 Training a controller

Our goal is to learn a stochastic controller for this system that implements an optimal(or near-optimal) policy. Given a parameter vector �A% ��� / �$: � � ���$� , we generate apolicy as follows. For any state � , let

/ ���1� � % � / � / ���1��B � : � : ���1� : ���1� � % ��� � / ���1��B � � � : ���1�32

Then the probability of choosing action �!/ in state � is given by��� � ���<�I% � � � � �� � � � � B� � � � �

while the probability of choosing action � : is given by� � � ���1�&% � � � � �� � � � � B� � � � � % "54 � ��� ���<�N2

The ratios� � ���� � �� ��� � � needed by Algorithms 1 and 4 are given by,

� ��� � ���1���� � ���<� % � � � � �� � � � � B � � � � � � � / ���1�3 � :0���<�3 4 � / ���<�3 4 � :����<� � (4)

� � � � ���1���� � ���<� % � � � � �� � � � � B � � � � � �,4 � / ���1�3 4 � :.���1�3 � / ���<�3 � :����<� � (5)
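In code, this two-action softmax controller and the ratios (4)–(5) could be written as follows (a minimal sketch; phi is the 2-dimensional feature vector of the current state, action 0 stands for a1 and action 1 for a2).

import numpy as np

def action_probabilities(theta, phi):
    """Softmax over the two scores s1 = theta[0:2].phi and s2 = theta[2:4].phi."""
    s = np.array([theta[:2] @ phi, theta[2:] @ phi])
    e = np.exp(s - s.max())          # subtract the max for numerical stability
    return e / e.sum()

def grad_log_mu(theta, phi, action):
    """Ratios (4)-(5): grad mu_a / mu_a = grad log mu_a for the chosen action."""
    p = action_probabilities(theta, phi)
    if action == 0:                  # action a1, equation (4)
        return p[1] * np.concatenate([phi, -phi])
    else:                            # action a2, equation (5)
        return p[0] * np.concatenate([-phi, phi])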

4.1.2 Gradient estimates

With a parameter vector² of θ = (1, 1, −1, −1), estimates Δ_T of ∇η were generated using GPOMDP, for various values of T and β ∈ [0, 1). To measure the progress of Δ_T towards the true gradient ∇η, ∇η was calculated from (3), and then for each value of T the angle between Δ_T and ∇η and the relative error ‖Δ_T − ∇η‖/‖∇η‖ were recorded. The angles and relative errors are plotted in Figures 1, 2 and 3.


Figure 1: Angle between the true gradient ∇η and the estimate Δ_T for the three-state Markov chain, for various values of the discount parameter β (panels for β = 0.0, 0.4, 0.8, 0.95; angle in degrees plotted against T). Δ_T was generated by Algorithm 1. Averaged over 500 independent runs. Note the higher variance at large T for the larger values of β. Error bars are one standard deviation.

The graphs illustrate a typical trade-off for the GPOMDP algorithm: small values of β give higher bias in the estimates, while larger values of β give higher variance (the bias is only shown in Figure 3, for the norm deviation, because it was too small to measure for the angular deviation). That said, the bias introduced by having β < 1 is very small for this system. Even in the worst case, β = 0, the final gradient direction is indistinguishable from the true direction, and the relative deviation ‖Δ_T − ∇η‖/‖∇η‖ remains small (see Figure 3).

4.1.3 Training via conjugate-gradient ascent

CONJGRAD, with GPOMDP as the "∇" argument, was used to train the parameters of the controller described in the previous section. Following the low bias observed in the experiments of the previous section, the argument β of GPOMDP was set to 0. After a small amount of experimentation the arguments ε_0 and ε of CONJGRAD were fixed; neither value was critical, although the extremely large initial step-size (ε_0) did considerably reduce the time required for the controller to converge to near-optimality.

²Other initial values of the parameter vector were chosen with similar results. Note that θ = (1, 1, −1, −1) generates a suboptimal policy.


Figure 2: A plot of ‖Δ_T − ∇η‖/‖∇η‖ for the three-state Markov chain, for various values of the discount parameter β (panels for β = 0.0, 0.4, 0.8, 0.95; relative norm difference plotted against T). Δ_T was generated by Algorithm 1. Averaged over 500 independent runs. Note the higher variance at large T for the larger values of β. Error bars are one standard deviation.

We tested the performance of CONJGRAD for a wide range of values of the argument T to GPOMDP. Since GSEARCH only uses GPOMDP to determine the sign of the inner product of the gradient with the search direction, it does not need to run GPOMDP for as many iterations as CONJGRAD does. Thus, GSEARCH determined its own T parameter to GPOMDP as follows. Initially, (somewhat arbitrarily) the value of T within GSEARCH was set to 1/10 of the value used in CONJGRAD (or 1 if the value in CONJGRAD was less than 10). GSEARCH then called GPOMDP to obtain an estimate Δ_T of the gradient direction. If Δ_T · θ* < 0 (θ* being the desired search direction), then T was doubled and GPOMDP was called again to generate a new estimate Δ_T. This procedure was repeated until Δ_T · θ* became positive, or T had been doubled four times. If Δ_T · θ* was still negative at the end of this process, GSEARCH searched for a local maximum in the direction −θ*, and the number of iterations T used by CONJGRAD was doubled on the next iteration (the conclusion being that the direction θ* was generated by overly noisy estimates from GPOMDP).

Figure 4 shows the average reward η(θ) of the final controller produced by CONJGRAD, as a function of the total number of simulation steps of the underlying Markov chain. The plots represent an average over 500 independent runs of CONJGRAD. Note that 0.8 is the average reward of the optimal policy. The parameters of the controller were (uniformly) randomly initialized in the range [−0.1, 0.1] before each call to CONJGRAD.


Figure 3: Graph showing the final bias in the estimate Δ_T (as measured by ‖Δ_T − ∇η‖/‖∇η‖) as a function of β for the three-state Markov chain (curves for β = 0.0, 0.4, 0.8, 0.95, plotted against T). Δ_T was generated by Algorithm 1. Note both axes are log scales.

After each call to CONJGRAD, the average reward of the resulting controller was computed exactly by calculating the stationary distribution for the controller. From Figure 4, optimality is reliably achieved using approximately 100 iterations of the Markov chain.

4.1.4 Training directly on-line with OLPOMDP

The controller was also trained on-line using Algorithm 4 (OLPOMDP) with fixed step-sizes γ_t = c, for c = 0.1, 1, 10, 100. Decreasing step-sizes of the form γ_t = c/t were tried, but caused intolerably slow convergence. Figure 5 shows the performance of the controller (measured exactly as in the previous section) as a function of the total number of iterations of the Markov chain, for different values of the step-size c. The graphs are averages over 100 runs, with the controller's weights randomly initialized in the range [−0.1, 0.1] at the start of each run. From the figure, convergence to optimal is about an order of magnitude slower than that achieved by CONJGRAD, for the best step-size of c = 1.0. Step-sizes much greater than c = 10.0 failed to reliably converge to an optimal policy.

4.2 Puck World

In this section, experiments are described in which CONJGRAD and OLPOMDP were used to train one-hidden-layer neural-network controllers to navigate a small puck around a two-dimensional world.


Figure 4: Performance of the 3-state Markov chain controller trained by CONJGRAD, as a function of the total number of iterations of the Markov chain (CONJGRAD final reward against Markov chain iterations). The performance was computed exactly from the stationary distribution induced by the controller; 0.8 is the average reward of the optimal policy. Averaged over 500 independent runs. The error bars were computed by dividing the results into two separate bins depending on whether they were above or below the mean, and then computing the standard deviation within each bin.


Figure 5: Performance of the 3-state Markov chain controller as a function of the number of iteration steps in the on-line algorithm, Algorithm 4, for fixed step sizes of 0.1, 1, 10 and 100 (one panel per step size; average reward against Markov chain iterations). Error bars were computed as in Figure 4.

4.2.1 The World

The puck was a unit-radius, unit-mass section of a cylinder constrained to move in the plane in a region 100 units square. The puck had no internal dynamics (i.e. rotation). Collisions with the region's boundaries were inelastic with a (tunable) coefficient of restitution (set to 0.9 for the experiments reported here). The puck was controlled by applying a 5 unit force in either the positive or negative x direction, and a 5 unit force in either the positive or negative y direction, giving four different controls in total. The control could be changed every 1/10 of a second, and the simulator operated at a granularity of 1/100 of a second. The puck also had a small retarding force due to air resistance, proportional to the square of its speed. There was no friction between the puck and the ground.

The puck was given a reward at each decision point (every 1/10 of a second) equal to −d, where d was the distance between the puck and some designated target point. To encourage the controller to learn to navigate the puck to the target independently of the starting state, the puck state was reset every 30 (simulated) seconds to a random location and random x and y velocities in the range [−10, 10], and at the same time the target position was set to a random location.

Note that the size of the state space in this example is essentially infinite, being of the order of 2^PRECISION, where PRECISION is the floating-point precision of the machine.

4.2.2 The controller

A one-hidden-layer neural network with six input nodes, eight hidden nodes and four output nodes was used to generate a probabilistic policy, in a similar manner to the controller in the three-state Markov chain example of the previous section. Four of the inputs were set to the raw x and y locations and velocities of the puck at the current time-step; the other two were the differences between the puck's x and y location and the target's x and y location respectively. The location inputs were scaled to lie between −1 and 1, while the velocity inputs were scaled so that a speed of 10 units per second mapped to a value of 1. The hidden nodes computed a tanh squashing function, while the output nodes were linear. Each hidden and output node had the usual additional offset parameter. The four output nodes were exponentiated and then normalized as in the Markov-chain example to produce a probability distribution over the four controls (±5 units thrust in the x direction, ±5 units thrust in the y direction). Controls were selected at random from this distribution.
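A minimal sketch of this controller follows; the parameter packing (W1, b1, W2, b2) and the function name are assumptions for illustration, and x is the 6-dimensional scaled input vector described above.

import numpy as np

def puck_policy(params, x):
    """One-hidden-layer controller: 6 inputs -> 8 tanh hidden units -> 4 linear
    outputs, exponentiated and normalized into a distribution over the four
    thrust combinations."""
    W1, b1, W2, b2 = params            # W1: (8, 6), b1: (8,), W2: (4, 8), b2: (4,)
    h = np.tanh(W1 @ x + b1)           # hidden layer with offset (bias) parameters
    o = W2 @ h + b2                    # linear output nodes
    e = np.exp(o - o.max())            # exponentiate (max-subtraction for stability)
    return e / e.sum()                 # probabilities of the four controls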

4.2.3 Conjugate gradient ascent

We trained the neural-network controller using CONJGRAD, with the gradient estimates generated by GPOMDP. After some experimentation we chose β = 0.95 and T = 1,000,000 as the parameters CONJGRAD supplied to GPOMDP. GSEARCH used the same value of β, and the scheme discussed in Section 4.1.3 to determine the number of iterations with which to call GPOMDP.

Due to the saturating nature of the neural-network hidden nodes (and the exponentiated output nodes), there was a tendency for the network weights to converge to local minima at "infinity". That is, the weights would grow very rapidly early on in the simulation, but towards a suboptimal solution. Large weights tend to imply very small gradients, and thus the network becomes "stuck" at these suboptimal solutions. We have observed a similar behaviour when training neural networks for pattern classification problems. To fix the problem, we subtracted a small quadratic penalty term γ‖θ‖² from the performance estimates, and hence also a small correction 2γθ from the gradient calculation³ for ∇η.

We used a decreasing schedule for the quadratic penalty weight γ (arrived at through some experimentation): γ was initialized to a fixed value and then, on every tenth iteration of CONJGRAD, if the performance had improved by less than 10% from the value ten iterations earlier, γ was reduced by a factor of 10. This schedule solved nearly all the local minima problems, but at the expense of slower convergence of the controller.
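The penalty described above amounts to maximizing η(θ) − γ‖θ‖² rather than η(θ); in code, the corresponding correction to the gradient estimate is a one-liner (a sketch, with penalty_weight standing for the γ of the text).

def penalized_gradient(grad_estimate, theta, penalty_weight):
    """Gradient of eta(theta) - penalty_weight * ||theta||^2, given an estimate of
    grad eta(theta): subtract 2 * penalty_weight * theta from the estimate."""
    return grad_estimate - 2.0 * penalty_weight * theta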

A plot of the average reward of the neural-network controller is shown in Figure 6, as a function of the number of iterations of the POMDP. The graph is an average over 100 independent runs, with the parameters initialized randomly in the range [−0.1, 0.1] at the start of each run.

³When used as a technique for capacity control in pattern classification, this technique goes by the name "weight decay". Here we used it to condition the optimization problem.


Figure 6: Performance of the neural-network puck controller as a function of the number of iterations of the puck world, when trained using CONJGRAD (average reward against total iterations). Performance estimates were generated by simulating for 1,000,000 iterations. Averaged over 100 independent runs (excluding the four bad runs shown in Figure 7).

The bad runs shown in Figure 7 were omitted from the average because they gave misleadingly large error bars.

Note that the optimal performance (within the neural-network controller class) seems to be around −5 for this problem, due to the fact that the puck and target locations are reset every 30 simulated seconds and hence there is a fixed fraction of the time that the puck must be away from the target. From Figure 6 we see that the final performance of the puck controller is close to optimal. In only 4 of the 100 runs did CONJGRAD get stuck in a suboptimal local minimum. Three of those cases were caused by overshooting in GSEARCH (see Figure 7), which could be prevented by adding extra checks to CONJGRAD.

Figure 8 illustrates the behaviour of a typical trained controller. For the purpose of the illustration, only the target location and puck velocity were randomized every 30 seconds, not the puck location.

4.3 Call Admission Control

In this section we report the results of experiments in which CONJGRAD was applied to the task of training a controller for the call admission problem treated in [8, Chapter 7].


Figure 7: Plots of the performance of the neural-network puck controller for the four runs (out of 100) that converged to substantially suboptimal local minima (average reward against total iterations).

Figure 8: Illustration of the behaviour of a typical trained puck controller.


Call type               1      2      3
Bandwidth demand        1      1      1
Arrival rate            …      …      …
Average holding time    …      …      …
Reward                  1      2      4

Table 3: Parameters of the call admission control problem.

4.3.1 The Problem

The call admission control problem treated in [8, Chapter 7] models the situation in which a telecommunications provider wishes to sell bandwidth on a communications link to customers in such a way as to maximize long-term average reward.

Specifically, the problem is a queueing problem. There are three different types of call, each with its own arrival rate, bandwidth demand and average holding time. The arrivals are Poisson distributed, while the holding times are exponentially distributed. The link has a maximum bandwidth of 10 units. When a call arrives and there is sufficient available bandwidth, the service provider can choose to accept or reject the call (if there is not enough available bandwidth the call is always rejected). Upon accepting a call of type m, the service provider receives a reward of r(m) units. The goal of the service provider is to maximize the long-term average reward.

The parameters associated with each call type are listed in Table 3. With these settings, the optimal policy (found by dynamic programming in [8]) is to always accept calls of type 2 and 3 (assuming sufficient available bandwidth) and to accept calls of type 1 only if the available bandwidth is at least 3. This policy has an average reward slightly above 0.8, while the "always accept" policy has an average reward⁴ of approximately 0.78.

4.3.2 The Controller

As in [8], the controller had three parameters θ = (θ1, θ2, θ3), one for each type of call. Upon arrival of a call of type m, the controller accepts the call with a probability given by a logistic (sigmoid) function of θ_m and the currently used bandwidth, provided the call fits within the link's remaining bandwidth; calls that do not fit are always rejected. This is the class of controllers studied in [8].
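A hedged sketch of this controller class follows. The precise argument of the sigmoid is not recoverable from this copy, so the particular form used below (a logistic function of θ_m minus the currently used bandwidth) is an assumption for illustration only, not the paper's exact parameterization.

import numpy as np

def accept_probability(theta, call_type, used_bandwidth, demand, capacity=10):
    """One-parameter-per-call-type stochastic admission rule (assumed form).

    Rejects outright if the call does not fit; otherwise accepts with a logistic
    probability controlled by theta[call_type]."""
    if used_bandwidth + demand > capacity:
        return 0.0
    return 1.0 / (1.0 + np.exp(-(theta[call_type] - used_bandwidth)))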

4.3.3 Conjugate gradient ascent

CONJGRAD was used to train the above controller, with GPOMDP generating the gradient estimates for a range of values of β and T. The influence of β on the performance of the trained controllers was marginal, so we set β = 0, which gave the lowest-variance estimates.

⁴There is some discrepancy between our average rewards and those quoted in [8]. This is probably due to a discrepancy in the way the state transitions are counted, which was not clear from the discussion in [8].


Figure 9: Performance of the call admission controller trained by CONJGRAD as a function of the total number of iterations of the queue (CONJGRAD final reward against total queue iterations, β = 0.0; the class-optimal average reward is shown as a horizontal line). The performance was computed by simulating the controller for 100,000 iterations. The average reward of the globally optimal policy is slightly above that of the optimal policy within the controller class (approximately 0.8), and the plateau performance of CONJGRAD is within about 2% of the class optimum. The graphs are averages from 100 independent runs.

We used the same value of T for calls to GPOMDP within CONJGRAD and within GSEARCH, and this was varied over a wide range. The controller was always started from the same parameter setting (as was done in [8]); the average reward of this initial policy is below the class optimum of approximately 0.8. The graph of the average reward of the final controller produced by CONJGRAD, as a function of the total number of iterations of the queue, is shown in Figure 9. Near-plateau performance was reliably achieved with less than 2000 iterations of the queue.

Note that the optimal policy is not achievable with this controller class, since the class is incapable of implementing any threshold policy other than the "always accept" and "always reject" policies. Although not provably optimal, a parameter setting with a moderate value of θ1 and suitably large values of θ2 and θ3 generates something close to the optimal policy within the controller class, with an average reward of approximately 0.8. Figure 10 shows the probability of accepting a call of each type under this policy, as a function of the available bandwidth.

The controllers produced by CONJGRAD with β = 0 and sufficiently large T are essentially "always accept" controllers, with an average reward within 2% of the optimum achievable in the class. To produce policies even nearer to the optimal policy in performance, CONJGRAD must keep θ1 close to its starting value, and hence the gradient estimate Δ_T = (Δ1, Δ2, Δ3) produced by GPOMDP must have a relatively small first component.


Figure 10: Probability of accepting a call of each type under the call admission policy with near-optimal parameters (moderate θ1, large θ2 = θ3), as a function of the available bandwidth. Note that calls of type 2 and 3 are essentially always accepted.

Figure 11 shows a plot of the normalized Δ_T as a function of β, for T = 1,000,000 (sufficiently large to ensure low variance in Δ_T) and the starting parameter setting. From the figure, Δ1 starts at a high value, which explains why CONJGRAD produces "always accept" controllers for β = 0, and it does not become negative until β ≈ 0.95, a value for which the variance in Δ_T is relatively high even for moderately large T.

A plot of the performance of CONJGRAD for β = 0.9 and β = 0.95 is shown in Figure 12. Approximately half of the remaining 2% in performance can be obtained by setting β = 0.9, while for β = 0.95 a sufficiently large choice of T gives most of the remaining performance. For this problem, there is a huge difference between gaining 98% of optimal performance, which is achieved for β = 0 and less than 2000 iterations of the queue, and gaining 99% of the optimum, which requires β = 0.9 and of the order of 500,000 queue iterations. A similar convergence rate and final approximation error to the latter case were reported for the on-line algorithms in [8, Chapter 7], although the results of only one run were given in each case.

4.4 Mountainous Puck World

The "mountain-car" task is a well-studied problem in the reinforcement learning literature [13, Example 8.2]. As shown in Figure 13, the task is to drive a car to the top of a one-dimensional hill. The car is not powerful enough to accelerate directly up the hill against gravity, so any successful controller must learn to "oscillate" back and forth until it builds up enough speed to crest the hill.


Figure 11: Plot of the three components of the normalized Δ_T (Delta1, Delta2, Delta3) for the call admission problem, as a function of the discount parameter β. The parameters were fixed at the initial parameter setting, and T was set to 1,000,000. Note that Δ1 does not become negative (the correct sign) until β ≈ 0.95.

Figure 12: Performance of the call admission controller trained by CONJGRAD as a function of the total number of iterations of the queue, for β = 0.90 (left) and β = 0.95 (right); the class-optimal average reward is shown as a horizontal line. The performance was calculated by simulating the controller for 1,000,000 iterations. The graphs are averages from 100 independent runs.


Figure 13: The classical "mountain-car" task is to apply forward or reverse thrust to the car to get it over the crest of the hill. The car starts at the bottom and does not have enough power to drive directly up the hill.

In this section we describe a variant of the mountain-car problem based on the puck-world example of Section 4.2. With reference to Figure 14, in our problem the task is to navigate a puck out of a valley and onto a plateau at the northern end of the valley. As in the mountain-car task, the puck does not have sufficient power to accelerate directly up the hill, and so has to learn to oscillate in order to climb out of the valley. Once again we were able to reliably train near-optimal neural-network controllers for this problem, using CONJGRAD and GSEARCH, with GPOMDP generating the gradient estimates.

4.4.1 The World

The world dimensions, physics, puck dynamics and controls were identical to the flat puck world described in Section 4.2, except that the puck was subject to a constant gravitational force of 10 units, the maximum allowed thrust was reduced (from the 5 units used previously), and the height of the world varied with the north-south coordinate: the southern and northern ends of the world were flat plateaus, with a cosine-shaped valley between them. With the reduced thrust, a unit-mass puck cannot accelerate directly out of the valley.

Every 120 (simulated) seconds, the puck was initialized with zero velocity at the bottom of the valley, with a random x location.


Figure 14: In our variant of the mountain-car problem the task is to navigate a puck out of a valley and onto the northern plateau. The puck starts at the bottom of the valley and does not have enough power to drive directly up the hill.

The puck was given no reward while in the valley or on the southern plateau, and a reward of 100 − s² while on the northern plateau, where s was the speed of the puck. We found the speed penalty helped to improve the rate of convergence of the neural-network controller.

4.4.2 The controller

After some experimentation we found that a neural-network controller could be reliably trained either to navigate to the northern plateau, or to stay on the northern plateau once there, but it was difficult to combine both in the same controller (this is not so surprising, since the two tasks are quite distinct). To overcome this problem, we trained a "switched" neural-network controller: the puck used one controller when in the valley and on the southern plateau, and then switched to a second neural-network controller while on the northern plateau. Both controllers were one-hidden-layer neural networks with nine input nodes, five hidden nodes and four output nodes. The nine inputs were the normalized ([−1, 1]-valued) puck position coordinates, the normalized position of the puck relative to the centre of the northern wall, and the puck velocity components. The four outputs were used to generate a policy in the same fashion as the controller of Section 4.2.2.


Figure 15: Performance of the neural-network puck controller as a function of the number of iterations of the mountainous puck world, when trained using CONJGRAD (average reward against total iterations). Performance estimates were generated by simulating for 1,000,000 iterations. Averaged over 100 independent runs.

4.4.3 Conjugate gradient ascent

The switched neural-network controller was trained using the same scheme discussed in Section 4.2.3, except that this time the discount factor β was set to 0.95.

A plot of the average reward of the neural-network controller is shown in Figure 15, as a function of the number of iterations of the POMDP. The graph is an average over 100 independent runs, with the neural-network controller parameters initialized randomly in the range [−0.1, 0.1] at the start of each run. In this case no run failed to converge to near-optimal performance. From the figure we can see that the puck's performance is nearly optimal after about 40 million total iterations of the puck world. Although this figure may seem rather high, to put it in some perspective note that a random neural-network controller takes about 10,000 iterations to reach the northern plateau from a standing start at the base of the valley. Thus, 40 million iterations is equivalent to only about 4,000 trips to the top for a random controller.

Note that the puck converges to a final average performance of around 75, which indicates it is spending at least 75% of its time on the northern plateau. Observation of the puck's final behaviour shows that it behaves nearly optimally in terms of oscillating back and forth to get out of the valley.


5 Conclusion

This paper showed how to use the performance gradient estimates generated by the GPOMDP algorithm from [2] to optimize the average reward of parameterized POMDPs. The optimization relies on the use of GSEARCH, a robust line-search algorithm that uses gradient estimates, rather than value estimates, to bracket the maximum. CONJGRAD and GSEARCH were found to perform well on four quite distinct problems: optimizing a controller for a three-state MDP, optimizing a neural-network controller for navigating a puck around a two-dimensional world, optimizing a controller for a call admission problem, and optimizing a switched neural-network controller in a variation of the classical mountain-car task. We also presented OLPOMDP, an on-line version of CONJGRAD.

For the three-state MDP and the call admission problems we were able to provide graphic illustrations of how the bias and variance of the gradient estimates ∇_β η can be traded against one another by varying β between 0 (low variance, high bias) and 1 (high variance, low bias).

Relatively little tuning was required to generate these results. In addition, the controllers operated on direct and simple representations of the state, in contrast to the much more complex representations usually required by value-function based approaches.

An interesting avenue for further research would be an empirical comparison of value-function based methods and the algorithms of this paper in domains where the former are known to produce good results.

Despite the success of CONJGRAD and GSEARCH in the experiments described here, the on-line algorithm OLPOMDP has advantages in other settings. In particular, when it is applied to multi-agent reinforcement learning, both gradient computations and parameter updates can be performed for distinct agents without any communication beyond the global distribution of the reward signal. This idea has led to a biologically plausible parameter optimization procedure for spiking neural networks (see [1]), and we are currently investigating the application of the on-line algorithm in multi-agent reinforcement learning problems.

References

[1] P. L. Bartlett and J. Baxter. In preparation, September 1999.

[2] J. Baxter and P. L. Bartlett. Direct Gradient-Based Reinforcement Learning: I. Gradient Estimation Algorithms. Technical report, Research School of Information Sciences and Engineering, Australian National University, July 1999.

[3] J. Baxter, A. Tridgell, and L. Weaver. Learning to Play Chess Using Temporal Differences. Machine Learning, 1999. To appear.

[4] D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996.

[5] X.-R. Cao and Y.-W. Wan. Algorithms for Sensitivity Analysis of Markov Chains Through Potentials and Perturbation Realization. IEEE Transactions on Control Systems Technology, 6:482-492, 1998.

[6] T. L. Fine. Feedforward Neural Network Methodology. Springer, New York, 1999.

[7] H. Kimura, K. Miyazaki, and S. Kobayashi. Reinforcement learning in POMDPs with function approximation. In D. H. Fisher, editor, Proceedings of the Fourteenth International Conference on Machine Learning (ICML'97), pages 152-160, 1997.

[8] P. Marbach. Simulation-Based Methods for Markov Decision Processes. PhD thesis, Laboratory for Information and Decision Systems, MIT, 1998.

[9] P. Marbach and J. N. Tsitsiklis. Simulation-Based Optimization of Markov Reward Processes. Technical report, MIT, 1998.

[10] A. L. Samuel. Some Studies in Machine Learning Using the Game of Checkers. IBM Journal of Research and Development, 3:210-229, 1959.

[11] S. Singh and D. Bertsekas. Reinforcement learning for dynamic channel allocation in cellular telephone systems. In Advances in Neural Information Processing Systems: Proceedings of the 1996 Conference, pages 974-980. MIT Press, 1997.

[12] R. Sutton. Learning to Predict by the Method of Temporal Differences. Machine Learning, 3:9-44, 1988.

[13] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998. ISBN 0-262-19398-1.

[14] G. Tesauro. Practical Issues in Temporal Difference Learning. Machine Learning, 8:257-278, 1992.

[15] G. Tesauro. TD-Gammon, a self-teaching backgammon program, achieves master-level play. Neural Computation, 6:215-219, 1994.

[16] L. Weaver and J. Baxter. Reinforcement Learning From State and Temporal Differences. Technical report, Department of Computer Science, Australian National University, May 1999. http://wwwsyseng.anu.edu.au/~jon/papers/std_full.ps.gz.

[17] R. J. Williams. Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning. Machine Learning, 8:229-256, 1992.

[18] W. Zhang and T. Dietterich. A reinforcement learning approach to job-shop scheduling. In Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, pages 1114-1120. Morgan Kaufmann, 1995.
