Rate distortion function in betting game system
Masayuki Kumon
Association for Promoting Quality Assurance in Statistics
Abstract
Among the various aspects of game-theoretic probability, when one explores the mathematical structure of the optimal strategies in betting games, the Kullback-Leibler divergence naturally arises as the optimal exponential growth rate of the betting capital process.
This structure was obtained by Prof. Takeuchi nearly fifty years ago.
Inspired by Claude Shannon's information theory, an optimal betting strategy was also pioneered by John Larry Kelly Jr. in A New Interpretation of Information Rate. Bell System Technical Journal, Vol. 35, 917-926, 1956.
The optimality properties of Kelly's strategy
• Minimal expected time property
• Asymptotically largest magnitude property
were investigated by Leo Breiman in Optimal Gambling Systems for Favorable Games. Fourth Berkeley Symposium on Probability and Statistics I, 65-78, 1961.
The historical reviews and recent developments concerning Kelly's strategy, such as T. M. Cover's universal portfolio, are presented in
L. C. MacLean, E. O. Thorp and W. T. Ziemba eds. The Kelly Capital Growth Investment Criterion: Theory and Practice. Handbook in Financial Economics Series, Vol. 3, World Scientific, London, 2010.
In this talk, the following are addressed.
• Game mutual information, which measures information transmission between betting games, is introduced.
• Two characteristics, game channel capacity and game rate distortion function, are defined from this mutual information, and their meanings are explained.
• The effect of the optimal strategy in a conditional betting game is verified by using real stock price data.
• As an application of the game rate distortion function, an efficient lossy source coding scheme based on the optimal conditional betting strategy is proposed.
1. Mutual information in betting game system
1.1 Definition of mutual information
■ Mutual information in information theory
X ∼ P_X(x), Y ∼ P_Y(y), (X, Y) ∼ P_XY(x, y)
H(X) = −E_{P_X}[log P_X(X)], etc.
• Shannon’s source coding theorem :
Entropy H(X) is the nearly achievable
lower bound on the average length of the
shortest description of the random
variable X.
I(X; Y) = H(X) − H(X|Y) = H(Y) − H(Y|X)
= H(X) + H(Y) − H(X, Y)
= D(P_XY‖P_X P_Y) ≥ 0
I(X; Y) = 0 ⇔ P_XY(x, y) = P_X(x) P_Y(y)
[Figure: Venn diagram of H(X), H(Y), H(X|Y), H(Y|X), H(X, Y) and I(X; Y)]
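The identities above can be checked numerically; a minimal sketch, where the 2×2 joint distribution P is an illustrative choice, not a quantity from the talk:

```python
import numpy as np

# Verify I(X;Y) = D(P_XY || P_X P_Y) = H(X) + H(Y) - H(X,Y)
# on an arbitrary 2x2 joint distribution (illustrative numbers).
P = np.array([[0.4, 0.1],
              [0.2, 0.3]])           # P_XY(x, y)
Px, Py = P.sum(axis=1), P.sum(axis=0)

def H(p):
    """Shannon entropy in bits of a probability vector."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

I = float(np.sum(P * np.log2(P / np.outer(Px, Py))))  # D(P_XY || P_X P_Y)
assert abs(I - (H(Px) + H(Py) - H(P.ravel()))) < 1e-12
assert I >= 0
print(round(I, 4))
```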
■ Mutual information in betting games
A, B ∼ two betting games
C ∼ joint betting game of A and B
P_A, P_B, P_C : empirical prob. of Reality
Q_A, Q_B, Q_C : risk neutral prob. of Forecaster
µ_A := D(P_A‖Q_A) : quantity of the game A
µ_B := D(P_B‖Q_B) : quantity of the game B
µ_C := D(P_C‖Q_C) : quantity of the game C
[Figure: Venn diagram of the games A, B, C with µ_{B|A}, µ_{A|B} and I(A; B)]
I(A; B) := µ_{B|A} − µ_B = µ_{A|B} − µ_A
= µ_C − (µ_A + µ_B)
∵ µ_C = µ_A + µ_{B|A} = µ_B + µ_{A|B} (additivity)
µ_{B|A} := µ_C − µ_A = D(P_{B|A}‖Q_{B|A}|P_A)
µ_{B|A} : quantity of the conditional betting game B|A given A
D(P_{B|A}‖Q_{B|A}|P_A) : conditional K-L divergence between P_{B|A} and Q_{B|A} given P_A
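The additivity µ_C = µ_A + µ_{B|A} is the chain rule for the K-L divergence; a minimal numerical check on 2×2 joint distributions (all numbers are illustrative, not from the talk):

```python
import numpy as np

# mu_C = D(P_C || Q_C) decomposes as D(P_A || Q_A) + D(P_B|A || Q_B|A | P_A).
P = np.array([[0.5, 0.1],
              [0.1, 0.3]])            # P_C, empirical joint of Reality
Q = np.array([[0.25, 0.25],
              [0.25, 0.25]])          # Q_C, risk neutral joint of Forecaster

def Dkl(p, q):
    """K-L divergence in bits between probability arrays of equal shape."""
    return float(np.sum(p * np.log2(p / q)))

mu_C = Dkl(P, Q)
mu_A = Dkl(P.sum(axis=1), Q.sum(axis=1))       # divergence of the A-marginals
Pba = P / P.sum(axis=1, keepdims=True)         # P_B|A
Qba = Q / Q.sum(axis=1, keepdims=True)         # Q_B|A
mu_BgA = float(np.sum(P.sum(axis=1) * np.sum(Pba * np.log2(Pba / Qba), axis=1)))
assert abs(mu_C - (mu_A + mu_BgA)) < 1e-12     # chain rule (additivity)
print(round(mu_C, 4))
```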
■ Decomposition of I(A; B)
I(A; B) = I_1(A; B) − I_2(A; B)
I_1(A; B) = D(P_C‖P_A P_B) ≥ 0 :
usual mutual information between P_A and P_B
I_2(A; B) = E_{P_C}[log (Q_C(X, Y) / (Q_A(X) Q_B(Y)))]
Q_C(x, y) = Q_A(x) Q_B(y) ⇒ I_2(A; B) = 0
[Figure: Reality's moves — binary symmetric erasure channel A ⇒ B with inputs a_1, a_2, outputs b_1, b_2, b_3 and transition probabilities 1 − p − q, p, q]
[Figure: Forecaster's moves — binary symmetric erasure channel A ⇒ B with transition probabilities 1 − r − s, r, s]
P_{B|A}(y|x) = δ_xy, Q_{B|A}(y|x) = P_B(y), X = Y ⇒
D(P_{B|A}‖Q_{B|A}|P_A)
= Σ_{x∈X} P_A(x) Σ_{y∈Y} P_{B|A}(y|x) log (P_{B|A}(y|x) / Q_{B|A}(y|x))
= Σ_{x∈X} P_A(x) Σ_{y∈Y} δ_xy log (δ_xy / P_B(y))
= −Σ_{x∈X} P_A(x) log P_A(x) = H(X)
[Figure: Reality's moves — entropy channel A ⇒ B with inputs a_1, a_2, a_3 and outputs b_1, b_2, b_3]
[Figure: Forecaster's moves — entropy channel A ⇒ B]
1.2 Game channel capacity
■ Channel capacity in information theory
C = sup_{P_X} I(X; Y) : capacity of channel X ⇒ Y
• Shannon's channel coding theorem :
Capacity C is the supremum of rates R at which information can be sent with arbitrarily low probability of error.
■ Channel capacity in betting games
C_g := sup_{P_A, Q_A=P_A} I(A; B) :
capacity of betting game channel A ⇒ B
I(A; B) = µ_{B|A} − µ_B = µ_{A|B} − µ_A
C_g = sup_{P_A} µ_{A|B} = sup_{P_A} D(P_{A|B}‖Q_{A|B}|P_B) ≥ 0
[Figure: Communication channel with feedback — message W, encoder X_n, channel P(Y_n|X^n, Y^{n−1}), decoder Ŵ, with k-step delayed feedback Y_{n−k};
C = sup_{P_X} (H(Y) − H(Y|X)), C_g = sup_{P_A} D(P_{A|B}‖Q_{A|B}|P_B)]
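The classical quantity C = sup_{P_X} I(X; Y) can be illustrated numerically; a minimal sketch for a binary symmetric channel, where the crossover probability p = 0.1 is an illustrative choice (the game-theoretic C_g is not computed here):

```python
import numpy as np

# C = sup_{P_X} I(X;Y) for a binary symmetric channel with crossover p.
# For the BSC, H(Y|X) = H2(p), so I(X;Y) = H(Y) - H2(p).
p = 0.1
W = np.array([[1 - p, p],
              [p, 1 - p]])            # channel matrix P(y|x)

def H2(q):
    """Binary entropy in bits."""
    return 0.0 if q in (0.0, 1.0) else -q*np.log2(q) - (1-q)*np.log2(1-q)

def I(a):
    """Mutual information I(X;Y) when P_X = (a, 1-a)."""
    Py = a * W[0] + (1 - a) * W[1]
    HY = -sum(q * np.log2(q) for q in Py if q > 0)
    return HY - H2(p)

C = max(I(a) for a in np.linspace(0, 1, 1001))  # grid search over P_X
assert abs(C - (1 - H2(p))) < 1e-9              # closed form, attained at a = 1/2
print(round(C, 4))                              # capacity in bits per use
```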
1.3 Game rate distortion function
■ Rate distortion function in information theory
R(D) = inf_{P_{X̂|X} : E_{P_XX̂} d(X, X̂) ≤ D} I(X; X̂) :
rate distortion function of transmission X ⇒ X̂
• Shannon's rate distortion theorem :
Rate distortion function R(D) is the infimum of rates R that asymptotically achieve the distortion D.
■ Rate distortion function in betting games
R_g(D) := inf_{P_{Â|A} : E_{P_AÂ} d(X, X̂) ≤ D, Q_{Â|A} : Q_Â = P_A} I(A; Â) :
rate distortion function of transmission A ⇒ Â
I(A; Â) = µ_{Â|A} − µ_Â = µ_{A|Â} − µ_A
R_g(D) = inf_{P_{Â|A}} µ_{Â|A} = inf_{P_{Â|A}} D(P_{Â|A}‖Q_{Â|A}|P_A) ≥ 0
[Figure: Rate distortion with feedforward — encoder X^n at rate R, decoder X̂^n, with k-step delayed feedforward of the source;
R(D) = inf (H(X̂) − H(X̂|X)), R_g(D) = inf_{P_{Â|A}} D(P_{Â|A}‖Q_{Â|A}|P_A)]
2. Optimal conditional betting strategy
2.1 Optimal limit order strategy (cf. [6])
■ Investor selects δ > 0 and sequentially decides the trading times 0 < t_1 < t_2 < · · · as follows.
S(t) > 0 : continuous asset price of Market
t_{i+1} : first time after t_i such that
S(t_{i+1})/S(t_i) = 1 + δ or 1/(1 + δ)
⇔ log S(t_{i+1}) − log S(t_i) = η or −η
η = log(1 + δ)
[Figure: Limit order for d log S — trading times t_1, . . . , t_6 on a grid of width η]
Embedded Coin-Tossing Game
K_0 := 1
FOR n = 1, 2, . . . :
  Investor announces α_n ∈ R
  Market announces x_n ∈ {0, 1}
  K_n = K_{n−1}(1 + α_n(x_n − ρ))
END FOR
ρ = 1/(2 + δ) : risk neutral prob. set by Investor
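The protocol above can be simulated directly; a minimal sketch in which δ, the assumed Market probability p, and the horizon n are illustrative choices (the constant Kelly ratio below presumes p is known, whereas the talk's Bayesian strategy learns it):

```python
import numpy as np

rng = np.random.default_rng(0)

delta = 0.01
rho = 1 / (2 + delta)                  # risk neutral probability set by Investor
p = 0.55                               # assumed prob. of Market announcing x_n = 1
alpha = (p - rho) / (rho * (1 - rho))  # Kelly betting ratio for known p

n = 100_000
x = (rng.random(n) < p).astype(float)          # Market's moves
log_K = np.log1p(alpha * (x - rho)).cumsum()   # log K_n with K_n = K_{n-1}(1 + alpha(x_n - rho))

# The growth rate log(K_n)/n converges to the K-L divergence D(p || rho).
D = p * np.log(p / rho) + (1 - p) * np.log((1 - p) / (1 - rho))
print(log_K[-1] / n, D)                # the two should be close
```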
• Notations
χ^n_1, χ^n_0 : number of x_i = 1, 0 (i = 1, . . . , n)
P(x^n) = B(χ^n_1 + c_1, χ^n_0 + c_0)/B(c_1, c_0), x^n = x_1 · · · x_n
B(c_1, c_0) = Γ(c_1)Γ(c_0)/Γ(c_1 + c_0), c_1, c_0 > 0 :
beta binomial distribution modeled by Investor
Maximize E_P[log K_n] ⇒ {α_i}^n_{i=1}
α_i = (p_i − ρ)/(ρ(1 − ρ)), i = 1, . . . , n
p_i = P(x_i = 1|x^{i−1}) = (χ^{i−1}_1 + c_1)/(i − 1 + c_1 + c_0)
The optimal capital process of Investor is expressed as the likelihood ratio
K_n = P(x^n)/Q(x^n) = [B(χ^n_1 + c_1, χ^n_0 + c_0)/B(c_1, c_0)] / [ρ^{χ^n_1}(1 − ρ)^{χ^n_0}]
From the Stirling’s formula
log Kn = nD (pn‖q) − 1
2log n + O(1)
pn =
(χn
1
n,χn
0
n
): empirical prob. by Market
q = (ρ, 1 − ρ) : risk neutral prob. by Investor
D (pn‖q) : empirical K-L divergence
32
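This expansion is easy to check with exact log-gamma values; a minimal sketch, where δ, the prior parameters c_1 = c_0 = 1/2, and the counts n, χ^n_1 are illustrative choices:

```python
from math import lgamma, log

delta = 0.01
rho = 1 / (2 + delta)                 # risk neutral probability
c1 = c0 = 0.5                         # beta prior parameters

def logB(a, b):
    """log of the beta function B(a, b)."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def log_K(n, k1):
    """Exact log K_n = log [P(x^n)/Q(x^n)] for k1 ones among n outcomes."""
    k0 = n - k1
    return (logB(k1 + c1, k0 + c0) - logB(c1, c0)
            - k1 * log(rho) - k0 * log(1 - rho))

n, k1 = 100_000, 55_000
pbar = k1 / n                         # empirical probability of Market
D = pbar * log(pbar / rho) + (1 - pbar) * log((1 - pbar) / (1 - rho))
approx = n * D - 0.5 * log(n)         # n D(p_bar || q) - (1/2) log n
print(log_K(n, k1) - approx)          # remainder stays O(1)
```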
2.2 Optimal conditional limit order strategy (cf. [7])
■ Investor determines the betting ratios α_n ∈ R of the conditional betting game B|A given A as follows.
α_1 = 0, α_n = α^+_n if x_n = 1, α^−_n if x_n = 0, n = 2, 3, . . .
• Notations
χ^n_{x1}, χ^n_{x0} : number of x_i = 1, 0 (i = 1, . . . , n)
χ^n_{11}, χ^n_{10}, χ^n_{01}, χ^n_{00} :
number of (x_i, y_i) = (1, 1), (1, 0), (0, 1), (0, 0) (i = 1, . . . , n)
P^+(y^n|x^n) = B(χ^n_{11} + c_1, χ^n_{10} + c_0)/B(c_1, c_0)
P^−(y^n|x^n) = B(χ^n_{01} + c_1, χ^n_{00} + c_0)/B(c_1, c_0) :
beta binomial distributions modeled by Investor
Maximize E_P[log K_n], P = P^+ × P^− ⇒ {α^±_i}^n_{i=2}
α^+_i = (p^+_i − ρ)/(ρ(1 − ρ)), α^−_i = (p^−_i − ρ)/(ρ(1 − ρ))
p^+_i = P^+(y_i = 1|x^{i−1}) = (χ^{i−1}_{11} + c_1)/(χ^{i−1}_{11} + χ^{i−1}_{10} + c_1 + c_0)
p^−_i = P^−(y_i = 1|x^{i−1}) = (χ^{i−1}_{01} + c_1)/(χ^{i−1}_{01} + χ^{i−1}_{00} + c_1 + c_0)
The optimal capital process of Investor is expressed as the likelihood ratio
K_n = K^+_n × K^−_n, ξ^n = (x_1, y_1) · · · (x_n, y_n)
K^+_n = P^+(ξ^n)/Q^+(ξ^n) = [B(χ^n_{11} + c_1, χ^n_{10} + c_0)/B(c_1, c_0)] / [ρ^{χ^n_{11}}(1 − ρ)^{χ^n_{10}}]
K^−_n = P^−(ξ^n)/Q^−(ξ^n) = [B(χ^n_{01} + c_1, χ^n_{00} + c_0)/B(c_1, c_0)] / [ρ^{χ^n_{01}}(1 − ρ)^{χ^n_{00}}]
log K_n = nD(p̄_{n,y|x}‖q|p̄_{n,x}) − (1/2)(log χ^n_{x1} + log χ^n_{x0}) + O(1)
p̄_{n,y|1} = (χ^n_{11}/χ^n_{x1}, χ^n_{10}/χ^n_{x1}), p̄_{n,y|0} = (χ^n_{01}/χ^n_{x0}, χ^n_{00}/χ^n_{x0}) :
empirical conditional prob. by Market
q = (ρ, 1 − ρ) : risk neutral prob. by Investor
D(p̄_{n,y|x}‖q|p̄_{n,x}), p̄_{n,x} = (χ^n_{x1}/n, χ^n_{x0}/n) :
empirical conditional K-L divergence
■ In the following figures
S_A(t) : daily closing prices of Nikkei 225
S_B(t) : daily opening prices of Toyota, Sony, Nintendo
2003/1/6 - 2007/9/28
K_n : capital process of Investor
p̄_{n1} = p̄_{n,1|1}, p̄_{n0} = p̄_{n,0|0} :
empirical conditional prob. by Market
MDIV = D(p̄_{n,y|x}‖q|p̄_{n,x}) : empirical conditional K-L divergence
mLK_n = (log K_n)/n : empirical exponential growth rate of K_n
[Figure: Toyota opening & Nikkei closing prices, 03/1/6 - 07/9/28]
[Figure: Sony opening & Nikkei closing prices, 03/1/6 - 07/9/28]
[Figure: Nintendo opening & Nikkei closing prices, 03/1/6 - 07/9/28]
[Figure: Capital process K_n for Toyota, 826 rounds; annotations k = 6.7, a_1 = 2, a_2 = 2, p̄_0 = 0.498, d = 0.01·2^k, K(T) = 0.01]
[Figure: Conditional probabilities p̄_{n1}, p̄_{n0} of Toyota & Nikkei]
[Figure: Conditional probabilities p̄_{n1}, p̄_{n0} of Sony & Nikkei]
[Figure: Conditional probabilities p̄_{n1}, p̄_{n0} of Nintendo & Nikkei]
[Figure: Nikkei & Toyota]
[Figure: Nikkei & Sony]
[Figure: Nikkei & Nintendo]
[Figure: Capital process K_n for Toyota & Nikkei, 500 rounds; annotations k = 6.5, p̄_1 = 0.71, p̄_0 = 0.67, d = 0.01·2^k, K(T) = 5.64e+13]
[Figure: Capital process K_n for Sony & Nikkei, 432 rounds; annotations k = 6, p̄_1 = 0.64, p̄_0 = 0.62, d = 0.01·2^k, K(T) = 14449]
[Figure: Capital process K_n for Nintendo & Nikkei, 597 rounds; annotations k = 6.4, p̄_1 = 0.61, p̄_0 = 0.53, d = 0.01·2^k, K(T) = 20]
3. Source coding and betting strategy
3.1 Lossy source coding with feedforward
■ Source coding model
X^n = (X_1, . . . , X_n) ∈ X^n : source sequence
X̂^n = (X̂_1, . . . , X̂_n) ∈ X̂^n : estimated sequence
d_n : X^n × X̂^n → R_+ : distortion measure
[Figure: Source coding system with feedforward — encoding function f_n : X^n → {1, 2, . . . , 2^{nR}}, rate R, sequence of decoding functions {g_i} with k-step delayed feedforward of the source, reproduction sequence X̂^n = (X̂_1, . . . , X̂_n)]
• (2^{nR}, n) code with k-delayed feedforward
f_n : X^n → {1, 2, . . . , 2^{nR}} : encoding function
g_i : {1, 2, . . . , 2^{nR}} × X^{i−k} → X̂, i = 1, . . . , n : sequence of decoding functions
f_n(X^n) = w ∈ {1, 2, . . . , 2^{nR}}, g_i(w, X^{i−k}) = X̂_i, i = 1, . . . , n
D_n = E_{X^n}[d_n(X^n, X̂^n)] : distortion associated with the (2^{nR}, n) code
(R, D) : achievable ⇔ ∃ (2^{nR}, n) code with lim sup_{n→∞} D_n ≤ D
• Rate distortion theorem :
R ≥ R_kff(D) ⇒ (R, D) is achievable
R_kff(D) = lim inf_{P_{X̂^n|X^n}, D_n ≤ D} (1/n) I_k(X̂^n → X^n) :
rate distortion function with k-delayed feedforward
I_k(X̂^n → X^n) = Σ^n_{i=1} I(X̂^{i+k−1}; X_i|X^{i−1})
= I(X^n; X̂^n) − Σ^n_{i=k+1} I(X^{i−k}; X̂_i|X̂^{i−1}) :
directed information from X̂^n to X^n with k-delayed feedforward
Σ^n_{i=k+1} I(X^{i−k}; X̂_i|X̂^{i−1}) :
information quantity obtained for free
[Figure: Binary symmetric transmission A ⇒ Â — Reality's moves and distortion; inputs a_1, a_2, outputs â_1, â_2, distortions 0 and 1]
[Figure: Binary symmetric transmission A ⇒ Â — Forecaster's moves]
■ Binary symmetric transmission
R_g(D) = I(A; Â)
⇔ ρ = P_{Â|A}(x_2|x_1) = P_{Â|A}(x_1|x_2) = D
σ = Q_{Â|A}(x_2|x_1) = Q_{Â|A}(x_1|x_2) = (α_1 − a_1 + D(α_2 − α_1))/(a_2 − a_1)
R_g(D) = D(P_{Â|A}‖Q_{Â|A}|P_A) = D(ρ‖σ) − D(a_1‖α_1)
Q_C = Q_A × Q_B ⇔ σ = α_1 = 0.5, Q_{Â|A} = P_Â
R_g(D) = D(P_{Â|A}‖P_Â|P_A) = D(ρ‖0.5) − D(a_1‖0.5) = H(a_1) − H(D) = R(D)
R_g(0) = D(δ_{Â|A}‖P_Â|P_A) = D(0‖0.5) − D(a_1‖0.5) = H(a_1) = R(0)
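A minimal numerical check of R_g(D) = H(a_1) − H(D) in this Q_C = Q_A × Q_B case; the values a_1 = 0.3 and D = 0.1 are illustrative choices:

```python
from math import log2

def H(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0, 1) else -p*log2(p) - (1-p)*log2(1-p)

def Dkl(p, q):
    """Binary K-L divergence D(p || q) in bits."""
    s = 0.0
    if p > 0:
        s += p * log2(p / q)
    if p < 1:
        s += (1 - p) * log2((1 - p) / (1 - q))
    return s

a1, D = 0.3, 0.1                       # source bias and target distortion
Rg = Dkl(D, 0.5) - Dkl(a1, 0.5)        # D(rho||0.5) - D(a1||0.5) with rho = D
assert abs(Rg - (H(a1) - H(D))) < 1e-12                      # R_g(D) = H(a1) - H(D)
assert abs(Dkl(0.0, 0.5) - Dkl(a1, 0.5) - H(a1)) < 1e-12     # R_g(0) = H(a1)
print(round(Rg, 4))
```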
[Figure: Rate distortion function R_g(D) in BST for 0 ≤ D ≤ 0.4, with a_1 = 0.3 and α_1 = 0.4, 0.5, 0.6]
■ Conditional betting game Â|A given A
• Notations
χ^n_{x1}, χ^n_{x0} : number of x_i = 1, 0 (i = 1, . . . , n)
χ^n_{x̂1}, χ^n_{x̂0} : number of x̂_i = 1, 0 (i = 1, . . . , n)
χ^n_{11}, χ^n_{10}, χ^n_{01}, χ^n_{00} :
number of (x_{i−k}, x̂_i) = (1, 1), (1, 0), (0, 1), (0, 0) (i = k + 1, . . . , n)
P^±_Â(x̂^n|x^n) = P^+_Â(x̂^n|x^n) × P^−_Â(x̂^n|x^n)
P^+_Â(x̂^n|x^n) = B(χ^n_{11} + c_1, χ^n_{10} + c_0)/B(c_1, c_0)
P^−_Â(x̂^n|x^n) = B(χ^n_{01} + c_1, χ^n_{00} + c_0)/B(c_1, c_0) :
conditional beta binomial distributions modeled by Skeptic
Q_Â(x̂^n|x^n) = P_A(x̂^n) = B(χ^n_{x1} + c_1, χ^n_{x0} + c_0)/B(c_1, c_0) :
beta binomial distribution modeled by Forecaster
Maximize E_{P^±_Â}[log K_n] ⇒ {α^{Â±}_i}^n_{i=1}
α^{Â±}_i = 0, 1 ≤ i ≤ k
α^{Â±}_i = α^{Â+}_i if x_{i−k} = 1, α^{Â−}_i if x_{i−k} = 0, k + 1 ≤ i ≤ n
α^{Â+}_i = (p^+_i − q_i)/(q_i(1 − q_i)), α^{Â−}_i = (p^−_i − q_i)/(q_i(1 − q_i))
p^+_i = P^+_Â(x̂_i = 1|x^{i−k}) = (χ^{i−1}_{11} + c_1)/(χ^{i−1}_{11} + χ^{i−1}_{10} + c_1 + c_0)
p^−_i = P^−_Â(x̂_i = 1|x^{i−k}) = (χ^{i−1}_{01} + c_1)/(χ^{i−1}_{01} + χ^{i−1}_{00} + c_1 + c_0)
q_i = Q_A(x̂_i = 1|x^{i−k}) = (χ^{i−1}_{x1} + c_1)/(i − 1 + c_1 + c_0)
The optimal capital process of Skeptic is expressed as the likelihood ratio
K^{P^±_Â}(x̂^n|x^n) = Π^n_{i=1} (1 + α^{Â±}_i(x̂_i − q_i))
= Π^n_{i=1} P^±_Â(x̂_i|x^{i−k})/Q_A(x̂_i|x^{i−k})
= P^+_Â(x̂^n|x^n) × P^−_Â(x̂^n|x^n) / Q_A(x̂^n|x^n)
log K^{P^±_Â}(x̂^n|x^n) = nD(p̄_{n,x̂|x}‖q̄_n|p̄_{n,x}) − (1/2)(log χ^n_{x1} + log χ^n_{x0}) + O(1)
p̄_{n,x̂|1} = (χ^n_{11}/χ^n_{x1}, χ^n_{10}/χ^n_{x1}), p̄_{n,x̂|0} = (χ^n_{01}/χ^n_{x0}, χ^n_{00}/χ^n_{x0}) :
empirical conditional prob. of Reality
q̄_n = (χ^n_{x1}/n, χ^n_{x0}/n) : empirical risk neutral prob. of Forecaster
D(p̄_{n,x̂|x}‖q̄_n|p̄_{n,x}) : empirical conditional K-L divergence
3.2 Efficient source coding scheme
■ Betting strategy and data compression (a variant of the arithmetic coding)
• Encoding
x^n = x_1 . . . x_n ∈ {0, 1}^n ⇒ x̂^n = x̂_1 . . . x̂_n ∈ {0, 1}^n such that P_{X̂^n|X^n} achieves R_kff(D)
x̂|x(n) = (x̂_1|x_1, . . . , x̂_n|x_n) : observed sequence by the encoder
2^n sequences {x̂^n} : in lexicographical order
The encoder calculates the cumulative sum
G^±_Â(x̂|x(n)) = Σ_{x̂′|x(n) ≤ x̂|x(n) : typical} R^±_Â(x̂′|x(n))
R^±_Â(x̂′|x(n)) = 1/K^{P^±_Â}(x̂′|x(n)) = Q_A(x̂′|x(n))/P^±_Â(x̂′|x(n))
x̂′|x(n) : typical ⇔ |(1/n) log K^{P^±_Â}(x̂′|x(n)) − R_kff(D)| < ε
G^±_Â(x̂|x(n)) ∈ [0, 1] as n → ∞
ℓ = ⌈log K^{P^±_Â}(x̂|x(n))⌉ + 1, m = ⌈log n⌉ : specified numbers of bits
⌊G^±_Â(x̂|x(n))⌋_ℓ = .c_1c_2 . . . c_ℓ, c_i ∈ {0, 1} : binary decimal to ℓ place accuracy
χ^n_{x1} = d_1d_2 . . . d_m, d_i ∈ {0, 1} : binary number to m digits
c(ℓ) = (c_1, c_2, . . . , c_ℓ), d(m) = (d_1, d_2, . . . , d_m) : code sequences sent to the decoder
• Decoding
When the k-delayed feedforward of X^n is available and χ^n_{x1} is known, the decoder can also sequentially calculate the cumulative sum
G^±_Â(x̂|x(n)) = Σ_{x̂′|x(n) ≤ x̂|x(n) : typical} R^±_Â(x̂′|x(n))
until G^±_Â(x̂|x(n)) ≥ .c(ℓ)
⇒ x̂|x(n) : the encoded sequence
From the expression
log K^{P^±_Â}(x̂^n|x^n) = nD(p̄_{n,x̂|x}‖q̄_n|p̄_{n,x}) − (1/2)(log χ^n_{x1} + log χ^n_{x0}) + O(1),
the required number of bits is
ℓ + m = ⌈log K^{P^±_Â}(x̂|x(n))⌉ + 1 + ⌈log n⌉ = ⌈nD(p̄_{n,x̂|x}‖q̄_n|p̄_{n,x})⌉ + O(log n)
The empirical codeword length per source symbol is
L*_n = (ℓ + m)/n ≤ D(p̄_{n,x̂|x}‖q̄_n|p̄_{n,x}) + O((log n)/n) → R_kff(D) as n → ∞
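The cumulative-sum encoding and threshold decoding above follow the Shannon-Fano-Elias pattern. A minimal self-contained sketch of that mechanism, using an i.i.d. model p(x) in place of the normalized inverse-capital weights R^±_Â over typical sequences (θ and n are illustrative choices):

```python
from itertools import product
from math import ceil, log2

theta, n = 0.7, 6                       # illustrative Bernoulli model and block length

def prob(s):
    """i.i.d. Bernoulli(theta) probability of a 0/1 tuple s."""
    k = sum(s)
    return theta**k * (1 - theta)**(n - k)

seqs = list(product((0, 1), repeat=n))  # all 2^n sequences in lexicographical order

def encode(x):
    """Truncate the inclusive cumulative sum G(x) to l binary places."""
    G = sum(prob(s) for s in seqs if s <= x)
    l = ceil(log2(1 / prob(x))) + 1     # cf. l = ceil(log K) + 1 in the slides
    return int(G * 2**l), l             # codeword .c_1...c_l = c / 2^l

def decode(c, l):
    """Accumulate the same sums until G >= .c(l), as the decoder does."""
    G = 0.0
    for s in seqs:
        G += prob(s)
        if G >= c / 2**l:
            return s

assert all(decode(*encode(x)) == x for x in seqs)
print("all", len(seqs), "sequences decode correctly")
```

With the extra bit in ℓ, the truncated sum lands strictly above the previous sequence's cumulative sum, so the threshold rule always stops at the encoded sequence.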
References
[1] Leo Breiman. Optimal gambling systems for favorable games. Fourth Berkeley Symposium on Probability and Statistics I, 65-78, 1961.
[2] Thomas M. Cover. Universal portfolios. Mathematical Finance, 1 (1), 1-29, 1991.
[3] John Larry Kelly Jr. A new interpretation of information rate. Bell System Technical Journal, Vol. 35, 917-926, 1956.
[4] L. C. MacLean, E. O. Thorp and W. T. Ziemba eds. The Kelly Capital Growth Investment Criterion: Theory and Practice. Handbook in Financial Economics Series, Vol. 3, World Scientific, London, 2010.
[5] M. Kumon, A. Takemura and K. Takeuchi. Capital process and optimality properties of a Bayesian Skeptic in coin-tossing games. Stochastic Analysis and Applications, 26, 1161-1180, 2008.
[6] K. Takeuchi, M. Kumon and A. Takemura. A new formulation of asset trading games in continuous time with essential forcing of variation exponent. Bernoulli, 15, 1243-1258, 2009.
[7] K. Takeuchi, M. Kumon and A. Takemura. Multistep Bayesian strategy in coin-tossing games and its application to asset trading games in continuous time. Stochastic Analysis and Applications, 28, 842-861, 2010.
[8] K. Takeuchi, A. Takemura and M. Kumon. New procedures for testing whether stock price processes are martingales. Computational Economics, 37, 67-88, 2011.
[9] M. Kumon, A. Takemura and K. Takeuchi. Sequential optimizing strategy in multi-dimensional bounded forecasting games. Stochastic Processes and their Applications, 121, 155-183, 2011.
[10] M. Kumon. Studies of information quantities and information geometry of higher order cumulant spaces. Statistical Methodology, 7, 152-172, 2010.
[11] M. Kumon, A. Takemura and K. Takeuchi. Conformal geometry of statistical manifold with application to sequential estimation. Sequential Analysis, 30, 308-337, 2011.