Page 1: Moments of Random Matrices and Weingarten Functions

Moments of Random Matrices and Weingarten Functions

by

Yinzheng Gu

A project submitted to the Department of Mathematics and Statistics

in conformity with the requirements for the degree of Master of Science

Queen's University
Kingston, Ontario, Canada

August 2013

Copyright © Yinzheng Gu, 2013


Abstract

Let G be the unitary group, orthogonal group, or (compact) symplectic group, equipped with

its Haar probability measure, and suppose that G is realized as a matrix group. Consider a

random matrix $X = (x_{i,j})_{1\le i,j\le N}$ picked from $G$. We would like to know how to compute the moments
\[
E\big[x_{i_1j_1}\cdots x_{i_nj_n}\,\overline{x_{i'_1j'_1}}\cdots\overline{x_{i'_nj'_n}}\big]
\quad\text{or}\quad
E\big[x_{i_1j_1}\cdots x_{i_{2n}j_{2n}}\big].
\]

In this report, we focus on the unitary group UN and use the methods established in [5]

and [9], which express the moments as sums in terms of Weingarten functions. The function WgU(·, N), called the unitary Weingarten function, has rich combinatorial structures

involving Jucys-Murphy elements. We discuss and prove some of its properties.

Finally, we consider some applications of the formula for integration with respect to the

Haar measure over the unitary group UN . We compute matrix-valued expectations with the

goal of having a better understanding of the operator-valued Cauchy transform.


Acknowledgments

First and foremost, I would like to thank my supervisors Serban Belinschi and James A.

Mingo for their guidance and continuous support.

I would also like to thank all of my professors and fellow students. I have learned a great

deal from their wisdom and helpfulness.

I am also thankful to all members and staff in the Department of Mathematics and

Statistics at Queen’s University.

Finally, I am grateful to my family, especially my parents for their unconditional love

and encouragement.


Table of Contents

Abstract i

Acknowledgments ii

Table of Contents iii

1 Introduction and Preliminaries 1
1.1 Expectation of products of entries 1
1.1.1 Integration formula for the unitary group 1
1.1.2 Integration formula for the orthogonal group 4
1.2 Partitions and pairings 5

2 Weingarten Functions and Jucys-Murphy Elements 8
2.1 Gram matrices and Weingarten matrices 8
2.2 Jucys-Murphy elements 10
2.3 The group algebra C[Sn] 12
2.4 Group representation theory 13
2.4.1 Basic definitions 14
2.4.2 The regular representation 15
2.4.3 Maschke's theorem 15
2.4.4 Artin-Wedderburn theorem 17
2.5 Young tableaux 17
2.5.1 Basic definitions 17
2.5.2 The irreducible representations of Sn 19
2.6 Some properties of unitary Weingarten functions 21
2.6.1 The invertibility of G 21
2.6.2 Proof of Proposition 2.4 22
2.6.3 Asymptotics of WgU 23
2.7 Additional properties of unitary Weingarten functions 24
2.7.1 Explicit formulas for G and WgU 24
2.7.2 Character expansion of WgU 28

3 Integration over the Unitary Group and Applications 30
3.1 Unitarily invariant random matrices 31
3.2 Expectation of products of matrices 34
3.3 Matricial cumulants 37

4 Conclusion and Future Work 44
4.1 The Cauchy transform 44
4.2 The operator-valued Cauchy transform 45

A Free Probability Theory 49

B Tables of Values 51
B.1 Unitary Weingarten functions 51
B.2 Orthogonal Weingarten functions 52

Bibliography 53


Chapter 1

Introduction and Preliminaries

Let UN denote the group of N × N unitary matrices, with the group operation being matrix multiplication. In other words, an N × N complex matrix U belongs to UN if UU* = U*U = IN, where IN is the N × N identity matrix and U* is the conjugate transpose of U.

Viewing UN as a subset of MN(C) – the algebra of N × N complex matrices with the usual matrix multiplication – it is common to endow UN with the corresponding subspace topology, so that UN is compact as a topological space. By Haar's theorem, every locally compact Hausdorff topological group has a unique (up to a positive multiplicative constant) left-translation-invariant measure and a unique (up to a positive multiplicative constant) right-translation-invariant measure. When the group is compact, these two measures coincide, and we call it the Haar measure. This Haar measure is finite, so we can normalize it to a probability measure – the unique Haar probability measure on the compact group.

Definition 1.1. A random matrix is a matrix of given type and size whose entries consist of random numbers from some specified distribution. In other words, a random matrix is a matrix-valued random variable.

Definition 1.2. We equip the compact group UN with its Haar probability measure – a probability measure on UN which is invariant under multiplication from the left and multiplication from the right by any N × N unitary matrix. Random matrices distributed according to this measure will be called Haar-distributed unitary random matrices. Thus the expectation E over this ensemble is given by integrating with respect to the Haar probability measure.

1.1 Expectation of products of entries

1.1.1 Integration formula for the unitary group

The expectation of products of entries (also called moments or matrix integrals) of Haar-distributed unitary random matrices can be described in terms of a special function on the


permutation group. Since such considerations go back to Weingarten [23], Collins [5] calls this function the (unitary) Weingarten function and denotes it by WgU.

Notation 1.3. Let Sn be the symmetric group on [n] = {1, 2, . . . , n}. As we shall see later, for α ∈ Sn and n ≤ N,
\[
\mathrm{Wg}^U(\alpha, N)
= \int_{U_N} U_{11}\cdots U_{nn}\,\overline{U_{1\alpha(1)}}\cdots\overline{U_{n\alpha(n)}}\,dU
= E\big[U_{11}\cdots U_{nn}\,\overline{U_{1\alpha(1)}}\cdots\overline{U_{n\alpha(n)}}\big],
\]
where U is an N × N Haar-distributed unitary random matrix, dU is the normalized Haar measure, and WgU is called the (unitary) Weingarten function.

A crucial property of the Weingarten function WgU(α, N) is that it depends only on the conjugacy class (in other words, the cycle structure) of the permutation α. This will be explained in more detail in the next chapter. First, we introduce some notation by an example.

Example 1.4. As mentioned above, WgU(α, N) depends only on the cycle structure of α. Therefore, when a table of values is provided, the value of WgU(α, N) is usually given according to the cycle structure of α. For example, WgU([2, 1], N) denotes the common value at every α ∈ S3 whose cycle decomposition consists of a transposition and a fixed point.

Suppose α, β, γ ∈ S3 are such that α = (1, 2) = (1, 2)(3), β = (1, 3) = (1, 3)(2), and γ = (1, 2, 3). Then by the table of values of the unitary Weingarten functions provided in Appendix B, we have
\[
\mathrm{Wg}^U(\alpha, N) = \mathrm{Wg}^U(\beta, N) = \mathrm{Wg}^U([2, 1], N) = \frac{-1}{(N^2 - 1)(N^2 - 4)}
\]
and
\[
\mathrm{Wg}^U(\gamma, N) = \mathrm{Wg}^U([3], N) = \frac{2}{N(N^2 - 1)(N^2 - 4)}.
\]

In the paper [9], a formula is given so that general matrix integrals over the unitary group UN can be calculated as follows:

Theorem 1.5. Let U be an N × N Haar-distributed unitary random matrix and n ≤ N, then
\[
E\big[U_{i_1j_1}\cdots U_{i_nj_n}\,\overline{U_{i'_1j'_1}}\cdots\overline{U_{i'_nj'_n}}\big]
= \sum_{\alpha,\beta\in S_n}
\delta_{i_1 i'_{\alpha(1)}}\cdots\delta_{i_n i'_{\alpha(n)}}\,
\delta_{j_1 j'_{\beta(1)}}\cdots\delta_{j_n j'_{\beta(n)}}\,
\mathrm{Wg}^U(\beta\alpha^{-1}, N),
\tag{1.1}
\]
where U_{ij} denotes the (i, j)th entry of U and
\[
\delta_{ij} = \begin{cases} 1, & i = j, \\ 0, & i \neq j. \end{cases}
\]

Remark 1.6. 1. Note that Notation 1.3 follows immediately from this theorem, and thus $E\big[U_{11}\cdots U_{nn}\,\overline{U_{1\alpha(1)}}\cdots\overline{U_{n\alpha(n)}}\big]$ is sometimes referred to as the integral representation of WgU(α, N).

2. If n ≠ n′, then by the invariance of the Haar measure, we have
\[
E\big[U_{i_1j_1}\cdots U_{i_nj_n}\,\overline{U_{i'_1j'_1}}\cdots\overline{U_{i'_{n'}j'_{n'}}}\big]
= E\big[\lambda U_{i_1j_1}\cdots \lambda U_{i_nj_n}\,\overline{\lambda U_{i'_1j'_1}}\cdots\overline{\lambda U_{i'_{n'}j'_{n'}}}\big]
\]
for every λ ∈ C with |λ| = 1. Without loss of generality, assume n > n′. Then
\[
E\big[U_{i_1j_1}\cdots U_{i_nj_n}\,\overline{U_{i'_1j'_1}}\cdots\overline{U_{i'_{n'}j'_{n'}}}\big]
= \lambda^{\,n-n'}\,E\big[U_{i_1j_1}\cdots U_{i_nj_n}\,\overline{U_{i'_1j'_1}}\cdots\overline{U_{i'_{n'}j'_{n'}}}\big],
\]
and thus the integral vanishes, since it is impossible to have $\lambda^{\,n-n'} = 1$ for all λ ∈ C with |λ| = 1.
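To make formula (1.1) concrete, here is a minimal brute-force sketch in Python of its right-hand side for n ≤ 2. This is an illustration, not part of the report: the helper names (`cycle_type`, `moment`) are ours, and the n = 2 values WgU([1, 1], N) = 1/(N²−1) and WgU([2], N) = −1/(N(N²−1)) are assumed from the standard tables (only WgU([1], N) = 1/N is used explicitly in this chapter).

```python
from itertools import permutations

def cycle_type(perm):
    """Cycle type of a permutation in 0-indexed one-line notation."""
    n, seen, cycles = len(perm), set(), []
    for s in range(n):
        if s not in seen:
            length, cur = 0, s
            while cur not in seen:
                seen.add(cur)
                cur = perm[cur]
                length += 1
            cycles.append(length)
    return tuple(sorted(cycles, reverse=True))

# Unitary Weingarten values for n = 1, 2, assumed from the standard tables.
WG_TABLE = {
    (1,):   lambda N: 1 / N,
    (1, 1): lambda N: 1 / (N**2 - 1),
    (2,):   lambda N: -1 / (N * (N**2 - 1)),
}

def moment(i, j, ip, jp, N):
    """Right-hand side of (1.1) for E[U_{i1 j1}...U_{in jn} conj(U_{i'1 j'1})...]."""
    n = len(i)
    total = 0.0
    for alpha in permutations(range(n)):
        inv_alpha = tuple(sorted(range(n), key=lambda k: alpha[k]))  # alpha^{-1}
        for beta in permutations(range(n)):
            if all(i[k] == ip[alpha[k]] for k in range(n)) and \
               all(j[k] == jp[beta[k]] for k in range(n)):
                beta_alpha_inv = tuple(beta[inv_alpha[k]] for k in range(n))
                total += WG_TABLE[cycle_type(beta_alpha_inv)](N)
    return total

N = 5
print(moment((1,), (1,), (1,), (1,), N))          # E[|U_11|^2]           = 1/N
print(moment((1, 2), (1, 2), (1, 2), (1, 2), N))  # E[|U_11|^2 |U_22|^2]  = 1/(N^2-1)
print(moment((1, 1), (1, 1), (1, 1), (1, 1), N))  # E[|U_11|^4]           = 2/(N(N+1))
```

The three printed moments agree with the classical closed forms, which is a useful sanity check on the delta-function bookkeeping in (1.1).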

Notation 1.7. Let $A = (a_{ij})_{i,j=1}^{N} \in M_N(\mathbb{C})$ be an N × N complex matrix. We denote by Tr (or TrN) the non-normalized trace and by tr (or trN) the normalized trace:
\[
\mathrm{Tr}(A) := \sum_{i=1}^{N} a_{ii}
\quad\text{and}\quad
\mathrm{tr}(A) := \frac{1}{N}\sum_{i=1}^{N} a_{ii} = \frac{1}{N}\,\mathrm{Tr}(A).
\]

Example 1.8. (An application of Theorem 1.5)
Let U be an N × N Haar-distributed unitary random matrix and B be an N × N complex matrix. We wish to compute E[UBU*], where E denotes expectation with respect to the Haar measure.

Note that E[UBU*] denotes the matrix-valued expectation of the random matrix UBU*. In other words,
\[
E[UBU^*] := \left(\int_{U_N} (UBU^*)_{ij}\,dU\right)_{i,j=1}^{N},
\]
where dU is the normalized Haar measure.

Therefore we first compute the (i, j)th entry of E[UBU*] and obtain
\[
\begin{aligned}
E[UBU^*]_{ij}
&= \sum_{j_1,j_2=1}^{N} E\big[U_{ij_1}B_{j_1j_2}(U^*)_{j_2j}\big] \\
&= \sum_{j_1,j_2=1}^{N} B_{j_1j_2}\,E\big[U_{ij_1}\overline{U_{jj_2}}\big] \\
&= \sum_{j_1,j_2=1}^{N} B_{j_1j_2}\,\delta_{ij}\delta_{j_1j_2}\,\mathrm{Wg}^U(e, N)
\quad\text{(by (1.1) with $n = 1$ and $\alpha = \beta = e$)} \\
&= \delta_{ij}\sum_{j_1=1}^{N} B_{j_1j_1}\,\mathrm{Wg}^U([1], N) \\
&= \delta_{ij}\,\frac{1}{N}\,\mathrm{Tr}(B) \\
&= \delta_{ij}\,\mathrm{tr}(B).
\end{aligned}
\]

This allows us to conclude that E[UBU*] = tr(B) IN, where IN denotes the N × N identity matrix.
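This identity is easy to test numerically. The sketch below (our own helper `haar_unitary`; parameters are illustrative) samples Haar-distributed unitaries via the QR decomposition of a complex Ginibre matrix with the standard phase correction, averages UBU*, and compares the result to tr(B)·IN.

```python
import numpy as np

def haar_unitary(N, rng):
    """Haar-distributed N x N unitary: QR of a complex Ginibre matrix,
    with the diagonal phase correction that makes the law exactly Haar."""
    Z = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    d = np.diagonal(R)
    return Q * (d / np.abs(d))

rng = np.random.default_rng(0)
N, trials = 4, 20_000
B = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))

avg = np.zeros((N, N), dtype=complex)
for _ in range(trials):
    U = haar_unitary(N, rng)
    avg += U @ B @ U.conj().T
avg /= trials

# E[U B U*] should equal tr(B) I_N, with tr the normalized trace.
print(np.max(np.abs(avg - (np.trace(B) / N) * np.eye(N))))  # small, O(1/sqrt(trials))
```

The Monte Carlo error decays like 1/√trials, so the printed deviation should be on the order of 10⁻².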

1.1.2 Integration formula for the orthogonal group

The real analogue of a unitary matrix is an orthogonal matrix.

Definition 1.9. An orthogonal matrix is a square matrix with real entries whose rows and columns are orthogonal unit vectors (i.e. orthonormal vectors). Equivalently, a matrix O is orthogonal if its transpose is equal to its inverse: O^T = O^{−1}. The set of N × N orthogonal matrices forms a group ON, known as the orthogonal group.

The main focus of this report will be on the unitary group UN, but it is worth mentioning that in the paper [9], the authors also defined a function WgO on S2n which they called the orthogonal Weingarten function. They then proved the following formula for integration with respect to the Haar measure over the orthogonal group ON when n ≤ N:

Theorem 1.10. Let O be an N × N Haar-distributed orthogonal random matrix, then
\[
E[O_{i_1j_1}\cdots O_{i_{2n}j_{2n}}]
= \sum_{p,q\in P_2(2n)}
\delta_{i_1 i_{p(1)}}\cdots\delta_{i_{2n} i_{p(2n)}}\,
\delta_{j_1 j_{q(1)}}\cdots\delta_{j_{2n} j_{q(2n)}}\,
\big\langle \mathrm{Wg}^O(p), q\big\rangle,
\tag{1.2}
\]
where P2(2n) is the set of all pairings of [2n] = {1, 2, . . . , 2n} and ⟨WgO(p), q⟩ denotes the (p, q)th entry of the orthogonal Weingarten matrix (to be introduced in Section 2.1).

Observe that, similar to the unitary case, the moments of an odd number of factors vanish. In other words, we always have $E\big[O_{i_1j_1}\cdots O_{i_{2n+1}j_{2n+1}}\big] = 0$, as the Haar measure is invariant under the transformation O → −O. However, to fully understand (1.1) and (1.2), we need some preliminaries on the constituent parts of the formulas, to be discussed in the following sections.

It is also worth mentioning that the authors of [9] considered the case of integration over the symplectic group in their paper as well, but for our purposes we will not discuss it in this report.

1.2 Partitions and pairings

Definition 1.11. Let S be a finite totally ordered set. We call π = {V1, . . . , Vr} a partition of the set S if and only if the Vi (1 ≤ i ≤ r) are pairwise disjoint, non-empty subsets of S such that V1 ∪ · · · ∪ Vr = S. We call V1, . . . , Vr the blocks of π.

Definition 1.12. For any integer n ≥ 1, let P(n) be the set of all partitions of [n]. If n is even, then a partition π ∈ P(n) is called a pair partition or pairing if each block of π consists of exactly two elements. The set of all pairings of [n] is denoted by P2(n). For any partition π ∈ P(n), let #(π) denote the number of blocks of π (also counting singletons).

Let Sn be the symmetric group on [n]. Given a permutation π ∈ Sn, it is often convenient to consider the cycles of π as a partition of [n]; thus #(π) also denotes the number of cycles in the cycle decomposition of π (also counting fixed points) when π is a permutation in Sn. This map Sn → P(n) forgets the order of the elements in a cycle and so is not a bijection. Conversely, given a partition π ∈ P(n), we put the elements of each block into increasing order and consider this as a permutation. Restricted to pairings, this is a bijection.

Definition 1.13. Let π, σ be in P(n). The set P(n) is a finite partially ordered set (poset) in which π ≤ σ means that each block of π is completely contained in one of the blocks of σ (that is, π can be obtained from σ by refining the block structure). The partial order obtained in this way is called the reversed refinement order.

Definition 1.14. Let P be a finite partially ordered set and let π, σ be in P. If the set U = {τ ∈ P | τ ≥ π and τ ≥ σ} is non-empty and has a minimum τ0 (that is, an element τ0 ∈ U which is smaller than all the other elements of U), then τ0 is called the join of π and σ, and is denoted by π ∨ σ.

To make use of formula (1.2), we need to address one more question: given two pairings p and q, what is the relationship between the cycles of pq and the blocks of p ∨ q? First we illustrate it with an example.


Example 1.15. Let p, q ∈ P2(8) be such that p = (1, 2)(3, 5)(4, 8)(6, 7) and q = (1, 6)(2, 5)(3, 7)(4, 8). Then pq = (1, 7, 5)(2, 3, 6)(4)(8) and p ∨ q = {{1, 2, 3, 5, 6, 7}, {4, 8}}. We notice that the cycles of pq appear in pairs, in the sense that if (i1, . . . , ik) is a cycle of pq, then (q(ik), . . . , q(i1)) is also a cycle of pq. This is not a coincidence, as shown by the following lemma.

Lemma 1.16. (Lemma 2 in [16])
Let p, q ∈ P2(n) be pairings and (i1, i2, . . . , ik) a cycle of pq. Let jr = q(ir). Then (jk, j_{k−1}, . . . , j1) is also a cycle of pq, and these two cycles are distinct; {i1, . . . , ik, j1, . . . , jk} is a block of p ∨ q and all blocks are of this form; #(pq) = 2#(p ∨ q).

Proof. Since (i1, i2, . . . , ik) is a cycle of pq, we have pq(ir) = i_{r+1}; thus jr = q(ir) = p·pq(ir) = p(i_{r+1}). Hence pq(j_{r+1}) = pq·q(i_{r+1}) = p(i_{r+1}) = jr. If {i1, . . . , ik} and {j1, . . . , jk} were to have a non-empty intersection, then for some n, q(pq)^n would have a fixed point, but this would in turn imply that either p or q had a fixed point, which is impossible. Since $\{q(i_r)\}_{r=1}^{k} = \{j_s\}_{s=1}^{k}$ and $\{p(j_s)\}_{s=1}^{k} = \{i_r\}_{r=1}^{k}$, the set {i1, . . . , ik, j1, . . . , jk} must be a block of p ∨ q. Since every point of [n] is in some cycle of pq, all blocks must be of this form. Since every block of p ∨ q is the union of two cycles of pq, we have #(pq) = 2#(p ∨ q).
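The statement of Lemma 1.16 is easy to check computationally on the pairings of Example 1.15. In the sketch below (helper names ours; pairings are stored as involution dicts on {1, …, 8}), the blocks of p ∨ q are computed as connected components of the union of the two pairings via a small union-find.

```python
def cycles(perm):
    """Cycle decomposition of a permutation stored as a dict i -> perm[i]."""
    seen, out = set(), []
    for s in perm:
        if s not in seen:
            c, cur = [], s
            while cur not in seen:
                seen.add(cur)
                c.append(cur)
                cur = perm[cur]
            out.append(tuple(c))
    return out

def compose(p, q):
    """(pq)(i) = p(q(i))."""
    return {i: p[q[i]] for i in q}

def join_blocks(p, q):
    """Blocks of p v q: connected components of the union of the two pairings."""
    parent = {i: i for i in p}
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for perm in (p, q):
        for i in perm:
            parent[find(i)] = find(perm[i])
    blocks = {}
    for i in p:
        blocks.setdefault(find(i), set()).add(i)
    return list(blocks.values())

# Example 1.15: p = (1,2)(3,5)(4,8)(6,7) and q = (1,6)(2,5)(3,7)(4,8)
p = {1: 2, 2: 1, 3: 5, 5: 3, 4: 8, 8: 4, 6: 7, 7: 6}
q = {1: 6, 6: 1, 2: 5, 5: 2, 3: 7, 7: 3, 4: 8, 8: 4}
pq = compose(p, q)

print(cycles(pq))                                     # four cycles, in mirrored pairs
print(len(cycles(pq)) == 2 * len(join_blocks(p, q)))  # True: #(pq) = 2 #(p v q)
```

Running this reproduces the cycles (1, 7, 5), (2, 3, 6), (4), (8) and the two blocks of p ∨ q from the example.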

This gives us a way to obtain the value of ⟨WgO(p), q⟩ for given pairings p and q from the table of values of the orthogonal Weingarten functions provided in Appendix B: since the cycles of pq always appear in pairs, we choose one cycle from each pair of cycles (in other words, delete half of the cycles of pq); then the value of ⟨WgO(p), q⟩ depends only on the cycle structure of the modified pq.

In the previous example, where p = (1, 2)(3, 5)(4, 8)(6, 7), q = (1, 6)(2, 5)(3, 7)(4, 8), and pq = (1, 7, 5)(2, 3, 6)(4)(8), we have
\[
\big\langle \mathrm{Wg}^O(p), q\big\rangle = \mathrm{Wg}^O([3, 1], N)
= \frac{2}{(N-1)(N-2)(N-3)(N+1)(N+2)(N+6)}.
\]

Now let us see how formula (1.2) can be applied to calculate general matrix integrals over the orthogonal group ON.

Example 1.17. Suppose O is an N × N Haar-distributed orthogonal random matrix, where N ≥ 2.

1. Integrands of the type $O_{i_1j_1}\cdots O_{i_{2n}j_{2n}}$ whose indexes do not admit pairings integrate to zero. For example, $E[O_{11}^3 O_{12}] = E[O_{11}O_{11}O_{11}O_{12}] = 0$.

2. If $O_{i_1j_1}O_{i_2j_2}O_{i_3j_3}O_{i_4j_4} = O_{11}^2 O_{22}^2 = O_{11}O_{11}O_{22}O_{22}$, then the indexes only admit one pairing, namely p = {{1, 2}, {3, 4}} and q = {{1, 2}, {3, 4}}.


Thus
\[
E\big[O_{11}^2 O_{22}^2\big]
= \big\langle \mathrm{Wg}^O(\{\{1,2\},\{3,4\}\}),\ \{\{1,2\},\{3,4\}\}\big\rangle
= \mathrm{Wg}^O([1, 1], N)
= \frac{N+1}{N(N-1)(N+2)}.
\]

3. If $O_{i_1j_1}O_{i_2j_2}O_{i_3j_3}O_{i_4j_4} = O_{11}^4 = O_{11}O_{11}O_{11}O_{11}$, then all pairings are admissible because all the indexes are the same, equal to 1. A similar computation shows that
\[
E\big[O_{11}^4\big] = \frac{3(N+1) - 6}{N(N-1)(N+2)} = \frac{3}{N(N+2)}.
\]
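The value in part 3 can be checked by Monte Carlo without sampling full orthogonal matrices: it is a standard fact that the first column of a Haar-distributed orthogonal matrix is a uniform point on the unit sphere, so O_11 has the law of the first coordinate of a normalized Gaussian vector. The sketch below (parameter choices ours) uses this reduction.

```python
import numpy as np

# Monte Carlo check of E[O_11^4] = 3/(N(N+2)) from part 3 of Example 1.17.
# O_11 has the law of g_1/||g|| with g ~ N(0, I_N), since the first column
# of a Haar orthogonal matrix is uniform on the sphere S^{N-1}.
rng = np.random.default_rng(1)
N, trials = 3, 200_000
g = rng.standard_normal((trials, N))
x1 = g[:, 0] / np.linalg.norm(g, axis=1)

print(np.mean(x1 ** 4))        # ~ 3/(N(N+2)) = 0.2 for N = 3
print(3 / (N * (N + 2)))
```

For N = 3 the two printed numbers agree to about two decimal places with this sample size.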


Chapter 2

Weingarten Functions and Jucys-Murphy Elements

In this chapter, we study some of the properties of Weingarten functions.

2.1 Gram matrices and Weingarten matrices

Let N be a positive integer. We define the unitary Gram matrix $G^{U_N}_n = \big(G^{U_N}_n(\alpha, \beta)\big)_{\alpha,\beta\in S_n}$ by
\[
G^{U_N}_n(\alpha, \beta) = N^{\#(\alpha^{-1}\beta)},
\]
and let $\mathrm{Wg}^{U_N}_n = \big(\mathrm{Wg}^{U_N}_n(\alpha, \beta)\big)_{\alpha,\beta\in S_n}$ be the pseudo-inverse of $G^{U_N}_n$. We call $\mathrm{Wg}^{U_N}_n$ the unitary Weingarten matrix.

Proposition 2.1. For any m × n matrix A, there exists a unique n × m matrix A⁺, called the Moore-Penrose pseudo-inverse (or simply pseudo-inverse) of A, satisfying all of the following four criteria:

1. AA+A = A;

2. A+AA+ = A+;

3. (AA+)∗ = AA+;

4. (A+A)∗ = A+A.

Note that if A is invertible, then A+ = A−1.

Proof. First, if D is an m × n (rectangular) diagonal matrix, then we define an n × m matrix D⁺ whose entries are
\[
(D^+)_{ij} = \begin{cases} (D_{ii})^{-1}, & \text{if } i = j \le \min\{m, n\} \text{ and } D_{ii} \neq 0, \\ 0, & \text{otherwise.} \end{cases}
\]


It is easy to check that D+ is the pseudo-inverse of D.

The existence of A⁺ then follows from the singular value decomposition theorem, which states that any m × n matrix A has a factorization of the form A = UDV*, where U is an m × m unitary matrix, D is an m × n (rectangular) diagonal matrix with non-negative real numbers on the diagonal, and V* is the conjugate transpose of an n × n unitary matrix V. Let A⁺ = VD⁺U*; we can show that A⁺ is the pseudo-inverse of A:

1. AA+A = UDV ∗V D+U∗UDV ∗ = UDD+DV ∗ = UDV ∗ = A;

2. A+AA+ = V D+U∗UDV ∗V D+U∗ = V D+DD+U∗ = V D+U∗ = A+;

3. (AA⁺)* = (UDV*VD⁺U*)* = (UDD⁺U*)* = U(DD⁺)*U* = UDD⁺U* = UDV*VD⁺U* = AA⁺;

4. (A⁺A)* = (VD⁺U*UDV*)* = (VD⁺DV*)* = V(D⁺D)*V* = V(D⁺D)V* = VD⁺U*UDV* = A⁺A.

To prove the uniqueness of A⁺, suppose both B and C are n × m matrices satisfying all of the pseudo-inverse criteria; then AB = (AB)* = B*A* = B*(ACA)* = B*A*C*A* = (AB)*(AC)* = ABAC = AC. Similarly, we have BA = CA, and therefore B = BAB = BAC = CAC = C.
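The four criteria are easy to verify numerically with numpy's SVD-based pseudo-inverse; the sketch below does so on a random rectangular matrix (the matrix size and seed are arbitrary choices of ours).

```python
import numpy as np

# Numerical check of the four Moore-Penrose criteria of Proposition 2.1
# on a random 4 x 6 real matrix, using numpy.linalg.pinv (computed via SVD).
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 6))
Ap = np.linalg.pinv(A)

print(np.allclose(A @ Ap @ A, A))              # criterion 1: A A+ A = A
print(np.allclose(Ap @ A @ Ap, Ap))            # criterion 2: A+ A A+ = A+
print(np.allclose((A @ Ap).conj().T, A @ Ap))  # criterion 3: (A A+)* = A A+
print(np.allclose((Ap @ A).conj().T, Ap @ A))  # criterion 4: (A+ A)* = A+ A
```

All four checks print `True` up to floating-point tolerance.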

Analogously, we define the orthogonal Gram matrix $G^{O_N}_n = \big(G^{O_N}_n(p, q)\big)_{p,q\in P_2(2n)}$ by
\[
G^{O_N}_n(p, q) = N^{\#(p\vee q)},
\]
and the orthogonal Weingarten matrix $\mathrm{Wg}^{O_N}_n$ as the pseudo-inverse of $G^{O_N}_n$.

Example 2.2. Suppose n = 3 and N ≥ n, then

\[
G^{U_N}_3 =
\begin{array}{c|cccccc}
 & () & (1,2) & (1,3) & (2,3) & (1,2,3) & (1,3,2) \\
\hline
() & N^3 & N^2 & N^2 & N^2 & N & N \\
(1,2) & N^2 & N^3 & N & N & N^2 & N^2 \\
(1,3) & N^2 & N & N^3 & N & N^2 & N^2 \\
(2,3) & N^2 & N & N & N^3 & N^2 & N^2 \\
(1,2,3) & N & N^2 & N^2 & N^2 & N^3 & N \\
(1,3,2) & N & N^2 & N^2 & N^2 & N & N^3
\end{array}.
\]

Since the determinant of $G^{U_N}_3$ is $N^6(N^2-1)^5(N^2-4)$ and N ≥ 3 by assumption, $G^{U_N}_3$ is invertible and we have $\mathrm{Wg}^{U_N}_3 = \big(G^{U_N}_3\big)^{-1}$
\[
= \frac{1}{(N^2-1)(N^2-4)}
\begin{pmatrix}
\frac{N^2-2}{N} & -1 & -1 & -1 & \frac{2}{N} & \frac{2}{N} \\[2pt]
-1 & \frac{N^2-2}{N} & \frac{2}{N} & \frac{2}{N} & -1 & -1 \\[2pt]
-1 & \frac{2}{N} & \frac{N^2-2}{N} & \frac{2}{N} & -1 & -1 \\[2pt]
-1 & \frac{2}{N} & \frac{2}{N} & \frac{N^2-2}{N} & -1 & -1 \\[2pt]
\frac{2}{N} & -1 & -1 & -1 & \frac{N^2-2}{N} & \frac{2}{N} \\[2pt]
\frac{2}{N} & -1 & -1 & -1 & \frac{2}{N} & \frac{N^2-2}{N}
\end{pmatrix}.
\]

Therefore $\sum_{\sigma\in S_3}\mathrm{Wg}^U(\sigma, N)$ is the sum of the entries of any row or column of $\mathrm{Wg}^{U_N}_3$. In particular, choosing the first row gives us
\[
\sum_{\sigma\in S_3}\mathrm{Wg}^U(\sigma, N)
= \sum_{\beta\in S_3}\mathrm{Wg}^{U_N}_3(e, \beta)
= \frac{1}{(N^2-1)(N^2-4)}\cdot\frac{N^2 - 2 - 3N + 4}{N}
= \frac{1}{N(N+1)(N+2)}.
\]

Remark 2.3. Note that we implicitly used the fact that the elements in the first row of the unitary Weingarten matrix $\mathrm{Wg}^{U_N}_n$ define the unitary Weingarten function WgU by
\[
\mathrm{Wg}^U(\pi, N) = \mathrm{Wg}^{U_N}_n(e, \pi),
\]
where π ∈ Sn. This will be explained in more detail in the following sections.

Moreover, the calculation above leads to the following known fact. We state it as a proposition here, to be proved later.

Proposition 2.4. For all k ≥ 1,
\[
\sum_{\sigma\in S_k}\mathrm{Wg}^U(\sigma, N) = \frac{1}{N(N+1)\cdots(N+k-1)}.
\]
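Proposition 2.4 can be checked directly for k = 3 by building the 6 × 6 Gram matrix of Example 2.2 from scratch and summing the first row of its inverse. The sketch below (helper names `num_cycles`, `gram_matrix` are ours) does this for N = 5.

```python
import numpy as np
from itertools import permutations

def num_cycles(perm):
    """Number of cycles of a permutation in 0-indexed one-line notation."""
    n, seen, c = len(perm), set(), 0
    for s in range(n):
        if s not in seen:
            cur = s
            while cur not in seen:
                seen.add(cur)
                cur = perm[cur]
            c += 1
    return c

def gram_matrix(n, N):
    """G^{U_N}_n(alpha, beta) = N^{#(alpha^{-1} beta)}, rows/cols indexed by S_n."""
    elems = list(permutations(range(n)))   # the identity (0,1,...,n-1) comes first
    def inv(a):
        b = [0] * len(a)
        for k, ak in enumerate(a):
            b[ak] = k
        return tuple(b)
    def mul(a, b):                         # (ab)(k) = a(b(k))
        return tuple(a[bk] for bk in b)
    G = np.array([[float(N) ** num_cycles(mul(inv(a), b)) for b in elems]
                  for a in elems])
    return G, elems

n, N = 3, 5
G, elems = gram_matrix(n, N)
Wg = np.linalg.inv(G)                      # invertible since N >= n

print(Wg[0].sum())                         # sum over beta of Wg^{U_N}_3(e, beta)
print(1 / (N * (N + 1) * (N + 2)))         # Proposition 2.4 with k = 3: 1/210
```

Both printed values are 1/210 for N = 5, in agreement with the row-sum computation above.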

2.2 Jucys-Murphy elements

Let n be a positive integer. The group algebra C[Sn] is an algebra (over C) with the symmetric group Sn as a basis, with multiplication defined by extending the group multiplication linearly, and with an involution * defined by σ* = σ⁻¹ for σ ∈ Sn and extended conjugate-linearly.


Definition 2.5. Let n be a positive integer. Consider the natural embedding C[S1] ⊂ C[S2] ⊂ · · · ⊂ C[Sn] ⊂ · · · , where elements of C[Sk] act trivially on numbers greater than k. The Jucys-Murphy elements J1, J2, . . . , Jn ∈ C[Sn] are transposition sums defined by
\[
J_1 = 0 \quad\text{and}\quad J_k = (1, k) + (2, k) + \cdots + (k-1, k) \ \text{ for } 2 \le k \le n.
\]

Remark 2.6. 1. The definition of J1 is a convenient convention.

2. For n ≥ 2, Jn commutes with C[S_{n−1}]. Indeed, for every σ ∈ S_{n−1}, we have
\[
\sigma J_n \sigma^{-1} = \sum_{i=1}^{n-1}\sigma(i, n)\sigma^{-1} = \sum_{i=1}^{n-1}(\sigma(i), n) = J_n.
\]
Therefore JmJn = JnJm for all m and n. This implies that C[J2, . . . , Jn] is a commutative subalgebra of C[Sn]. It is in fact a maximal commutative subalgebra, known as the Gelfand-Zetlin subalgebra, as shown by Okounkov and Vershik in [20].

Next, we need the classical identity (see [13]).

Theorem 2.7. (Jucys) Let N be a positive integer, then

\[
\prod_{k=1}^{n}(N + J_k) = \sum_{\sigma\in S_n} N^{\#(\sigma)}\,\sigma.
\tag{2.1}
\]

There are many ways to prove this identity. We will follow the one provided by Zinn-Justin as Proposition 1 in [24], based on a standard inductive construction of permutations.

Proof. First, note that the right-hand side has n! terms and the left-hand side is equal to (N + J1)(N + J2)···(N + Jn), which also has n! terms after expanding the product. Our proof will consist of a term-by-term identification by induction on n.

If n = 1, then the statement is trivial, as both the left-hand side and the right-hand side are equal to N. Assume the statement holds for all natural numbers less than or equal to n − 1 and consider a permutation σ ∈ Sn. The goal is to identify $N^{\#(\sigma)}\sigma$ with one of the terms in the expansion of (N + J1)···(N + Jn). There are two cases:

1. If n is a fixed point of σ, then we apply the induction hypothesis to $\sigma|_{\{1,\dots,n-1\}} \in S_{n-1}$, and we know that $N^{\#(\sigma|_{\{1,\dots,n-1\}})}\,\sigma|_{\{1,\dots,n-1\}}$ corresponds to one term in (N + J1)···(N + J_{n−1}). Furthermore, since σ has one more cycle than $\sigma|_{\{1,\dots,n-1\}}$, we can identify $N^{\#(\sigma)}\sigma$ with the term in (N + J1)···(N + Jn) obtained by making the same choices as for $N^{\#(\sigma|_{\{1,\dots,n-1\}})}\,\sigma|_{\{1,\dots,n-1\}}$ in the first n − 1 factors, and further picking the multiplication by N in N + Jn.

2. Suppose n is not a fixed point. First, we notice that if σ = (other cycles of σ)(σ⁻¹(n), n, σ(n), . . .), then σ(σ⁻¹(n), n) = (other cycles of σ)(σ⁻¹(n), σ(n), . . .)(n), and thus we can apply the induction hypothesis to $(\sigma(\sigma^{-1}(n), n))|_{\{1,\dots,n-1\}} \in S_{n-1}$. Since σ has as many cycles as $(\sigma(\sigma^{-1}(n), n))|_{\{1,\dots,n-1\}}$, we can identify $N^{\#(\sigma)}\sigma$ with the term in (N + J1)···(N + Jn) obtained by making the same choices as for $N^{\#((\sigma(\sigma^{-1}(n),n))|_{\{1,\dots,n-1\}})}\,(\sigma(\sigma^{-1}(n), n))|_{\{1,\dots,n-1\}}$ in the first n − 1 factors, and further picking the transposition (σ⁻¹(n), n) inside N + Jn.

This completes the proof.
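Identity (2.1) can also be verified exactly by machine for small n. The sketch below (our own representation: group-algebra elements as dicts mapping 0-indexed one-line permutations to integer coefficients) expands the left-hand side for n = 3 and compares it term by term with the right-hand side.

```python
from itertools import permutations
from collections import defaultdict

# Direct check of identity (2.1) in C[S_n] for n = 3.
n, N = 3, 7

def mul_perm(a, b):                      # (ab)(i) = a(b(i))
    return tuple(a[b[i]] for i in range(n))

def mul_alg(x, y):                       # product in the group algebra
    out = defaultdict(int)
    for a, ca in x.items():
        for b, cb in y.items():
            out[mul_perm(a, b)] += ca * cb
    return dict(out)

def transposition(i, k):                 # the transposition (i, k), 0-indexed
    t = list(range(n))
    t[i], t[k] = t[k], t[i]
    return tuple(t)

def num_cycles(p):
    seen, c = set(), 0
    for s in range(n):
        if s not in seen:
            cur = s
            while cur not in seen:
                seen.add(cur)
                cur = p[cur]
            c += 1
    return c

e = tuple(range(n))
lhs = {e: 1}
for k in range(n):                       # multiply the factors N + J_{k+1}; J_1 = 0
    factor = defaultdict(int, {e: N})
    for i in range(k):
        factor[transposition(i, k)] += 1
    lhs = mul_alg(lhs, dict(factor))

rhs = {p: N ** num_cycles(p) for p in permutations(range(n))}
print(lhs == rhs)                        # True: the two sides of (2.1) agree
```

Since all coefficients are integers, the comparison is exact, not approximate.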

This theorem will play an important role in our study of the unitary Weingarten functions. First, we review some relevant concepts.

2.3 The group algebra C[Sn]

Let C[Sn] be the group algebra with the symmetric group Sn as a basis.

If A(Sn) is the algebra of all complex-valued functions on Sn with multiplication given by the convolution
\[
(f * g)(\pi) = \sum_{\sigma\in S_n} f(\sigma)\,g(\sigma^{-1}\pi), \qquad (f, g \in A(S_n),\ \pi\in S_n),
\]
then C[Sn] is isomorphic as a complex algebra to A(Sn), via the map
\[
\varphi: \mathbb{C}[S_n] \to A(S_n), \qquad \sigma \mapsto \delta_\sigma,
\]
where δσ : Sn → C is the function defined by δσ(π) = 1 if σ = π and δσ(π) = 0 if σ ≠ π.

It is easily checked that for any σ1, σ2 ∈ Sn, δ_{σ1σ2} = δ_{σ1} * δ_{σ2} under this map:
\[
(\delta_{\sigma_1} * \delta_{\sigma_2})(\pi)
= \sum_{\tau\in S_n}\delta_{\sigma_1}(\tau)\,\delta_{\sigma_2}(\tau^{-1}\pi)
= \delta_{\sigma_1}(\sigma_1)\,\delta_{\sigma_2}(\sigma_1^{-1}\pi)
= \delta_{\sigma_2}(\sigma_1^{-1}\pi)
= \delta_{\sigma_1\sigma_2}(\pi), \qquad \pi\in S_n.
\]
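The displayed convolution identity is simple enough to check mechanically; the sketch below (helper names ours; permutations as 0-indexed tuples) verifies it for a pair of transpositions in S3.

```python
from itertools import permutations

# Sanity check of delta_{s1} * delta_{s2} = delta_{s1 s2} in A(S_3).
S3 = list(permutations(range(3)))

def mul(a, b):                         # (ab)(i) = a(b(i))
    return tuple(a[b[i]] for i in range(3))

def inv(a):
    out = [0] * 3
    for i, ai in enumerate(a):
        out[ai] = i
    return tuple(out)

def conv(f, g):                        # (f * g)(pi) = sum_sigma f(sigma) g(sigma^{-1} pi)
    return {pi: sum(f[s] * g[mul(inv(s), pi)] for s in S3) for pi in S3}

def delta(s):
    return {pi: int(pi == s) for pi in S3}

s1, s2 = (1, 0, 2), (0, 2, 1)          # the transpositions (1,2) and (2,3)
print(conv(delta(s1), delta(s2)) == delta(mul(s1, s2)))  # True
```

The same `conv` function implements the multiplication of A(Sn) used throughout this section.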

In general, if $a = \sum_{\sigma\in S_n}\alpha_\sigma\sigma \in \mathbb{C}[S_n]$ where ασ ∈ C, then $a \mapsto a' = \sum_{\sigma\in S_n}\alpha_\sigma\delta_\sigma$ under ϕ. Since
\[
a'(\pi) = \Big(\sum_{\sigma\in S_n}\alpha_\sigma\delta_\sigma\Big)(\pi) = \alpha_\pi,
\]
a is sent to the function a′ that maps π to απ.


Moreover, if $b = \sum_{\tau\in S_n}\beta_\tau\tau \in \mathbb{C}[S_n]$ where βτ ∈ C, then $b \mapsto b' = \sum_{\tau\in S_n}\beta_\tau\delta_\tau$ under ϕ, and so
\[
ab = \Big(\sum_{\sigma\in S_n}\alpha_\sigma\sigma\Big)\Big(\sum_{\tau\in S_n}\beta_\tau\tau\Big)
= \sum_{\sigma,\tau\in S_n}\alpha_\sigma\beta_\tau\,\sigma\tau
= \sum_{\sigma\in S_n}\Big(\sum_{\tau\in S_n}\alpha_\tau\beta_{\tau^{-1}\sigma}\Big)\sigma.
\]
Therefore $ab \mapsto (ab)' = \sum_{\sigma\in S_n}\big(\sum_{\tau\in S_n}\alpha_\tau\beta_{\tau^{-1}\sigma}\big)\delta_\sigma$ as an element in A(Sn), and thus
\[
(ab)'(\pi)
= \Big(\sum_{\sigma\in S_n}\Big(\sum_{\tau\in S_n}\alpha_\tau\beta_{\tau^{-1}\sigma}\Big)\delta_\sigma\Big)(\pi)
= \sum_{\tau\in S_n}\alpha_\tau\beta_{\tau^{-1}\pi}
= \sum_{\tau\in S_n}a'(\tau)\,b'(\tau^{-1}\pi)
= (a' * b')(\pi), \qquad \pi\in S_n.
\]

Under this isomorphism, we may pass easily between C[Sn] and A(Sn). For simplicity, we will denote both C[Sn] and A(Sn) by C[Sn] in the following sections; it should be clear from context which algebra we refer to.

Now, note that the right-hand side of (2.1) is $\sum_{\sigma\in S_n} N^{\#(\sigma)}\sigma \in \mathbb{C}[S_n]$. It is sent to $\sum_{\sigma\in S_n} N^{\#(\sigma)}\delta_\sigma \in A(S_n)$ under the map ϕ. Since
\[
\Big(\sum_{\sigma\in S_n} N^{\#(\sigma)}\delta_\sigma\Big)(\pi) = N^{\#(\pi)},
\]
this means $\sum_{\sigma\in S_n} N^{\#(\sigma)}\sigma$ is sent to the function that maps π to $N^{\#(\pi)}$. If we let G ∈ A(Sn) be the function G(σ) = $N^{\#(\sigma)}$, then (2.1) becomes
\[
\prod_{k=1}^{n}(N + J_k) = G.
\]

2.4 Group representation theory

In this section, we briefly review some concepts from group representation theory that will be used later in this report. There may be more information than is strictly needed, but it serves as a good introduction to the subject nevertheless.


2.4.1 Basic definitions

We begin with some basic definitions.

Definition 2.8. A representation of a group G on a vector space V over a field K is a group homomorphism from G to GL(V), the general linear group on V. In other words, a representation is a map ρ : G → GL(V) such that ρ(g1g2) = ρ(g1)ρ(g2) for all g1, g2 ∈ G.

Definition 2.9. Let ρ : G→ GL(V ) be a representation.

1. Let V be an n-dimensional vector space and K = C. If we pick a basis of V, then every linear map corresponds to a matrix, so we get an isomorphism GL(V) ≅ GLn(C) (dependent on the basis). The representation ρ now becomes a homomorphism
\[
\rho: G \to GL_n(\mathbb{C}),
\]
and the number n is called the dimension (or the degree) of the representation.

2. The trivial representation of G assigns to each element of G the identity map on V .

3. If ρ is injective, then we say ρ is a faithful representation. In other words, different elements g of G are represented by distinct linear mappings ρ(g), and thus ker(ρ) = {e}.

4. A subrepresentation of ρ is a vector space W ⊆ V such that
\[
\rho(g)(x) \in W \quad \forall\, g \in G \text{ and } x \in W.
\]
This means that every ρ(g) defines a linear map from W to W, i.e. we have a representation of G on the subspace W, and we call W an invariant subspace of ρ.

If we pick a basis of a subrepresentation W and extend it to V, then for all g ∈ G, ρ(g) takes the form
\[
\begin{pmatrix} * & * \\ 0 & * \end{pmatrix},
\]
where the top-left block is a dim(W) × dim(W) matrix. Thus, this block gives the matrix for a representation of G on W.

5. If ρ has exactly two subrepresentations, namely the 0-dimensional subspace and V itself, then the representation ρ is said to be irreducible. If it has a proper subrepresentation of non-zero dimension, the representation is said to be reducible. Note that the representation of dimension 0 is considered to be neither reducible nor irreducible, and any 1-dimensional representation is irreducible.

6. Let G = Sn. If
\[
\rho: \sigma \mapsto 1, \qquad \tau \mapsto -1
\]
for all even permutations σ and odd permutations τ, then ρ is called the sign representation, i.e. ρ(σ) is simply multiplication by sgn(σ) for each σ ∈ Sn.


7. Let G = Sn and V be an n-dimensional vector space with standard basis {e1, . . . , en}. Then we have the permutation representation ρ(σ)(ei) = e_{σ(i)}, σ ∈ Sn. For example, suppose G = S3; then we have, in matrix form,
\[
\rho((1,2)) = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\quad\text{and}\quad
\rho((2,3)) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}.
\]

In other words, ρ is a permutation representation if ρ(g) is a permutation matrix forall g ∈ G.

8. The character of the representation ρ is the function
\[
\chi_\rho: G \to K, \qquad g \mapsto \mathrm{Tr}(\rho(g)).
\]
Note that if g1 and g2 are conjugate in G, then χρ(g1) = χρ(g2), since the trace is constant under conjugation, so we can view χρ as a function on the conjugacy classes of G.

2.4.2 The regular representation

Recall that C[Sn] is the algebra of all complex-valued functions on Sn. It has a basis {δσ}_{σ∈Sn}, where δσ is the function sending σ to 1 ∈ C and all other group elements to 0 ∈ C.

Given a function f : Sn → C and an element σ ∈ Sn, the left regular representation λ is defined on the basis element δσ by
\[
\lambda(\delta_\sigma): f \mapsto \lambda(\delta_\sigma)f,
\qquad \text{where } (\lambda(\delta_\sigma)f)(\pi) = f(\sigma^{-1}\pi) \text{ for all } \pi\in S_n.
\]

Similarly, the right regular representation ρ sends the function f to ρ(δσ)f on the basis element δσ, where (ρ(δσ)f)(π) = f(πσ) for all π ∈ Sn.

Now, if G ∈ C[Sn] is the function G(σ) = $N^{\#(\sigma)}$, it can be seen that the unitary Gram matrix $G^{U_N}_n$ is the matrix of G acting in either the left or the right regular representation of C[Sn], with standard basis {δσ}_{σ∈Sn}. If $G^{U_N}_n$ is invertible, then G is invertible in C[Sn], and we let the inverse of G be WgU, the unitary Weingarten function.

2.4.3 Maschke’s theorem

Let V and W be two vector spaces over C. Recall that the direct sum V ⊕ W is the vector space of all pairs (x, y) such that x ∈ V and y ∈ W. Its dimension is dim(V) + dim(W).


Suppose G is a group and we have representations

ρV : G→ GL(V ),

ρW : G→ GL(W ).

Then there is a representation ρ_{V⊕W} of G on V ⊕ W given by
\[
\rho_{V\oplus W}: G \to GL(V \oplus W),
\qquad
\rho_{V\oplus W}(g): (x, y) \mapsto \big(\rho_V(g)(x),\ \rho_W(g)(y)\big).
\]

Moreover, suppose dim(V) = n and dim(W) = m. We can choose a basis a1, . . . , an for V and a basis b1, . . . , bm for W; then the set
\[
\{(a_1, 0), \ldots, (a_n, 0), (0, b_1), \ldots, (0, b_m)\}
\]
is a basis for V ⊕ W. For all g ∈ G, ρV(g) ∈ GLn(C), ρW(g) ∈ GLm(C), and ρ_{V⊕W}(g) is given by the (n + m) × (n + m) matrix
\[
\begin{pmatrix} \rho_V(g) & 0 \\ 0 & \rho_W(g) \end{pmatrix}
\]
under the induced basis on V ⊕ W.

A matrix of this form is called block-diagonal.

Remark 2.10. Suppose ρ : G → GL(V) is a representation. Note that

1. If V = W ⊕ U, then the subspace {(x, 0) | x ∈ W} ⊆ W ⊕ U is a subrepresentation, and it is isomorphic to W. Similarly, the subspace {(0, y) | y ∈ U} is isomorphic to U. The intersection of these two subrepresentations is {0}.

2. If W ⊆ V and U ⊆ V are subrepresentations such that W ∩ U = {0} and dim(W) + dim(U) = dim(V), then V ≅ W ⊕ U.

This raises the following question: suppose ρ : G → GL(V) is a representation and W ⊆ V is a subrepresentation; is there another subrepresentation U ⊆ V such that V = W ⊕ U?

It turns out the answer to this question is always yes, and U is called a complementary subrepresentation to W. This is known as Maschke's theorem. It is an important theorem in group representation theory, and the proof can be found in most graduate-level algebra texts (see, e.g., [10]). We will state the theorem here and omit the proof.

Theorem 2.11. (Maschke’s theorem)Let ρ : G → GL(V ) be a representation and W ⊆ V be a subrepresentation. Then thereexists a complementary subrepresentation U ⊆ V to W .

This leads to the following result, called the complete reducibility theorem.


Corollary 2.12. Every complex representation of a finite group can be written as a direct sum

W1 ⊕ W2 ⊕ · · · ⊕ Wr

of subrepresentations, where each Wi is irreducible.

Proof. Let G be a finite group and V a representation of G of dimension n. If V is irreducible, then the statement holds trivially. If not, V contains a proper subrepresentation W ⊆ V, and by Maschke's theorem, V = W ⊕ U for some other subrepresentation U.

Both W and U have dimension less than n. If they are both irreducible, the proof is complete. If not, at least one of them contains a proper subrepresentation, so it splits as a direct sum of smaller representations. Since n is finite, this process will terminate in a finite number of steps.

Definition 2.13. If a representation can be decomposed as a direct sum of irreducible subrepresentations, then it is said to be completely reducible.

2.4.4 Artin-Wedderburn theorem

The Artin-Wedderburn theorem is a classification theorem for semisimple rings and semisimple algebras. For our purposes, we only need the following corollary.

Corollary 2.14. Let G be a finite group. Then

C[G] ≅ ⊕_i M_{ni}(C),

where the direct sum is indexed by (all of) the irreducible representations of G and ni is the dimension of the ith irreducible representation.

2.5 Young tableaux

In this section, we explore a connection between representations of the symmetric group Sn and combinatorial objects called Young tableaux.

2.5.1 Basic definitions

Definition 2.15. A partition of a positive integer n is a sequence of positive integers λ = (λ1, λ2, . . . , λr) satisfying λ1 ≥ λ2 ≥ · · · ≥ λr > 0 and n = λ1 + λ2 + · · · + λr. We write λ ⊢ n to denote that λ is a partition of n.


Definition 2.16. A Young diagram is an array of boxes arranged in left-justified rows, with the row sizes weakly decreasing (i.e. non-increasing). The Young diagram associated to the partition λ = (λ1, λ2, . . . , λr) is the one that has r rows, with λi boxes in the ith row.

Example 2.17. The number 4 has five partitions: (4), (3, 1), (2, 2), (2, 1, 1), and (1, 1, 1, 1), each with its associated Young diagram.

It is clear that there is a one-to-one correspondence between partitions and Young dia-grams.
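This correspondence is easy to make concrete in code. The following Python sketch (the function name `partitions` is ours) enumerates the partitions of n as weakly decreasing tuples, i.e. the row lengths of the Young diagrams:

```python
def partitions(n, max_part=None):
    """Yield all partitions of n as weakly decreasing tuples of positive integers."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    # Choose the largest part first, then partition the remainder
    # with parts no bigger than it, so the rows are weakly decreasing.
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

# The five partitions of 4 from Example 2.17:
print(list(partitions(4)))
# [(4,), (3, 1), (2, 2), (2, 1, 1), (1, 1, 1, 1)]
```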

Definition 2.18. Suppose λ ⊢ n. A Young tableau T of shape λ is an assignment of the numbers 1, 2, . . . , n to the n boxes of the Young diagram associated to λ such that each number occurs exactly once.

For example, here are all six Young tableaux corresponding to the partition (2, 1), written row by row (rows separated by a slash):

[1 2 / 3], [1 3 / 2], [2 1 / 3], [2 3 / 1], [3 1 / 2], [3 2 / 1].

Definition 2.19. A standard Young tableau T of shape λ, denoted by SYT(λ), is a Young tableau whose entries are increasing along each row (to the right) and each column (downwards).

The only two standard Young tableaux for (2, 1) are (rows separated by a slash)

[1 2 / 3] and [1 3 / 2].

Definition 2.20. 1. A skew shape is a pair of partitions (λ, µ) such that the Young diagram of λ contains the Young diagram of µ; it is denoted by λ/µ.

2. The skew diagram of a skew shape λ/µ is the set-theoretic difference of the Young diagrams of λ and µ: the set of squares that belong to the diagram of λ but not to that of µ.

3. A skew tableau of shape λ/µ is obtained by filling the squares of the corresponding skew diagram.

For example, the following is a skew tableau of shape (5, 4, 2, 2)/(3, 1), written row by row with · marking the removed squares:

[· · · 1 7 / · 9 6 2 / 5 4 / 3 8].

2.5.2 The irreducible representations of Sn

Representations are often used to characterize finite groups. In this report, we are mostly interested in the symmetric group Sn. By Corollary 2.12, we know that every representation of Sn can be decomposed as a direct sum of a finite number of irreducible representations, but the corollary does not tell us the number of distinct irreducible representations of a given group or their dimensions. The answer to the first question is given by the following theorem, sometimes called the completeness of irreducible characters.

Theorem 2.21. The number of irreducible representations of a finite group is equal to the number of conjugacy classes of that group.

The proof of this theorem is beyond the scope of this report and we refer to [11] for those who are interested.

Since we focus on Sn, we want to know the number of conjugacy classes and then classify all representations of Sn. Let us take a look at S3 first.

Definition 2.22. The standard representation of the symmetric group on [n] = {1, 2, . . . , n} is an irreducible representation of dimension n − 1 (over a field whose characteristic does not divide n!) defined in the following way:

Take the permutation representation of Sn as defined in Definition 2.9 and look at the (n − 1)-dimensional subspace of vectors whose coordinates in the standard basis sum to zero. The representation restricts to an irreducible representation of dimension n − 1 on this subspace. This is the standard representation.


Example 2.23. Since S3 has three conjugacy classes, it has three irreducible representations:

1. One of them is the trivial representation, which is on C and acts by σ(α) = α for σ ∈ S3 and α ∈ C.

2. Another one is the alternating representation (or sign representation), which is also on C, but acts by σ(α) = (sgn(σ))α for σ ∈ S3 and α ∈ C.

3. The third one is the standard representation and it is on the subspace

{(z1, z2, z3) ∈ C³ | z1 + z2 + z3 = 0},

acting by σ((z1, z2, z3)) = (z_{σ(1)}, z_{σ(2)}, z_{σ(3)}) for σ ∈ S3 and (z1, z2, z3) ∈ C³.

These are the only irreducible representations of S3. Let ρ be a representation of S3 and let ρtriv, ρalt, and ρstd denote the trivial, alternating, and standard representations respectively. Then for any σ ∈ S3, ρ(σ) = a·ρtriv(σ) ⊕ b·ρalt(σ) ⊕ c·ρstd(σ), where the multiplicities a, b, and c are determined by ρ. The dimensions of ρtriv(σ), ρalt(σ), and ρstd(σ) are 1, 1, and 2 respectively.

Note that for n > 3, there are more irreducible representations than just these three, and it turns out that the number of conjugacy classes of Sn is the number of ways of writing n as a sum of a sequence of non-increasing positive integers.

Proposition 2.24. The conjugacy classes of Sn correspond to the partitions of n.

Proof. The conjugacy classes of Sn are uniquely determined by the cycle type of their elements, and each class contains exactly one cycle type. Since the cycle type of a permutation in Sn is a partition of n, this gives a bijection between the conjugacy classes of Sn and the partitions of n.

Therefore the number of irreducible representations of Sn is the same as the number of Young diagrams corresponding to the partitions of n. For example, there are five irreducible representations of S4 according to Example 2.17.

The dimension of each irreducible representation of Sn corresponding to a partition λ of n is equal to the number of different standard Young tableaux that can be obtained from the Young diagram of the representation.

Example 2.25. One of the partitions of 4 is λ = (2, 1, 1); its associated Young diagram has three rows, with 2, 1, and 1 boxes.


There are three standard Young tableaux of shape λ, namely (rows separated by a slash)

[1 2 / 3 / 4], [1 3 / 2 / 4], and [1 4 / 2 / 3].

Therefore the irreducible representation of S4 that corresponds to the partition (2, 1, 1) is 3-dimensional.

Alternatively, we have a direct formula for the dimension of an irreducible representation of Sn, known as the hook-length formula:

To each box in the Young diagram associated to λ, we assign a number called the hook-length. The hook-length for a box is calculated by taking the number of boxes in the same row to the right of it, plus the number of boxes in the same column below it, plus one (for the box itself).

Let h be the product of all hook-lengths in the Young diagram; then the dimension of the irreducible representation corresponding to λ is n!/h.

In the previous example where n = 4 and λ = (2, 1, 1), if we label each box of the associated Young diagram with its hook-length, we have (rows separated by a slash)

[4 1 / 2 / 1].

Therefore the corresponding irreducible representation of S4 has dimension 4!/8 = 3, as expected.
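The hook-length computation is easy to automate; a small Python sketch (the helper names are ours):

```python
from math import factorial

def hook_lengths(shape):
    """Hook-lengths of the Young diagram with the given row lengths, row by row."""
    rows = len(shape)
    return [[(shape[i] - j - 1)                                    # boxes to the right
             + sum(1 for k in range(i + 1, rows) if shape[k] > j)  # boxes below
             + 1                                                   # the box itself
             for j in range(shape[i])]
            for i in range(rows)]

def irrep_dimension(shape):
    """Dimension n!/h of the irreducible representation of S_n labelled by `shape`."""
    n = sum(shape)
    h = 1
    for row in hook_lengths(shape):
        for hook in row:
            h *= hook
    return factorial(n) // h

print(hook_lengths((2, 1, 1)))     # [[4, 1], [2], [1]]
print(irrep_dimension((2, 1, 1)))  # 3
```

A further consistency check: by the Artin-Wedderburn decomposition of Section 2.4.4, the squared dimensions over all partitions of n must add up to n! = |Sn|.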

2.6 Some properties of unitary Weingarten functions

We are now ready to discuss and prove some of the properties of unitary Weingarten functions.

2.6.1 The invertibility of G

Recall that ∏_{k=1}^n (N + Jk) = G, where Jk ∈ C[Sn] are the Jucys-Murphy elements and G ∈ C[Sn] is the function G(σ) = N^{#(σ)}. Every permutation in Sn acts in the left regular representation of C[Sn] as a permutation matrix – a matrix created by rearranging the rows of an identity matrix – and this action extends linearly (and faithfully) to C[Sn], so we can define a matrix norm on C[Sn] through it.

For a square matrix A, there are many ways to assign a matrix norm. We will use the spectral norm, where ‖A‖ = √(max{λ | λ is an eigenvalue of A∗A}). Under this norm, ‖σ‖ = ‖Aσ‖ = 1 for every permutation matrix Aσ corresponding to the permutation σ ∈ Sn. In particular, ‖(r, s)‖ = 1 for every transposition (r, s) ∈ Sn and thus

‖Jk‖ = ‖(1, k) + · · · + (k − 1, k)‖ ≤ ‖(1, k)‖ + · · · + ‖(k − 1, k)‖ = k − 1.

It is known that if ‖x‖ < 1, then 1 + x is invertible with (1 + x)^{−1} = ∑_{k=0}^∞ (−1)^k x^k.

Therefore if we assume n ≤ N, then k ≤ N for all k ≤ n and so k − 1 < N, which implies N + Jk is invertible since N + Jk = N(1 + N^{−1}Jk) and ‖N^{−1}Jk‖ = ‖Jk‖/N ≤ (k − 1)/N < 1.

This holds for all k = 1, . . . , n, and thus G = ∏_{k=1}^n (N + Jk) is invertible in C[Sn] when n ≤ N.

2.6.2 Proof of Proposition 2.4

When n ≤ N, we have G^{−1} = WgU ∈ C[Sn]. Let ρtriv be the trivial representation of Sn, i.e. ρtriv(σ) = 1 for all σ ∈ Sn; then ρtriv(Jk) = ρtriv((1, k) + · · · + (k − 1, k)) = k − 1. If we apply ρtriv to both sides of (2.1), we have

ρtriv(∏_{k=1}^n (N + Jk)) = ρtriv(G)
⇒ ∏_{k=1}^n (N + k − 1) = ρtriv(G)
⇒ ∏_{k=1}^n (N + k − 1)^{−1} = (ρtriv(G))^{−1} = ρtriv(G^{−1}) = ρtriv(WgU).

Note that every f ∈ C[Sn] can be written as f = ∑_{σ∈Sn} f(σ)δσ. Therefore

ρtriv(f) = ρtriv(∑_{σ∈Sn} f(σ)δσ) = ∑_{σ∈Sn} f(σ)ρtriv(δσ) = ∑_{σ∈Sn} f(σ),

and thus

∏_{k=1}^n (N + k − 1)^{−1} = ∑_{σ∈Sn} WgU(σ, N).

This proves part of Proposition 2.4, i.e.

∑_{σ∈Sn} WgU(σ, N) = 1/(N(N + 1) · · · (N + n − 1))

when n ≤ N.
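The value ρtriv(G) = ∑_{σ∈Sn} N^{#(σ)} = N(N + 1) · · · (N + n − 1) used in this argument can be verified by brute force for small n; a quick Python sketch (the function names are ours):

```python
from itertools import permutations

def cycle_count(perm):
    """Number of cycles, including fixed points, of a permutation in one-line notation."""
    seen, cycles = set(), 0
    for start in range(len(perm)):
        if start not in seen:
            cycles += 1
            j = start
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return cycles

def rho_triv_G(n, N):
    """ρ_triv(G) = Σ_{σ ∈ S_n} N^{#(σ)}."""
    return sum(N ** cycle_count(p) for p in permutations(range(n)))

def rising_factorial(N, n):
    """N(N+1)···(N+n−1)."""
    out = 1
    for k in range(n):
        out *= N + k
    return out

print(rho_triv_G(4, 7), rising_factorial(7, 4))  # 5040 5040
```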

If n > N, then WgU is the pseudo-inverse of G and, as functions in C[Sn], we have G ∗ WgU ∗ G = G. Therefore

ρtriv(G) = ρtriv(G ∗ WgU ∗ G)
         = ρtriv(G)ρtriv(WgU)ρtriv(G)   (since ρtriv is a representation)
         = (ρtriv(G))² ρtriv(WgU)       (since ρtriv is 1-dimensional)
⇒ ρtriv(WgU) = (ρtriv(G))^{−1}          (since ρtriv(G) ≠ 0)
             = ∏_{k=1}^n (N + k − 1)^{−1},

which is the same as before when n ≤ N. This proves Proposition 2.4 completely.

2.6.3 Asymptotics of WgU

Consider the length function | · | on Sn, where |σ| (for σ ∈ Sn) is the minimal non-negative integer l such that σ can be written as a product of l transpositions. In the paper [9], the authors showed the asymptotic estimate

WgU(σ, N) = O(N^{−n−|σ|}).

We will show this result using Theorem 2.7 (Jucys).

When n ≤ N, G is invertible with G^{−1} = WgU, therefore

WgU = ∏_{k=1}^n (N + Jk)^{−1} = N^{−n}(1 + N^{−1}J1)^{−1} · · · (1 + N^{−1}Jn)^{−1}.

Since ‖N^{−1}Jk‖ < 1 for all k ≤ n,

N^n WgU = (∑_{k1=0}^∞ (−1)^{k1}(N^{−1}J1)^{k1}) · · · (∑_{kn=0}^∞ (−1)^{kn}(N^{−1}Jn)^{kn})
        = ∑_{l=0}^∞ (−N)^{−l} ∑_{k1,...,kn≥0, k1+···+kn=l} J1^{k1} · · · Jn^{kn}.


Note that J1^{k1} · · · Jn^{kn} is a linear combination of permutations of length at most k1 + · · · + kn. Therefore for any σ ∈ Sn, σ does not appear in any of the sums

∑_{k1,...,kn≥0, k1+···+kn=l} J1^{k1} · · · Jn^{kn}

when l < |σ|. This implies N^n WgU(σ, N) = O(N^{−|σ|}).

Now for any σ ∈ Sn, one knows that |σ| = n − #(σ) (see, e.g. Proposition 23.11 in [18]), thus

WgU(σ, N) = O(N^{−2n+#(σ)}),

which gives the asymptotic decay rate

WgU(σ, N) = O(1/N^{2n−#(σ)}) as N → ∞;

in fact, N^{2n−#(σ)} WgU(σ, N) converges to a non-zero constant as N → ∞.
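For n = 3 this decay can be observed directly from the closed forms for the S3 Weingarten values (they are computed in Example 2.26 below); a small numerical sketch:

```python
# Closed forms of the S_3 Weingarten values (see Example 2.26):
def wg_id(N):      return (N**2 - 2) / (N * (N**2 - 1) * (N**2 - 4))
def wg_transp(N):  return -1.0 / ((N**2 - 1) * (N**2 - 4))
def wg_3cycle(N):  return 2.0 / (N * (N**2 - 1) * (N**2 - 4))

n = 3
for N in (10, 100, 1000):
    # #(σ) = 3, 2, 1 for e, a transposition, a 3-cycle; rescale by N^{2n-#(σ)}.
    print(N ** (2*n - 3) * wg_id(N),
          N ** (2*n - 2) * wg_transp(N),
          N ** (2*n - 1) * wg_3cycle(N))
# The rescaled values approach 1, -1 and 2 respectively as N grows.
```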

2.7 Additional properties of unitary Weingarten functions

In this section, we discuss some additional properties and examples involving unitary Weingarten functions. The proofs will be omitted but references will be given for those who are interested.

2.7.1 Explicit formulas for G and WgU

By the isotypic decomposition, the left (or right) regular representation of C[Sn] can be decomposed as

⊕_{λ⊢n} (dim λ) V^λ,

where V^λ is the irreducible representation associated with λ and dim λ denotes its dimension.

Therefore when the unitary Weingarten function WgU was first introduced as a function on Sn (see, e.g. Theorem 2.1 in [5] or Proposition 2.3 in [9]), it was defined as a sum in terms of irreducible characters of Sn over the partitions of n. We shall briefly review the definition here.

First, recall that if f and g are two functions in C[Sn], we denote by ∗ the classical convolution operation

(f ∗ g)(π) = ∑_{σ∈Sn} f(σ)g(σ^{−1}π) = ∑_{τ∈Sn} f(πτ^{−1})g(τ).
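This convolution is straightforward to implement for small n, representing functions in C[Sn] as dictionaries keyed by permutations in one-line notation (a sketch; the names are ours):

```python
from itertools import permutations

def compose(p, q):
    """(p ∘ q)(i) = p[q[i]], one-line notation, 0-based."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def convolve(f, g, n):
    """(f ∗ g)(π) = Σ_{σ∈S_n} f(σ) g(σ⁻¹π)."""
    Sn = list(permutations(range(n)))
    return {pi: sum(f[s] * g[compose(inverse(s), pi)] for s in Sn) for pi in Sn}

n = 3
Sn = list(permutations(range(n)))
e = tuple(range(n))
delta_e = {s: 1 if s == e else 0 for s in Sn}
f = {s: 1 + 2 * k for k, s in enumerate(Sn)}  # an arbitrary function on S_3
# δ_e is the unit for ∗:
print(convolve(f, delta_e, n) == f and convolve(delta_e, f, n) == f)  # True
```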


Let δe ∈ C[Sn] be the function defined by δe(π) = 1 if π = e and δe(π) = 0 if π ≠ e. It can be easily checked that f ∗ δe = δe ∗ f = f for all f ∈ C[Sn]. The inverse function of f with respect to ∗, if it exists, is denoted by f^{(−1)}; it satisfies f ∗ f^{(−1)} = f^{(−1)} ∗ f = δe. We have the following definition.

Let Z(C[Sn]) be the center of C[Sn]:

Z(C[Sn]) = {h ∈ C[Sn] | h ∗ f = f ∗ h for all f ∈ C[Sn]}.

For the unitary group UN, we define the element GU(·, N) in Z(C[Sn]) by

GU(σ, N) = N^{#(σ)}, σ ∈ Sn.

Note that we previously denoted GU(·, N) simply by G. Now if λ = (λ1, λ2, . . . , λr) is a partition of n, we write l(λ) for the length r of λ; then G can be expanded in terms of the irreducible characters χλ of Sn as follows:

G = (1/n!) ∑_{λ⊢n} f^λ Cλ(N) χλ,    (2.2)

where f^λ is the dimension of the irreducible representation associated with λ and Cλ(N) is the polynomial in N given by

Cλ(N) = ∏_{i=1}^{l(λ)} ∏_{j=1}^{λi} (N + j − i).
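The polynomial Cλ(N) is a product over the boxes of the Young diagram of λ (the factor N + j − i is N plus the content of the box in row i, column j); a one-function Python sketch:

```python
def C_lambda(shape, N):
    """C_λ(N) = Π_{i=1}^{l(λ)} Π_{j=1}^{λ_i} (N + j − i), with 1-based row/column indices."""
    out = 1
    for i, row_length in enumerate(shape, start=1):
        for j in range(1, row_length + 1):
            out *= N + j - i
    return out

N = 10
print(C_lambda((1, 1, 1), N))  # N(N-1)(N-2) = 720
print(C_lambda((2, 1), N))     # N(N+1)(N-1) = 990
print(C_lambda((3,), N))       # N(N+1)(N+2) = 1320
```

These three values reappear in Example 2.26 below.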

The unitary Weingarten function WgU(·, N) on Sn is defined by

WgU(·, N) = WgU = (1/n!) ∑_{λ⊢n} (f^λ/Cλ(N)) χλ,    (2.3)

summed over all partitions λ of n (more precisely, over those λ for which Cλ(N) ≠ 0). It is the pseudo-inverse element of G, i.e. the unique element in Z(C[Sn]) satisfying

G ∗ WgU ∗ G = G.


In particular, unless N ∈ {0, ±1, ±2, . . . , ±(n − 1)}, the functions G and WgU are inverses of each other and satisfy

G ∗ WgU = WgU ∗ G = δe.

Example 2.26. Consider S3. First note that for each partition λ, the irreducible character χλ depends only on the conjugacy class; thus the unitary Weingarten function WgU has this property as well.

S3 has three conjugacy classes: the class of the identity e, the class of the transposition (1, 2), and the class of the 3-cycle (1, 2, 3). These three conjugacy classes correspond to the three partitions (1, 1, 1), (2, 1), and (3) of the number 3, with their associated Young diagrams.

Therefore

WgU([1, 1, 1], N) = WgU(e,N),

WgU([2, 1], N) = WgU((1, 2), N) = WgU((1, 3), N) = WgU((2, 3), N), and

WgU([3], N) = WgU((1, 2, 3), N) = WgU((1, 3, 2), N).

To compute these values, it is enough to evaluate WgU at the representative elements e, (1, 2), and (1, 2, 3) using equation (2.3). First, we compute the character table of S3 using the Murnaghan-Nakayama rule (see, e.g. [21] for more details). The method gives a combinatorial way of computing the character table of any symmetric group Sn. It has the following steps:

1. Since the characters of a group are constant on its conjugacy classes, we index the columns of the character table by the three partitions of 3. Moreover, by Theorem 2.21, there are precisely as many irreducible characters as conjugacy classes, so we can also index the irreducible characters by the partitions; we index the rows of the character table by the associated Young diagrams.


2. We now calculate the entry in row λ (λ denotes a Young diagram) and column µ (µ denotes a partition). Let µ1, µ2, . . . be the parts of µ in decreasing order. Drawing λ as a Young diagram, define a filling of λ with content µ to be a way of writing a number in each square of λ such that the numbers are weakly increasing along each row (to the right) and each column (downwards) and there are exactly µi squares labeled i for each i.

3. Consider all fillings of λ with content µ such that for each label i, the squares labeled i form a connected skew tableau that does not contain a 2 × 2 square. (A skew tableau is connected if the graph formed by connecting horizontally or vertically adjacent squares is connected.) Such a tableau is called a border-strip tableau, with each label representing a border-strip. To illustrate how this method works, suppose we are trying to calculate the entry for λ = (3, 2) and µ = (2, 2, 1) in the character table of S5. Then (rows separated by a slash)

[1 1 3 / 2 2], [1 2 3 / 1 2], and [1 2 2 / 1 3]

are the only three border-strip tableaux that are fillings of λ = (3, 2) with content µ = (2, 2, 1).

4. For each label in the border-strip tableau, define the height of the corresponding border-strip to be one less than the number of rows of the border-strip. We then define the weight of the border-strip tableau to be (−1)^s, where s is the sum of the heights of the border-strips that compose the tableau. Finally, the entry in the character table is the sum of the weights of all the possible border-strip tableaux. For example, the three border-strip tableaux of λ = (3, 2) with µ = (2, 2, 1), as shown above, have weights 1, 1, and −1 respectively, for a total weight of 1. Therefore the corresponding entry in the character table of S5 is 1.

Using this method, we can easily compute the character table of S3 (columns indexed by the conjugacy classes, rows by the Young diagrams of the partitions):

                 (1, 1, 1)   (2, 1)   (3)
λ = (1, 1, 1):       1         -1      1
λ = (2, 1):          2          0     -1
λ = (3):             1          1      1

Now for S3, the three irreducible representations, as shown earlier, are indexed by

λ1 = (1, 1, 1), λ2 = (2, 1), and λ3 = (3).


The corresponding dimensions can be calculated using the hook-length formula. We have f^{λ1} = 1, f^{λ2} = 2, and f^{λ3} = 1. Moreover,

C_{λ1}(N) = N(N − 1)(N − 2),
C_{λ2}(N) = N(N + 1)(N − 1),
C_{λ3}(N) = N(N + 1)(N + 2).

Therefore by equation (2.3), we have

1. WgU([1, 1, 1], N) = WgU(e, N)
   = (1/6) [ 1/(N(N − 1)(N − 2)) + 4/(N(N + 1)(N − 1)) + 1/(N(N + 1)(N + 2)) ]
   = (N² − 2)/(N(N² − 1)(N² − 4)),

2. WgU([2, 1], N) = WgU((1, 2), N)
   = (1/6) [ −1/(N(N − 1)(N − 2)) + 1/(N(N + 1)(N + 2)) ]
   = −1/((N² − 1)(N² − 4)), and

3. WgU([3], N) = WgU((1, 2, 3), N)
   = (1/6) [ 1/(N(N − 1)(N − 2)) − 2/(N(N + 1)(N − 1)) + 1/(N(N + 1)(N + 2)) ]
   = 2/(N(N² − 1)(N² − 4)).
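These values can be double-checked numerically: build the Gram matrix of G(σ) = N^{#(σ)} acting in the regular representation of C[S3], invert it, and read off the Weingarten values. A NumPy sketch (assuming NumPy is available; the variable names are ours):

```python
import numpy as np
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def cycle_count(p):
    seen, c = set(), 0
    for i in range(len(p)):
        if i not in seen:
            c += 1
            while i not in seen:
                seen.add(i)
                i = p[i]
    return c

N, n = 10.0, 3
S3 = list(permutations(range(n)))
# Matrix of convolution by G in the basis {δ_σ}: entries G(στ⁻¹) = N^{#(στ⁻¹)}.
M = np.array([[N ** cycle_count(compose(s, inverse(t))) for t in S3] for s in S3])
W = np.linalg.inv(M)  # entries WgU(στ⁻¹, N)
e = S3.index((0, 1, 2))
wg_id = W[e, e]
wg_transp = W[S3.index((1, 0, 2)), e]
wg_3cycle = W[S3.index((1, 2, 0)), e]

print(np.isclose(wg_id, (N**2 - 2) / (N * (N**2 - 1) * (N**2 - 4))))  # True
print(np.isclose(wg_transp, -1 / ((N**2 - 1) * (N**2 - 4))))          # True
print(np.isclose(wg_3cycle, 2 / (N * (N**2 - 1) * (N**2 - 4))))       # True
```

The column sum W[:, e].sum() also recovers ∑_σ WgU(σ, N) = 1/(N(N + 1)(N + 2)) from Proposition 2.4.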

2.7.2 Character expansion of WgU

It is known that the set {χλ}_{λ⊢n} of irreducible characters of Sn forms a basis of the center Z(C[Sn]) of the group algebra C[Sn].

Definition 2.27. If f ∈ Z(C[Sn]) is a central function, then the expression

f = ∑_{λ⊢n} f̂(λ) χλ

of f with respect to the character basis of Z(C[Sn]) is called the character expansion of f (see, e.g. [19] for more details).


Definition 2.28. 1. Let □ be a square in a Young diagram λ ⊢ n. The content of □, denoted c(□), is defined to be the column index of □ minus the row index of □.

2. Given a Young diagram λ ⊢ n, let Hλ denote the product of the hook-lengths of λ.

3. Let sλ(1^N) = (1/Hλ) ∏_{□∈λ} (N + c(□)).

In the paper [19], the author proved the following character expansion of WgU ∈ C[Sn]. We will only state the theorem; the proof, and some other properties, can be found in section 3 of [19].

Theorem 2.29. (Theorem 3.2 in [19])
The character expansion of WgU ∈ C[Sn] is

WgU = ∑_{λ⊢n} 1/(Hλ² sλ(1^N)) χλ.    (2.4)

It can be easily checked that, using this theorem, evaluating WgU at any σ ∈ S3 produces the same result as in the previous example.


Chapter 3

Integration over the Unitary Group and Applications

The main goal of this chapter is to discuss how Theorem 1.5 can be used to solve various problems in random matrix theory.

Recall that in Theorem 1.5, a formula is given so that general matrix integrals

E[U_{i1j1} · · · U_{injn} Ū_{i′1j′1} · · · Ū_{i′nj′n}]    (3.1)

can be calculated as a sum of unitary Weingarten functions over the symmetric group Sn, where U is an N × N Haar-distributed unitary random matrix, Ū denotes the entrywise complex conjugate, E denotes expectation with respect to the Haar measure, and n ≤ N.

Expressions of the form (3.1) appear very naturally in random matrix theory. The reason for this is that quite a few random matrix ensembles are invariant under unitary conjugation, i.e. X is unitarily invariant if the joint distribution of its entries is unchanged when we conjugate X by an independent unitary matrix. As a result, expressions similar to

E[Tr(X1 U^{ε1} X2 U^{ε2} · · · Xn U^{εn})]    (3.2)

are quite common in random matrix theory, where X1, . . . , Xn are N × N random matrices and ε1, . . . , εn ∈ {1, ∗}.

We will show by examples that the calculation of (3.2) can be reduced to the calculation of (3.1). First, we introduce some notation.

Notation 3.1. Let Sn be the symmetric group on {1, . . . , n}, n ≥ 1.

1. Let π be a permutation in Sn. For any n-tuple X = (X1, . . . , Xn) of N × N complex matrices, set

Trπ(X) = Trπ(X1, . . . , Xn) := ∏_{C∈C(π)} Tr(∏_{j∈C} Xj),

where C(π) is the set of all the disjoint cycles of π (including fixed points).


2. Let γn denote the cyclic permutation γn = (1, 2, . . . , n) ∈ Sn of order n.

3. Let MN(C) be the algebra of N × N complex matrices. We use {E_{a,b}}_{1≤a,b≤N} as a basis, where

(E_{a,b})_{ij} = δ_{(a,b),(i,j)} = δ_{a,i}δ_{b,j}.

In other words, E_{a,b} is the N × N basis matrix whose entries are all 0 except for the entry at row a and column b, which is 1. It has the property that

Tr(X E_{a,b}) = ∑_{i=1}^N (X E_{a,b})_{ii} = (X E_{a,b})_{bb} = X_{ba}

for every X ∈ MN(C).
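The generalized trace Trπ of Notation 3.1 can be sketched in a few lines of NumPy (within a cycle we multiply along the cycle order; the helper names are ours):

```python
import numpy as np

def cycles(perm):
    """Disjoint cycles (including fixed points) of a 0-based one-line permutation."""
    seen, out = set(), []
    for i in range(len(perm)):
        if i not in seen:
            cyc = []
            while i not in seen:
                seen.add(i)
                cyc.append(i)
                i = perm[i]
            out.append(cyc)
    return out

def tr_pi(perm, matrices):
    """Tr_π(X_1, ..., X_n): one trace factor per cycle of π."""
    result = 1.0
    for cyc in cycles(perm):
        prod = np.eye(matrices[0].shape[0])
        for j in cyc:
            prod = prod @ matrices[j]
        result *= np.trace(prod)
    return result

X = np.diag([1.0, 2.0, 3.0, 4.0])
# π = (1, 2)(3), written 0-based in one-line notation as (1, 0, 2):
print(tr_pi((1, 0, 2), [X, X, X]))  # Tr(X²)·Tr(X) = 30 · 10 = 300
```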

3.1 Unitarily invariant random matrices

Definition 3.2. Let X = (X1, . . . , Xn) be an n-tuple of N × N random matrices. We say X is invariant under unitary conjugation if (X1, . . . , Xn) and (UX1U∗, . . . , UXnU∗) are identically distributed for any N × N unitary random matrix U independent of X.

For two sequences i = (i1, . . . , in) and i′ = (i′1, . . . , i′n) of positive integers and for a permutation σ ∈ Sn, we put

δσ(i, i′) = ∏_{k=1}^n δ_{ik, i′_{σ(k)}}.

Under this notation, Theorem 1.5 becomes

E[U_{i1j1} · · · U_{injn} Ū_{i′1j′1} · · · Ū_{i′nj′n}] = ∑_{α,β∈Sn} δα(i, i′)δβ(j, j′) WgU(βα^{−1}, N),    (3.3)

where U is an N ×N Haar-distributed unitary random matrix and n ≤ N .

Now, suppose W is an N × N Hermitian (i.e. W = W∗) random matrix that is invariant under unitary conjugation. For expressions of the form

E[W_{i1j1} W_{i2j2} · · · W_{injn}],

where n ≤ N, we expect to have a formula that is closely related to (3.3). This is shown in the following theorem.


Theorem 3.3. (Theorem 3.1 in [8])
Let W be as announced and n ≤ N. Then

E[W_{i1j1} W_{i2j2} · · · W_{injn}] = ∑_{α,β∈Sn} δα(i, j) WgU(βα^{−1}, N) E[Trβ(W)],    (3.4)

where W denotes the n-tuple (W, . . . , W).

Proof. For the remainder of this section, unless otherwise specified, all matrices are assumed to have size N × N.

We will give two proofs. The first proof is a direct computation using the assumption that W is invariant under unitary conjugation.

Let U be a Haar-distributed unitary random matrix that is independent from W and let V = UWU∗. Then

E[W_{i1j1} W_{i2j2} · · · W_{injn}] = E[V_{i1j1} V_{i2j2} · · · V_{injn}].

Note that for each k with 1 ≤ k ≤ n, we have

E[V_{ikjk}] = E[(UWU∗)_{ikjk}] = ∑_{lk,mk=1}^N E[U_{iklk} W_{lkmk} Ū_{jkmk}].

Therefore

E[V_{i1j1} V_{i2j2} · · · V_{injn}]
= ∑_{l1,...,ln,m1,...,mn=1}^N E[U_{i1l1} · · · U_{inln} Ū_{j1m1} · · · Ū_{jnmn}] E[W_{l1m1} · · · W_{lnmn}]
= ∑_{l1,...,ln,m1,...,mn=1}^N ∑_{α,β∈Sn} δα(i, j)δβ(l, m) WgU(βα^{−1}, N) E[W_{l1m1} · · · W_{lnmn}]
= ∑_{α,β∈Sn} δα(i, j) WgU(βα^{−1}, N) ∑_{l1,...,ln=1}^N E[W_{l1 l_{β^{−1}(1)}} · · · W_{ln l_{β^{−1}(n)}}]
= ∑_{α,β∈Sn} δα(i, j) WgU(βα^{−1}, N) E[Tr_{β^{−1}}(W)], where W = (W, . . . , W) (n copies),
= ∑_{α,β∈Sn} δα(i, j) WgU(βα^{−1}, N) E[Trβ(W)] (see Proposition 3.4 below).

Alternatively, we can apply the spectral theorem for unitarily invariant random matrices, which states that every Hermitian random matrix W that is unitarily invariant has the same distribution as UDU∗, where


1. U is a Haar-distributed unitary random matrix,

2. D is a diagonal random matrix whose eigenvalues have the same distribution as those of W, and

3. U and D are independent.

This is a useful theorem in random matrix theory and we refer to the appendix section of [6] for a proof.

Now, every matrix entry W_{ij} has the same distribution as

∑_{r=1}^N U_{ir} D_{rr} Ū_{jr},

where U and D are as described above. It follows that

where U and D are as described above. It follows that

E[W_{i1j1} W_{i2j2} · · · W_{injn}]
= E[(∑_{r1=1}^N U_{i1r1}D_{r1r1}Ū_{j1r1})(∑_{r2=1}^N U_{i2r2}D_{r2r2}Ū_{j2r2}) · · · (∑_{rn=1}^N U_{inrn}D_{rnrn}Ū_{jnrn})]
= ∑_{r1,r2,...,rn=1}^N E[D_{r1r1}D_{r2r2} · · · D_{rnrn}] E[U_{i1r1}U_{i2r2} · · · U_{inrn}Ū_{j1r1}Ū_{j2r2} · · · Ū_{jnrn}]
= ∑_{r1,r2,...,rn=1}^N E[D_{r1r1}D_{r2r2} · · · D_{rnrn}] ∑_{α,β∈Sn} δα(i, j)δβ(r, r) WgU(βα^{−1}, N)
= ∑_{α,β∈Sn} δα(i, j) WgU(βα^{−1}, N) ∑_{r1,r2,...,rn=1}^N δβ(r, r) E[D_{r1r1}D_{r2r2} · · · D_{rnrn}]
= ∑_{α,β∈Sn} δα(i, j) WgU(βα^{−1}, N) E[Trβ(D, . . . , D)]
= ∑_{α,β∈Sn} δα(i, j) WgU(βα^{−1}, N) E[Trβ(UDU∗, . . . , UDU∗)]
= ∑_{α,β∈Sn} δα(i, j) WgU(βα^{−1}, N) E[Trβ(W)], where W = (W, . . . , W) (n copies).

Proposition 3.4. Note that

1. WgU is in the center of C[Sn] (see, e.g. section 2.7 or [5]), i.e. it commutes with any function f defined on Sn, and

2. when W = (W, . . . , W) (i.e. W1 = W2 = · · · = Wn = W), Trβ(W) = Tr_{β^{−1}}(W) for all β ∈ Sn.

Therefore formula (3.4) can be written as a double convolution over Sn as

E[W_{i1j1} W_{i2j2} · · · W_{injn}] = ∑_{α,β∈Sn} δα(i, j) WgU(βα^{−1}, N) E[Trβ(W)]
= ∑_{α,β∈Sn} E[Tr_{β^{−1}}(W)] WgU(βα^{−1}, N) δα(i, j)
= (E[Tr(W)] ∗ WgU ∗ δ(i, j))(e).

Example 3.5. For each 1 ≤ i ≤ N and n ≥ 1, suppose i1 = i2 = · · · = in = i = j1 = j2 = · · · = jn. Then δσ(i, j) = 1 for all σ ∈ Sn and thus

E[W_{ii}^n] = ∑_{α,β∈Sn} WgU(βα^{−1}, N) E[Trβ(W)]
= ∑_{α∈Sn} WgU(α, N) ∑_{β∈Sn} E[Trβ(W)]
= 1/(N(N + 1) · · · (N + n − 1)) ∑_{β∈Sn} E[Trβ(W)]   (by Proposition 2.4).

1. When n = 1, we have the trivial identity E[W_{ii}] = (1/N) E[Tr(W)] = E[tr(W)].

2. When n = 2, E[W_{ii}²] = (E[(Tr(W))²] + E[Tr(W²)]) / (N(N + 1)).

3. When n = 3, E[W_{ii}³] = (E[(Tr(W))³] + 3E[Tr(W²)Tr(W)] + 2E[Tr(W³)]) / (N(N + 1)(N + 2)).
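The n = 2 identity can be tested by Monte Carlo simulation: take W = UDU∗ with a fixed diagonal D and a Haar-distributed U, so that Tr(W) and Tr(W²) are deterministic. A NumPy sketch (assuming NumPy; the Haar sampler uses the standard QR-with-phase-fix construction):

```python
import numpy as np

def haar_unitary(n, rng):
    """Haar-distributed unitary: QR of a complex Ginibre matrix with phase correction."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

rng = np.random.default_rng(0)
N = 3
D = np.diag([1.0, 2.0, 3.0])  # W = UDU* is Hermitian and unitarily invariant

samples = 20_000
acc = 0.0
for _ in range(samples):
    u = haar_unitary(N, rng)
    w = u @ D @ u.conj().T
    acc += w[0, 0].real ** 2
estimate = acc / samples

# Here Tr(W) = 6 and Tr(W²) = 14 are deterministic, so the n = 2 formula gives:
predicted = (6.0**2 + 14.0) / (N * (N + 1))  # = 25/6 ≈ 4.1667
print(estimate, predicted)
```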

3.2 Expectation of products of matrices

Let U be an N × N Haar-distributed unitary random matrix and let {bi}_{i∈N}, {Bj}_{j∈N} be sequences of N × N complex matrices. In this section, we wish to find a formula for expressions of the form

E[UB1U∗ b1 UB2U∗ b2 · · · UBn−1U∗ bn−1 UBnU∗],    (3.5)

where n ∈ N.

Note that (3.5) looks very similar to a general version of the expectation computed in Example 1.8. We will use the same approach and first compute the ijth entry of (3.5).


We have

(E[UB1U∗ b1 UB2U∗ b2 · · · UBn−1U∗ bn−1 UBnU∗])_{ij}
= E[Tr(UB1U∗ b1 UB2U∗ b2 · · · UBn−1U∗ bn−1 UBnU∗ bn)], where bn = E_{j,i} is a basis matrix,
= ∑ E[U_{i1j1}(B1)_{j1j′1}(U∗)_{j′1i′1}(b1)_{i′1i2} · · · U_{injn}(Bn)_{jnj′n}(U∗)_{j′ni′n}(bn)_{i′ni1}]
= ∑ (B1)_{j1j′1}(b1)_{i′1i2} · · · (Bn)_{jnj′n}(bn)_{i′ni1} E[U_{i1j1} · · · U_{injn} Ū_{i′1j′1} · · · Ū_{i′nj′n}]
= ∑ (B1)_{j1j′1}(b1)_{i′1i2} · · · (Bn)_{jnj′n}(bn)_{i′ni1} × ∑_{α,β∈Sn} δ_{i1,i′_{α(1)}} · · · δ_{in,i′_{α(n)}} δ_{j1,j′_{β(1)}} · · · δ_{jn,j′_{β(n)}} WgU(βα^{−1}, N)

(the unnamed sums run over all indices i1, . . . , in, i′1, . . . , i′n, j1, . . . , jn, j′1, . . . , j′n from 1 to N)

= ∑_{α,β∈Sn} WgU(βα^{−1}, N) × ∑_{j1,...,jn,j′1,...,j′n} (B1)_{j1j′1} · · · (Bn)_{jnj′n} δ_{j1,j′_{β(1)}} · · · δ_{jn,j′_{β(n)}} × ∑_{i1,...,in,i′1,...,i′n} (b1)_{i′1i2} · · · (bn)_{i′ni1} δ_{i1,i′_{α(1)}} · · · δ_{in,i′_{α(n)}}
= ∑_{α,β∈Sn} WgU(βα^{−1}, N) × ∑_{j′1,...,j′n} (B1)_{j′_{β(1)}j′1} · · · (Bn)_{j′_{β(n)}j′n} × ∑_{i′1,...,i′n} (b1)_{i′1i′_{α(2)}} · · · (bn)_{i′ni′_{α(1)}}
= ∑_{α,β∈Sn} WgU(βα^{−1}, N) × ∑_{j′1,...,j′n} (B1)_{j′1j′_{β^{−1}(1)}} · · · (Bn)_{j′nj′_{β^{−1}(n)}} × ∑_{i′1,...,i′n} (b1)_{i′1i′_{αγn(1)}} · · · (bn)_{i′ni′_{αγn(n)}}
= ∑_{α,β∈Sn} WgU(βα^{−1}, N) Tr_{β^{−1}}(B1, . . . , Bn) Tr_{αγn}(b1, . . . , bn).

Notation 3.6. Let Trπ(b, . . . , b, E_{j,i}) = (Trπ(b, . . . , b, 1))_{ij} (with n − 1 copies of b), where π is a permutation in Sn; this relation defines the matrix Trπ(b, . . . , b, 1) entrywise. Roughly speaking, we do not apply Tr to the cycle of π that contains the element n, and thus Trπ(b, . . . , b, 1) is a matrix.

For example, suppose π ∈ S6 is such that π = (1, 3, 4)(2)(5, 6). Then

Trπ(b1, . . . , b5, 1) = Tr(b1b3b4) Tr(b2) b5·1 = Tr(b1b3b4) Tr(b2) b5.


Theorem 3.7. Using Notation 3.6, we have a formula for (3.5):

E[UB1U∗ b1 UB2U∗ b2 · · · UBn−1U∗ bn−1 UBnU∗] = ∑_{α,β∈Sn} WgU(βα^{−1}, N) Tr_{β^{−1}}(B1, . . . , Bn) Tr_{αγn}(b1, . . . , bn−1, 1).    (3.6)

Proposition 3.8. By the same reasoning as in Proposition 3.4, formula (3.6) can also be written as a double convolution over Sn as

∑_{α,β∈Sn} Tr_{β^{−1}}(B1, . . . , Bn) WgU(βα^{−1}, N) Tr_{αγn}(b1, . . . , bn−1, 1) = (Tr(B) ∗ WgU ∗ Tr(b))(γn),

where B = (B1, . . . , Bn), b = (b1, . . . , bn−1, 1), and (Tr(b))(π) = Trπ(b1, . . . , bn−1, 1).

Example 3.9. Suppose now that bi = b for all i ∈ N and Bj = B for all j ∈ N. Then (3.5) becomes

E[UBU∗(bUBU∗)^{n−1}],    (3.7)

where n ≥ 1.

As shown in [5], WgU(σ, N) is a rational function of N for every σ ∈ Sn (this fact should be clear from the definitions given in the previous chapter as well). This and Theorem 3.7 together imply that (3.7) can be written in the form ∑_{k=0}^{n−1} a_k b^k, where the a_k are scalar coefficients and b⁰ = IN.

Note that Example 1.8 corresponds to the case n = 1. In this example we will calculate (3.7) for the case n = 2, using the table of values in appendix B for unitary Weingarten functions.

Since WgU([1, 1], N) = 1/(N² − 1) and WgU([2], N) = −1/(N(N² − 1)), by Theorem 3.7 we have

E[UBU∗bUBU∗]
= ∑_{α,β∈S2} WgU(βα^{−1}, N) Tr_{β^{−1}}(B, B) Tr_{αγ2}(b, 1)
= ∑_{β∈S2} WgU(β, N) Tr_{β^{−1}}(B, B) b + ∑_{β∈S2} WgU(βγ2, N) Tr_{β^{−1}}(B, B) Tr(b) IN
= (Tr(B))²/(N² − 1) · b − Tr(B²)/(N(N² − 1)) · b − (Tr(B))²Tr(b)/(N(N² − 1)) · IN + Tr(B²)Tr(b)/(N² − 1) · IN
= (Tr(B²)Tr(b)/(N² − 1) − (Tr(B))²Tr(b)/(N(N² − 1))) IN + ((Tr(B))²/(N² − 1) − Tr(B²)/(N(N² − 1))) b.


Therefore E[UBU∗bUBU∗] = ∑_{k=0}^{1} a_k b^k = a0 IN + a1 b, where

a0 = (N Tr(B²)Tr(b) − (Tr(B))²Tr(b)) / (N(N² − 1)) and a1 = (N(Tr(B))² − Tr(B²)) / (N(N² − 1)).
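This matrix-valued identity can be spot-checked by Monte Carlo over Haar unitaries; a NumPy sketch (assuming NumPy; the specific B and b below are arbitrary choices of ours):

```python
import numpy as np

def haar_unitary(n, rng):
    """Haar-distributed unitary via QR of a complex Ginibre matrix, with phase fix."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

rng = np.random.default_rng(1)
N = 3
B = np.diag([1.0, 2.0, 3.0])
b = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 2.0],
              [0.0, 2.0, 1.0]])

samples = 20_000
acc = np.zeros((N, N), dtype=complex)
for _ in range(samples):
    u = haar_unitary(N, rng)
    m = u @ B @ u.conj().T
    acc += m @ b @ m
mc = acc / samples

trB, trB2, trb = np.trace(B), np.trace(B @ B), np.trace(b)
a0 = (N * trB2 * trb - trB**2 * trb) / (N * (N**2 - 1))
a1 = (N * trB**2 - trB2) / (N * (N**2 - 1))
predicted = a0 * np.eye(N) + a1 * b
print(np.max(np.abs(mc - predicted)))  # small Monte Carlo error
```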

Example 3.10. If B is a rank-one projection, i.e. all entries of B are 0 except for one of the entries on the main diagonal, which is 1, then

Trπ(B) = Trπ(B, . . . , B) = 1 for all π ∈ Sn,

and thus

E[UBU∗(bUBU∗)^{n−1}]
= ∑_{α,β∈Sn} WgU(βα^{−1}, N) Tr_{β^{−1}}(B) Tr_{αγn}(b, . . . , b, 1)
= ∑_{α∈Sn} ∑_{β∈Sn} WgU(β, N) Trα(b, . . . , b, 1)
= ∏_{k=1}^n (N + k − 1)^{−1} ∑_{α∈Sn} Trα(b, . . . , b, 1)   (by Proposition 2.4)
= ∏_{k=1}^n (N + k − 1)^{−1} ∑_{k=0}^{n−1} (∑_{l1+···+lr=n−1−k} Tr(b^{l1}) · · · Tr(b^{lr})) b^k,

where each Trα is applied to n − 1 copies of b followed by 1. Simplifying the above expression further would require a closer investigation into the cycle structures of the permutations in Sn.

3.3 Matricial cumulants

Definition 3.11. 1. Let X be a set of random matrices. Expressions of the form

E[Trπ(X1, . . . , Xn)],

where Xi ∈ X and π ∈ Sn, are called generalized moments of order n of the set X.

2. Let X and B be two sets of random matrices. The mixed generalized moments of the sets X and B are the generalized moments of the set X ∪ B. Note that they can be computed from expressions of the form

E[Trπ(B1X1, . . . , BnXn)],

where Bi ∈ B ∪ {IN}, Xi ∈ X ∪ {IN}, and π ∈ Sn.

Motivated by Propositions 3.4 and 3.8, we will discuss the following questions in this section:

1. Do we have a convolution formula for moments of the form E[Trπ(B1X1, . . . , BnXn)], and thus mixed generalized moments, as well?

2. If so, what are the conditions on the matricial models X and B?

It turns out that for two independent n-tuples of N × N random matrices X = (X1, . . . , Xn) and B = (B1, . . . , Bn), if one of them, say X, has a distribution which is invariant under unitary conjugation, then the moment E[Trπ(B1X1, . . . , BnXn)] for any π ∈ Sn can be written as a convolution over the symmetric group Sn between the generalized moments of B and a matricial cumulant function CX, as defined by the authors in [3].

Definition 3.12. For $n \le N$, let $\mathcal{X} = (X_1, \ldots, X_n)$ be any $n$-tuple of $N \times N$ random matrices. The $n$th $U_N$-cumulant function
\[
C_{\mathcal{X}} : S_n \to \mathbb{C}, \qquad \pi \mapsto C_{\mathcal{X}}(\pi),
\]
is defined by the relation
\[
C_{\mathcal{X}} := \mathbb{E}[\mathrm{Tr}(\mathcal{X})] * \left(N^\#\right)^{(-1)},
\]
where $N^\#$ is the function in $\mathbb{C}[S_n]$ such that $\left(N^\#\right)(\pi) = N^{\#(\pi)}$ for $\pi \in S_n$, and $\left(N^\#\right)^{(-1)}$ is the inverse of $N^\#$ with respect to the classical convolution operation $*$.

The $U_N$-cumulants of $\mathcal{X}$ are the values $C_{\mathcal{X}}(\pi)$ for single cycles $\pi$ of $S_n$.

For simplicity, we will refer to $U_N$-cumulant functions and $U_N$-cumulants simply as cumulant functions and cumulants from now on. Moreover, when $X_1 = X_2 = \cdots = X_n = X$, we will write $C_X$ for $C_{(X,\ldots,X)}$.

Example 3.13. 1. When $n = 1$, the only element in $S_1$ is $e = (1)$. Therefore $\left(N^\#\right)(e) = N$, $\left(N^\#\right)^{(-1)}(e) = \frac{1}{N}$, and thus
\[
C_X(e) = \mathbb{E}[\mathrm{Tr}_e(X)] \cdot \left(N^\#\right)^{(-1)}(e) = \mathbb{E}[\mathrm{tr}(X)].
\]

2. When $n = 2$, then $S_2 = \{e, (1, 2)\}$ and
\[
N^\# : e \mapsto N^2, \quad (1, 2) \mapsto N.
\]


Therefore
\[
\left(N^\#\right)^{(-1)} : e \mapsto \frac{1}{N^2 - 1}, \quad (1, 2) \mapsto -\frac{1}{N(N^2 - 1)},
\]
and it can be checked that
\[
N^\# * \left(N^\#\right)^{(-1)} = \left(N^\#\right)^{(-1)} * N^\# = \delta_e.
\]

We can calculate the cumulant functions as
\[
\begin{aligned}
C_{(X_1,X_2)}(e) &= \sum_{\sigma \in S_2} \mathbb{E}\left[\mathrm{Tr}_\sigma(X_1, X_2)\right] \cdot \left(N^\#\right)^{(-1)}(\sigma^{-1}e) \\
&= \mathbb{E}\left[\mathrm{Tr}_e(X_1, X_2)\right] \cdot \left(N^\#\right)^{(-1)}(e) + \mathbb{E}\left[\mathrm{Tr}_{(1,2)}(X_1, X_2)\right] \cdot \left(N^\#\right)^{(-1)}((1, 2)) \\
&= \frac{\mathbb{E}\left[\mathrm{Tr}(X_1)\mathrm{Tr}(X_2)\right]}{N^2 - 1} - \frac{\mathbb{E}\left[\mathrm{Tr}(X_1X_2)\right]}{N(N^2 - 1)} \\
&= \frac{N\,\mathbb{E}\left[\mathrm{Tr}(X_1)\mathrm{Tr}(X_2)\right] - \mathbb{E}\left[\mathrm{Tr}(X_1X_2)\right]}{N(N^2 - 1)},
\end{aligned}
\]
\[
\begin{aligned}
C_{(X_1,X_2)}((1, 2)) &= \sum_{\sigma \in S_2} \mathbb{E}\left[\mathrm{Tr}_\sigma(X_1, X_2)\right] \cdot \left(N^\#\right)^{(-1)}(\sigma^{-1}(1, 2)) \\
&= \mathbb{E}\left[\mathrm{Tr}_e(X_1, X_2)\right] \cdot \left(N^\#\right)^{(-1)}((1, 2)) + \mathbb{E}\left[\mathrm{Tr}_{(1,2)}(X_1, X_2)\right] \cdot \left(N^\#\right)^{(-1)}(e) \\
&= -\frac{\mathbb{E}\left[\mathrm{Tr}(X_1)\mathrm{Tr}(X_2)\right]}{N(N^2 - 1)} + \frac{\mathbb{E}\left[\mathrm{Tr}(X_1X_2)\right]}{N^2 - 1} \\
&= \frac{-\mathbb{E}\left[\mathrm{Tr}(X_1)\mathrm{Tr}(X_2)\right] + N\,\mathbb{E}\left[\mathrm{Tr}(X_1X_2)\right]}{N(N^2 - 1)}.
\end{aligned}
\]
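The claim above that $N^\# * \left(N^\#\right)^{(-1)} = \delta_e$ "can be checked" is indeed a two-line computation; here is a minimal numerical sketch on $S_2$ (representing elements of $\mathbb{C}[S_2]$ as Python dicts is our own bookkeeping, not notation from the text).

```python
# Convolution on C[S_2]: (f * g)(pi) = sum over sigma*tau = pi of f(sigma)g(tau).
# Verify that the claimed inverse of N^# convolves with it to delta_e.
N = 6.0
e, t = (0, 1), (1, 0)                  # identity and the transposition (1, 2)
mult = {(e, e): e, (e, t): t, (t, e): t, (t, t): e}   # group table of S_2

def convolve(f, g):
    h = {e: 0.0, t: 0.0}
    for s in (e, t):
        for p in (e, t):
            h[mult[(s, p)]] += f[s] * g[p]
    return h

Nsharp     = {e: N**2, t: N}                            # N^{#(pi)} on S_2
Nsharp_inv = {e: 1/(N**2 - 1), t: -1/(N*(N**2 - 1))}    # claimed inverse
delta = convolve(Nsharp, Nsharp_inv)                    # should be delta_e
```

Running this gives `delta[e] = 1` and `delta[t] = 0` up to rounding, as claimed.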

Remark 3.14. Note that we previously defined the function $N^\# \in \mathbb{C}[S_n]$ as $G$, where $G(\sigma) = N^{\#(\sigma)}$, and, as shown earlier, $G$ is invertible with $G^{-1} = \mathrm{Wg}^U$ when $n \le N$.

Therefore the cumulant functions can be equivalently defined as
\[
C_{\mathcal{X}}(\pi) = \left(\mathbb{E}[\mathrm{Tr}(\mathcal{X})] * \mathrm{Wg}^U\right)(\pi),
\]
where $\mathcal{X} = (X_1, \ldots, X_n)$ and $\pi \in S_n$.

For consistency, we will use this notation from now on.

Proposition 3.15. Some basic properties of the cumulant functions.

1. For each $\pi \in S_n$, the map $(X_1, \ldots, X_n) \mapsto C_{(X_1,\ldots,X_n)}(\pi)$ is $n$-linear. That is, for each $i$ with $1 \le i \le n$, if $X_i = Y_i + \alpha Z_i$ where $\alpha \in \mathbb{C}$, then
\[
C_{(X_1,\ldots,X_i,\ldots,X_n)}(\pi) = C_{(X_1,\ldots,Y_i,\ldots,X_n)}(\pi) + C_{(X_1,\ldots,\alpha Z_i,\ldots,X_n)}(\pi).
\]


2. Since $\mathbb{E}[\mathrm{Tr}(I_N)](\pi) = \mathbb{E}\left[\mathrm{Tr}_\pi(I_N, \ldots, I_N)\right] = N^{\#(\pi)} = G(\pi)$, the generalized moments of $\mathcal{X}$ can be recovered from the cumulant function by the inversion formula
\[
\mathbb{E}[\mathrm{Tr}(\mathcal{X})] = \mathbb{E}[\mathrm{Tr}(\mathcal{X})] * \mathrm{Wg}^U * G = C_{\mathcal{X}} * \mathbb{E}[\mathrm{Tr}(I_N)].
\]

3. Since $\mathrm{Tr}_\sigma(UX_1U^*, \ldots, UX_nU^*) = \mathrm{Tr}_\sigma(X_1, \ldots, X_n)$ for any unitary matrix $U$,
\[
C_{(UX_1U^*,\ldots,UX_nU^*)}(\pi) = C_{(X_1,\ldots,X_n)}(\pi).
\]

4. Since $\mathrm{Wg}^U(\pi, N)$ depends only on the conjugacy class of $\pi$,
\[
C_X(\pi) = \sum_{\sigma \in S_n} \mathbb{E}\left[\mathrm{Tr}_\sigma(X)\right] \mathrm{Wg}^U(\sigma^{-1}\pi, N)
\]
has this property as well.

Therefore the cumulants $C_X(\pi)$ of a matrix $X$ for single cycles $\pi$ of $S_n$ are all equal. We will denote this common value by $C_n(X)$ and call it the cumulant of order $n$ of the matrix $X$. In particular, as computed in the previous example,
\[
C_1(X) = \mathbb{E}[\mathrm{tr}(X)], \qquad
C_2(X) = \frac{-\mathbb{E}\left[\mathrm{Tr}(X)^2\right] + N\,\mathbb{E}\left[\mathrm{Tr}(X^2)\right]}{N(N^2 - 1)}
= \frac{N}{N^2 - 1}\left(\mathbb{E}\left[\mathrm{tr}(X^2)\right] - \mathbb{E}\left[\mathrm{tr}(X)^2\right]\right).
\]
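As a quick numerical sanity check of the two expressions for $C_2(X)$ (for a deterministic $X$, so the expectations drop out), one can compare the $\mathrm{Tr}$-form and the normalized $\mathrm{tr}$-form directly; the example matrix is our own choice.

```python
# Compare the unnormalized-Tr form and the normalized-tr form of C_2(X)
# for a concrete deterministic X (expectations are then trivial).
import numpy as np

N = 5
X = np.diag(np.arange(1.0, N + 1))          # X = diag(1, 2, 3, 4, 5)
Tr = np.trace
c2_conv = (-Tr(X)**2 + N * Tr(X @ X)) / (N * (N**2 - 1))
c2_norm = N / (N**2 - 1) * (Tr(X @ X) / N - (Tr(X) / N)**2)
```

Both evaluate to $5/12$ here, as the algebra predicts.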

Lemma 3.16. Let $\mathcal{X} = (X_1, \ldots, X_n)$ be an $n$-tuple of $N \times N$ random matrices that are unitarily invariant, and let $\mathcal{B} = (B_1, \ldots, B_n)$ be an $n$-tuple of $N \times N$ random matrices independent of $\mathcal{X}$. Then
\[
\mathbb{E}\left[\mathrm{Tr}_e(B_1X_1, \ldots, B_nX_n)\right] = \left(\mathbb{E}[\mathrm{Tr}(\mathcal{B})] * C_{\mathcal{X}}\right)(e) = \left(C_{\mathcal{B}} * \mathbb{E}[\mathrm{Tr}(\mathcal{X})]\right)(e).
\]

Proof. For any independent $N \times N$ Haar-distributed unitary random matrix $U$, we have
\[
\begin{aligned}
\mathbb{E}\left[\mathrm{Tr}_e(B_1X_1, \ldots, B_nX_n)\right]
&= \mathbb{E}\left[\prod_{i=1}^{n} \mathrm{Tr}(B_iX_i)\right] = \mathbb{E}\left[\prod_{i=1}^{n} \mathrm{Tr}(B_iUX_iU^*)\right] \\
&= \sum_{\substack{i_1,\ldots,i_n,i'_1,\ldots,i'_n, \\ j_1,\ldots,j_n,j'_1,\ldots,j'_n = 1}}^{N} \mathbb{E}\left[(B_1)_{i'_1i_1}U_{i_1j_1}(X_1)_{j_1j'_1}\overline{U_{i'_1j'_1}} \cdots (B_n)_{i'_ni_n}U_{i_nj_n}(X_n)_{j_nj'_n}\overline{U_{i'_nj'_n}}\right] \\
&= \sum_{\substack{i_1,\ldots,i_n,i'_1,\ldots,i'_n, \\ j_1,\ldots,j_n,j'_1,\ldots,j'_n = 1}}^{N} \mathbb{E}\left[(B_1)_{i'_1i_1} \cdots (B_n)_{i'_ni_n}\right] \mathbb{E}\left[(X_1)_{j_1j'_1} \cdots (X_n)_{j_nj'_n}\right] \mathbb{E}\left[U_{i_1j_1} \cdots U_{i_nj_n}\overline{U_{i'_1j'_1}} \cdots \overline{U_{i'_nj'_n}}\right].
\end{aligned}
\]

Applying Theorem 1.5 and rearranging the terms, a calculation similar to the derivation of formula (3.6) gives us
\[
\begin{aligned}
\mathbb{E}\left[\mathrm{Tr}_e(B_1X_1, \ldots, B_nX_n)\right]
&= \sum_{\alpha,\beta \in S_n} \mathrm{Wg}^U(\beta\alpha^{-1}, N)\,\mathbb{E}\left[\mathrm{Tr}_\alpha(\mathcal{B})\right] \mathbb{E}\left[\mathrm{Tr}_{\beta^{-1}}(\mathcal{X})\right] \\
&= \sum_{\alpha \in S_n} \mathbb{E}\left[\mathrm{Tr}_\alpha(\mathcal{B})\right] \sum_{\beta \in S_n} \mathbb{E}\left[\mathrm{Tr}_\beta(\mathcal{X})\right] \mathrm{Wg}^U(\beta^{-1}\alpha^{-1}, N) \\
&= \sum_{\alpha \in S_n} \mathbb{E}\left[\mathrm{Tr}_\alpha(\mathcal{B})\right] C_{\mathcal{X}}(\alpha^{-1}) \\
&= \left(\mathbb{E}[\mathrm{Tr}(\mathcal{B})] * C_{\mathcal{X}}\right)(e).
\end{aligned}
\]
Note that this is equal to
\[
\left(\mathbb{E}[\mathrm{Tr}(\mathcal{B})] * \mathrm{Wg}^U * \mathbb{E}[\mathrm{Tr}(\mathcal{X})]\right)(e) = \left(C_{\mathcal{B}} * \mathbb{E}[\mathrm{Tr}(\mathcal{X})]\right)(e),
\]
since $\mathrm{Wg}^U$ is a central function on $S_n$.
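For $n = 2$ the lemma can be tested end-to-end with deterministic $X_1, X_2$ (which suffices, since $\mathrm{Tr}_\sigma(UX_iU^*) = \mathrm{Tr}_\sigma(X_i)$ pointwise): compute the left side by a raw application of Theorem 1.5 with the $n = 2$ Weingarten values, and the right side from the convolution with the cumulants of Example 3.13. The brute-force index sum below is our own illustration.

```python
# E[Tr(B1 U X1 U*) Tr(B2 U X2 U*)] two ways:
#   (i)  raw Weingarten formula (Theorem 1.5, n = 2 table values),
#   (ii) the convolution Tr(B1)Tr(B2) C(e) + Tr(B1 B2) C((1,2)).
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
N = 3
perms = [(0, 1), (1, 0)]                       # S_2, 0-indexed
wg_e, wg_t = 1.0/(N**2 - 1), -1.0/(N*(N**2 - 1))

def EU(i, j, ip, jp):
    """E[U_{i1 j1} U_{i2 j2} conj(U_{i'1 j'1}) conj(U_{i'2 j'2})]."""
    total = 0.0
    for s in perms:
        for t in perms:
            if all(i[k] == ip[s[k]] for k in range(2)) and \
               all(j[k] == jp[t[k]] for k in range(2)):
                total += wg_e if s == t else wg_t   # Wg(t s^{-1})
    return total

B1, B2, X1, X2 = (rng.standard_normal((N, N)) for _ in range(4))

lhs = 0.0
for a1, b1, c1, d1, a2, b2, c2, d2 in product(range(N), repeat=8):
    lhs += (B1[a1, b1] * X1[c1, d1] * B2[a2, b2] * X2[c2, d2]
            * EU((b1, b2), (c1, c2), (a1, a2), (d1, d2)))

Tr = np.trace
C_e  = (N * Tr(X1) * Tr(X2) - Tr(X1 @ X2)) / (N * (N**2 - 1))
C_12 = (-Tr(X1) * Tr(X2) + N * Tr(X1 @ X2)) / (N * (N**2 - 1))
rhs = Tr(B1) * Tr(B2) * C_e + Tr(B1 @ B2) * C_12
```

The two sides agree to machine precision, matching the rearrangement carried out in the proof.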

Let us now consider the mixed generalized moments for an arbitrary $\pi$ in $S_n$. Recall that in Notation 3.1 we defined the basis $\{E_{a,b}\}_{1 \le a,b \le N}$ of $M_N(\mathbb{C})$. It has the following additional properties.

Proposition 3.17. For all permutations $\alpha$, $\beta$ in $S_n$,
\[
\mathrm{Tr}_\alpha\left(E_{a_{\beta(1)},b_1}, \ldots, E_{a_{\beta(n)},b_n}\right) = \mathrm{Tr}_{\beta\alpha}\left(E_{a_1,b_1}, \ldots, E_{a_n,b_n}\right).
\]

Proof. By definition,
\[
\begin{aligned}
\mathrm{Tr}_\alpha\left(E_{a_{\beta(1)},b_1}, \ldots, E_{a_{\beta(n)},b_n}\right)
&= \sum_{i_1,\ldots,i_n=1}^{N} \left(E_{a_{\beta(1)},b_1}\right)_{i_1i_{\alpha(1)}} \cdots \left(E_{a_{\beta(n)},b_n}\right)_{i_ni_{\alpha(n)}} \\
&= \sum_{i_1,\ldots,i_n=1}^{N} \left(\delta_{a_{\beta(1)},i_1}\delta_{b_1,i_{\alpha(1)}}\right) \cdots \left(\delta_{a_{\beta(n)},i_n}\delta_{b_n,i_{\alpha(n)}}\right) \\
&= \sum_{i_1,\ldots,i_n=1}^{N} \left(\delta_{a_1,i_{\beta^{-1}(1)}}\delta_{b_1,i_{\alpha(1)}}\right) \cdots \left(\delta_{a_n,i_{\beta^{-1}(n)}}\delta_{b_n,i_{\alpha(n)}}\right) \\
&= \sum_{i_1,\ldots,i_n=1}^{N} \left(E_{a_1,b_1}\right)_{i_{\beta^{-1}(1)}i_{\alpha(1)}} \cdots \left(E_{a_n,b_n}\right)_{i_{\beta^{-1}(n)}i_{\alpha(n)}} \\
&= \sum_{i_1,\ldots,i_n=1}^{N} \left(E_{a_1,b_1}\right)_{i_1i_{\beta\alpha(1)}} \cdots \left(E_{a_n,b_n}\right)_{i_ni_{\beta\alpha(n)}} \\
&= \mathrm{Tr}_{\beta\alpha}\left(E_{a_1,b_1}, \ldots, E_{a_n,b_n}\right).
\end{aligned}
\]

Proposition 3.18. For all permutations $\pi$ in $S_n$,
\[
\mathrm{Tr}_\pi\left(E_{a_1,b_1}X_1, \ldots, E_{a_n,b_n}X_n\right) = (X_1)_{b_1a_{\pi(1)}} \cdots (X_n)_{b_na_{\pi(n)}}.
\]

Proof. Note that for each $i$, multiplying $E_{a_i,b_i}$ on the left of $X_i$, roughly speaking, replaces the $a_i$th row of $X_i$ by its $b_i$th row, while all other rows vanish. Therefore $(E_{a_i,b_i}X_i)_{j_ij_{\pi(i)}}$ is non-zero only when $j_i = a_i$, in which case it is equal to $(X_i)_{b_ia_{\pi(i)}}$. Thus
\[
\mathrm{Tr}_\pi\left(E_{a_1,b_1}X_1, \ldots, E_{a_n,b_n}X_n\right)
= \sum_{j_1,\ldots,j_n=1}^{N} \left(E_{a_1,b_1}X_1\right)_{j_1j_{\pi(1)}} \cdots \left(E_{a_n,b_n}X_n\right)_{j_nj_{\pi(n)}}
= (X_1)_{b_1a_{\pi(1)}} \cdots (X_n)_{b_na_{\pi(n)}}.
\]
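Proposition 3.18 is easy to test numerically; the sketch below implements $\mathrm{Tr}_\pi$ cycle by cycle (our own helper, following the definition of $\mathrm{Tr}_\pi$ used in the text) and compares both sides for a 3-cycle with arbitrary labels.

```python
# Check Tr_pi(E_{a1,b1} X1, ..., E_{an,bn} Xn) = prod_k (X_k)_{b_k, a_{pi(k)}}
# for a random choice of matrices and a 3-cycle pi (everything 0-indexed).
import numpy as np

rng = np.random.default_rng(0)
N, n = 4, 3

def E_unit(a, b, N):
    """Matrix unit E_{a,b}."""
    M = np.zeros((N, N))
    M[a, b] = 1.0
    return M

def tr_pi(pi, mats):
    """Tr_pi(M_1,...,M_n): product over the cycles of pi of the trace of
    the matrices multiplied along the cycle."""
    seen, total = set(), 1.0
    for start in range(len(mats)):
        if start in seen:
            continue
        prod, j = np.eye(mats[0].shape[0]), start
        while j not in seen:
            seen.add(j)
            prod = prod @ mats[j]
            j = pi[j]
        total *= np.trace(prod)
    return total

pi = (2, 0, 1)                       # a 3-cycle, given by its 0-indexed images
a, b = (1, 3, 0), (2, 2, 1)          # arbitrary row/column labels
Xs = [rng.standard_normal((N, N)) for _ in range(n)]
lhs = tr_pi(pi, [E_unit(a[k], b[k], N) @ Xs[k] for k in range(n)])
rhs = np.prod([Xs[k][b[k], a[pi[k]]] for k in range(n)])
```

As a further sanity check, `tr_pi` applied to identity matrices with $\pi = e$ returns $N^n$, matching $N^{\#(\pi)}$.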

Lemma 3.19. For all permutations $\pi$ in $S_n$, we have
\[
\mathbb{E}\left[\mathrm{Tr}_\pi\left(E_{a_1,b_1}X_1, \ldots, E_{a_n,b_n}X_n\right)\right] = \left(\mathbb{E}\left[\mathrm{Tr}\left(E_{a_1,b_1}, \ldots, E_{a_n,b_n}\right)\right] * C_{\mathcal{X}}\right)(\pi).
\]

Proof. By the previous two propositions,
\[
\begin{aligned}
\mathbb{E}\left[\mathrm{Tr}_\pi\left(E_{a_1,b_1}X_1, \ldots, E_{a_n,b_n}X_n\right)\right]
&= \mathbb{E}\left[\prod_{i=1}^{n} (X_i)_{b_ia_{\pi(i)}}\right] = \mathbb{E}\left[\prod_{i=1}^{n} \mathrm{Tr}\left(E_{a_{\pi(i)},b_i}X_i\right)\right] \\
&= \mathbb{E}\left[\mathrm{Tr}_e\left(E_{a_{\pi(1)},b_1}X_1, \ldots, E_{a_{\pi(n)},b_n}X_n\right)\right] \\
&= \left(\mathbb{E}\left[\mathrm{Tr}\left(E_{a_{\pi(1)},b_1}, \ldots, E_{a_{\pi(n)},b_n}\right)\right] * C_{\mathcal{X}}\right)(e) \\
&= \sum_{\sigma \in S_n} \mathbb{E}\left[\mathrm{Tr}_{\sigma^{-1}}\left(E_{a_{\pi(1)},b_1}, \ldots, E_{a_{\pi(n)},b_n}\right)\right] C_{\mathcal{X}}(\sigma) \\
&= \sum_{\sigma \in S_n} \mathbb{E}\left[\mathrm{Tr}_{\pi\sigma^{-1}}\left(E_{a_1,b_1}, \ldots, E_{a_n,b_n}\right)\right] C_{\mathcal{X}}(\sigma) \\
&= \left(\mathbb{E}\left[\mathrm{Tr}\left(E_{a_1,b_1}, \ldots, E_{a_n,b_n}\right)\right] * C_{\mathcal{X}}\right)(\pi).
\end{aligned}
\]


By $n$-linearity and the fact that $\mathrm{Wg}^U$ is a central function on $S_n$, we have proved the following theorem.

Theorem 3.20. Let $\mathcal{X}$ and $\mathcal{B}$ be two independent sets of $N \times N$ random matrices such that the distribution of $\mathcal{X}$ is unitarily invariant. Then for any $n \le N$, letting $X = (X_1, \ldots, X_n)$ be an $n$-tuple of elements of $\mathcal{X}$ and $B = (B_1, \ldots, B_n)$ an $n$-tuple of elements of $\mathcal{B}$, we have
\[
\mathbb{E}\left[\mathrm{Tr}_\pi(B_1X_1, \ldots, B_nX_n)\right] = \left(\mathbb{E}[\mathrm{Tr}(B)] * C_{X}\right)(\pi) = \left(C_{B} * \mathbb{E}[\mathrm{Tr}(X)]\right)(\pi)
\]
for every $\pi \in S_n$.

This implies the following convolution relation.

Corollary 3.21. With the hypotheses of Theorem 3.20,
\[
C_{(X_1B_1,\ldots,X_nB_n)} = C_X * C_B.
\]

Proof.
\[
C_{(X_1B_1,\ldots,X_nB_n)} = \mathbb{E}\left[\mathrm{Tr}(B_1X_1, \ldots, B_nX_n)\right] * \mathrm{Wg}^U
= \mathbb{E}[\mathrm{Tr}(B)] * C_X * \mathrm{Wg}^U
= C_X * C_B.
\]


Chapter 4

Conclusion and Future Work

Throughout this report, Theorem 1.5 was used repeatedly. One of its applications was to derive a formula for expressions of the form (3.7):
\[
\mathbb{E}\left[UBU^*(bUBU^*)^{n-1}\right],
\]
where $n \ge 1$ and $U$, $b$, $B$ are as described.

We now briefly discuss why we would be interested in such expressions.

4.1 The Cauchy transform

Let $\mathbb{C}^+ = \{z \in \mathbb{C} \mid \mathrm{Im}(z) > 0\}$ denote the complex upper half plane and $\mathbb{C}^- = \{z \in \mathbb{C} \mid \mathrm{Im}(z) < 0\}$ the complex lower half plane. Let $\mu$ be a probability measure on $\mathbb{R}$ and, for $z \notin \mathbb{R}$, let
\[
G_\mu(z) = \int_{\mathbb{R}} \frac{1}{z - t}\, d\mu(t).
\]
$G_\mu$ is the Cauchy transform of the measure $\mu$; it is analytic on $\mathbb{C}^+$ with range contained in $\mathbb{C}^-$.

Suppose $\mu$ is compactly supported and denote $r := \sup\{|t| \mid t \in \mathrm{supp}(\mu)\}$. Then for $|z| > r$ we can expand
\[
\frac{1}{z - t} = \frac{1}{z}\left(\frac{1}{1 - t/z}\right) = \frac{1}{z}\sum_{n=0}^{\infty} \frac{t^n}{z^n} = \sum_{n=0}^{\infty} \frac{t^n}{z^{n+1}}, \qquad \forall\, t \in \mathrm{supp}(\mu).
\]


The convergence is uniform in $t \in \mathrm{supp}(\mu)$, therefore we can integrate the series term by term against $d\mu(t)$ and obtain
\[
G_\mu(z) = \sum_{n=0}^{\infty} \frac{\alpha_n}{z^{n+1}}, \qquad |z| > r,
\]
where $\alpha_n := \int_{\mathbb{R}} t^n\, d\mu(t)$ is the $n$th moment of $\mu$ for $n \ge 0$.

Consider now the formal power series
\[
M(z) := 1 + \sum_{n=1}^{\infty} \alpha_n z^n.
\]
$M(z)$ is the moment-generating series of $\mu$, and it is closely related to the Cauchy transform via the relation
\[
G_\mu(z) = \frac{1}{z}\, M\!\left(\frac{1}{z}\right).
\]
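As a worked example of this relation (the measure is our own choice, not one from the text), take $\mu = \frac{1}{2}(\delta_{-1} + \delta_{+1})$, whose moments are $\alpha_n = 1$ for even $n$ and $0$ for odd $n$; then $G_\mu(z) = z/(z^2 - 1)$, and the identity can be confirmed numerically for $|z| > r = 1$.

```python
# mu = (delta_{-1} + delta_{+1})/2: the defining integral gives
# G(z) = (1/2)(1/(z-1) + 1/(z+1)) = z/(z^2 - 1), while the moment series
# gives M(1/z) = sum_k z^{-2k}; check G(z) = (1/z) M(1/z) at one point.
z = 3.0 + 1.0j                                     # |z| > 1, z not real
G_exact = 0.5 * (1.0/(z - 1) + 1.0/(z + 1))        # integral done exactly
M_at_1_over_z = sum(z**(-2*k) for k in range(200)) # truncated moment series
G_series = (1.0/z) * M_at_1_over_z
```

The truncation error is of order $|z|^{-400}$ and therefore invisible at machine precision.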

It is advantageous to consider the Cauchy transform because it has nice analytic properties and provides an effective way of recovering the corresponding probability measure $\mu$ concretely via the Stieltjes inversion formula (see Remark 2.20 in [18] for more details).

4.2 The operator-valued Cauchy transform

In this section, we will use some concepts from free probability theory. For a brief introduction to the subject, see Appendix A or [18].

First, note that if $X$ is a (classical) random variable distributed according to $\mu$, then the (classical) Cauchy transform can be expressed as
\[
G_\mu(z) = \mathbb{E}\left[(z - X)^{-1}\right], \qquad z \notin \mathbb{R},
\]
where $\mathbb{E}$ denotes expectation with respect to $\mu$.

Now, if $X$ is a self-adjoint operator on a Hilbert space, then the operator-valued analytic function $R_X(z) = (zI - X)^{-1}$, where $I$ denotes the identity operator, is called the resolvent of the operator $X$. Following Voiculescu [22], a more general notion of the resolvent of $X$ can be defined as follows: an arbitrary operator $b$ on the same Hilbert space as $X$ can be written as
\[
b = \mathrm{Re}(b) + i\,\mathrm{Im}(b),
\]
where $\mathrm{Re}(b) = \frac{b + b^*}{2}$ and $\mathrm{Im}(b) = \frac{b - b^*}{2i}$. It has been noted by Voiculescu in [22] that
\[
R_X(b) = (b - X)^{-1}, \qquad \mathrm{Im}(b) > 0,
\]
is an analytic map such that $\mathrm{Im}\left(R_X(b)\right) < 0$. Here an operator is said to be positive if its spectrum consists of non-negative real numbers.

Let $\mathcal{A}$ be a von Neumann algebra with a normal faithful trace $\tau$, and let $\mathcal{B}$ be a von Neumann subalgebra of $\mathcal{A}$. A conditional expectation $\tau(\cdot\,|\mathcal{B})$ is a weakly continuous linear map $\mathcal{A} \to \mathcal{B}$ satisfying the following properties:

1. $\tau(1_{\mathcal{A}}|\mathcal{B}) = 1$, and

2. if $b_1, b_2 \in \mathcal{B}$ and $a \in \mathcal{A}$, then $\tau(b_1ab_2|\mathcal{B}) = b_1\tau(a|\mathcal{B})b_2$.

In the paper [2], the author showed that if two self-adjoint operators $a, b \in \mathcal{A}$ are free (see, e.g., Appendix A or [18] for definitions), then the following identity holds for their resolvents:
\[
\tau\left(R_{a+b}(z)\,|\,a\right) = R_a(\omega(z)).
\]
In other words,
\[
\tau\left((z1_{\mathcal{A}} - (a+b))^{-1}\,|\,a\right) = (\omega(z)1_{\mathcal{A}} - a)^{-1}, \qquad (4.1)
\]
where $\tau(\cdot\,|a)$ denotes the conditional expectation onto the subalgebra generated by $a$, and $\omega(z)$ is an analytic function from $\mathbb{C}^+$ to $\mathbb{C}^+$.

Let $A_N$ and $B_N$ be $N \times N$ Hermitian deterministic matrices, let $U_N$ be an $N \times N$ Haar-distributed unitary random matrix, and put $X_N = A_N + U_NB_NU_N^*$. The resolvent of $X_N$ is $R_{X_N}(z) = (zI_N - X_N)^{-1}$, and the resolvents of $A_N$ and $B_N$ are defined similarly.

It is known that $A_N$ and $U_NB_NU_N^*$ are asymptotically free as $N \to \infty$ (see, e.g., Appendix A or [18]). Therefore we wish to investigate whether (4.1) holds for random matrices in an approximate sense.

Note that in $X_N = A_N + U_NB_NU_N^*$, the $A_N$ part is deterministic and the $U_NB_NU_N^*$ part is random. Therefore projecting $X_N$ onto the algebra generated by $A_N$ simply means taking the expectation in the random part, and thus
\[
\mathbb{E}\left[(zI_N - (A_N + U_NB_NU_N^*))^{-1}\right] = (\omega_N(z) - A_N)^{-1} \qquad (4.2)
\]
as $N \to \infty$, where $\mathbb{E}$ denotes expectation with respect to the Haar measure and $\omega_N(z) \in M_N(\mathbb{C})$ is an $N \times N$ deterministic matrix.

The right-hand side of (4.2) is equal to $R_{A_N}(\omega_N(z))$, which can be considered as a resolvent of $A_N$; note that (4.2) holds only in the limit $N \to \infty$.


The inception of this project was motivated by the following question: for a given $N$, how far is $\mathbb{E}\left[(zI_N - (A_N + U_NB_NU_N^*))^{-1}\right]$ from being a resolvent of $A_N$?

Observe that if $b_N = zI_N - A_N$ and $z \in \mathbb{C}\setminus\mathbb{R}$, then $b_N$ is invertible by the spectral theorem (since $A_N$ is Hermitian). Moreover, if $|z|$ is large enough, then
\[
\|b_N^{-1}\| = \|(zI_N - A_N)^{-1}\| < \frac{1}{\|B_N\|}.
\]

Therefore
\[
\mathbb{E}\left[(zI_N - (A_N + U_NB_NU_N^*))^{-1}\right] = \mathbb{E}\left[(b_N - U_NB_NU_N^*)^{-1}\right]
\]
can be considered as an operator-valued Cauchy transform, and it has a power series expansion
\[
\begin{aligned}
\mathbb{E}\left[(b_N - U_NB_NU_N^*)^{-1}\right]
&= \mathbb{E}\left[\left(b_N(I_N - b_N^{-1}U_NB_NU_N^*)\right)^{-1}\right] && \text{(since $b_N$ is invertible)} \\
&= \mathbb{E}\left[(I_N - b_N^{-1}U_NB_NU_N^*)^{-1}b_N^{-1}\right] \\
&= \mathbb{E}\left[(I_N - b_N^{-1}U_NB_NU_N^*)^{-1}\right]b_N^{-1} && \text{(since $b_N^{-1}$ is deterministic)} \\
&= \mathbb{E}\left[\sum_{n=0}^{\infty}(b_N^{-1}U_NB_NU_N^*)^n\right]b_N^{-1} && \left(\text{since } \|b_N^{-1}\| < \frac{1}{\|B_N\|} = \frac{1}{\|U_NB_NU_N^*\|}\right) \\
&= \sum_{n=0}^{\infty}\mathbb{E}\left[(b_N^{-1}U_NB_NU_N^*)^n\right]b_N^{-1} \\
&= b_N^{-1} + \sum_{n=1}^{\infty} b_N^{-1}\,\underbrace{\mathbb{E}\left[U_NB_NU_N^*(b_N^{-1}U_NB_NU_N^*)^{n-1}\right]}_{(3.7)}\,b_N^{-1},
\end{aligned}
\]
which is why we are interested in expressions of the form (3.7).
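The Neumann-series rearrangement above is an exact algebraic identity for each fixed $U$ (the expectation only enters afterwards), so it can be checked on a single Haar sample; the sketch below samples $U$ via QR of a complex Ginibre matrix (a standard recipe, not from the text) and compares a truncated series with the direct inverse.

```python
# Single-sample check of (b - UBU*)^{-1} = b^{-1} + sum_n b^{-1}(UBU* b^{-1})^n,
# with ||b^{-1}|| ||B|| < 1 so the series converges fast.
import numpy as np

rng = np.random.default_rng(1)
N = 5
# Haar unitary: QR of a complex Ginibre matrix, with the phase correction
Z = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
Q, R = np.linalg.qr(Z)
U = Q @ np.diag(R.diagonal() / np.abs(R.diagonal()))

B = np.diag(rng.uniform(0, 1, N))        # Hermitian with ||B|| <= 1
b = 10.0 * np.eye(N)                     # plays the role of z I - A, |z| large
UBU = U @ B @ U.conj().T

binv = np.linalg.inv(b)
direct = np.linalg.inv(b - UBU)
series = binv.copy()
term = binv
for _ in range(60):
    term = term @ UBU @ binv             # accumulates b^{-1}(UBU* b^{-1})^n
    series = series + term
```

With $\|b^{-1}\|\,\|B\| \le 0.1$, sixty terms leave a truncation error far below machine precision.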

Alternatively, from (4.2), we can define
\[
\omega_N(z) := \mathbb{E}\left[(zI_N - (A_N + U_NB_NU_N^*))^{-1}\right]^{-1} + A_N.
\]

It is known that (see [1] for more details):

1. $\omega_N(z)$ belongs to the algebra generated by $I_N$ and $A_N$;

2. $\omega_N(z) \to \omega(z)I_N$ as $N \to \infty$, where $\omega(z)$ is the function from (4.1);

3. $\omega_N(z)$ tends, in a certain sense, to a scalar multiple of $I_N$.

Therefore we can study the distance from $\omega_N(z)$ to $\mathbb{C}[I_N]$, in other words,
\[
\min_{\alpha \in \mathbb{C}} \|\alpha I_N - \omega_N(z)\|. \qquad (4.3)
\]

We observe that, as discussed above, for $|z|$ large enough and $z \in \mathbb{C}\setminus\mathbb{R}$, the matrix $zI_N - A_N$ is invertible and $\|(zI_N - A_N)^{-1}\|$ is small.


Therefore letting $b_N = (zI_N - A_N)^{-1}$ instead provides us with another power series expansion
\[
\mathbb{E}\left[(zI_N - (A_N + U_NB_NU_N^*))^{-1}\right]
= \sum_{n=0}^{\infty} (zI_N - A_N)^{-1}\,\mathbb{E}\left[\left(U_NB_NU_N^*(zI_N - A_N)^{-1}\right)^n\right]
= \sum_{n=0}^{\infty} b_N\,\mathbb{E}\left[(U_NB_NU_N^*b_N)^n\right].
\]

In the future, we would like to have a better understanding of the matrix-valued expectations (also called matricial moments) $\mathbb{E}\left[(U_NB_NU_N^*b_N)^n\right]$; hopefully this will provide a good enough understanding of $\mathbb{E}\left[(b_N^{-1} - U_NB_NU_N^*)^{-1}\right]$ to make good estimates of (4.3).


Appendix A

Free Probability Theory

In this appendix, we briefly review some of the definitions and notation from free probability theory that are relevant to this report.

Definition A.1. A non-commutative probability space $(\mathcal{A}, \varphi)$ consists of a unital algebra $\mathcal{A}$ over $\mathbb{C}$ and a unital linear functional
\[
\varphi : \mathcal{A} \to \mathbb{C}, \qquad \varphi(1_{\mathcal{A}}) = 1.
\]
The elements $a \in \mathcal{A}$ are called non-commutative random variables in $(\mathcal{A}, \varphi)$.

Definition A.2. Let $(\mathcal{A}, \varphi)$ be a non-commutative probability space and let $I$ be a fixed index set.

1. For each $i \in I$, let $\mathcal{A}_i \subset \mathcal{A}$ be a unital subalgebra. We say that $(\mathcal{A}_i)_{i \in I}$ are freely independent with respect to $\varphi$ (or simply free) if
\[
\varphi(a_1 \cdots a_k) = 0
\]
whenever we have the following:

• $k$ is a positive integer;

• $a_j \in \mathcal{A}_{i(j)}$ (with $i(j) \in I$) for all $j = 1, \ldots, k$;

• $\varphi(a_j) = 0$ for all $j = 1, \ldots, k$;

• neighboring elements are from different subalgebras, i.e. $i(1) \neq i(2)$, $i(2) \neq i(3)$, $\ldots$, $i(k-1) \neq i(k)$.

2. If the unital algebras $\mathcal{A}_i := \mathrm{alg}(1_{\mathcal{A}}, a_i)$ generated by elements $a_i \in \mathcal{A}$ ($i \in I$) are freely independent, then $(a_i)_{i \in I}$ are called freely independent random variables.


Definition A.3. For each $N \in \mathbb{N}$, let $(\mathcal{A}_N, \varphi_N)$ be a non-commutative probability space. Let $I$ be an index set and consider, for each $i \in I$ and each $N \in \mathbb{N}$, random variables $a_i^{(N)} \in \mathcal{A}_N$. Let $I = I_1 \cup \cdots \cup I_m$ be a decomposition of $I$ into $m$ disjoint subsets. We say that $\{a_i^{(N)} \mid i \in I_1\}, \ldots, \{a_i^{(N)} \mid i \in I_m\}$ are asymptotically free (as $N \to \infty$) if $\left(a_i^{(N)}\right)_{i \in I}$ converges in distribution towards $(a_i)_{i \in I}$ for some random variables $a_i \in \mathcal{A}$ ($i \in I$) in some non-commutative probability space $(\mathcal{A}, \varphi)$, and if the limits $\{a_i \mid i \in I_1\}, \ldots, \{a_i \mid i \in I_m\}$ are free in $(\mathcal{A}, \varphi)$.

In particular, consider two sequences $\{A_N\}_{N \in \mathbb{N}}$ and $\{B_N\}_{N \in \mathbb{N}}$ of $N \times N$ random matrices such that for each $N \in \mathbb{N}$, $A_N$ and $B_N$ are defined on the same probability space $(\Omega_N, P_N)$. Denote by $\mathbb{E}_N$ the expectation with respect to $P_N$. We say that $A_N$ and $B_N$ are asymptotically free if $A_N, B_N \in (M_N(\Omega_N), \mathbb{E}_N[\mathrm{tr}(\cdot)])$ converge in distribution to some elements $a, b$ (denoted $A_N, B_N \xrightarrow{\mathrm{distr}} a, b$) such that $a, b$ are free in some non-commutative probability space $(\mathcal{A}, \varphi)$.

The following is a remarkable theorem by Voiculescu.

Theorem A.4. Let $\{A_N\}_{N \in \mathbb{N}}$ and $\{B_N\}_{N \in \mathbb{N}}$ be two sequences of $N \times N$ deterministic matrices such that $A_N$ converges in distribution (with respect to $\mathrm{tr}$) as $N \to \infty$, and such that $B_N$ converges in distribution (with respect to $\mathrm{tr}$) as $N \to \infty$. Furthermore, let $\{U_N\}_{N \in \mathbb{N}}$ be a sequence of $N \times N$ Haar-distributed unitary random matrices. Then $A_N$ and $U_NB_NU_N^*$ are asymptotically free.
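A Monte Carlo illustration of the theorem (our own setup; the tolerance is deliberately loose since this is a random check): for $A_N = B_N = \mathrm{diag}(\pm 1)$ with trace zero, freeness of the limits $a, b$ gives $\varphi(abab) = \varphi(a^2)\varphi(b)^2 + \varphi(a)^2\varphi(b^2) - \varphi(a)^2\varphi(b)^2 = 0$, so $\mathrm{tr}\left(A_N\,U_NB_NU_N^*\,A_N\,U_NB_NU_N^*\right)$ should be near $0$ for large $N$.

```python
# Monte Carlo check that tr(A UBU* A UBU*) is close to the free value 0
# for A = B = diag(+-1) (centered, variance 1) and Haar unitary U.
import numpy as np

rng = np.random.default_rng(3)
N = 400
A = np.diag([1.0] * (N // 2) + [-1.0] * (N // 2))   # tr A = 0, tr A^2 = 1
B = A.copy()

def haar_unitary(N, rng):
    """Haar unitary via QR of a complex Ginibre matrix (standard recipe)."""
    Z = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    return Q * (R.diagonal() / np.abs(R.diagonal()))

vals = []
for _ in range(10):
    U = haar_unitary(N, rng)
    C = U @ B @ U.conj().T
    vals.append((np.trace(A @ C @ A @ C) / N).real)
est = float(np.mean(vals))   # freeness predicts 0 in the limit
```

At $N = 400$ the fluctuations of each sample are of order $1/N$, so the average sits very close to the predicted value $0$.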


Appendix B

Tables of Values

We finish this report by giving some values of the Weingarten functions for the unitary group $U_N$ and the orthogonal group $O_N$.

The table for the unitary group, taken from [5], contains values of the unitary Weingarten functions up to $n = 3$, and the table for the orthogonal group, taken from [9], contains values of the orthogonal Weingarten functions up to $n = 4$. It is also mentioned in [9] that in order to obtain the corresponding results for the symplectic group, one should replace $N$ by $-N$ in the formulas for the orthogonal group. Tables of values of the orthogonal Weingarten functions up to $n = 6$ can also be found in [7].

B.1 Unitary Weingarten functions

\[
\begin{aligned}
\mathrm{Wg}^U([1], N) &= \frac{1}{N}, \\
\mathrm{Wg}^U([1,1], N) &= \frac{1}{N^2 - 1}, \\
\mathrm{Wg}^U([2], N) &= \frac{-1}{N(N^2 - 1)}, \\
\mathrm{Wg}^U([1,1,1], N) &= \frac{N^2 - 2}{N(N^2 - 1)(N^2 - 4)}, \\
\mathrm{Wg}^U([2,1], N) &= \frac{-1}{(N^2 - 1)(N^2 - 4)}, \\
\mathrm{Wg}^U([3], N) &= \frac{2}{N(N^2 - 1)(N^2 - 4)}.
\end{aligned}
\]
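These values can be checked numerically: by Remark 3.14 (equivalently, the results of [5]), $\mathrm{Wg}^U$ is the inverse in $\mathbb{C}[S_n]$ of $\sigma \mapsto N^{\#(\sigma)}$, so inverting the $|S_n| \times |S_n|$ Gram matrix $G(\sigma, \tau) = N^{\#(\sigma^{-1}\tau)}$ recovers the table. The sketch below (the helper names are ours) does this for $n = 3$.

```python
# Recover Wg^U(., N) for n = 3 by inverting the Gram matrix
# G(sigma, tau) = N^{#(sigma^{-1} tau)} over S_3, and compare with the table.
from itertools import permutations
import numpy as np

def cycle_count(perm):
    """Number of cycles of a permutation given as a tuple of 0-indexed images."""
    seen, cycles = set(), 0
    for i in range(len(perm)):
        if i not in seen:
            cycles += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return cycles

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def invert(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def weingarten_table(n, N):
    """dict: permutation -> Wg^U(permutation, N), via matrix inversion."""
    perms = list(permutations(range(n)))
    G = np.array([[float(N) ** cycle_count(compose(invert(p), q))
                   for q in perms] for p in perms])
    W = np.linalg.inv(G)
    e = perms.index(tuple(range(n)))
    return {p: W[e][i] for i, p in enumerate(perms)}   # row of e gives Wg

N = 7.0
wg = weingarten_table(3, N)
wg_111 = wg[(0, 1, 2)]     # identity,        cycle type [1,1,1]
wg_21  = wg[(1, 0, 2)]     # a transposition, cycle type [2,1]
wg_3   = wg[(1, 2, 0)]     # a 3-cycle,       cycle type [3]
```

The three recovered values match the closed forms above to machine precision (here at $N = 7$).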


B.2 Orthogonal Weingarten functions

\[
\begin{aligned}
\mathrm{Wg}^O([1], N) &= \frac{1}{N}, \\
\mathrm{Wg}^O([1,1], N) &= \frac{N + 1}{N(N-1)(N+2)}, \\
\mathrm{Wg}^O([2], N) &= \frac{-1}{N(N-1)(N+2)}, \\
\mathrm{Wg}^O([1,1,1], N) &= \frac{N^2 + 3N - 2}{N(N-1)(N-2)(N+2)(N+4)}, \\
\mathrm{Wg}^O([2,1], N) &= \frac{-1}{N(N-1)(N-2)(N+4)}, \\
\mathrm{Wg}^O([3], N) &= \frac{2}{N(N-1)(N-2)(N+2)(N+4)}, \\
\mathrm{Wg}^O([1,1,1,1], N) &= \frac{(N+3)(N^2 + 6N + 1)}{N(N-1)(N-3)(N+1)(N+2)(N+4)(N+6)}, \\
\mathrm{Wg}^O([2,1,1], N) &= \frac{-N^3 - 6N^2 - 3N + 6}{N(N-1)(N-2)(N-3)(N+1)(N+2)(N+4)(N+6)}, \\
\mathrm{Wg}^O([2,2], N) &= \frac{N^2 + 5N + 18}{N(N-1)(N-2)(N-3)(N+1)(N+2)(N+4)(N+6)}, \\
\mathrm{Wg}^O([3,1], N) &= \frac{2}{(N-1)(N-2)(N-3)(N+1)(N+2)(N+6)}, \\
\mathrm{Wg}^O([4], N) &= \frac{-5N - 6}{N(N-1)(N-2)(N-3)(N+1)(N+2)(N+4)(N+6)}.
\end{aligned}
\]
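A similar numerical check is possible in the orthogonal case, assuming the Gram-matrix description of $\mathrm{Wg}^O$ over pair partitions of $\{1, \ldots, 2n\}$ with entries $N^{\#\mathrm{loops}}$ (cf. [9]); the sketch below (helper names are ours) verifies the $n = 2$ rows of the table.

```python
# For n = 2 there are three pairings of {0,1,2,3}; invert the 3x3 Gram
# matrix N^{#loops(p,q)} and compare with Wg^O([1,1]) and Wg^O([2]).
import numpy as np

N = 9.0
pairings = [((0, 1), (2, 3)), ((0, 2), (1, 3)), ((0, 3), (1, 2))]

def loops(p, q):
    """Number of loops in the union of the two perfect matchings p and q."""
    pmatch, qmatch = {}, {}
    for x, y in p:
        pmatch[x], pmatch[y] = y, x
    for x, y in q:
        qmatch[x], qmatch[y] = y, x
    seen, count = set(), 0
    for v in range(4):
        if v in seen:
            continue
        count += 1
        w = v
        while w not in seen:
            seen.add(w)
            u = pmatch[w]       # follow the p-edge...
            seen.add(u)
            w = qmatch[u]       # ...then the q-edge
    return count

G = np.array([[N ** loops(p, q) for q in pairings] for p in pairings])
W = np.linalg.inv(G)
wg_11 = W[0][0]    # same pairing     -> "coset type" [1,1]
wg_2  = W[0][1]    # crossed pairings -> "coset type" [2]
```

The diagonal entry reproduces $(N+1)/(N(N-1)(N+2))$ and the off-diagonal entry $-1/(N(N-1)(N+2))$, in agreement with the table.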


Bibliography

[1] S. Belinschi, H. Bercovici, M. Capitaine, and M. Fevrier, Outliers in the spectrum of large deformed unitarily invariant models, preprint: http://arxiv.org/abs/1207.5443.

[2] P. Biane, Processes with free increments, Mathematische Zeitschrift 227(1) (1998), 143-174.

[3] M. Capitaine and M. Casalis, Cumulants for Random Matrices as Convolutions on the Symmetric Group, Probab. Theory Relat. Fields 136 (2006), 19-36.

[4] M. Capitaine and M. Casalis, Cumulants for Random Matrices as Convolutions on the Symmetric Group, II, J. Theor. Probab. 20 (2007), 505-533.

[5] B. Collins, Moments and cumulants of polynomial random variables on unitary groups, the Itzykson-Zuber integral, and free probability, Int. Math. Res. Not. 17 (2003), 953-982.

[6] B. Collins and C. Male, The strong asymptotic freeness of Haar and deterministic matrices, preprint: http://arxiv.org/abs/1105.4345.

[7] B. Collins and S. Matsumoto, On some properties of orthogonal Weingarten functions, J. Math. Phys. 50 (2009), no. 11, 113516.

[8] B. Collins, S. Matsumoto, and N. Saad, Integration of invariant matrices and application to statistics, preprint: http://arxiv.org/abs/1205.0956.

[9] B. Collins and P. Sniady, Integration with respect to the Haar measure on unitary, orthogonal and symplectic group, Comm. Math. Phys. 264 (2006), no. 3, 773-795.

[10] D. Dummit and R. Foote, Abstract Algebra, John Wiley & Sons Inc, 2003.

[11] W. Fulton and J. Harris, Representation Theory: A First Course, Springer-Verlag New York Inc, New York, 1991.

[12] R. Goodman and N. Wallach, Representations and Invariants of the Classical Groups, Cambridge University Press, Cambridge, 1998.

[13] A. A. Jucys, Symmetric polynomials and the center of the symmetric group ring, Rep. Mathematical Phys. 5 (1974), no. 1, 107-112.

[14] V. Kargin, Subordination of the resolvent for a sum of random matrices, preprint: http://arxiv.org/abs/1109.5818.

[15] S. Matsumoto and J. Novak, Jucys-Murphy elements and unitary matrix integrals, Int. Math. Res. Not. 2 (2012), 362-397.

[16] J. A. Mingo, M. Popa, and E. I. Redelmeier, Real second order freeness and Haar orthogonal matrices, J. Math. Phys. 54, 051701 (2013).

[17] J. A. Mingo, P. Sniady, and R. Speicher, Second order freeness and fluctuations of random matrices: II. Unitary random matrices, Adv. in Math. 209 (2007), 212-240.

[18] A. Nica and R. Speicher, Lectures on the Combinatorics of Free Probability, volume 335 of London Mathematical Society Lecture Note Series, Cambridge University Press, Cambridge, 2006.

[19] J. Novak, Jucys-Murphy elements and the Weingarten function, Banach Cent. Publ. 89 (2010), 231-235.

[20] A. Okounkov and A. Vershik, A new approach to the representation theory of the symmetric groups, Selecta Mathematica, New Series 2 (1996), no. 4, 581-605.

[21] R. P. Stanley, Enumerative Combinatorics: Volume 2, Cambridge Studies in Advanced Mathematics, Cambridge University Press, Cambridge, 2001.

[22] D. V. Voiculescu, The coalgebra of the free difference quotient and free probability, Int. Math. Res. Not. 2 (2000), 79-106.

[23] D. Weingarten, Asymptotic behavior of group integrals in the limit of infinite rank, J. Math. Phys. 19 (1978), 999-1001.

[24] P. Zinn-Justin, Jucys-Murphy elements and Weingarten matrices, Lett. Math. Phys. 91 (2010), 119-127.
