
Representations of Finite Groups

Lecture notes

October 6, 2015

Contents

Preface

1 Representations
  1.1 Definition and examples
  1.2 Constructions
  1.3 Decomposition into irreducible representations
  1.4 Schur's Lemma

2 Characters
  2.1 Basic properties
  2.2 Schur's orthogonality relations
  2.3 Consequences of the orthogonality relations
  2.4 Character tables

3 The symmetric group
  3.1 Idempotents in the group ring
  3.2 Young diagrams
  3.3 Young tableaus and primitive idempotents

4 Restriction and induction
  4.1 Modules over a ring
  4.2 Induced representations
  4.3 Characters of induced representations
  4.4 Mackey's irreducibility criterion

5 Category theory
  5.1 Categories
  5.2 Adjunctions
  5.3 Tannaka duality

A Useful linear algebra
  A.1 Linear maps
  A.2 Direct sums
  A.3 Tensor products
  A.4 Inner products

Index


Preface

These notes are based on an 8-lecture course taught at Radboud University Nijmegen by Prof. Ieke Moerdijk in the spring of 2015. The course treated the basics of the theory of representations of finite groups, in particular the correspondence between representations and characters, the representation theory of the symmetric group, and induced representations. Since the topic involves a lot of linear algebra, the notes include an appendix that summarizes some important concepts from linear algebra. Furthermore, a section on category theory is included, to put the theory of representations in a broader perspective.

A first draft of these lecture notes was written by Jan Schoone. The notes have been polished and expanded by Frank Roumen.


1 Representations

1.1 Definition and examples

Representation theory is about studying a group by looking at the ways in which it acts on vector spaces. Since vector spaces and linear maps, or matrices, are well understood, this often makes analysing the group easier. Many concepts from linear algebra have analogues in the theory of representations. For a review of some useful linear algebra, see Appendix A.

A group representation is similar to an action of the group on a set, but we replace the set by a vector space and require the action to be linear. Recall that there are two equivalent definitions of a group action: it can be defined as a map G × X → X with certain properties, or as a group homomorphism G → S_X. Here S_X is the symmetric group on X, which consists of all bijections from X to itself. Similarly there are three versions of the definition of a group representation. In the following, let G be a group and V a vector space, over either the real or the complex number field.

Version 1. A representation is an action of G on V by linear maps. This means that it is a map

   G × V → V,   (g, v) ↦ g · v

with the following properties:

   • g · (−) : V → V is linear, which means that g · (v + w) = g · v + g · w and g · (λv) = λg · v for all v, w ∈ V and λ in R or C.

   • e · v = v and (gh) · v = g · (h · v).

Version 2. A representation is a group homomorphism ρ : G → GL(V). This is the same as Version 1, because such a group homomorphism and an action determine each other via ρ(g)(v) = g · v. The homomorphism property translates into the requirements for an action under this correspondence.

Version 3. A representation is a group homomorphism ρ : G → GLn(R) or G → GLn(C). If V is finite-dimensional with dimension n, then such a representation can be obtained from Version 2. Pick a basis {e_1, . . . , e_n} for V, and express ρ(g) in terms of this basis as an n × n-matrix. This version is not exactly equivalent to the other two versions, because here we had to choose a basis. If we choose another basis {e′_1, . . . , e′_n}, then we obtain different matrices ρ′(g). However, there is a connection between the matrices ρ(g) and ρ′(g): there exists one single invertible matrix A such that for all g ∈ G we have ρ(g) = Aρ′(g)A^(-1).

Convention. In this course we will mainly be interested in representations of finite groups on finite-dimensional vector spaces. Therefore, we will assume that all groups we encounter are finite and all vector spaces are finite-dimensional, unless otherwise stated. Also, the theory of representations is often easier over the complex numbers than over the real numbers, so we assume that all vector spaces are over C unless otherwise stated.


Examples 1.1.

1. Recall that the cyclic group Cn consists of the elements e, r, r^2, . . . , r^(n−1), with the relation r^n = e. There is a representation of this group on the vector space C^2, given by

      ρ : Cn → GL2(C),   ρ(r^k) = [ cos(2πk/n)  −sin(2πk/n) ]
                                  [ sin(2πk/n)   cos(2πk/n) ]

   Since this assignment satisfies ρ(e) = I and ρ(gh) = ρ(g)ρ(h), it is indeed a representation in Version 3. It is called the standard representation of Cn.

   It is also possible to express the same representation using Version 1. In this case we get the following action of Cn on C^2:

      r^k · [ x ] = [ cos(2πk/n)  −sin(2πk/n) ] [ x ]
            [ y ]   [ sin(2πk/n)   cos(2πk/n) ] [ y ]

   That is, r^k acts on a vector by rotating it over an angle 2πk/n. This coincides with the geometric interpretation of the group Cn as the rotations over these angles for k = 0, 1, . . . , n − 1.

2. In a similar way we can define a standard representation of the dihedral group Dn. This group is generated by a rotation r and a reflection s, under the relations r^n = e, s^2 = e, and rs = sr^(-1). Geometrically, r represents rotation over the angle 2π/n and s represents the reflection in the x-axis. The standard representation is defined by mapping r and s to the matrices corresponding to these geometrical operations. Precisely, this representation is defined on generators by

      ρ : Dn → GL2(C)

      r ↦ [ cos(2π/n)  −sin(2π/n) ]        s ↦ [ 1   0 ]
          [ sin(2π/n)   cos(2π/n) ]            [ 0  −1 ]

   This gives rise to a well-defined representation, because the matrices respect the relations between generators: we have ρ(r)^n = I, ρ(s)^2 = I, and ρ(r)ρ(s) = ρ(s)ρ(r)^(-1), as can be easily verified by matrix multiplications.

3. Many groups can be regarded as groups of symmetries of some geometrical object embedded in a vector space V. Then the group elements are certain linear transformations on V. For example, the tetrahedron group consists of all linear transformations on R^3 that map a tetrahedron placed symmetrically around the origin onto itself. In this case the group G of symmetries is a subgroup of GL(V). The inclusion G → GL(V) is then automatically a group homomorphism and hence a representation of G.


4. There is a representation of the symmetric group Sn on the n-dimensional space C^n. Write the standard basis of C^n as e_1, . . . , e_n, and define the representation ρ on basis vectors via ρ(σ)(e_i) = e_σ(i). That is, a permutation in Sn acts on a vector by permuting the basis vectors. This is called the standard representation of the symmetric group Sn.

5. For any group G there is a trivial representation on the vector space C. This is the homomorphism ρ(g) = I that sends every group element to the identity matrix. In terms of actions, it is given by g · x = x for all g ∈ G and x ∈ C. Actually there is a trivial representation of G on any vector space V, defined by g · v = v, but the trivial representation on C is encountered most often.

6. For any group G we can also define a special representation, called the regular representation of G. The underlying vector space is C[G], which is the vector space generated by the basis {e_g | g ∈ G}. An arbitrary element of C[G] looks like ∑_{g∈G} λ_g e_g for certain scalars λ_g ∈ C. The group G acts on the basis vectors by g · e_h = e_{gh}, and this action on basis vectors extends uniquely to a representation of G on C[G]. (A concrete instance is worked out right after this list.)

7. Let X be a set carrying an action of the group G. This induces a representation on the vector space C[X] with basis vectors e_x for x ∈ X. The representation acts on these basis vectors as g · e_x = e_{g·x}. Two of the above examples are special cases of this construction:

   • Any group G acts on itself by multiplication on the left. This action gives rise to the regular representation.

   • The group Sn acts on the set {1, . . . , n} by permuting the numbers. The resulting representation is the standard representation of Sn on C[{1, . . . , n}] ≅ C^n.
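To make Example 6 concrete, take G = C2 = {e, r}. Then C[C2] is two-dimensional with basis {e_e, e_r}, and r acts by r · e_e = e_r and r · e_r = e_{r^2} = e_e, so it swaps the two basis vectors. With respect to this basis the regular representation of C2 is therefore given by the matrices

   ρ(e) = [ 1  0 ]        ρ(r) = [ 0  1 ]
          [ 0  1 ]               [ 1  0 ]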

We will often denote a representation ρ : G → GL(V) using the notation (V, ρ), leaving the group implicit. If no confusion is possible, we sometimes write just V instead of (V, ρ). The idea behind this notation is that a representation in Version 1 can be regarded as a vector space with a group action. In algebra it is common to omit the extra structure from the notation and focus on the underlying set. For example, a group (G, ·) is often just written as G. Similarly, the group action in a representation is an additional structure on the underlying vector space, so we abbreviate (V, ρ) to V.

When working with maps between algebraic structures, the most interesting maps are the functions that preserve all structure. For groups, these are the homomorphisms, and for vector spaces these are linear maps. Since we regard representations as vector spaces with the additional structure of a group action, the natural maps between representations are linear maps that respect the group action. We will only consider maps between representations of the same group.

Definition 1.2. Let (V, ρ) and (W, σ) be representations of one group G. An intertwiner from V to W is a linear map φ : V → W for which φ(g · v) = g · φ(v). If φ is a bijection, then it is called an equivalence or isomorphism of representations. In this case the representations (V, ρ) and (W, σ) are said to be equivalent.

The above definition is in line with Version 1 of the definition of a representation. We can rewrite it to obtain an equivalent definition in Version 2. The condition for an intertwiner says that φ(ρ(g)(v)) = σ(g)(φ(v)), so an intertwiner can also be defined as a linear map φ : V → W such that φ ◦ ρ(g) = σ(g) ◦ φ for all g ∈ G. Replacing the linear maps with matrices gives a definition in Version 3: an intertwiner is a matrix A such that for all g we have Aρ(g) = σ(g)A.

The representations of a fixed group G, together with the intertwiners between them, form a mathematical structure called a category. This category is denoted Rep_G. In Chapter 5 we will say more about categories.

Example 1.3. The cyclic group Cn has a single generator r. The standard representation on C^2 is defined on this generator by

   ρ(r) = [ cos(2π/n)  −sin(2π/n) ]
          [ sin(2π/n)   cos(2π/n) ]

Define another representation of Cn by

   σ(r) = [ e^(2πi/n)       0       ]
          [     0       e^(−2πi/n)  ]

We claim that these two representations are equivalent. To see this, let A be the matrix

   A = [ 1   i ]
       [ 1  −i ]

To check that this matrix is an intertwiner from ρ to σ, we have to show that Aρ(g) = σ(g)A for all g ∈ Cn. Since Cn is generated by r, it suffices to check this for g = r. A quick calculation gives

   Aρ(r) = [ cos(2π/n) + i sin(2π/n)   −sin(2π/n) + i cos(2π/n) ]
           [ cos(2π/n) − i sin(2π/n)   −sin(2π/n) − i cos(2π/n) ]

         = [ e^(2πi/n)      i e^(2πi/n)  ]
           [ e^(−2πi/n)   −i e^(−2πi/n)  ]  =  σ(r)A.

Furthermore the matrix A is invertible, therefore it is an equivalence from ρ to σ.
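A slightly different way to look at this example: since A is invertible, the intertwining relation says σ(r) = Aρ(r)A^(-1), so A simply diagonalizes the rotation matrix ρ(r), which has eigenvalues e^(2πi/n) and e^(−2πi/n) with eigenvectors (1, −i) and (1, i). Over C every rotation matrix can be diagonalized in this way, even though over R it has no eigenvectors (for n > 2).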

1.2 Constructions

There are several constructions of new representations from old ones. Most of them are derived from constructions on vector spaces.

Direct sum. Let V and W be two representations of the same group G. Their direct sum V ⊕ W as vector spaces can be made into a representation by letting G act pointwise:

   g · (v, w) = (g · v, g · w)

The properties of the direct sum of vector spaces from Appendix A.2 also hold for the direct sum of representations.

Tensor product. Using tensor products of vector spaces and linear maps as described in Appendix A.3, we can define the tensor product of two representations. Suppose that (V, ρ) and (W, σ) are representations of G. Then (V ⊗ W, ρ ⊗ σ) is again a representation, where the tensor product of ρ and σ is defined by

   (ρ ⊗ σ)(g) = ρ(g) ⊗ σ(g).

Since ρ(g) : V → V and σ(g) : W → W are both linear maps, their tensor product as linear maps indeed gives a linear map V ⊗ W → V ⊗ W, and it is easy to check that it yields a representation. It can also be written as an action:

   g · (v ⊗ w) = (g · v) ⊗ (g · w)

This defines the representation on all generating vectors v ⊗ w. It extends uniquely to an assignment on all of V ⊗ W.

The tensor product of representations has the same properties as the tensor product of vector spaces. We have the following analogue of Proposition A.8, replacing linear maps by intertwiners.

Proposition 1.4. There is a one-to-one correspondence between bilinear intertwiners ϕ : U × V → W and linear intertwiners ψ : U ⊗ V → W. On the generating vectors of the tensor product, this correspondence is given by ψ(u ⊗ v) = ϕ(u, v).

From this it follows that the tensor product of representations is also commutative and associative. It has the trivial representation C as tensor unit, and distributes over direct sums. The proofs are almost the same as in the case of vector spaces. The only required extra step is checking that the isomorphisms obtained in the proofs are intertwiners.

Dual representation. We will use dual spaces and maps (see Appendix A.1) to define dual representations. If (V, ρ) is a representation, then the dual representation is (V*, ρ*), where V* is the dual space of V and ρ*(g) = ρ(g^(-1))*. As an action, the dual representation is defined by (g · φ)(v) = φ(g^(-1) · v). This is again a representation because:

   ρ*(gh) = ρ((gh)^(-1))* = ρ(h^(-1)g^(-1))* = (ρ(h^(-1))ρ(g^(-1)))* = ρ(g^(-1))*ρ(h^(-1))* = ρ*(g)ρ*(h)

Note that we have to use g^(-1) instead of g in the definition, since otherwise ρ* need not preserve multiplication, and hence we need not obtain a representation.

Linear maps. If V and W are vector spaces, then the space of linear maps Hom(V, W) carries a representation of G whenever both V and W do. Suppose that (V, ρ) and (W, σ) are representations of G. The resulting representation on Hom(V, W) is denoted τ = Hom(ρ, σ), and given by

   τ(g)(ϕ) = σ(g) ◦ ϕ ◦ ρ(g^(-1))

for ϕ ∈ Hom(V, W). In other words, to obtain the action of g on a linear map ϕ : V → W, we first let g^(-1) act on an element of V, then we apply the function ϕ, and finally we act with g on the resulting vector in W. That is, g · ϕ is the composite

   V --ρ(g^(-1))--> V --ϕ--> W --σ(g)--> W

In terms of actions, this says that

(g · ϕ)(v) = g · ϕ(g−1 · v).

This construction is a generalization of the dual representation. If we take the trivial representation C for W, then the representation Hom(V, C) coincides with V*. This is the reason why we include ρ(g^(-1)) in the definition.
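As a quick check that τ is a homomorphism: for g, h ∈ G and ϕ ∈ Hom(V, W),

   τ(g)(τ(h)(ϕ)) = σ(g) ◦ (σ(h) ◦ ϕ ◦ ρ(h^(-1))) ◦ ρ(g^(-1)) = σ(gh) ◦ ϕ ◦ ρ((gh)^(-1)) = τ(gh)(ϕ),

using that ρ and σ are homomorphisms and that (gh)^(-1) = h^(-1) g^(-1).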

We have the following analogues of Propositions A.10 and A.11, replacing the vector spaces with representations.

Proposition 1.5. Let (U, ρ), (V, σ) and (W, τ) be representations of G.

1. The representations Hom(U ⊗ V, W) and Hom(U, Hom(V, W)) are equivalent.

2. The representations V* ⊗ W and Hom(V, W) are equivalent.

The proofs are the same as for vector spaces, but additionally we have to check that the isomorphisms obtained in the proofs are intertwiners.

Invariant subspaces. Let (V, ρ) be a representation of a group G. Sometimes it is useful to restrict ρ to a subspace of V to obtain a smaller representation. This only works if the subspace is closed under the action of the group. A subspace U ⊆ V is called invariant if g · u ∈ U for every u ∈ U. The representation (U, ρ|U) is then called a subrepresentation of (V, ρ).

Change of group. Given a group homomorphism ϕ : H → G, it is possible to transform representations of G into representations of H. If (V, ρ) is a representation of G, then ρ ◦ ϕ : H → GL(V) is a representation of H. This representation is written as ϕ∗(V, ρ). In Version 1, its action is given by h · v = ϕ(h) · v.

Change of vector space. Similarly, a group homomorphism ψ : GL(V) → GL(W) can be used to transform a representation (V, ρ) of G into the representation (W, ψ ◦ ρ) of G. This representation is denoted ψ∗(V, ρ). As a special case, if α : V → W is a linear isomorphism, then ψ(A) = α ◦ A ◦ α^(-1) defines a homomorphism GL(V) → GL(W).


1.3 Decomposition into irreducible representations

Given a finite group, we can obtain much information about the group by describing all of its representations, up to equivalence. Finding all representations is a hard problem in general. It helps if we can find certain “building blocks” for the representations, and then show that any representation can be decomposed into these building blocks. If the building blocks are easy to describe, this will also make the problem of finding all representations more achievable. We will use irreducible representations as building blocks.

Definition 1.6. A non-zero representation (V, ρ) is called irreducible if its only invariant subspaces are 0 and V itself.

Example 1.7. Consider the representation of C4 on C^2 determined by

   ρ(r) = [ 0  −1 ]
          [ 1   0 ]

We would like to know whether ρ is irreducible.

Suppose that U ⊆ C^2 is invariant and 0 ≠ U ≠ C^2. Then U must be one-dimensional, say that it is spanned by the single vector u = (u_1, u_2). Since U is invariant, we have ρ(r)(u) = λu for some λ ∈ C. This gives the equation

   [ 0  −1 ] [ u_1 ]  =  λ [ u_1 ]
   [ 1   0 ] [ u_2 ]       [ u_2 ]

This amounts to saying that λ is an eigenvalue of the matrix ρ(r). The eigenvalues are λ = ±i. The eigenvector (1, −i) belonging to the eigenvalue i spans an invariant subspace, so ρ is not irreducible.
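In fact this computation already exhibits a decomposition into irreducible subrepresentations (in the sense of Theorem 1.9 below): writing L1 for the line spanned by (1, −i) and L2 for the line spanned by (1, i), both lines are invariant, C^2 = L1 ⊕ L2, and r acts on L1 by multiplication by i and on L2 by multiplication by −i. Each of these lines is one-dimensional and therefore irreducible.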

Irreducible representations are suitable as building blocks, since they cannot be decomposed into smaller representations. We will first show that every representation of a group can be decomposed as a direct sum of irreducible representations. Later we will develop techniques to describe all irreducible representations. This will give a complete description of all representations of a given group.

To prove that every representation is a direct sum of irreducibles, we will use a common technique in the theory of representations, called averaging. It can be applied to several objects from linear algebra, to make them behave better with respect to the group action. We will describe the averaging technique for vectors, linear maps, and inner products.

Averaging vectors. Let (V, ρ) be a representation of G. We saw that the most interesting subspaces of V are the invariant subspaces. Similarly the most interesting vectors in V are the invariant vectors. A vector v ∈ V is said to be invariant if g · v = v for all g ∈ G. We shall write the collection of all invariant vectors as V^G.


If v ∈ V is not invariant, then we can turn it into an invariant vector by averaging it. The average of v is defined as

   Av(v) = (1/#G) ∑_{g∈G} g · v.

The vector Av(v) is indeed invariant, because for any h ∈ G we get

   h · Av(v) = (1/#G) ∑_{g∈G} h · (g · v) = (1/#G) ∑_{g∈G} (hg) · v = (1/#G) ∑_{g′∈G} g′ · v = Av(v).

For the third equality sign, we used the reindexing substitution g′ = hg. Since g runs over all group elements, so does g′, so the sums are indeed equal. Observe that averaging an invariant vector v gives back the vector itself, because we sum over #G copies of v and then divide by #G. Thus averaging is a projection map Av : V → V^G onto the invariant vectors.
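For example, in the standard representation of Sn on C^n from Examples 1.1, every coordinate of Av(v) is the average (v_1 + · · · + v_n)/n of the coordinates of v, so Av(v) is a multiple of (1, . . . , 1). The invariant vectors (C^n)^{Sn} are exactly the multiples of (1, . . . , 1).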

Averaging linear maps. Let (V, ρ) and (W, σ) be two representations of G, and let ϕ : V → W be a linear map between the underlying vector spaces. The map ϕ need not be an intertwiner, but we can always make it into an intertwiner by averaging:

   Av(ϕ) : V → W,   Av(ϕ)(v) = (1/#G) ∑_{g∈G} g · ϕ(g^(-1) · v)

In fact, averaging a linear map is a special case of averaging a vector. This is because a linear map ϕ : V → W is a vector in the space Hom(V, W), and we saw earlier that this vector space carries an action of G given by (g · ϕ)(v) = g · ϕ(g^(-1) · v). Therefore, if we take the average of ϕ as a vector in Hom(V, W), we get

   Av(ϕ)(v) = (1/#G) ∑_{g∈G} (g · ϕ)(v) = (1/#G) ∑_{g∈G} g · ϕ(g^(-1) · v),

and this is exactly the same as what we get by taking the average of ϕ as a linear map.

We know that averaging a vector gives an invariant vector. An invariant vector in Hom(V, W) is a map ϕ : V → W satisfying (g · ϕ)(v) = ϕ(v) for all v ∈ V, which happens if and only if g · ϕ(g^(-1) · v) = ϕ(v). This is in turn equivalent to ϕ(g^(-1) · v) = g^(-1) · ϕ(v) for all g, or equivalently ϕ(g · v) = g · ϕ(v). In other words, ϕ is an intertwiner. Thus we have proven that invariant vectors in Hom(V, W) are the same as intertwiners, and that averaging a linear map always gives an intertwiner. Furthermore, again because averaging a map is a special case of averaging a vector, the average of an intertwiner is the intertwiner itself.


Averaging inner products. Take a representation (V, ρ) of G and let 〈−,−〉 be an inner product on V. This inner product is said to be invariant whenever 〈g · v, g · w〉 = 〈v, w〉 for each g ∈ G and all v, w ∈ V. If the inner product is not invariant, then it can be easily converted into an invariant inner product 〈−,−〉′ by averaging:

   〈v, w〉′ = (1/#G) ∑_{g∈G} 〈g · v, g · w〉

This is again an inner product, because all properties required for an inner product are preserved by the construction: 〈−,−〉′ is sesquilinear because the original inner product 〈−,−〉 is sesquilinear, it is conjugate-symmetric because 〈−,−〉 is, and it is positive definite because each term 〈g · v, g · v〉 is positive for v ≠ 0. Furthermore, it is straightforward to show that the new inner product is invariant, and that the average of an invariant inner product is the inner product itself.

If V is a vector space equipped with an inner product 〈−,−〉, then every subspace U ⊆ V has an orthogonal complement U^⊥ with respect to the inner product. Recall that U^⊥ = {v ∈ V | 〈u, v〉 = 0 for all u ∈ U}. If (V, ρ) is a representation of G and U is an invariant subspace, then its orthogonal complement need not be invariant again. However, if the inner product is invariant, then the complement is in fact again invariant. This will be used to prove the following result.

Lemma 1.8. Let (V, ρ) be a representation of G, and let U ⊆ V be an invariant subspace. Then there exists an invariant subspace U′ of V such that V = U ⊕ U′.

Proof. Take any inner product on V and take its average to obtain an invariant inner product 〈−,−〉. Then let U′ be the orthogonal complement of U with respect to 〈−,−〉. To check that U′ is invariant, take any v ∈ U′. Then for each u ∈ U we have:

   〈u, g · v〉 = 〈g g^(-1) · u, g · v〉 = 〈g^(-1) · u, v〉 = 0

The second equality sign holds because the inner product is invariant, and the last one because v ∈ U′ = U^⊥ and g^(-1) · u ∈ U, as U is invariant. Therefore U′ is invariant.

Now we can prove the main result alluded to in the beginning of this section: every representation decomposes into irreducibles.

Theorem 1.9. Any representation (V, ρ) can be decomposed as a direct sum V = U_1 ⊕ · · · ⊕ U_n, where each U_i is an irreducible subrepresentation of V.

Proof. From the above lemma it follows that if V is not irreducible, then it can be written as V = V_1 ⊕ V_2, where the dimensions of V_1 and V_2 are strictly smaller than the dimension of V. Repeating this process with V_1 and V_2 et cetera, V can in the end be decomposed into a direct sum U_1 ⊕ · · · ⊕ U_n, in such a way that the process cannot be repeated anymore with any of the components. In this situation, each U_i is irreducible.

Note that this proof only works for finite-dimensional vector spaces.


1.4 Schur’s Lemma

We will now start developing ways to find all irreducible representations of a given group. First we will prove Schur's Lemma, which says that intertwiners between irreducible representations are of a very simple form. We will use this lemma to describe the irreducible representations of abelian groups.

Lemma 1.10 (Schur). Let (V, ρ) and (W, σ) be two irreducible representations of G.

1. Any intertwiner ϕ : V → W is either 0 or an equivalence of representations.

2. Any intertwiner ϕ : V → V is a scalar multiple of the identity: ϕ = λ id for some λ ∈ C.

Proof.

1. The kernel ker(ϕ) is an invariant subspace of V, because if ϕ(v) = 0, then ϕ(g · v) = g · ϕ(v) = g · 0 = 0. Since V is irreducible, ker(ϕ) is either 0 or V. If ker(ϕ) = V, then ϕ is the zero map and the assertion is proven. Otherwise ker(ϕ) = 0, so ϕ is injective. The image im(ϕ) is an invariant subspace of W, since if w = ϕ(v), then g · w = g · ϕ(v) = ϕ(g · v) ∈ im(ϕ). Hence im(ϕ) is either 0 or all of W. The image cannot be zero, since ϕ is injective. Thus im(ϕ) = W, so ϕ is injective and surjective, hence an equivalence of representations.

2. Let λ be an eigenvalue of ϕ with corresponding eigenspace U ⊆ V. Then U is invariant, because for all u ∈ U we have ϕ(g · u) = g · ϕ(u) = g · λu = λg · u, hence g · u ∈ U. Consequently U is 0 or V. It cannot be 0 since eigenvalues have a non-zero eigenspace. Thus U = V, which means that ϕ(v) = λv for all v ∈ V. This shows that ϕ is a scalar multiple of the identity.

Remark. Schur's Lemma is sometimes phrased in the following way: let V and W be two irreducible representations of G.

1. If V and W are not equivalent, then any intertwiner from V to W is zero.

2. If V and W are equivalent, then any two intertwiners from V to W are scalar multiples of each other.

As an exercise, check that this is equivalent to the phrasing above.

In the proof of the second part, we used that every operator on a finite-dimensional vector space has an eigenvalue. This holds only for complex vector spaces, so here we implicitly used the assumption that we work with finite-dimensional complex vector spaces.

Even though Schur's Lemma is not hard to prove, it is fundamental in the theory of representations and has many useful consequences. As a first application, we will use it to find all irreducible representations of abelian groups.


Proposition 1.11. All irreducible representations of an abelian group are one-dimensional.

Proof. Let (V, ρ) be an irreducible representation of the abelian group G. For any fixed g ∈ G, the map ρ(g) : V → V is an intertwiner:

   ρ(g) ◦ ρ(h) = ρ(gh) = ρ(hg) = ρ(h) ◦ ρ(g)

By Schur's Lemma, ρ(g) = λ_g id for some scalar λ_g. Pick any non-zero vector v in V. We will show that the subspace Cv spanned by v is an invariant subspace of V. An element in this subspace is of the form αv for some α ∈ C, and we have ρ(g)(αv) = λ_g αv ∈ Cv. Hence Cv is invariant and non-zero, so it is equal to the full space V. But Cv is one-dimensional, and hence V is one-dimensional, as required.

Example 1.12. We will determine all irreducible representations of the group C4. Such a representation ρ is one-dimensional, so it amounts to a homomorphism ρ : C4 → C^×. Since r^4 = e, ρ must satisfy ρ(r)^4 = 1. Hence ρ(r) can be either 1, i, −1, or −i. Since the representation is determined by its value on r, these four representations are all irreducible representations of C4. (Why are these inequivalent?)

A second application of Schur's Lemma is the result that the regular representation C[G] of a group G contains all irreducible representations as components. The proof of this fact also requires the following lemma, whose proof is left as an exercise.

Lemma 1.13. For every surjective intertwiner p : (V, ρ) → (W, σ) there exists a section s : (W, σ) → (V, ρ) (meaning that p ◦ s = id) that is an intertwiner itself.

Proposition 1.14. Every irreducible representation of G is equivalent to a subrepresentation of the regular representation C[G].

Proof. Let (V, ρ) be an irreducible representation and pick any non-zero vector v ∈ V. Consider the function G → V that maps a group element g to g · v. Since group elements of G form a basis for C[G], this function can be extended uniquely to a linear map ϕ : C[G] → V. Explicitly, this linear map looks like

   ϕ( ∑_g λ_g e_g ) = ∑_g λ_g (g · v).

The map ϕ is an intertwiner, because on basis vectors it satisfies

   ϕ(h · e_g) = ϕ(e_{hg}) = hg · v = h · ϕ(e_g).

Its image im(ϕ) is an invariant subspace of V, so since V is irreducible and ϕ is non-zero, ϕ must be surjective. Lemma 1.13 gives a section ψ : V → C[G] that is an intertwiner. Since V is irreducible, ψ is an isomorphism onto its image, which is a subrepresentation of the regular representation.


2 Characters

2.1 Basic properties

Our main goal is to classify all representations of a given group up to equivalence. For this we need techniques to check that two given representations are equivalent, to find all irreducible representations, and to decompose any representation into irreducibles. It will turn out that we can accomplish all of this by looking at the traces of the linear maps involved in a representation.

Recall that the trace of a square matrix is defined as the sum of its entries on the diagonal, that is, tr(A) = ∑_i a_ii. We can also define the trace of a linear map ϕ : V → V: pick a basis for V, write ϕ as a matrix A with respect to this basis, and define tr(ϕ) = tr(A). This definition is only sensible if it does not depend on the choice of basis. To prove that this is the case, we need the following crucial property of the trace, called the cyclic property:

tr(AB) = tr(BA)

This property can be proven by simply expanding the matrix multiplications on both sides. If B is an invertible matrix, the property implies that tr(A) = tr(BAB^(-1)). To show that the definition of tr(ϕ) is basis-independent, let A and A′ be two matrix representations of ϕ. Then A′ = BAB^(-1) for some invertible matrix B, so their traces are equal.
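For completeness, the expansion reads

   tr(AB) = ∑_i (AB)_ii = ∑_i ∑_j a_ij b_ji = ∑_j ∑_i b_ji a_ij = ∑_j (BA)_jj = tr(BA).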

Taking the trace of a representation gives the object that we will study in this chapter.

Definition 2.1. The character of a representation ρ : G → GL(V) is the function

   χ_ρ : G → C,   g ↦ tr(ρ(g)).

If ρ is understood, this function is sometimes written as χ_V.

The cyclic property of the trace entails that the characters of equivalent representations are the same. This is because if (V, ρ) and (W, σ) are equivalent, then there exists an invertible intertwiner ϕ : V → W. The intertwining property ϕ ◦ ρ(g) = σ(g) ◦ ϕ is equivalent to σ(g) = ϕ ◦ ρ(g) ◦ ϕ^(-1), so by the cyclic property of the trace, χ_ρ = χ_σ.

One of the main results of this chapter will be that the converse also holds: if two representations have the same character, then they are equivalent. Hence the character completely characterizes a representation up to equivalence, as the name “character” suggests. This gives a very easy criterion to determine whether two representations are equivalent: instead of trying to find an intertwiner or proving that none exists, it is enough to calculate the traces of the matrices in the representations and check whether these are equal.

Before proving this main result, we will establish some basic properties of characters. These are derived from the corresponding properties of the trace operation.

Proposition 2.2. Let (V, ρ) and (W,σ) be representations of G.


1. χ_{V⊕W}(g) = χ_V(g) + χ_W(g)

2. χ_{V⊗W}(g) = χ_V(g) · χ_W(g)

3. χ_{V*}(g) = χ_V(g^(-1)) = conj(χ_V(g)), where conj denotes complex conjugation

4. χ_{Hom(V,W)}(g) = conj(χ_V(g)) χ_W(g)

Proof.

1. The matrices for (ρ ⊕ σ)(g) are obtained by putting those for ρ(g) and σ(g) as blocks on the diagonal. Therefore

      χ_{V⊕W}(g) = tr((ρ ⊕ σ)(g)) = tr(ρ(g)) + tr(σ(g)) = χ_V(g) + χ_W(g).

2. If A and B are two matrices, then the formula for the Kronecker product is (A ⊗ B)_{(i,k),(j,l)} = A_ij B_kl. This gives a formula for the trace of the tensor product:

      tr(A ⊗ B) = ∑_{i,k} A_ii B_kk = ( ∑_i A_ii )( ∑_k B_kk ) = tr(A) tr(B)

   This shows that χ_{V⊗W}(g) = χ_V(g) χ_W(g).

3. We will first prove that χ_{V*}(g) = χ_V(g^(-1)). Recall that the dual representation is defined by ρ*(g) = ρ(g^(-1))*. The matrix corresponding to the dual map ρ(g^(-1))* is the transpose of the matrix corresponding to ρ(g^(-1)). Since every matrix has the same trace as its transpose, we get

      χ_{V*}(g) = tr(ρ*(g)) = tr(ρ(g^(-1))^T) = tr(ρ(g^(-1))) = χ_V(g^(-1)).

   For the second equality of property 3, let 〈−,−〉 be an invariant inner product on V. We will first show that ρ(g^(-1)) = ρ(g)†, where the dagger indicates the hermitian adjoint of the operator with respect to the inner product. By Proposition A.13, it suffices to show that the inner products with all vectors are equal. This holds because of the following computation:

      〈v, ρ(g^(-1))(w)〉 = 〈ρ(g)(v), ρ(g)ρ(g^(-1))(w)〉   (inner product is invariant)
                        = 〈ρ(g)(v), w〉                   (ρ is a homomorphism)
                        = 〈v, ρ(g)†(w)〉                  (hermitian adjoint)

   Since this holds for all v and w, we have ρ(g^(-1)) = ρ(g)†. The matrix of ρ(g)† is the conjugate transpose of the matrix of ρ(g), hence

      χ_V(g^(-1)) = tr(ρ(g^(-1))) = tr(ρ(g)†) = conj(tr(ρ(g))) = conj(χ_V(g)).

4. This follows from Hom(V, W) ≅ V* ⊗ W and the properties 2 and 3.


2.2 Schur’s orthogonality relations

We wish to use characters as a tool to determine whether representations are equivalent, to find all irreducible representations of a group, and to decompose arbitrary representations into irreducibles. This will be achieved by proving certain equalities for characters, called Schur's orthogonality relations. To state these relations in an abstract way, we need the framework of class functions.

Recall that two elements g, h in a group G are conjugate if there exists an element a ∈ G such that h = aga^(-1). In this case we write g ∼ h. The relation ∼ is an equivalence relation, and an equivalence class of ∼ is called a conjugacy class. The conjugacy class of an element g is written as (g) = {h ∈ G | there exists a such that h = aga^(-1)}. The collection of all conjugacy classes in G is denoted (G).

Definition 2.3. A class function on a group G is a function ϕ : G → C that is invariant on conjugacy classes, i.e. ϕ(hgh^(-1)) = ϕ(g) for all g, h ∈ G.

A class function is the same as a function ϕ : (G) → C. Characters are examples of class functions, since χ_ρ(hgh^(-1)) = tr(ρ(hgh^(-1))) = tr(ρ(h)ρ(g)ρ(h)^(-1)) = tr(ρ(g)) = χ_ρ(g). The set of all class functions on G forms a vector space Cl(G) under pointwise operations. Endow this vector space with the following inner product:

   〈ϕ, ψ〉 = (1/#G) ∑_{g∈G} conj(ϕ(g)) ψ(g)

All important properties of characters will follow from the next result.

Theorem 2.4. Let G be a finite group. The characters χ_V of the (inequivalent) irreducible representations V of G form an orthonormal basis for the space of class functions Cl(G).

We only have to take characters of inequivalent representations, since equivalent representations give the same characters.

The proof of this theorem consists of two parts. First we will prove that the characters χ_V are orthonormal. This means that

   〈χ_V, χ_W〉 = (1/#G) ∑_{g∈G} conj(χ_V(g)) χ_W(g) = 0

whenever V and W are irreducible and not equivalent, and that

   〈χ_V, χ_V〉 = (1/#G) ∑_{g∈G} |χ_V(g)|^2 = 1

for every irreducible representation V. These two formulas are called Schur's orthogonality relations. They can be compactly phrased as “Characters of irreducible representations are orthogonal and normalized”. From general facts in linear algebra it follows that orthogonal vectors are always linearly independent.


The second step to complete the proof that the χ_V form a basis is to show that they span the whole space Cl(G).

The proof of Schur's orthogonality relations requires Schur's Lemma and the following fact.

Proposition 2.5 (Dimension formula). For any representation (V, ρ) of G, the dimension of the space of invariant vectors is

   dim(V^G) = (1/#G) ∑_{g∈G} χ_V(g).

Proof. Consider the averaging map

   Av : V → V,   v ↦ (1/#G) ∑_{g∈G} g · v.

The average of any vector is invariant, and the average of an invariant vector is the vector itself, so Av^2 = Av. This implies that the eigenvalues of the operator Av are all 0 or 1. Choose a basis in which the matrix for Av is diagonal; then it has only zeroes and ones on the diagonal. The trace of this matrix is the number of ones, and that is also the dimension of the image, so tr(Av) = dim(im(Av)). Since Av maps onto V^G, dim(im(Av)) = dim(V^G). Furthermore, tr(Av) = (1/#G) ∑_g χ_V(g), from which the result follows.
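For example, apply this to the standard representation of Sn on C^n. There χ(σ) is the number of indices i with σ(i) = i, so the formula says that dim((C^n)^{Sn}) equals the average number of fixed points of a permutation in Sn. We saw above that the invariant vectors are the multiples of (1, . . . , 1), so this average equals 1.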

Corollary 2.6. Let V and W be representations of G. The dimension of the space of intertwiners from V to W is 〈χ_V, χ_W〉.

Proof. By part 4 of Proposition 2.2 and the previous proposition we have

   〈χ_V, χ_W〉 = (1/#G) ∑_{g∈G} conj(χ_V(g)) χ_W(g) = (1/#G) ∑_{g∈G} χ_{Hom(V,W)}(g) = dim(Hom(V, W)^G).

The invariant vectors in Hom(V, W) are precisely the intertwiners from V to W, which gives the result.

Schur's Lemma tells us that the space of intertwiners between two inequivalent irreducible representations is zero, and that the space of intertwiners from an irreducible representation to itself is one-dimensional. This gives the following two consequences of Corollary 2.6.

Corollary 2.7. If V and W are inequivalent irreducible representations of G, then 〈χ_V, χ_W〉 = 0.

Corollary 2.8. If V is an irreducible representation of G, then 〈χ_V, χ_V〉 = 1.


This finishes the proof of the orthogonality relations, and hence we know that the characters of irreducible representations are linearly independent. Now we will finish the proof of Theorem 2.4.

Proposition 2.9. The class functions χ_V, where V runs over the irreducible representations of G, span the vector space Cl(G).

Proof. Since the characters χ_V are orthonormal, it suffices to show that if a class function f : G → C satisfies 〈f, χ_V〉 = 0 for all irreducible representations V, then f = 0.

For any representation (V, ρ), we can define an intertwiner

   f_V : V → V,   v ↦ (1/#G) ∑_{g∈G} conj(f(g)) g · v.

This is indeed an intertwiner, because

   f_V(h · v) = (1/#G) ∑_{g∈G} conj(f(g)) gh · v = (1/#G) ∑_{k∈G} conj(f(hkh^(-1))) hk · v = (1/#G) ∑_{k∈G} conj(f(k)) hk · v = h · f_V(v),

where we used the substitution k = h^(-1)gh and the fact that f is a class function.

Suppose that the representation V is irreducible. Then, by Schur's Lemma, f_V = λ id for some scalar λ. We can find the scalar λ by computing the trace of f_V in two different ways. Since f_V = λ id, we obtain tr(f_V) = λ dim V. On the other hand, the definition of f_V gives tr(f_V) = (1/#G) ∑_g conj(f(g)) χ_V(g) = 〈f, χ_V〉, which is zero by assumption. Therefore λ = 0 and hence f_V is the zero map.

Since any representation can be decomposed into irreducible representations, and f_{V⊕W} = f_V + f_W, the map f_V is in fact zero for any representation V. In particular, this holds for the regular representation V = C[G] with basis {e_g | g ∈ G}. Thus we have f_{C[G]}(e_g) = 0 for every g ∈ G. Applying this to the unit of the group gives ∑_g conj(f(g)) e_g = 0, hence f(g) = 0 for all g, so f = 0.

2.3 Consequences of the orthogonality relations

Theorem 2.4 has many consequences that are useful to obtain properties of representations using their characters.

Corollary 2.10. The number of irreducible representations of G is equal to the number of conjugacy classes in G.

Proof. Since class functions can be seen as functions from (G) to C, the dimension of Cl(G) is the number of conjugacy classes. But Cl(G) has the characters of irreducible representations as basis, so its dimension is also equal to the number of irreducibles.

To state the other corollaries we will simplify the notation a bit. In the following, fix a group G. We will implicitly assume that all representations are representations of this group G. Call its irreducible representations V_1, . . . , V_k and write the corresponding characters as χ_1, . . . , χ_k.


Corollary 2.11. If V is any representation of G, then the irreducible representation V_i occurs 〈χ_i, χ_V〉 times in the decomposition of V.

Proof. Write the decomposition of V as V = n_1 V_1 ⊕ · · · ⊕ n_k V_k. Then V_i occurs n_i times in the decomposition of V. By Schur's orthogonality relations we have

   〈χ_i, χ_V〉 = 〈χ_i, n_1 χ_1 + · · · + n_k χ_k〉 = n_1 〈χ_i, χ_1〉 + · · · + n_k 〈χ_i, χ_k〉 = n_i.

Corollary 2.12. Two representations V and W are equivalent if and only if χ_V = χ_W.

Proof. Equivalent representations have the same character by the cyclic property of the trace. Conversely, if χ_V = χ_W, then every irreducible representation occurs the same number of times in V and W. Therefore V and W have the same decomposition into irreducibles and hence they are equivalent.

Corollary 2.13. A representation V is irreducible if and only if 〈χ_V, χ_V〉 = 1.

Proof. Decompose V as n_1 V_1 ⊕ · · · ⊕ n_k V_k into irreducibles. By the orthogonality relations,

   〈χ_V, χ_V〉 = 〈n_1 χ_1 + · · · + n_k χ_k, n_1 χ_1 + · · · + n_k χ_k〉 = n_1^2 + · · · + n_k^2.

This is equal to 1 if and only if exactly one of the n_i is 1 and the rest is 0, which happens precisely if V is irreducible.
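For example, take the standard representation of S3 on C^3 from Examples 1.1. Its character sends a permutation to its number of fixed points, so it takes the value 3 on the identity, 1 on the three transpositions, and 0 on the two 3-cycles. Hence 〈χ, χ〉 = (1/6)(3^2 + 3 · 1^2 + 2 · 0^2) = 2, so this representation is not irreducible; indeed, it contains the invariant line spanned by (1, 1, 1).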

As an application, we will compute the decomposition of the regular representation into irreducibles. We have already seen in Proposition 1.14 that it contains all irreducibles as subrepresentations, but now we will prove that the number of times an irreducible representation occurs is equal to its dimension.

Corollary 2.14. Let V_1, . . . , V_k be a list of all irreducible representations of G. Then the regular representation of G decomposes as

   C[G] = (dim V_1)V_1 ⊕ · · · ⊕ (dim V_k)V_k.

Proof. We will first calculate the character of the regular representation. Write G = {g_1, . . . , g_n}. A group element g acts on a basis vector e_{g_i} via g · e_{g_i} = e_{g g_i}. Therefore the matrix entries of ρ(g) are

   (ρ(g))_ij = 1 if g_i = g g_j, and 0 otherwise.

This gives the following entries on the diagonal:

   (ρ(g))_ii = 1 if g = e, and 0 otherwise.

Hence the trace of ρ(e) is #G, and the trace of ρ(g) is zero for g ≠ e.

We will use this character to find the decomposition with Corollary 2.11. The number of times that V_i occurs is

   〈χ_i, χ_{C[G]}〉 = (1/#G) ∑_{g∈G} conj(χ_i(g)) χ_{C[G]}(g) = (1/#G) conj(χ_i(e)) · #G = tr(id_{V_i}) = dim V_i,

which gives the desired decomposition.

Corollary 2.15. Let V_1, . . . , V_k be a list of all irreducible representations of G. Then

   ∑_{i=1}^{k} (dim V_i)^2 = #G.

Proof. We will prove this by computing the inner product 〈χ_{C[G]}, χ_{C[G]}〉 in two different ways. Since the character of the regular representation is χ_{C[G]}(e) = #G and χ_{C[G]}(g) = 0 for g ≠ e, we get

   〈χ_{C[G]}, χ_{C[G]}〉 = (1/#G) ∑_{g∈G} |χ_{C[G]}(g)|^2 = (1/#G) (#G)^2 = #G.

On the other hand, the decomposition of C[G] into irreducibles gives

   〈χ_{C[G]}, χ_{C[G]}〉 = ∑_{i=1}^{k} (dim V_i)^2,

which proves the result.

The above formula is useful to find the dimensions of the irreducible representations. It is often easier to use if we already know the dimensions of some of them. In particular, it is good to know the number of one-dimensional representations. This number can be found using the abelianization of the group G. Recall that the commutator subgroup of G is the subgroup [G, G] generated by all commutators ghg^(-1)h^(-1) for g, h ∈ G. This is always a normal subgroup. The abelianization of G is then the quotient G^ab = G/[G, G].

Proposition 2.16. The number of one-dimensional representations of G is #G^ab.

Proof. If ρ : G → GL(V) is a one-dimensional representation, then [G, G] ⊆ ker(ρ) since GL(V) is abelian. It follows that ρ factors uniquely through the abelianization G^ab. In other words, ρ can be decomposed as ρ = ρ^ab ◦ π, where π is the quotient map π : G → G^ab = G/[G, G]:

   G --π--> G^ab --ρ^ab--> GL(V),   ρ = ρ^ab ◦ π.


Hence one-dimensional representations of G correspond to one-dimensional representations of G^ab. Since G^ab is abelian, its number of one-dimensional representations is equal to #G^ab.
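For example, for n ≥ 2 the commutator subgroup of Sn is the alternating group An, so (Sn)^ab ≅ C2. Hence Sn has exactly two one-dimensional representations: the trivial representation and the sign representation, which sends even permutations to 1 and odd permutations to −1.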

2.4 Character tables

Since all representations are determined by their characters, and any representation can be decomposed into irreducible ones, we know everything about the representations of a group once we know the characters of its irreducible representations. This information can be neatly organized in a table, called the character table of the group. The rows of the character table are labeled by characters of irreducible representations, and the columns by conjugacy classes in the group. Thus a typical character table looks like:

          (g_1)   · · ·   (g_k)
   χ_1
    ...
   χ_k

Here g_1, . . . , g_k are representatives of the conjugacy classes of the group, and χ_1, . . . , χ_k are the characters of the irreducible representations V_1, . . . , V_k. Since characters are constant on conjugacy classes, this completely determines all characters.

We can find the character table of a given group G by using the facts from the previous section. For small groups, it is usually enough to follow this procedure for finding the characters:

1. Find the conjugacy classes of the group. The number of irreducible representations is equal to the number of classes.

2. The characters of the one-dimensional representations, which are always irreducible, can be found either by a direct calculation or by using Proposition 2.16.

3. The dimensions of the other irreducible representations can be found by using the formula ∑_i (dim V_i)^2 = #G, where we sum over all irreducible representations.

4. Schur's orthogonality relations give equations that the remaining unknown characters have to satisfy.

Example 2.17. We will construct the character table of the dihedral group D3. The conjugacy classes of D3 are (e) = {e}, (r) = {r, r^2}, and (s) = {s, rs, r^2 s}.

To find the one-dimensional representations, we will use that such a representation ρ is determined by its values on the generators r and s. It should satisfy ρ(r)^3 = 1, ρ(s)^2 = 1, and ρ(r)ρ(s) = ρ(s)ρ(r)^(-1). Since ρ(r) and ρ(s) are non-zero scalars, they commute, so the last relation forces ρ(r)^2 = 1; together with ρ(r)^3 = 1 this gives ρ(r) = 1, and ρ(s) ∈ {±1}. So there are two 1-dimensional representations.


Alternatively, we could have used Proposition 2.16 to find the one-dimensional representations. The abelianization of D3 is D3/[D3, D3] ≅ C2, so there are two one-dimensional representations. Since the trivial representation and the representation determined by ρ(r) = 1, ρ(s) = −1 are both well-defined representations, it follows that these are all one-dimensional representations.

Next we will find the number of irreducible representations and their dimensions. The group D3 has three conjugacy classes, hence three irreducible representations. Two of these are 1-dimensional. Call the dimension of the last one x; then the formula ∑_i (dim V_i)^2 = #G gives 1^2 + 1^2 + x^2 = 6, so x = 2.

So far we know the characters χ_1, χ_2 of the 1-dimensional representations, and we know that χ_3(e) = dim V_3 = 2 for the remaining character χ_3. Thus the character table is of the form

          (e)   (r)   (s)
   χ_1     1     1     1
   χ_2     1     1    −1
   χ_3     2     a     b

where the numbers a and b are still unknown. These can be found using Schur's orthogonality relations:

   〈χ_1, χ_3〉 = (1/#D3) ∑_{g∈D3} conj(χ_1(g)) χ_3(g) = (1/6)(2 + 2a + 3b) = 0

   〈χ_2, χ_3〉 = (1/6)(2 + 2a − 3b) = 0

Note that we have to sum over all elements of the group and not just the representatives from the conjugacy classes, so when reading the numbers from the character table, we have to multiply by the number of elements in each conjugacy class. Solving these equations for a and b gives a = −1, b = 0, which completes the character table.
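As a sanity check, Corollary 2.13 confirms that χ_3 belongs to an irreducible representation: 〈χ_3, χ_3〉 = (1/6)(2^2 + 2 · (−1)^2 + 3 · 0^2) = 1. In fact χ_3 is the character of the standard representation of D3 on C^2 from Examples 1.1, whose traces on the three conjugacy classes are 2, 2 cos(2π/3) = −1, and 0.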


3 The symmetric group

3.1 Idempotents in the group ring

The goal of this chapter is to describe all irreducible representations of the symmetric group Sn. In particular, we will define an explicit bijection between the conjugacy classes of Sn and its irreducible representations. This requires a connection between representations and ideals in a certain ring, so we will start by presenting this connection.

Let G be a group, and let C[G] be the vector space with basis G. Then each element of C[G] is a formal linear combination

   ∑_{g∈G} α_g g.

The vector space C[G] also forms a ring, whose multiplication is determined on basis vectors by g · h = gh. Explicitly, the product of two elements ∑_g α_g g and ∑_g β_g g is

   ( ∑_{g∈G} α_g g )( ∑_{h∈G} β_h h ) = ∑_{g∈G} ∑_{h∈G} α_g β_h (gh) = ∑_{k∈G} γ_k k,

where

   γ_k = ∑_{g,h : gh=k} α_g β_h = ∑_{g∈G} α_g β_{g^(-1) k}.

The ring C[G] is called the group ring.
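As a small example, in C[C2] with C2 = {e, r} the formula gives

   (α_e e + α_r r)(β_e e + β_r r) = (α_e β_e + α_r β_r) e + (α_e β_r + α_r β_e) r,

since r · r = e.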

The group ring can be used to translate representation-theoretic terms into ring-theoretic terms. For example:

1. Subrepresentations of the regular representation C[G] are the same as left ideals in the group ring. This is because a subrepresentation V ⊆ C[G] is by definition closed under left multiplication by any g ∈ G, hence also by any element of the group ring C[G].

2. Every subrepresentation V of C[G] has a complement V^⊥ that is also a subrepresentation. In terms of ideals, this says that every left ideal I ⊆ C[G] has a complementary ideal I′ with I + I′ = C[G] and I ∩ I′ = {0}.

3. It follows that the unit 1 ∈ C[G] can be written uniquely as 1 = e + e′, where e ∈ I and e′ ∈ I′. To avoid trivial cases, we will assume from now on that e, e′ ≠ 1.

Lemma 3.1. Let I be a left ideal in the group ring C[G] and let I′ be its complementary ideal. Write 1 ∈ C[G] as 1 = e + e′, where e ∈ I and e′ ∈ I′. Then:

1. e and e′ are idempotent: e^2 = e and (e′)^2 = e′.


2. e and e′ are disjoint: ee′ = e′e = 0.

3. The ideals I and I′ are generated by e and e′, respectively: I = (e) and I′ = (e′). The notation (e) stands for (e) = C[G]e = {xe | x ∈ C[G]}.

Proof. Every element in C[G] can be decomposed uniquely as x + y, where x ∈ I and y ∈ I′. Because 1 = e + e′ we have e = e^2 + ee′. Here e^2 ∈ I and ee′ ∈ I′, since I and I′ are left ideals. At the same time we have e = e + 0 with e ∈ I and 0 ∈ I′. Because the decompositions are unique, it follows that e^2 = e and ee′ = 0.

The above argument is symmetric in e and e′, so 1 and 2 have been proven.

If x ∈ I, then x = x · 1 = x(e + e′) = xe + xe′. But also x = x + 0, so xe = x and xe′ = 0. This shows that I ⊆ (e). The reverse inclusion is obvious, and hence 3 holds.

Corollary 3.2. Every left ideal I in the group ring C[G] is generated by an idempotent: I = (e) for an idempotent e ∈ C[G] as above, and for this idempotent we have

   x ∈ I ⟺ x = xe

for any x ∈ C[G].

When I, considered as a representation, is not irreducible, then we can write I = J ⊕ J′ for certain left ideals J, J′. Let e be an idempotent for which I = (e). Then e can be split as e = f + f′ for f ∈ J, f′ ∈ J′. These satisfy f = fe = f(f + f′) = f^2 + ff′, so f^2 = f and ff′ = 0. Thus any idempotent generating a reducible representation can be decomposed into “smaller” disjoint idempotents. This gives the following characterization of irreducible representations in terms of idempotents in the group ring.

Corollary 3.3. The representation I = (e) is irreducible if and only if e cannot be split into disjoint idempotents. This means that if e = f + f′ with f^2 = f, (f′)^2 = f′, and ff′ = f′f = 0, then either f = 0 or f′ = 0.

An idempotent element e with the property from the corollary is called a primitive idempotent.

Corollary 3.4. In C[G], the unit 1 can be written as 1 = e_1 + · · · + e_k, where the e_i are pairwise disjoint primitive idempotents. Here “pairwise disjoint” means that e_i e_j = 0 whenever i ≠ j.

We have seen how to characterize subrepresentations of the regular repre-sentation in terms of idempotents in the group ring. We also know how tocharacterize the irreducible representations among these. Now we will continuewith intertwiners.

Proposition 3.5. Let I = (e) and I ′ = (e′) be subrepresentations of the regularrepresentation C[G]. Intertwiners ϕ : I → I ′ correspond to x0 ∈ I ′ with theproperty that ex0e

′ = x0.


Proof. Given an intertwiner ϕ, define x_0 = ϕ(e). We will show that this x_0 satisfies the required property. Since x_0 ∈ I′, we have x_0e′ = x_0 by Corollary 3.2. Furthermore, ϕ is an intertwiner and e is idempotent, so x_0 = ϕ(e) = ϕ(e^2) = eϕ(e) = ex_0. Combining these two facts gives ex_0e′ = x_0.

Conversely, given x_0, define an intertwiner ϕ by ϕ(y) = yx_0. This is clearly an intertwiner, and from ex_0 = x_0 it follows that both constructions are inverses.

Corollary 3.6. A representation I = (e) is irreducible if and only if for each x_0 with ex_0e = x_0 we have x_0 = λe for some scalar λ ∈ C.

Proof. By Proposition 3.5 (applied with I′ = I and e′ = e), such elements x_0 correspond to intertwiners ϕ : I → I via ϕ(y) = yx_0. The representation I is irreducible if and only if every such intertwiner is multiplication by a scalar: one direction is Schur's Lemma, and conversely the projection onto a proper non-zero subrepresentation would be a non-scalar intertwiner. The scalar intertwiners correspond exactly to x_0 = ϕ(e) = λe.

Corollary 3.7. Two irreducible representations I = (e) and I′ = (e′) are equivalent if and only if there exists x_0 ≠ 0 such that ex_0e′ = x_0.

3.2 Young diagrams

In this section we will construct an explicit bijection between conjugacy classes and irreducible representations of Sn. Two permutations in Sn are conjugate if and only if they have the same cycle type. Cycle types are partitions of n as a sum n = n_1 + · · · + n_k, where the numbers n_i represent the lengths of the cycles. In such a partition n = n_1 + · · · + n_k, we may assume that n_1 ≥ n_2 ≥ · · · ≥ n_k. Then the partitions can be drawn as a diagram with k rows, where the i-th row consists of n_i squares. For example, the partition 10 = 5 + 3 + 1 + 1 becomes the diagram with rows of 5, 3, 1, and 1 squares. Such a diagram is called a Young diagram.

If we put the numbers 1, . . . , n in the squares of a Young diagram, we call it a Young tableau. For example, we can make the above diagram into a Young tableau

5 1 6 4 3
2 7 8
10
9

The group Sn acts on the collection of Young tableaus by permuting the numbers in the squares. That is, if p ∈ Sn and T is a Young tableau, then p · T changes the number i in T into p(i).


Call two tableaus T and T′ equivalent if T = p · T′ for some permutation p. This happens if and only if T and T′ have the same underlying diagram. We know that there is a one-to-one correspondence between:

• Irreducible representations of Sn;

• Conjugacy classes of elements in Sn;

• Equivalence classes of Young tableaus;

• Young diagrams.

Furthermore, in the previous section we saw that there is an equivalence between irreducible representations of Sn and equivalence classes of primitive idempotents in the group ring C[Sn]. We wish to construct an explicit bijection from equivalence classes of tableaus to equivalence classes of primitive idempotents, which will establish the goal of this chapter.

Construction of primitive idempotent. Given a tableau T, we define two subgroups of Sn:

P = P_T = {p ∈ Sn | p fixes the rows of T}
Q = Q_T = {q ∈ Sn | q fixes the columns of T}

Note that, if p ∈ P, then p need not fix every element in every row; it only maps each element to a number in the same row. Observe that P_{rT} = rP_T r^{-1} and Q_{rT} = rQ_T r^{-1}.

Using the subgroups P and Q, define two elements of the group algebra C[Sn]:

P = P_T = ∑_{p∈P_T} p
Q = Q_T = ∑_{q∈Q_T} sgn(q) q

Finally, define e = e_T = P_T Q_T.

Claim. The element e is idempotent up to scalar multiplication. This means that e^2 = λe for some λ ∈ C, λ ≠ 0. Then e/λ is idempotent, and this gives the bijection from tableaus to primitive idempotents.

The proof of this claim is long and will be postponed to the next section. Here we will look at an example of the above construction.

Example 3.8. We wish to construct the irreducible representation of S3 associated to the diagram with two squares in the first row and one square in the second row.


First we have to fill in numbers in the diagram to obtain a tableau. Equivalent tableaus will turn out to give equivalent representations, so it does not matter which Young tableau we choose. Let T be the tableau

1 2
3

The subgroup P_T ⊆ S3 consists of those permutations that keep all entries in the same row. For the tableau T, this means that 3 stays in its place, while 1 and 2 may be swapped or not. Therefore

P_T = {id, (1 2)}.

Similarly,

Q_T = {id, (1 3)}.

From this we compute

P_T = ∑_{p∈P_T} p = id + (1 2) ∈ C[S3]
Q_T = ∑_{q∈Q_T} sgn(q) q = id − (1 3) ∈ C[S3]
e_T = P_T Q_T = id + (1 2) − (1 3) − (1 2)(1 3) = id + (1 2) − (1 3) − (1 3 2)

The ideal (e_T) ⊆ C[S3] is a representation of S3. We can find an explicit description of this representation by finding a basis for (e_T). Write e = e_T and define

f = (1 3)e = (1 3) + (1 2 3) − id − (2 3) ∈ (e).

If we let any permutation in S3 act on e, then it always gives a linear combination of e and f. Furthermore e and f are linearly independent, so together they form a basis for (e). To describe the representation, it suffices to write down the action of (1 2) and (1 3) on the basis vectors, since S3 is generated by these two transpositions. These actions are given by:

(1 2) · e = e          (1 3) · e = f
(1 2) · f = −e − f     (1 3) · f = e

For example, the equation (1 2) · f = −e − f can be verified by the computation

(1 2) · f = (1 2)((1 3) + (1 2 3) − id − (2 3)) = (1 3 2) + (2 3) − (1 2) − (1 2 3) = −e − f

and similarly for the other equations.

We can check directly that this representation is irreducible using Corollary 2.13. First we rewrite the representation in terms of matrices, as a map ρ : S3 → GL_2(C):

ρ((1 2)) = [ 1 −1 ; 0 −1 ],    ρ((1 3)) = [ 0 1 ; 1 0 ].


Since ρ is a homomorphism and (1 3)(1 2) = (1 2 3), it follows that

ρ((1 2 3)) = [ 0 −1 ; 1 −1 ].

Hence the character of ρ is determined by

χ_ρ(id) = 2,  χ_ρ((1 2)) = 0,  χ_ρ((1 2 3)) = −1.

Thus ρ is irreducible because

〈χ_ρ, χ_ρ〉 = (1/#S3) ∑_{σ∈S3} |χ_ρ(σ)|^2 = (1/6)(2^2 + 3 · 0 + 2 · (−1)^2) = 1.
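The computations in this example can be replayed numerically. The following self-contained Python sketch (illustrative only, not part of the notes) builds e_T in C[S3] and verifies that e_T^2 = 3e_T, so that e_T/3 is the promised idempotent.

# Permutations of {1,2,3} are tuples (p(1), p(2), p(3)).
ID, s12, s13 = (1, 2, 3), (2, 1, 3), (3, 2, 1)

def compose(p, q):                       # apply q first, then p
    return tuple(p[q[i] - 1] for i in range(3))

def mult(a, b):                          # product in the group ring C[S3]
    c = {}
    for g, ag in a.items():
        for h, bh in b.items():
            k = compose(g, h)
            c[k] = c.get(k, 0) + ag * bh
    return {k: v for k, v in c.items() if v != 0}

P = {ID: 1, s12: 1}                      # P_T = id + (1 2)
Q = {ID: 1, s13: -1}                     # Q_T = id - (1 3), using sgn((1 3)) = -1
e = mult(P, Q)
print(e)                                 # id + (1 2) - (1 3) - (1 3 2)
print(mult(e, e))                        # equals 3*e, so lambda = 3 and e/3 is idempotent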

3.3 Young tableaus and primitive idempotents

In this section we will prove all claims made in the previous section. We will show that the element e_T ∈ C[Sn] assigned to a tableau T is idempotent up to scalar multiplication. Furthermore we will show that the resulting representation (e_T) is irreducible, or, equivalently, that e_T is a primitive idempotent. Finally we will show that the construction of e_T gives a bijection from equivalence classes of tableaus to equivalence classes of primitive idempotents. This means that T and T′ are equivalent if and only if (e_T) and (e_{T′}) are.

We start with a lemma about decompositions of permutations with respect to any tableau T.

Lemma 3.9.

1. An r ∈ Sn can be written as r = pq for some p ∈ P_T, q ∈ Q_T in at most one way.

2. This can be done precisely when the following condition holds: for any two numbers i and j in T with i ≠ j, if i and j are in the same row of T, then i and j are not in the same column of rT.

Proof.

1. Suppose that r = pq = p′q′. Then (p′)^{-1}p = q′q^{-1}, and this is a permutation that fixes rows as well as columns. Hence it is the identity, so p = p′ and q = q′.

2. First suppose that r = pq with p ∈ P_T and q ∈ Q_T. Also let i and j be numbers in the same row of T. Then i and j are also in the same row of pT, hence in distinct columns. Since pqp^{-1} ∈ Q_{pT} and r = pqp^{-1}p, this shows that i and j are also not in the same column of rT.

Conversely, suppose that r ∈ Sn has the property that any i, j in the same row of T occur in distinct columns of rT, for i ≠ j. Look at the numbers in the first column of rT. They occur in different rows of T. Let p_1 be


any permutation in P_T moving these numbers to the far left. In this way, p_1T and rT have the same numbers in the first column, although they are possibly in a different order.

Now look at the numbers in the second column of rT. They occur in distinct rows of T, or equivalently in distinct rows of p_1T, but not in the first column of p_1T. Let p_2 ∈ P_{p_1T} = P_T be any permutation moving them to the second column of p_1T while keeping the first column untouched. Now p_2p_1T has the same first two columns as rT.

Continue in this way. In the end we find p_1, . . . , p_k so that p_kp_{k−1} · · · p_2p_1T and rT have the same numbers in each column. Let p = p_k · · · p_1 ∈ P_T, and let q ∈ Q_{pT} rearrange the columns in such a way that qpT = rT; then qp = r. Write q = pq′p^{-1} for some q′ ∈ Q_T. Then r = qp = pq′.

Order on Young tableaus. We will define a partial order on the collection of all Young diagrams. Let D and D′ be Young diagrams. Then D is nothing but a sequence of numbers m_1 ≥ m_2 ≥ · · · ≥ m_k with m_1 + · · · + m_k = n, where m_i is the length of the i-th row. Similarly D′ can be seen as a sequence of numbers m′_1 ≥ m′_2 ≥ · · · ≥ m′_l. Say that D > D′ if and only if m_i > m′_i for the first i where m_i ≠ m′_i. We will use the same notation for tableaus, and write T > T′ if they are ordered in this way when viewed as diagrams. Thus for any permutations r, s ∈ Sn, T > T′ if and only if rT > sT′.
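Since diagrams are compared at the first differing part, this is simply the lexicographic order on the sequences (m_1, m_2, . . .); a quick Python illustration (my own example, not from the notes):

# Partitions written as weakly decreasing tuples; Python compares tuples
# lexicographically, i.e. by the first position where they differ.
some_partitions_of_6 = [(3, 2, 1), (6,), (4, 2), (2, 2, 2), (5, 1), (3, 3)]
print(sorted(some_partitions_of_6, reverse=True))
# [(6,), (5, 1), (4, 2), (3, 3), (3, 2, 1), (2, 2, 2)]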

Lemma 3.10. If T and T′ are tableaus such that any two i, j in the same row of T occur in different columns of T′, then T ≤ T′.

Proof. The numbers in the first row of T occur in different columns of T′, so m_1 ≤ m′_1. If m_1 < m′_1 then we are done. Otherwise m_1 = m′_1; in that case let q′ ∈ Q_{T′} ⊆ Sn bring these numbers to the first row of q′T′. Then T and q′T′ have the same numbers in their first rows, and T ≤ T′ if and only if T ≤ q′T′. Now delete the first rows from T and q′T′ and repeat the argument.

Lemma 3.11. Let T and T′ be tableaus, and suppose there are i ≠ j in the same row of T and the same column of T′. (This occurs e.g. if T > T′.) Then Q_{T′}P_T = 0.

Proof. Write Q′ = Q_{T′} and P = P_T for the group ring elements. The transposition τ = (i j) lies in both subgroups P_T and Q_{T′}. Then P = τP = Pτ and Q′ = −τQ′ = −Q′τ (since sgn(τ) = −1), so

Q′P = −(Q′τ)P = −Q′(τP) = −Q′P,

whence Q′P = 0.

Now let us return to the elements e = e_T ∈ C[Sn] assigned to tableaus T:

e = e_T = P_TQ_T = ∑_{p∈P_T, q∈Q_T} sgn(q) pq

Observe the following facts about the element e:


1. The permutations pq ∈ Sn occurring in the above sum are all different, by Lemma 3.9.

2. If T > T ′, then e′e = 0, by Lemmas 3.10 and 3.11.

3. For a tableau T and p_0 ∈ P, q_0 ∈ Q,

p_0eq_0 = sgn(q_0)e.

We take up this last property, for a fixed tableau T and associated P, Q, and e, and show that it is characteristic of the element e, in the following sense.

Lemma 3.12. Let x ∈ C[Sn] have the property that

p_0xq_0 = sgn(q_0)x for all p_0 ∈ P, q_0 ∈ Q.

Then, writing x = ∑_s x_s s, we have x = x_id e.

Proof. Write

x = ∑_{s∈Sn} x_s s = ∑_{s∈PQ} x_s s + ∑_{s∉PQ} x_s s = A + B,

and notice that the assumption holds for A and B separately:

p_0Aq_0 = sgn(q_0)A,   p_0Bq_0 = sgn(q_0)B,

for any p_0 ∈ P, q_0 ∈ Q. For A, this gives

∑_{p,q} x_{pq} pq = ∑_{p,q} sgn(q_0) x_{pq} p_0pqq_0,

so x_{p_0pqq_0} = sgn(q_0)x_{pq} for any p_0, p, q_0, q. In particular, x_id = sgn(q)x_{pq}, whence

A = ∑_{p,q} x_{pq} pq = ∑_{p,q} sgn(q) x_id pq = x_id e.

Next, we show that B = 0. If s ∉ PQ then there are i ≠ j in the same row of T and the same column of sT, by Lemma 3.9. Let τ = (i j) ∈ P_T ∩ Q_{sT}, so τ = sσs^{-1} for some σ ∈ Q_T. Then x = −τxσ by assumption, so x_s = −x_{τsσ} for this particular s. But τ = sσs^{-1}, so τsσ = s, hence x_s = 0. This holds for any s ∉ PQ, so B = 0.

We are now ready to prove the claim made before.

Proposition 3.13. For any tableau T, the associated element e = e_T satisfies e^2 = λe for some non-zero λ ∈ C.


Proof. The element e^2 clearly satisfies the hypothesis of Lemma 3.12, so e^2 = λe, where λ is the coordinate of e^2 ∈ C[Sn] at the unit. To calculate λ, consider the right multiplication map

R_e : C[Sn] → C[Sn],  x ↦ xe.

Then R_e maps C[Sn] into the ideal I = (e), and on I it acts as multiplication by λ, since (ye)e = ye^2 = λ(ye). Hence the trace of the map R_e is

tr(R_e) = λ dim(I) = λ dim((e)).

On the other hand, we can easily compute the trace of R_e, since for a basis vector s ∈ C[Sn],

R_e(s) = se = ∑_{p,q} sgn(q) spq,

and spq = s if and only if p = q = id, so tr(R_e) = #Sn = n!. Therefore

λ = n! / dim((e)).
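For instance, for the tableau of Example 3.8 the ideal (e_T) is two-dimensional, so λ = 3!/2 = 3; a direct computation of e_T^2 there indeed gives 3e_T.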

So now we have found our idempotent e/λ assigned to a tableau T. It remains to show that:

1. (e) = (e/λ) is irreducible.

2. For tableaus T and T′, the representations (e) and (e′) are equivalent if and only if T and T′ are.

Proof.

1. We know that any intertwiner ϕ : (e) → (e) must be of the form ϕ(y) = yx_0 for some x_0 with (e/λ)x_0(e/λ) = x_0, or in other words ex_0e = λ^2x_0. But such an x_0 satisfies the hypothesis of Lemma 3.12, so x_0 is a multiple of e, and hence every such intertwiner ϕ is a scalar multiple of the identity. By Corollary 3.6 this proves that (e) is irreducible.

2. Suppose that T and T′ are equivalent. Then T = sT′ for some permutation s, so e_T = se_{T′}s^{-1}. But then (e_T) = (se_{T′}s^{-1}), and this representation is equivalent to (e_{T′}): right multiplication by s is an intertwiner identifying the two ideals.

For the other direction, assume that T and T′ have different shapes, say that T > T′. Let ϕ : (e′) → (e) be an intertwiner. Then ϕ is of the form ϕ(y) = yx_0 for some x_0 ∈ C[Sn] with e′x_0e = λλ′x_0. We claim that x_0 = 0. Write x_0 = ∑_s x_s s. Then e′x_0e = ∑_s x_s e′se. We already know from Lemma 3.11 that e′e = 0 because T > T′. But then also T > s^{-1}T′ for each s, so (s^{-1}e′s)e = 0, and hence e′se = 0. Thus e′x_0e = 0, and hence x_0 = 0 because λ and λ′ are non-zero.


4 Restriction and induction

4.1 Modules over a ring

Modules over a ring are a generalization of representations of a group. Some constructions on representations are most naturally formulated in the language of modules, so we will give a brief introduction to modules here.

Definition 4.1. Let R be a ring. A module over R, sometimes called an R-module, is an abelian group M (write + for the group operation) together with a linear action of the ring R on M. This means that there is a map

R × M → M,  (r, m) ↦ r · m

with the following properties:

• r · (m+ n) = r ·m+ r · n

• (r + s) ·m = r ·m+ s ·m

• (rs) ·m = r · (s ·m)

• 1m = m

A homomorphism of R-modules is a group homomorphism ϕ : M → N that is compatible with the action of R in the sense that ϕ(r · m) = r · ϕ(m).

Examples 4.2.

1. If R is a field, then a module over R is exactly a vector space over R. Homomorphisms of modules over a field are linear maps between the vector spaces. Thus modules generalize vector spaces to the setting of rings.

2. Let G be a group. A module over the group ring C[G] is the same as a representation of G. Indeed, a map G × V → V can be extended linearly to a map C[G] × V → V in a unique way, and conversely, a map C[G] × V → V can be restricted to obtain a representation of G. The homomorphisms of C[G]-modules are precisely the intertwiners.

3. If I is a left ideal in the ring R, then it is a module over R, because it is closed under addition and left multiplication with elements in R.

Modules over different rings can be related by means of ring homomorphisms. Let f : S → R be a ring homomorphism. Then any R-module M can be transformed into an S-module denoted f∗(M). The underlying abelian group is again M, and the action of S is given by s · m = f(s) · m. This construction is called restriction along f. A common special case is when S is a subring of R, and f is the inclusion homomorphism. In this case, f∗(M) is the actual restriction of the R-action to an S-action.


In the other direction, given a ring homomorphism f : S → R and an S-module N, it is possible to construct an R-module denoted R ⊗_S N or f_!(N). The construction is similar to the tensor product of representations. Elements of R ⊗_S N are linear combinations of the form

∑_{i=1}^{n} k_ir_i ⊗ x_i,

where k_i ∈ Z, r_i ∈ R, and x_i ∈ N. The tensor symbol ⊗ is again a formal symbol to indicate the generators of the tensor product. The elements of R ⊗_S N are subject to the following relations:

(r + r′) ⊗ x = r ⊗ x + r′ ⊗ x
r ⊗ (x + x′) = r ⊗ x + r ⊗ x′
(rf(s)) ⊗ x = r ⊗ (s · x)

In words, the tensor operation ⊗ is linear in both variables separately, and it is compatible with the action of S on N.

Because R ⊗_S N was defined in terms of linear combinations with coefficients in Z, it is an abelian group. We make it into an R-module by defining t · (r ⊗ x) = (tr) ⊗ x, for r, t ∈ R and x ∈ N. The R-module R ⊗_S N is characterized by the following analogue of Propositions A.10 and 1.5.

Proposition 4.3. There is a one-to-one correspondence between:

• Homomorphisms of S-modules N → f∗(M);

• Homomorphisms of R-modules R ⊗_S N → M.

Proof. If α : N → f∗(M) is a homomorphism of S-modules, define a homomorphism β : R ⊗_S N → M on generators by β(r ⊗ x) = r · α(x). The map β is well-defined because α respects the S-action:

β(rf(s) ⊗ x) = rf(s) · α(x) = r · α(s · x) = β(r ⊗ s · x).

It is easy to check that β preserves the R-action.

For the other direction, given β : R ⊗_S N → M, define α : N → f∗(M) by α(x) = β(1 ⊗ x). Then α is a homomorphism of S-modules, and the constructions are mutually inverse.

4.2 Induced representations

If f : H → G is a group homomorphism and V is a representation of G, then we can obtain a representation of H by restriction along f. The underlying vector space of the restriction is again V, and the action of H on V is h · v = f(h)v. The resulting representation is written as f∗(V), or as Res^G_H V, or simply Res V if the homomorphism f is clear from context. This is in fact a special case of


the construction in the previous section, since representations of G are the same as C[G]-modules.

In the same way, we can specialize the construction of the tensor product of modules to the case of representations. This provides a method to construct G-representations from H-representations, given a group homomorphism f : H → G. The resulting G-representation is called the induced representation, and it is given by

f_!(W) = C[G] ⊗_{C[H]} W

for an H-representation W. Since f_!(W) is a C[G]-module, it is indeed a representation of G. The induced representation f_!(W) is often written as Ind^G_H W or Ind W. It is especially interesting in the case where H is a subgroup of G, and f is the inclusion from H into G, since then it enables us to produce representations of a large group from representations of a smaller group, which are often easier to find.

The fundamental mapping property from Proposition 4.3 immediately translates into a bijective correspondence between intertwiners of G-representations f_!(W) → V and intertwiners of H-representations W → f∗(V). This correspondence can also be written as

Hom(Ind W, V)^G ≅ Hom(W, Res V)^H.

Here we used that the invariant vectors in a Hom-representation are exactly the intertwiners. By the dimension formula, we obtain the following formula for the characters.

Proposition 4.4 (Frobenius reciprocity). Let f : H → G be a group homomorphism, V a representation of G, and W a representation of H. Then

〈χ_{Ind W}, χ_V〉_G = 〈χ_W, χ_{Res V}〉_H.

The subscripts G and H are used here to emphasize that we use the inner product on Cℓ(G) on the left, and the one on Cℓ(H) on the right.

Example 4.5. Let f : C3 → D3 be the inclusion homomorphism, and define a one-dimensional representation (W, ρ) of C3 by ρ(r) = ω = e^{2πi/3}. This gives an induced representation V = Ind W of D3. We wish to find this induced representation. For this we can compute its decomposition into irreducible representations of D3. Let V_1, V_2, V_3 be the irreducible representations of D3, and recall that its character table is:

        (e)   (r)   (s)
χ_1      1     1     1
χ_2      1     1    −1
χ_3      2    −1     0


The representation V_i occurs 〈χ_V, χ_i〉_{D3} times in V, which equals 〈χ_W, χ_{Res V_i}〉_{C3} by Frobenius reciprocity. For i = 1, 2 this is 0 by orthogonality. For i = 3 we compute this number using the character table:

〈χ_W, χ_{Res V_3}〉 = (1/3)(2 − ω − ω^2) = 1.

Thus the induced representation V is V3, the standard representation of D3.
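The three inner products in this example can also be checked numerically; the following Python sketch (illustrative only, not from the notes) evaluates them over C3 using the character values read off from the table above.

import cmath

# Characters on C3 = {e, r, r^2}: chi_W for rho(r) = omega, and the restrictions
# to C3 of the three irreducible characters of D3.
omega = cmath.exp(2j * cmath.pi / 3)
chi_W = [1, omega, omega**2]
restricted = {
    "V1": [1, 1, 1],
    "V2": [1, 1, 1],     # the sign character is trivial on the rotations
    "V3": [2, -1, -1],
}

def inner(chi1, chi2):
    # <chi1, chi2> = (1/#C3) * sum_g chi1(g) * conjugate(chi2(g))
    return sum(a * complex(b).conjugate() for a, b in zip(chi1, chi2)) / 3

for name, chi in restricted.items():
    print(name, round(inner(chi_W, chi).real, 6))
# V1 0.0, V2 0.0, V3 1.0 -- so Ind W contains V3 once and nothing else.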

Frobenius reciprocity is also useful to establish basic properties of induced representations. As an example application, we will show that induction is transitive.

Proposition 4.6. If K ⊆ H ⊆ G, then Ind^G_K W ≅ Ind^G_H(Ind^H_K W).

Proof. We will show that each irreducible representation of G occurs the same number of times in Ind^G_K W and in Ind^G_H(Ind^H_K W); then it will follow that these two representations are equivalent. Let V be an irreducible representation of G. It occurs 〈χ_V, χ_{Ind^G_H(Ind^H_K W)}〉 times in Ind^G_H(Ind^H_K W), which is equal to 〈χ_{Res^H_K(Res^G_H V)}, χ_W〉, as can be seen by applying Frobenius reciprocity twice. Since restriction is transitive, this is the same as 〈χ_{Res^G_K V}, χ_W〉, which is in turn equal to 〈χ_V, χ_{Ind^G_K W}〉 by Frobenius reciprocity. Hence V occurs the same number of times in both representations.

4.3 Characters of induced representations

As we noticed before, induced representations are especially interesting to obtain representations of a large group from representations of a subgroup. Let H be a subgroup of G, and let i : H → G be the inclusion homomorphism. In this section we will take a closer look at the induced representations obtained in this setting. In particular, we will give a new description of these induced representations, and compute their characters.

It will be convenient to choose a set R ⊆ G of representatives for the set G/H of left cosets. More precisely, G/H = {ξH | ξ ∈ R}, and every coset gH occurs as ξH for exactly one ξ ∈ R. Thus, for every g ∈ G there is exactly one ξ ∈ R such that g^{-1}ξ ∈ H. Of course, we may represent the coset H itself by the unit e ∈ G, and take e to be an element of R. So we will assume that e ∈ R from now on.

For a representation W of H, any element of the G-representation Ind W = i_!(W) is a linear combination of elements of the form g ⊗ w, where g ∈ G is identified with the corresponding basis vector of C[G]. Since i is an inclusion, the relations for the tensor product simplify to

g ⊗ (w + w′) = g ⊗ w + g ⊗ w′,
gh ⊗ w = g ⊗ (h · w)

for g ∈ G, h ∈ H, and w, w′ ∈ W.


We can rewrite a generating tensor g ⊗ w in terms of the representatives in R. Let ξ be the unique element in R for which gH = ξH; then

g ⊗ w = g(g^{-1}ξ)(g^{-1}ξ)^{-1} ⊗ w = gg^{-1}ξ ⊗ (g^{-1}ξ)^{-1}w = ξ ⊗ (g^{-1}ξ)^{-1}w.

Therefore any element of Ind W = C[G] ⊗_{C[H]} W is a linear combination of elements of the form ξ ⊗ w′ for ξ ∈ R, w′ ∈ W.

This can be used to obtain an alternative description of the induced representation C[G] ⊗_{C[H]} W. Consider the vector space ⊕_{ξ∈R} W. Elements of this vector space are sequences of vectors in W, indexed by representatives in R. Write the sequence (0, . . . , 0, w, 0, . . . , 0), with the only non-zero element at the position indexed by ξ ∈ R, as (ξ, w). In this way all elements of ⊕_{ξ∈R} W are linear combinations of elements of the form (ξ, w).

We can turn ⊕_{ξ∈R} W into a representation of G in the following way. The group G acts on the set of left cosets G/H, via g · (g′H) = (gg′)H. Hence G also acts on the set R. To distinguish this action from the product in G, we denote it by ⋆. Explicitly, g ⋆ ξ is the unique ξ′ for which ξ′H = gξH. In other words, the equation

(g ⋆ ξ)^{-1}gξ ∈ H

determines g ⋆ ξ ∈ R completely. Define the action of G on ⊕_{ξ∈R} W as follows:

g · (ξ, w) = (g ⋆ ξ, (g ⋆ ξ)^{-1}gξ · w)

Proposition 4.7. The representation ⊕_{ξ∈R} W is equivalent to the induced representation C[G] ⊗_{C[H]} W.

Proof. Define a map

σ : ⊕_{ξ∈R} W → C[G] ⊗_{C[H]} W,  (ξ, w) ↦ ξ ⊗ w.

Then σ is an intertwiner of G-representations, because

σ(g · (ξ, w)) = σ(g ⋆ ξ, (g ⋆ ξ)^{-1}gξ · w) = (g ⋆ ξ) ⊗ (g ⋆ ξ)^{-1}gξ · w = gξ ⊗ w = g · σ(ξ, w).

We will define an inverse τ for σ. For g ⊗ w ∈ C[G] ⊗_{C[H]} W, let τ(g ⊗ w) be (ξ, (g^{-1}ξ)^{-1}w), where ξ is the unique element of R with gH = ξH. To check that τ is well-defined, note that gh has the same representative in R as g, since ghH = gH. Call this representative ξ; then

τ(gh ⊗ w) = (ξ, ((gh)^{-1}ξ)^{-1}w) = (ξ, ξ^{-1}gh · w) = (ξ, (g^{-1}ξ)^{-1}h · w) = τ(g ⊗ h · w).

Furthermore, τ is the inverse of σ because of the following two calculations.

τσ(ξ, w) = τ(ξ ⊗ w) = (ξ, (ξ^{-1}ξ)^{-1}w) = (ξ, w)
στ(g ⊗ w) = σ(ξ, (g^{-1}ξ)^{-1}w) = ξ ⊗ (g^{-1}ξ)^{-1}w = ξ(g^{-1}ξ)^{-1} ⊗ w = g ⊗ w


The above description can be used to compute the character of an induced representation.

Proposition 4.8. If H ⊆ G, then the character of the induced representation Ind^G_H W is given by

χ_{Ind W}(g) = ∑_{ξ∈R: g⋆ξ=ξ} χ_W(ξ^{-1}gξ).

Proof. If e_1, . . . , e_n is a basis for W, then all elements of the form (ξ, e_i) with ξ ∈ R form a basis for Ind W ≅ ⊕_{ξ∈R} W. Taking the trace of ρ_{Ind W}(g) in this basis involves only those ξ ∈ R for which g ⋆ ξ = ξ, because of the form of the action of G on ⊕_ξ W. Therefore

χ_{Ind W}(g) = ∑_{ξ∈R: g⋆ξ=ξ} χ_W((g ⋆ ξ)^{-1}gξ) = ∑_{ξ∈R: g⋆ξ=ξ} χ_W(ξ^{-1}gξ).

Remark. The expression χ_W(ξ^{-1}gξ) in the character formula makes sense because ξ^{-1}gξ ∈ H. Be careful: even though χ_W is a class function, we cannot conclude something like "χ_W(ξ^{-1}gξ) = χ_W(g)", since that would involve elements outside the group H, where χ_W is not defined.
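As a concrete check of this formula (an illustrative Python sketch; the choice H = C3 ⊆ G = S3 and the coset representatives are mine, not from the notes), inducing the one-dimensional representation ρ(r) = e^{2πi/3} of the cyclic subgroup generated by (1 2 3) up to S3 recovers the character (2, 0, −1) of the two-dimensional irreducible representation of S3.

import cmath

# Permutations of {1,2,3} as tuples (p(1), p(2), p(3)); compose applies q first, then p.
def compose(p, q):
    return tuple(p[q[i] - 1] for i in range(3))

def inverse(p):
    inv = [0, 0, 0]
    for i, pi in enumerate(p, start=1):
        inv[pi - 1] = i
    return tuple(inv)

ID, r = (1, 2, 3), (2, 3, 1)                 # identity and the 3-cycle (1 2 3)
omega = cmath.exp(2j * cmath.pi / 3)
chi_W = {ID: 1, r: omega, compose(r, r): omega**2}   # keys are exactly H = C3

R_reps = [ID, (2, 1, 3)]                     # coset representatives: e and (1 2)

def chi_ind(g):
    # chi_IndW(g) = sum over xi in R with g*xi = xi of chi_W(xi^{-1} g xi);
    # the condition g*xi = xi holds exactly when xi^{-1} g xi lies in H.
    total = 0
    for xi in R_reps:
        conj = compose(compose(inverse(xi), g), xi)
        if conj in chi_W:
            total += chi_W[conj]
    return total

for g in [ID, (2, 1, 3), r]:                 # one element per conjugacy class of S3
    print(g, round(complex(chi_ind(g)).real, 6))
# prints 2, 0, -1 (imaginary parts vanish up to rounding).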

4.4 Mackey’s irreducibility criterion

The formula for the character of an induced representation provides a tool to investigate when this induced representation is irreducible. As before, let H be a subgroup of G, let W be a representation of H, and let R be a set of representatives for the left cosets in G/H.

Character theory provides an easy criterion for irreducibility. The induced representation Ind W is irreducible if and only if

〈χ_{Ind W}, χ_{Ind W}〉_G = 1.

By Frobenius reciprocity, this is equivalent to

〈χ_W, χ_{Res Ind W}〉_H = 1.

According to Proposition 4.8, the character of Res Ind W is given by

χ_{Res Ind W}(h) = χ_{Ind W}(h) = ∑_{ξ∈R: h⋆ξ=ξ} χ_W(ξ^{-1}hξ).

Use this to expand the inner product:

〈χ_W, χ_{Res Ind W}〉 = (1/#H) ∑_{h∈H} ∑_{ξ: h⋆ξ=ξ} χ_W(h) χ_W(ξ^{-1}hξ)


The condition h ⋆ ξ = ξ holds if and only if ξ^{-1}hξ ∈ H, which is in turn equivalent to h ∈ ξHξ^{-1}. Thus we can rewrite the sum to obtain

〈χ_W, χ_{Res Ind W}〉 = (1/#H) ∑_{ξ∈R} ∑_{h∈H∩ξHξ^{-1}} χ_W(h) χ_W(ξ^{-1}hξ)
= 〈χ_W, χ_W〉 + ∑_{ξ≠e} (1/#H) ∑_{h∈H∩ξHξ^{-1}} χ_W(h) χ_W(ξ^{-1}hξ).

The first term in this sum is always a positive integer. We will now analyze the second term. For any ξ ∈ R with ξ ≠ e, H ∩ ξHξ^{-1} is a subgroup of H. We can obtain representations of this group by restricting representations of H. There are two interesting homomorphisms from H ∩ ξHξ^{-1} into H, along which we can restrict. The first one is the inclusion

i_ξ : H ∩ ξHξ^{-1} → H,  i_ξ(h) = h,

and the second one is the conjugation

c_ξ : H ∩ ξHξ^{-1} → H,  c_ξ(h) = ξ^{-1}hξ.

The representation W can be restricted along both homomorphisms to obtain representations i∗_ξ(W) and c∗_ξ(W) of H ∩ ξHξ^{-1}. The inner product of these two representations is

〈i∗_ξ(W), c∗_ξ(W)〉 = (1/#(H ∩ ξHξ^{-1})) ∑_{h∈H∩ξHξ^{-1}} χ_W(h) χ_W(ξ^{-1}hξ).

Combining this with the earlier computation yields

〈χ_{Ind W}, χ_{Ind W}〉 = 〈χ_W, χ_W〉 + ∑_{ξ≠e} (#(H ∩ ξHξ^{-1})/#H) 〈χ_{i∗_ξ(W)}, χ_{c∗_ξ(W)}〉.

We want to know when this expression is equal to 1. Since 〈χ_W, χ_W〉 ≥ 1 and all numbers in the sum are non-negative, this happens if and only if 〈χ_W, χ_W〉 = 1, that is W is irreducible, and 〈χ_{i∗_ξ(W)}, χ_{c∗_ξ(W)}〉 = 0 for each ξ ≠ e. We capture this last property in a definition.

Definition 4.9. Two representations V and V′ of G are called disjoint if no irreducible representation of G occurs in both V and V′.

In other words, the representations V and V′ are disjoint if and only if all irreducible representations occurring in the decomposition of V are different from all irreducible representations occurring in the decomposition of V′. By Schur's orthogonality relations, this is equivalent to 〈χ_V, χ_{V′}〉 = 0. Thus we have proven the following criterion for irreducibility of induced representations.

Theorem 4.10 (Mackey). Let H be a subgroup of G, and let W be a representation of H. Then the induced representation Ind^G_H W is irreducible if and only if W itself is irreducible, and for each left coset ξH ∈ G/H with ξ ∉ H, the representations i∗_ξ(W) and c∗_ξ(W) of H ∩ ξHξ^{-1} are disjoint.


Example 4.11. Let (W, ρ) be the standard representation of Cn, i.e. W = C and ρ(r) = e^{2πi/n}. It induces a representation Ind W of Dn, via the inclusion Cn ⊆ Dn. We wish to know for which n this representation is irreducible. The representation W itself is irreducible since it is one-dimensional, so the first condition of Mackey's criterion is fulfilled. For the second condition, notice that the only non-trivial left coset of Cn in Dn is sCn, where s is the reflection in Dn. Since Cn is a normal subgroup of Dn, the representations i∗_s(W) and c∗_s(W) are representations of Cn ∩ sCns^{-1} = Cn ∩ Cn = Cn. The restriction i∗_s(W) is simply W. The restriction c∗_s(W) has underlying vector space C, and the corresponding homomorphism ρ_s : Cn → C is determined by ρ_s(r) = ρ(s^{-1}rs) = ρ(r^{-1}) = e^{-2πi/n}. Hence Ind W is irreducible if and only if e^{2πi/n} ≠ e^{-2πi/n}, which holds if and only if n ≠ 1, 2.
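The disjointness condition can also be checked numerically (an illustrative Python sketch, assuming the setup of the example): for each n we compute the inner product of the characters of i∗_s(W) and c∗_s(W) on Cn, which should vanish exactly when n > 2.

import cmath

def restrictions_inner_product(n):
    # <chi_{i_s*(W)}, chi_{c_s*(W)}> on Cn, where the two characters send
    # r^k to omega^k and omega^(-k) respectively, with omega = e^{2 pi i / n}.
    omega = cmath.exp(2j * cmath.pi / n)
    return sum(omega**k * (omega**(-k)).conjugate() for k in range(n)) / n

for n in range(1, 7):
    print(n, round(abs(restrictions_inner_product(n)), 6))
# n = 1, 2 give 1.0 (the restrictions are not disjoint, Ind W is reducible);
# n >= 3 give 0.0, so Ind W is irreducible, matching the conclusion above.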


5 Category theory

5.1 Categories

Algebra provides insight into mathematical structures by viewing these in an abstract way. That is, we are not interested in what the elements of a structure look like, but in the relations between those elements. It is possible to achieve an even higher level of abstraction by not considering single algebraic structures, but focusing on the connections or homomorphisms between structures. This is the main idea behind category theory. Informally, a category is a collection of objects, regarded as mathematical structures, together with morphisms connecting the objects. Many aspects of algebra can be phrased in terms of categories. In this way, category theory gives a language that makes it easier to see similarities and differences between various structures. Here we shall introduce several basic notions of this theory. In particular, we will concentrate on examples of categorical concepts that are useful in representation theory.

Definition 5.1. A category C consists of:

• A class of objects Ob(C);

• For each two objects X and Y there is a set Hom(X, Y), whose elements are called morphisms from X to Y;

• For each X, Y, Z ∈ Ob(C) there is a composition operation ◦ : Hom(X, Y) × Hom(Y, Z) → Hom(X, Z), written as (f, g) ↦ g ◦ f;

• For each object X there is an identity morphism id_X ∈ Hom(X, X).

These satisfy the following requirements:

• Composition is associative: for f ∈ Hom(W, X), g ∈ Hom(X, Y), and h ∈ Hom(Y, Z), we have (h ◦ g) ◦ f = h ◦ (g ◦ f).

• The identity morphism behaves as a unit for composition: for any f ∈ Hom(X, Y) we have id_Y ◦ f = f = f ◦ id_X.

We usually write X ∈ C instead of X ∈ Ob(C), and f : X → Y instead of f ∈ Hom(X, Y).

Examples 5.2.

1. The category Sets has sets as objects, and a morphism from a set X to Y is a function f : X → Y. The composition operation is simply composition of functions. Checking that this operation is associative and that the identity function is a unit is easy.

2. All well-known algebraic structures form a category with their corresponding homomorphisms. For example, there is a category Grp of groups together with group homomorphisms, a category Ab of abelian groups and group homomorphisms, and a category Ring of rings with ring homomorphisms. If we fix a field k, then we can form the category Vect_k, with vector spaces over k as objects and linear maps as morphisms.


3. Let G be a group. Representations of G are the objects in a category Rep_G. The morphisms in this category are intertwiners. As before, we only consider complex representations.

4. In all of the above examples, morphisms are certain functions between the objects, and the composition operation is actual function composition. This is not the case in all categories. For instance, we can view a group G as a category in the following way. The category has one object, denoted by ∗. Morphisms from ∗ to ∗ are group elements g ∈ G, and composition is given by group multiplication in G. The group axioms guarantee that composition is associative and has a unit.

Categories themselves also form a mathematical structure, so we may ask what the associated morphisms are. These are called functors.

Definition 5.3. Let C and D be categories. A functor from C to D is a map F that assigns to each object X of C an object F(X) of D, and to each morphism f : X → Y in C a morphism F(f) : F(X) → F(Y). The assignment should satisfy F(id_X) = id_{F(X)} and F(g ◦ f) = F(g) ◦ F(f).

Functors compose, and the identity map Id on a category is always a functor. Therefore there exists a category Cat with categories as objects and functors as morphisms. Readers who are worried about set-theoretic size issues occurring when considering the category of categories may restrict themselves to categories whose class of objects forms a set.

Examples 5.4.

1. Every group has an underlying set, and every group homomorphism is in particular a function between the underlying sets. Hence there is a functor U from Grp to Sets sending a group to its underlying set, and satisfying U(f) = f on morphisms. This functor is called forgetful, since it does nothing but forgetting the group structure.

There are many more examples of forgetful functors: the functor Ab → Grp forgetting that a group is abelian; the functor Vect_k → Ab forgetting the scalar multiplication of a vector space, but retaining the addition; and the functor Rep_G → Vect_C mapping a representation (V, ρ) to V.

2. We will define a functor (−)^{ab} : Grp → Ab, called 'abelianization'. On an object G in Grp, define G^{ab} = G/[G,G], where [G,G] is the commutator subgroup of G. A homomorphism f : G → H gives rise to a map f^{ab} : G/[G,G] → H/[H,H], f^{ab}(g[G,G]) = f(g)[H,H]. This map is well-defined since a group homomorphism maps commutators to commutators. The assignment (−)^{ab} clearly fulfills the conditions for a functor.

3. Let U be a vector space. Tensoring with U yields a functor U ⊗ (−) : Vect_C → Vect_C. If f : V → W is a morphism in Vect_C, then U ⊗ f : U ⊗ V → U ⊗ W is defined by (U ⊗ f)(u ⊗ v) = u ⊗ f(v). This is easily


checked to be a functor. In the special case where U is a representation of a group G, the tensor product U ⊗ V is again a representation of G, with action g · (u ⊗ v) = (g · u) ⊗ v. Thus tensoring becomes a functor U ⊗ (−) : Vect_C → Rep_G.

4. Again, let U be a vector space. The functor Hom(U, −) : Vect_C → Vect_C maps an object V to the space of linear maps from U to V. For a linear map f : V → W, define Hom(U, f) : Hom(U, V) → Hom(U, W) by Hom(U, f)(ϕ) = f ◦ ϕ, as indicated in the following diagram:

[Triangle diagram: ϕ : U → V, f : V → W, and the composite Hom(U, f)(ϕ) = f ◦ ϕ : U → W.]

Just like in the previous example, this functor can also be seen as a functor Vect_C → Rep_G, provided U is a representation of G. The action of G on Hom(U, V) is given by (g · ϕ)(u) = ϕ(g^{-1} · u). This is a special case of the action of G on a space of linear maps, where V is the trivial representation.

Some functors reverse the direction of the morphisms. More precisely, a contravariant functor is an assignment F that sends objects X to objects F(X), and morphisms f : X → Y to morphisms F(f) : F(Y) → F(X), in such a way that F(id_X) = id_{F(X)} and F(g ◦ f) = F(f) ◦ F(g). An "ordinary" functor is sometimes called a covariant functor to emphasize the distinction with contravariant ones.

Example 5.5. Restriction of a representation along a group homomorphism can be described in the language of categories. There exists a contravariant functor Rep : Grp → Cat that assigns to a group G the category Rep_G of complex representations. For a homomorphism f : H → G we have to define a functor f∗ : Rep_G → Rep_H. Keep in mind that H and G are swapped, since we want the functor Rep to be contravariant, but that f∗ is covariant, since morphisms in Cat are covariant functors. The functor f∗ is defined on objects of Rep_G via f∗(V, ρ) = (V, ρ ◦ f). Furthermore, f∗ maps an intertwiner ϕ : (V, ρ) → (W, σ) to itself, considered as an intertwiner ϕ : (V, ρ ◦ f) → (W, σ ◦ f).

We have seen that morphisms between categories are functors. It is possible to take the abstraction to an even higher level by defining morphisms between functors.

Definition 5.6. Suppose that F and G are both functors from C to D. A natural transformation σ from F to G, notation σ : F ⇒ G, is a family of morphisms σ_X : F(X) → G(X) in D indexed by objects X ∈ C, subject to the following requirement: for each morphism f : X → Y in C we have σ_Y ◦ F(f) = G(f) ◦ σ_X. In other words, the following diagram commutes:


[Commutative square: σ_X : F(X) → G(X) and σ_Y : F(Y) → G(Y) horizontally, F(f) and G(f) vertically; G(f) ◦ σ_X = σ_Y ◦ F(f).]

Natural transformations are the morphisms in the category of functors from C to D. This category is written as Fun(C, D).

Examples 5.7.

1. Composing the functor (−)^{ab} : Grp → Ab with the forgetful functor Ab → Grp yields a functor Grp → Grp, again called (−)^{ab}. There is a natural transformation π : Id ⇒ (−)^{ab} whose components π_G : G → G/[G,G] are the projections onto the quotient group. To show that this transformation is natural, we have to prove that the diagram

[Commutative square: π : G → G/[G,G] and π : H → H/[H,H] horizontally, f and f^{ab} vertically.]

commutes. But this follows immediately from the definition of f^{ab}.

2. Let G be a group, considered as a category with one object. We will take a look at natural transformations from the identity functor Id : G → G to itself. Such a transformation consists of a family of morphisms from ∗ to ∗ indexed by objects of G, but since G has only one object as a category, this is the same as a single morphism from ∗ to ∗. Such a morphism is an element g of G. The naturality condition says that hg = gh for all h ∈ G. It follows that natural transformations from Id to Id are exactly the elements in the center of G.
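A tiny computational illustration of the last point (a Python sketch of my own, not from the notes): the naturality condition hg = gh singles out the center, which for S3 is trivial.

from itertools import permutations

# Permutations of {0,1,2} as tuples; composition applies q first, then p.
def compose(p, q):
    return tuple(p[q[i]] for i in range(3))

S3 = list(permutations(range(3)))

# A natural transformation Id => Id on the one-object category of a group G is an
# element g with hg = gh for every h, i.e. an element of the center of G.
center = [g for g in S3 if all(compose(h, g) == compose(g, h) for h in S3)]
print(center)    # [(0, 1, 2)] -- only the identity, so the center of S3 is trivial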

5.2 Adjunctions

We have seen that there is a nice connection between tensor products and spaces of linear maps. For vector spaces this connection is elaborated in Proposition A.10, and for representations in Proposition 1.5. We will rephrase this connection in terms of functors. A vector space U gives a tensor product functor F = U ⊗ (−) : Vect_C → Vect_C, and a Hom functor G = Hom(U, −) : Vect_C → Vect_C. There is a bijective correspondence between maps f : F(V) → W and maps g : V → G(W), in which the maps f and g determine each other via the equation f(u ⊗ v) = g(v)(u). Analogous situations for other functors are abundant in mathematics, and are called adjunctions. Loosely speaking, an adjunction consists of functors F : C → D and G : D → C, in such a way that


there is a bijection between morphisms F(X) → Y and morphisms X → G(Y). This bijection is subject to a naturality condition, which looks complicated but is often trivial to check in practice.

Definition 5.8. A functor F : C → D is said to be left adjoint to a functor G : D → C if for each X ∈ C and Y ∈ D there is a bijection Φ_{X,Y} : Hom(F(X), Y) → Hom(X, G(Y)) that is natural in X and Y. The latter requirement means the following: if f : X′ → X and g : Y → Y′ are morphisms in C and D, respectively, then for all h : F(X) → Y in D we have that

Φ_{X′,Y′}(g ◦ h ◦ F(f)) = G(g) ◦ Φ_{X,Y}(h) ◦ f.

Instead of "F is left adjoint to G" one sometimes says "G is right adjoint to F" or "F and G form an adjunction between C and D", and one writes F ⊣ G.

Not every functor has a left or a right adjoint, but whenever an adjoint exists, it is unique. There are several equivalent definitions of an adjunction, but here we will only use the above one.

Examples 5.9.

1. We will show that the abelianization functor (−)^{ab} : Grp → Ab is left adjoint to the forgetful functor Ab → Grp. To this end, let A be an abelian group, and G an arbitrary group. We have to establish a bijective correspondence between homomorphisms G/[G,G] → A and homomorphisms G → A. In one direction, a morphism h : G → A gives rise to Φ(h) : G/[G,G] → A defined by Φ(h)(g[G,G]) = h(g). This map is well-defined since A is abelian. In the other direction, a map k : G/[G,G] → A induces a map Φ^{-1}(k) : G → A by composing with the canonical projection, i.e. Φ^{-1}(k)(g) = k(g[G,G]). These constructions are clearly mutually inverse, and the naturality requirement follows from the definition of Φ.

2. For a fixed vector space U there is an adjunction U ⊗ (−) ⊣ Hom(U, −), with both functors going from Vect_C to itself. The proof that this is indeed an adjunction has been sketched above; the details are left to the reader.

3. The forgetful functor U : Rep_G → Vect_C has both a left and a right adjoint: the left adjoint is C[G] ⊗ (−) and the right adjoint is Hom(C[G], −), so that C[G] ⊗ (−) ⊣ U ⊣ Hom(C[G], −).


The functors are defined as in items 3 and 4 of Examples 5.4, with U = C[G].

4. Let f : H → G be a group homomorphism. Apply the functor Rep from Example 5.5 to obtain a functor f∗ : Rep_G → Rep_H. The left adjoint of f∗ is the induced representation functor V ↦ C[G] ⊗_{C[H]} V, and the right adjoint of f∗ is the coinduced representation functor V ↦ Hom_{C[H]}(C[G], V). The space Hom_{C[H]}(C[G], V) consists of linear maps f : C[G] → V for which f(ax) = af(x) for each a ∈ C[H] and x ∈ C[G]. The statement that the induced representation is left adjoint to the restriction functor f∗ is a categorical reformulation of Frobenius reciprocity.

Often these constructions are considered in the situation where H is a subgroup of G, and f is the inclusion. In the special case where H is the one-element subgroup of G, we recover the previous example.

5.3 Tannaka duality

For any group G, we can form its category of representations Rep_G. We now turn to the reverse problem: is it possible to reconstruct a group from its category of representations? It turns out that this is indeed possible, using some of the categorical concepts we have developed before.

If G is a group, then the category of representations comes equipped with a forgetful functor U : Rep_G → Vect_C. We will show that G can be reconstructed from the collection of natural transformations from U to itself. Explicitly, such a natural transformation σ : U ⇒ U consists of a natural family of linear maps σ_{(V,ρ)} : U(V) → U(V) indexed by representations (V, ρ). Sometimes σ_{(V,ρ)} is abbreviated to σ_V. These linear maps need not be intertwiners, because we applied the forgetful functor to the representations, so they are simply maps from the underlying vector space to itself. Naturality means that for each intertwiner ϕ : V → W the following diagram commutes:

[Commutative square: σ_V : V → V and σ_W : W → W horizontally, ϕ : V → W vertically on both sides; ϕ ◦ σ_V = σ_W ◦ ϕ.]

The collection of natural transformations from U to itself will be denoted by End(U).

We will look at some basic properties of natural transformations in End(U). These transformations behave well with respect to subrepresentations and direct sums of representations.

Lemma 5.10. Let σ : U ⇒ U be a natural transformation.

1. If V is a subrepresentation of W, then (σ_W)|_V = σ_V. Here (σ_W)|_V indicates the restriction of σ_W : U(W) → U(W) to U(V) ⊆ U(W).


2. The transformation σ preserves direct sums, i.e. σ_{V⊕W} = σ_V ⊕ σ_W.

Proof.

1. Since V is a subrepresentation of W, the inclusion map i : V → W is an intertwiner. Hence the diagram

[Commutative square: σ_V : V → V and σ_W : W → W horizontally, the inclusion i : V → W vertically on both sides.]

commutes by naturality of σ. This means that σ_V(v) = σ_W(v) for each v ∈ V, or equivalently (σ_W)|_V = σ_V.

2. The projection map π_V : V ⊕ W → V given by π_V(v, w) = v is an intertwiner, so it fits in a commutative diagram

[Commutative square: σ_{V⊕W} : V ⊕ W → V ⊕ W and σ_V : V → V horizontally, π_V vertically on both sides.]

Commutativity of this diagram says that π_V(σ_{V⊕W}(v, w)) = σ_V(v), in other words, the first component of σ_{V⊕W}(v, w) is σ_V(v). Likewise one proves that its second component is σ_W(w), which gives σ_{V⊕W}(v, w) = (σ_V(v), σ_W(w)). An alternative way of writing this is σ_{V⊕W} = σ_V ⊕ σ_W.

All natural transformations preserve subrepresentations and direct sums, but unfortunately not all of them preserve tensor products. Therefore it is often good to restrict the class of transformations under consideration to the tensor-preserving ones.

Definition 5.11. A natural transformation σ : U ⇒ U is monoidal if:

• σ preserves tensor products: σ_{V⊗W} = σ_V ⊗ σ_W.

• The value of σ on the trivial representation C is σ_C = id_C.

Note that on the left-hand side in the first condition we take a tensor product of representations V and W, while on the right-hand side we take a tensor product of a linear map U(V) → U(V) with a map U(W) → U(W). The second condition ensures that σ also preserves the empty tensor product, which is the trivial representation. The collection of monoidal natural transformations σ : U ⇒ U is denoted End⊗(U). We will now show that the group G can be recovered as the group of monoidal natural transformations from U to itself.


Theorem 5.12 (Tannaka). Let G be a finite group, and U : Rep_G → Vect_C the corresponding forgetful functor. The set End⊗(U) of monoidal natural transformations σ : U ⇒ U forms a group under composition, and this group is isomorphic to G.

Proof. Define a map f : G → End⊗(U) by letting f(g) be the natural transformation with components f(g)_V : U(V) → U(V), f(g)_V(v) = g · v. In other words, if we write the representation V as (V, ρ), then the component f(g)_{(V,ρ)} is the map ρ(g). Then f(g) is a natural transformation, because any intertwiner ϕ : V → W satisfies ϕ ◦ f(g)_V = f(g)_W ◦ ϕ. From the definition of a tensor product of representations it follows that f(g) is always a monoidal transformation, so f is well-defined. Furthermore, the map f maps products to compositions because ρ is a homomorphism. We shall prove that f is a bijection; then it will follow that End⊗(U) is a group and that f is an isomorphism between G and End⊗(U).

To show that f is injective, suppose that f(g) = f(h). Then all components of the natural transformations f(g) and f(h) are equal, so for each representation V, we have f(g)_V = f(h)_V. This holds in particular for the regular representation V = C[G]. Writing out the definition of f then gives g · v = h · v for every v ∈ C[G]. Plugging in the unit of the group for v shows that g = h, which establishes injectivity.

The proof of surjectivity is more involved and will be presented using a sequence of claims. Let σ ∈ End⊗(U) be arbitrary; we seek a group element g ∈ G such that f(g) = σ. First we define an element of the group ring instead: let a = σ_{C[G]}(e) ∈ C[G], that is, a is the result of applying the component at the regular representation σ_{C[G]} : C[G] → C[G] to the unit e ∈ G ⊆ C[G].

Claim 1. a is nonzero.

Proof. Let C be the trivial representation and define an intertwiner ε : C[G] → C that maps every basis vector g to 1. Hence on a linear combination it acts as

ε(a_1g_1 + · · · + a_ng_n) = a_1 + · · · + a_n.

Naturality of σ gives a commutative diagram

[Commutative square: σ_{C[G]} : C[G] → C[G] on top, id : C → C on the bottom, with ε : C[G] → C on both sides.]

where we used that σ_C = id, because σ is monoidal. Commutativity of the diagram implies that ε(σ_{C[G]}(g)) = ε(g) = 1 for each g ∈ G. In particular, for g = e we get ε(a) = 1, which is only possible if a ≠ 0.


Claim 2. The element a ∈ C[G] is actually an element of G. That means, if we express a in terms of basis vectors as a = a_1g_1 + · · · + a_ng_n, then exactly one a_i is one and the other coefficients are zero.

Proof. Define a map ∆ : C[G] → C[G] ⊗ C[G] on basis vectors by ∆(g) = g ⊗ g. Extending this linearly to an arbitrary vector of C[G] gives

∆(a_1g_1 + · · · + a_ng_n) = a_1(g_1 ⊗ g_1) + · · · + a_n(g_n ⊗ g_n).

Beware that we cannot conclude that ∆(x) = x ⊗ x for an arbitrary x ∈ C[G], since this latter equation also involves cross terms.

Since ∆ is an intertwiner, naturality of σ gives a commutative diagram

[Commutative square: σ_{C[G]} : C[G] → C[G] on the bottom, σ_{C[G]⊗C[G]} : C[G] ⊗ C[G] → C[G] ⊗ C[G] on top, with ∆ on both sides.]

Hence for any g ∈ G we have

∆(σ_{C[G]}(g)) = σ_{C[G]⊗C[G]}(g ⊗ g).

Because σ is monoidal,

∆(σ_{C[G]}(g)) = σ_{C[G]}(g) ⊗ σ_{C[G]}(g).

Plug in g = e and use a = σ_{C[G]}(e) to obtain

∆(a) = a ⊗ a.

Write a = a_1g_1 + · · · + a_ng_n; then this equation becomes

a_1(g_1 ⊗ g_1) + · · · + a_n(g_n ⊗ g_n)
= a_1^2(g_1 ⊗ g_1) + a_1a_2(g_1 ⊗ g_2) + · · · + a_1a_n(g_1 ⊗ g_n)
+ · · · + a_na_1(g_n ⊗ g_1) + a_na_2(g_n ⊗ g_2) + · · · + a_n^2(g_n ⊗ g_n).

Comparing coefficients at the basis vectors g_i ⊗ g_i gives a_i^2 = a_i for all i, so each a_i is either 0 or 1. Comparing coefficients at the other basis vectors shows that no two different a_i can be 1, so at most one a_i is 1 and all others are zero. But by our previous claim a is not identically zero, hence exactly one a_i is 1.

Since we know that a is an element of G, we can now try to prove that f(a) = σ.


Claim 3. The transformations f(a) and σ are equal at the regular representation, i.e. f(a)_{C[G]} = σ_{C[G]}.

Proof. For any y ∈ C[G], let r_y : C[G] → C[G] be the map "right multiplication by y", given by r_y(x) = xy for each x ∈ C[G]. Because this map is an intertwiner, naturality of σ gives σ_{C[G]} ◦ r_y = r_y ◦ σ_{C[G]}, whence σ_{C[G]}(xy) = σ_{C[G]}(x)y for all x, y ∈ C[G]. It follows that

f(a)_{C[G]}(y) = ay = σ_{C[G]}(e)y = σ_{C[G]}(ey) = σ_{C[G]}(y)

for every y ∈ C[G], and hence f(a)_{C[G]} and σ_{C[G]} are equal.

Claim 4. If two natural transformations σ, τ : U ⇒ U are equal at the regular representation, then σ = τ.

Proof. Every irreducible representation is a subrepresentation of the regular representation by Proposition 1.14. Hence σ and τ are equal at all irreducible representations by part 1 of Lemma 5.10. But every representation is a direct sum of irreducibles, so part 2 of the same lemma shows that the transformations are equal everywhere.

The final two claims together prove that f is surjective.


A Useful linear algebra

A.1 Linear maps

Most concepts in linear algebra have an abstract and a concrete version. The abstract version uses vector spaces and linear maps, while the concrete version uses vectors in a euclidean space C^n and matrices. Abstract versions are usually better for proving theorems, and concrete versions for doing computations. These two versions can be connected using bases. A basis for a vector space V is a collection of vectors e_1, . . . , e_n such that the following properties hold:

• Linear independence: if λ_1e_1 + · · · + λ_ne_n = 0, then λ_1 = · · · = λ_n = 0.

• Completeness: every v ∈ V can be written as a linear combination v = λ_1e_1 + · · · + λ_ne_n.

Thus any vector in V can be expressed in terms of basis vectors in a unique way, and a choice of basis provides an isomorphism from V onto C^n. For an explicit description of this isomorphism, write a vector v ∈ V as a linear combination v = λ_1e_1 + · · · + λ_ne_n; then the isomorphism maps v to (λ_1, . . . , λ_n) ∈ C^n.

This shows that a basis connects vectors in an abstract vector space with concrete vectors in C^n. Similarly bases connect linear maps with matrices. Let ϕ : V → W be a linear map and take bases e_1, . . . , e_n for V and f_1, . . . , f_m for W. Then the map ϕ is completely determined by the values ϕ(e_i). Expanding these values in the basis for W gives

ϕ(e_i) = a_{i1}f_1 + · · · + a_{im}f_m.

The coefficients a_{ij} can be collected into an n × m matrix A = (a_{ij}), which characterizes the map ϕ. Thus a choice of basis provides an isomorphism between linear maps and matrices.
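As a small numerical illustration of this correspondence (a numpy sketch of my own, following the convention above that the i-th row of A holds the coordinates of ϕ(e_i)):

import numpy as np

# phi: C^2 -> C^3, given on the basis e_1, e_2 by coordinates in f_1, f_2, f_3.
phi_e1 = np.array([1, 2, 0])     # phi(e_1) = 1*f_1 + 2*f_2 + 0*f_3
phi_e2 = np.array([0, 1, -1])    # phi(e_2) = 0*f_1 + 1*f_2 - 1*f_3

# A = (a_ij) is the 2x3 matrix whose i-th row lists the coefficients of phi(e_i).
A = np.vstack([phi_e1, phi_e2])

# A vector v = 3*e_1 + 5*e_2 has coordinate row (3, 5); by linearity the
# coordinates of phi(v) are the row vector (3, 5) @ A.
v = np.array([3, 5])
print(v @ A)                     # [ 3 11 -5 ] = 3*phi(e_1) + 5*phi(e_2)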

We will look at several ways to construct new vector spaces from old ones. Each of these constructions can be applied to both vector spaces and linear maps. For each construction we will describe an abstract version and a version using bases and matrices.

Our first construction is the vector space of linear maps. The collection of all linear maps ϕ : V → W forms a vector space under pointwise operations. This vector space is written as Hom(V, W). Bases for the spaces V and W give a basis for Hom(V, W):

Proposition A.1. Let V and W be vector spaces with bases e_1, . . . , e_n and f_1, . . . , f_m, respectively. Then the maps α_{ij} : V → W defined by α_{ij}(e_i) = f_j and α_{ij}(e_{i′}) = 0 for i′ ≠ i form a basis for Hom(V, W).

Proof. The matrix representation of α_{ij} has a 1 on position (i, j) and zeroes everywhere else. These matrices clearly form a basis for the space of all n × m matrices.


There is a special case of the construction of the vector space of linear maps Hom(V, W), where we take the one-dimensional space C for W. The dual space of a vector space V consists of all linear maps from V to C and is denoted V^*. Specializing Proposition A.1 to the dual space gives the following.

Proposition A.2. If V has basis e_1, . . . , e_n, then the dual space V^* has basis e_1^*, . . . , e_n^*, where e_i^* : V → C is defined on basis vectors by

e_i^*(e_j) = 1 if i = j, and e_i^*(e_j) = 0 if i ≠ j.

Also linear maps have a dual. If f : V → W is linear, then it defines a dual map f^* : W^* → V^* by f^*(φ) = φ ◦ f for φ : W → C. Dualizing reverses the order of composition, in the sense that (f ◦ g)^* = g^* ◦ f^*. In matrix form, the dual map corresponds to the transpose matrix:

Proposition A.3. Let ϕ : V → W be linear, e_1, . . . , e_n a basis for V, and f_1, . . . , f_m a basis for W. If A is the matrix representing ϕ with respect to these bases, then its transpose A^T represents ϕ^* : W^* → V^* with respect to the dual bases f_1^*, . . . , f_m^* and e_1^*, . . . , e_n^*.

Proof. Since the matrix A = (aij) represents ϕ, we can write ϕ(ei) = ∑j aijfj. We wish to show that the entry at position (j, i) of the matrix representing ϕ∗ is aij. To achieve this, it suffices to show that ϕ∗(f∗j) = ∑i aije∗i. Both sides of this equation are maps from V to C, so we can show that they are equal by evaluating both at all basis vectors ei. For the left-hand side we obtain

ϕ∗(f∗j)(ei) = f∗j(ϕ(ei)) = f∗j(∑k aikfk) = ∑k aikf∗j(fk) = aij,

while the right-hand side becomes

(∑k akje∗k)(ei) = ∑k akje∗k(ei) = aij,

proving the desired equality.
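As a quick numerical aside (not in the original notes), the transposition phenomenon of Proposition A.3 is easy to check with NumPy. The sketch uses the usual column-vector convention, f(v) = Av; a functional φ ∈ W ∗ is stored as the vector p with φ(w) = p · w, and its pullback f∗(φ) = φ ◦ f is then represented by Aᵀp.

```python
import numpy as np

rng = np.random.default_rng(0)

# A linear map f: C^3 -> C^2 in the column-vector convention, f(v) = A @ v.
A = rng.standard_normal((2, 3))

# A functional phi on C^2, stored as the vector p with phi(w) = p . w.
p = rng.standard_normal(2)
v = rng.standard_normal(3)

lhs = p @ (A @ v)             # (f*(phi))(v) = phi(f(v))
rhs = (A.T @ p) @ v           # the functional represented by A^T p, applied to v
assert np.allclose(lhs, rhs)  # so the dual map is given by the transpose matrix
```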

A.2 Direct sums

Let V and W be two vector spaces. Their direct sum V ⊕ W is the vector space of pairs (v, w) for v ∈ V and w ∈ W with pointwise operations. Thus the direct sum is the same as the product V × W. We write this vector space as a direct sum, since (v, w) = (v, 0) + (0, w), so each vector in V ⊕ W can be expressed uniquely as a sum of a vector in V and a vector in W. Another reason for writing the product as a direct sum is that we get a basis for V ⊕ W by taking the union of the bases for V and W, not by taking their product. More precisely:


Proposition A.4. Let e1, . . . , en be a basis for V and f1, . . . , fm a basis for W. Then (e1, 0), . . . , (en, 0), (0, f1), . . . , (0, fm) is a basis for V ⊕ W. As a consequence, dim(V ⊕ W) = dimV + dimW.

Two linear maps ϕ : V → V ′ and ψ : W → W ′ can be combined into one map ϕ ⊕ ψ : V ⊕ W → V ′ ⊕ W ′, defined by (ϕ ⊕ ψ)(v, w) = (ϕ(v), ψ(w)). Thus we can not only take the direct sum of vector spaces, but also of linear maps. If we write ϕ as a matrix A and ψ as a matrix B, then the matrix of ϕ ⊕ ψ is obtained by putting A and B as blocks on the diagonal, yielding the matrix

[ A 0 ]
[ 0 B ].

The direct sum interacts with the space of linear maps in a nice way.

Proposition A.5. Let U , V , and W be vector spaces.

1. The vector spaces Hom(U ⊕ V, W) and Hom(U,W) ⊕ Hom(V,W) are isomorphic.

2. The vector spaces Hom(U, V ⊕ W) and Hom(U, V) ⊕ Hom(U,W) are isomorphic.

Proof. We will only prove the first point; the second one is similar. Given a map ϕ : U ⊕ V → W, form the maps ϕ1 : U → W and ϕ2 : V → W by ϕ1(u) = ϕ(u, 0) and ϕ2(v) = ϕ(0, v). Then the function Φ : Hom(U ⊕ V, W) → Hom(U,W) ⊕ Hom(V,W) that sends ϕ to the pair (ϕ1, ϕ2) is linear. It has an inverse Ψ : Hom(U,W) ⊕ Hom(V,W) → Hom(U ⊕ V, W) that maps a pair (ϕ1, ϕ2) to the function (u, v) ↦ ϕ1(u) + ϕ2(v).

To show that Ψ is indeed an inverse, take a function ϕ : U ⊕ V → W. Applying Φ gives the pair (ϕ1, ϕ2), and applying Ψ to this pair gives the function (u, v) ↦ ϕ1(u) + ϕ2(v) = ϕ(u, 0) + ϕ(0, v) = ϕ((u, 0) + (0, v)) = ϕ(u, v), hence Ψ ◦ Φ = id. For the other direction, start with a pair (ϕ1, ϕ2). Applying Ψ gives the single function (u, v) ↦ ϕ1(u) + ϕ2(v). Applying Φ to this function gives a pair again. The first component of this pair is u ↦ ϕ1(u) + ϕ2(0) = ϕ1(u), and the second component is v ↦ ϕ1(0) + ϕ2(v) = ϕ2(v). Thus Φ(Ψ(ϕ1, ϕ2)) = (ϕ1, ϕ2), completing the proof.

The matrix version of the above result states that we can combine several matrices into one large block matrix. For example, the proposition implies that Hom(V ⊕ W, V ′ ⊕ W ′) ≅ Hom(V, V ′) ⊕ Hom(W, V ′) ⊕ Hom(V, W ′) ⊕ Hom(W, W ′). We will describe the isomorphism Φ from right to left. Take linear maps

ϕ11 : V → V ′,  ϕ12 : W → V ′,
ϕ21 : V → W ′,  ϕ22 : W → W ′

and represent ϕij by the matrix Aij. Then we obtain a map V ⊕ W → V ′ ⊕ W ′ by putting these matrices as blocks in a larger matrix

[ A11 A12 ]
[ A21 A22 ].


This block matrix acts on a vector (v, w) by mapping it to (A11v + A12w, A21v + A22w) ∈ V ′ ⊕ W ′. This is precisely what we get if we write (v, w) as a column vector and apply block multiplication to it. Saying that the map Φ is an isomorphism means that any linear map between the direct sums can be decomposed into blocks.
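The block decomposition can be checked numerically; the following NumPy sketch (an added illustration with arbitrary example sizes, using the column-vector convention) builds the block matrix with np.block and verifies the componentwise action described above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Matrices of four maps phi_11: V -> V', phi_12: W -> V', phi_21: V -> W',
# phi_22: W -> W', with V = C^3, W = C^2, V' = C^2, W' = C^2.
A11 = rng.standard_normal((2, 3)); A12 = rng.standard_normal((2, 2))
A21 = rng.standard_normal((2, 3)); A22 = rng.standard_normal((2, 2))

block = np.block([[A11, A12],
                  [A21, A22]])          # a single map V ⊕ W -> V' ⊕ W'

v = rng.standard_normal(3)
w = rng.standard_normal(2)
out = block @ np.concatenate([v, w])

# The two components are A11 v + A12 w and A21 v + A22 w, as in the text.
assert np.allclose(out[:2], A11 @ v + A12 @ w)
assert np.allclose(out[2:], A21 @ v + A22 @ w)
# The direct sum phi ⊕ psi is the special case with zero off-diagonal blocks.
```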

A.3 Tensor products

A map ϕ : U ⊕ V → W out of a direct sum is linear if and only if ϕ(u + u′, v + v′) = ϕ(u, v) + ϕ(u′, v′) and ϕ(λu, λv) = λϕ(u, v). In other words, it has to be linear in both variables simultaneously. One often encounters maps out of a direct sum that are linear in each variable separately instead. Such a map is called bilinear.

Definition A.6. If U, V, and W are vector spaces, then a bilinear map from U ⊕ V to W is a map ϕ for which

ϕ(u + u′, v) = ϕ(u, v) + ϕ(u′, v),   ϕ(λu, v) = λϕ(u, v),
ϕ(u, v + v′) = ϕ(u, v) + ϕ(u, v′),   ϕ(u, λv) = λϕ(u, v).

Bilinear maps are often more difficult to work with than linear maps. Therefore it would be nice to have a procedure to convert bilinear maps into linear maps. For this we will use a construction on vector spaces called the tensor product. For any two vector spaces U and V, we will construct a new vector space U ⊗ V, in such a way that bilinear maps U ⊕ V → W correspond bijectively to linear maps U ⊗ V → W.

There are two descriptions of the tensor product V ⊗ W of two vector spaces V and W. We will first discuss an abstract definition. The vector space V ⊗ W is generated by all vectors of the form v ⊗ w, for v ∈ V and w ∈ W. The tensor symbol ⊗ in v ⊗ w is just a formal symbol to denote a generating vector of the tensor product V ⊗ W, so the vector v ⊗ w can also be thought of as the pair (v, w). Since the vector space V ⊗ W is generated by the elements v ⊗ w, an arbitrary element of V ⊗ W looks like

∑i λivi ⊗ wi

for certain λi ∈ C, vi ∈ V, and wi ∈ W. We stipulate that these elements are subject to the following relations:

(v + v′) ⊗ w = v ⊗ w + v′ ⊗ w,   (λv) ⊗ w = λ(v ⊗ w),
v ⊗ (w + w′) = v ⊗ w + v ⊗ w′,   v ⊗ (λw) = λ(v ⊗ w).

These relations can be seen as “rules” to perform calculations with the vectors in V ⊗ W. They say that the symbol ⊗ behaves as a bilinear operation: it is linear in each variable separately.

The second, more concrete description of the tensor product involves a choice of basis. Let e1, . . . , en be a basis for V and f1, . . . , fm a basis for W.


Then V ⊗ W is the vector space with basis vectors ei ⊗ fj. In particular, if V has dimension n and W has dimension m, the tensor product V ⊗ W has dimension nm. This description can be used to give a more concrete picture of a vector v ⊗ w, by expressing it in terms of basis vectors. First write v and w as linear combinations of basis vectors:

v = ∑i λiei,   w = ∑j µjfj

Then the relations for the tensor product give:

v ⊗ w = (∑i λiei) ⊗ (∑j µjfj)
      = ∑i λiei ⊗ (∑j µjfj)
      = ∑i ∑j (λiei) ⊗ (µjfj)
      = ∑i ∑j λiµj(ei ⊗ fj)

and this is an expression of v ⊗ w in terms of basis vectors.

Example A.7. We take a look at the tensor product C2 ⊗ C3. Since the dimension of the tensor product is the product of the dimensions of the constituent spaces, C2 ⊗ C3 is 6-dimensional, and hence it is isomorphic to C6. An arbitrary vector in C2 is of the form (α, β), and in C3 of the form (γ, δ, ε). We wish to express their tensor product (α, β) ⊗ (γ, δ, ε) as a vector in C6. For this we can use the above calculation. Write the standard bases of C2 and C3 as e1, e2 and f1, f2, f3 respectively. Then (α, β) = αe1 + βe2 and (γ, δ, ε) = γf1 + δf2 + εf3. Therefore

(α, β)⊗ (γ, δ, ε) = αγ(e1 ⊗ f1) + αδ(e1 ⊗ f2) + αε(e1 ⊗ f3)

+βγ(e2 ⊗ f1) + βδ(e2 ⊗ f2) + βε(e2 ⊗ f3)

= (αγ, αδ, αε, βγ, βδ, βε),

since the vectors ei ⊗ fj form the standard basis of C6.
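Example A.7 can also be verified numerically. The snippet below (an added illustration) uses NumPy's np.kron, which orders the products exactly as the basis ei ⊗ fj above.

```python
import numpy as np

v = np.array([2.0, 3.0])           # (alpha, beta) in C^2
w = np.array([5.0, 7.0, 11.0])     # (gamma, delta, epsilon) in C^3

# Ordered as (alpha*gamma, alpha*delta, alpha*epsilon,
#             beta*gamma,  beta*delta,  beta*epsilon).
print(np.kron(v, w))               # [10. 14. 22. 15. 21. 33.]
```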

Tensor products can be used to transform bilinear maps into linear maps, since bilinear maps from U ⊕ V correspond to linear maps from the tensor product U ⊗ V:

Proposition A.8. There is a one-to-one correspondence between bilinear maps ϕ : U ⊕ V → W and linear maps ψ : U ⊗ V → W. On the generating vectors of the tensor product, this correspondence is given by ψ(u ⊗ v) = ϕ(u, v).


This proposition holds because the generating relations for the tensor product amount to the same as the conditions for a bilinear map. It provides a way of constructing linear maps out of a tensor product by finding a suitable bilinear map. Therefore we can use this result to establish some basic properties of tensor products.

Proposition A.9.

1. The tensor product is commutative: U ⊗ V ≅ V ⊗ U.

2. The tensor product is associative: U ⊗ (V ⊗ W) ≅ (U ⊗ V) ⊗ W.

3. The ground field is a unit for the tensor product: C ⊗ V ≅ V ⊗ C ≅ V.

4. The tensor product distributes over the direct sum: U ⊗ (V ⊕ W) ≅ (U ⊗ V) ⊕ (U ⊗ W).

Proof. We will only prove commutativity here; the other properties can be obtained in a similar way. To produce a linear map U ⊗ V → V ⊗ U, we have to find a bilinear map ϕ : U ⊕ V → V ⊗ U. For this we take ϕ(u, v) = v ⊗ u. Then ϕ is bilinear because

ϕ(u+ u′, v) = v ⊗ (u+ u′) = v ⊗ u+ v ⊗ u′ = ϕ(u, v) + ϕ(u′, v),

and similarly for the other variable and for scalar multiplications. Therefore ϕ gives a unique linear map ψ for which ψ(u ⊗ v) = ϕ(u, v) = v ⊗ u. To show that ψ is an isomorphism, it suffices to show that it has a linear inverse χ : V ⊗ U → U ⊗ V. This χ is obtained via a bilinear map in the same way as ψ, so it satisfies χ(v ⊗ u) = u ⊗ v. To check that χ is an inverse for ψ, note that

ψχ(v ⊗ u) = ψ(u⊗ v) = v ⊗ u.

Since the vectors v ⊗ u generate the space V ⊗ U and the map ψχ is equal to id on all generators, it is equal to id everywhere. Similarly χψ = id, so ψ is indeed an isomorphism.
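In coordinates, the commutativity isomorphism simply permutes the basis vectors ei ⊗ fj into fj ⊗ ei. A short NumPy check (an added illustration, not from the notes): on coefficient arrays this permutation is a reshape followed by a transpose.

```python
import numpy as np

v = np.array([1.0, 2.0])             # vector in U = C^2
w = np.array([3.0, 4.0, 5.0])        # vector in V = C^3

uv = np.kron(v, w)                   # coefficients of v ⊗ w w.r.t. e_i ⊗ f_j
vu = uv.reshape(2, 3).T.reshape(-1)  # same element, re-expressed w.r.t. f_j ⊗ e_i
assert np.allclose(vu, np.kron(w, v))
```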

So far we have only considered tensor products of vector spaces. It is also possible to take the tensor product of two linear maps. Again there is an abstract definition and a definition in terms of matrices. Abstractly, if ϕ : V → V ′ and ψ : W → W ′ are two linear maps, then their tensor product ϕ ⊗ ψ : V ⊗ W → V ′ ⊗ W ′ is defined on generators by (ϕ ⊗ ψ)(v ⊗ w) = ϕ(v) ⊗ ψ(w). This extends uniquely to a linear map defined on all vectors in V ⊗ W.

If we write the maps ϕ and ψ as matrices, then the construction of their tensor product is known as the Kronecker product. Suppose that dimV = n, dimV ′ = n′, dimW = m, and dimW ′ = m′. Then ϕ can be written as an n × n′ matrix A with indices Aij, and ψ as an m × m′ matrix B with indices Bkl. The space V ⊗ W has dimension mn and the space V ′ ⊗ W ′ has dimension m′n′, so ϕ ⊗ ψ is represented by an mn × m′n′ matrix A ⊗ B.


Its entries can be indexed by pairs (A ⊗ B)(i,k)(j,l), where 1 ≤ i ≤ n, 1 ≤ k ≤ m, 1 ≤ j ≤ n′, and 1 ≤ l ≤ m′. In this notation, the entries are given by

(A⊗B)(i,k)(j,l) = AijBkl.

This can be checked by applying both sides of the equation to all basis vectors; the computation is similar to the tensor product of two vectors. For example, the Kronecker product of two 2 × 2 matrices becomes:

[ a11 a12 ]   [ b11 b12 ]
[ a21 a22 ] ⊗ [ b21 b22 ]  =

[ a11b11 a11b12 a12b11 a12b12 ]
[ a11b21 a11b22 a12b21 a12b22 ]
[ a21b11 a21b12 a22b11 a22b12 ]
[ a21b21 a21b22 a22b21 a22b22 ]
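NumPy's np.kron builds exactly this block matrix; the sketch below (an added illustration, column-vector convention, random example matrices) also checks the defining property (ϕ ⊗ ψ)(v ⊗ w) = ϕ(v) ⊗ ψ(w).

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2))

K = np.kron(A, B)                  # the 4 x 4 block matrix displayed above

v = rng.standard_normal(2)
w = rng.standard_normal(2)
# (phi ⊗ psi)(v ⊗ w) = phi(v) ⊗ psi(w)
assert np.allclose(K @ np.kron(v, w), np.kron(A @ v, B @ w))
```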

We will now present two results that relate tensor products to spaces of linear maps and dual spaces.

Proposition A.10. The vector spaces Hom(U ⊗ V, W) and Hom(U, Hom(V,W)) are isomorphic.

Expanding the definitions, we see that a linear map from U to Hom(V,W) is the same as a bilinear map U ⊕ V → W. Thus the above result follows from Proposition A.8. It can be used to translate between maps from a tensor product and maps into a space of linear maps, so it is sometimes called the Hom-tensor adjunction.

Proposition A.11. If W is finite-dimensional, then the vector spaces V ∗ ⊗ W and Hom(V,W) are isomorphic.

Proof. We will provide two proofs. For the first proof, we will construct an explicit isomorphism Φ : V ∗ ⊗ W → Hom(V,W). There is a bilinear map Φ′ : V ∗ ⊕ W → Hom(V,W) given by

Φ′(ϕ,w)(v) = ϕ(v)w.

This gives a unique Φ : V ∗ ⊗ W → Hom(V,W) for which Φ(ϕ ⊗ w)(v) = ϕ(v)w. To check that it is an isomorphism, we will first prove that it is surjective. Let ψ ∈ Hom(V,W) be any linear map and let e1, . . . , en be a basis for W. Expand ψ in this basis as

ψ(v) = ∑i ψi(v)ei;

then ∑i ψi ⊗ ei is a pre-image of ψ because

Φ(∑i ψi ⊗ ei)(v) = ∑i ψi(v)ei = ψ(v).

Since Φ is a surjective linear map between the spaces V ∗ ⊗ W and Hom(V,W), which have the same finite dimension, it follows that Φ is an isomorphism.

Alternatively, we can use that if W is finite-dimensional, then it is isomorphic to Cn. Since Cn is the direct sum of n copies of C, Proposition A.5 and the fact that the tensor product distributes over direct sums give

Hom(V, Cn) ≅ Hom(V, C)n = (V ∗)n ≅ V ∗ ⊗ Cn.
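In coordinates, the isomorphism of Proposition A.11 sends ϕ ⊗ w to a rank-one matrix (an outer product), and every matrix is a sum of such pieces. A brief NumPy illustration (added here, column-vector convention, with a functional ϕ stored as the vector f such that ϕ(v) = f · v):

```python
import numpy as np

rng = np.random.default_rng(3)

f = rng.standard_normal(3)          # a functional phi on V = C^3
w = rng.standard_normal(2)          # a vector in W = C^2
v = rng.standard_normal(3)

rank_one = np.outer(w, f)           # matrix of the map v -> phi(v) w
assert np.allclose(rank_one @ v, (f @ v) * w)

# A general map V -> W is a sum of such rank-one pieces, e.g. one per column:
M = rng.standard_normal((2, 3))
reconstructed = sum(np.outer(M[:, i], np.eye(3)[i]) for i in range(3))
assert np.allclose(M, reconstructed)
```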


A.4 Inner products

Many vector spaces are equipped with an inner product. This is a map 〈−,−〉 : V × V → C subject to the following conditions.

• Sesquilinearity: 〈u, λv + µw〉 = λ〈u, v〉 + µ〈u,w〉 for all u, v, w ∈ V and λ, µ ∈ C.

• Skew-symmetry: 〈v, w〉 is the complex conjugate of 〈w, v〉 for all v, w ∈ V.

• Non-degeneracy: 〈v, v〉 ≥ 0 with equality if and only if v = 0.

The first two conditions together imply that the inner product is antilinear in the first variable, that is, 〈λu + µv, w〉 = λ̄〈u,w〉 + µ̄〈v, w〉, where the bar denotes complex conjugation. A vector space together with an inner product is called an inner product space.
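For computations it is convenient that NumPy's np.vdot uses the same convention, conjugating its first argument; a quick check (an added illustration, not from the notes):

```python
import numpy as np

u = np.array([1 + 2j, 3 - 1j])
v = np.array([2 - 1j, 1j])
lam = 2 + 3j

# np.vdot is linear in the second argument and antilinear in the first.
assert np.isclose(np.vdot(u, lam * v), lam * np.vdot(u, v))
assert np.isclose(np.vdot(lam * u, v), np.conj(lam) * np.vdot(u, v))
assert np.isclose(np.vdot(u, v), np.conj(np.vdot(v, u)))
```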

When working with an inner product space, we would like other objects to interact well with the inner product. For instance, we would like a basis for the space to be orthonormal. An orthonormal basis for V is a basis e1, . . . , en such that 〈ei, ei〉 = 1 and 〈ei, ej〉 = 0 for all i ≠ j. When expressing a vector v ∈ V in terms of such an orthonormal basis, the coefficient for the basis vector ei is 〈ei, v〉. In other words, every v ∈ V can be written as

v = ∑i 〈ei, v〉ei.

Every inner product space possesses an orthonormal basis, since an arbitrary basis can be made orthonormal by applying the Gram-Schmidt process.
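Numerically, the QR factorization plays the role of the Gram-Schmidt process: it turns the columns of an arbitrary invertible matrix into an orthonormal basis. A small NumPy sketch (an added illustration) also checks the expansion v = ∑i 〈ei, v〉ei from above.

```python
import numpy as np

rng = np.random.default_rng(4)

# Columns of X form an arbitrary basis of C^4; QR orthonormalizes them.
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Q, _ = np.linalg.qr(X)          # columns of Q: an orthonormal basis e_1, ..., e_4

# Orthonormality: <e_i, e_j> = 1 if i = j and 0 otherwise.
assert np.allclose(Q.conj().T @ Q, np.eye(4))

# Expansion v = sum_i <e_i, v> e_i, with conjugation in the first slot.
v = rng.standard_normal(4) + 1j * rng.standard_normal(4)
coeffs = Q.conj().T @ v         # coeffs[i] = <e_i, v>
assert np.allclose(Q @ coeffs, v)
```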

The following result gives a quick way to prove that a given set of vectors forms an orthonormal basis.

Proposition A.12. Let e1, . . . , en be a set of vectors in V satisfying the following requirements:

• For all i we have 〈ei, ei〉 = 1.

• For all i ≠ j we have 〈ei, ej〉 = 0.

• If 〈v, ei〉 = 0 for all i, then v = 0.

Then e1, . . . , en form an orthonormal basis for V .

Inner products and orthonormal bases also give an easy method to check if two linear maps are equal.

Proposition A.13. Let ϕ, ψ : V → W be linear maps, let e1, . . . , en be an orthonormal basis for V, and f1, . . . , fm an orthonormal basis for W. The following are equivalent:

1. ϕ = ψ.

2. 〈w,ϕ(v)〉 = 〈w,ψ(v)〉 for all v ∈ V and w ∈W .


3. 〈fj , ϕ(ei)〉 = 〈fj , ψ(ei)〉 for all i and j.

Proof. The implications 1 ⇒ 2 ⇒ 3 are evident. For the remaining implication 3 ⇒ 1, fix a number i and assume that 〈fj, ϕ(ei)〉 = 〈fj, ψ(ei)〉 for all j. Express ϕ as a matrix A = (aij) with respect to the bases, and express ψ as a matrix B = (bij). Then ϕ(ei) = ∑j aijfj and ψ(ei) = ∑j bijfj. Since our bases are orthonormal, it follows that aij = 〈fj, ϕ(ei)〉 = 〈fj, ψ(ei)〉 = bij for all j. Since i was arbitrary, all coefficients of A and B are equal, hence ϕ = ψ.

An inner product also gives rise to a bijection between maps V → W and maps W → V. This bijection involves the construction of the hermitian adjoint of an operator.

Proposition A.14. Let V and W be inner product spaces. For any linear ϕ : V → W there exists a unique ψ : W → V such that 〈v, ψ(w)〉 = 〈ϕ(v), w〉 for all v ∈ V and w ∈ W.

Proof. Let e1, . . . , en be an orthonormal basis for V and define ψ by ψ(w) = ∑i 〈ϕ(ei), w〉ei. Then

〈v, ψ(w)〉 = 〈v, ∑i 〈ϕ(ei), w〉ei〉 = ∑i 〈ϕ(ei), w〉〈v, ei〉 = 〈ϕ(∑i 〈ei, v〉ei), w〉 = 〈ϕ(v), w〉,

where the second equality uses linearity of the inner product in the second variable, the third uses that 〈v, ei〉 is the complex conjugate of 〈ei, v〉 together with antilinearity in the first variable (〈λv, w〉 = λ̄〈v, w〉), and the last uses that any v ∈ V can be expressed as v = ∑i 〈ei, v〉ei. Uniqueness of ψ follows from Proposition A.13.

The unique map ψ from the above proposition is written as ϕ†, and called the hermitian adjoint of ϕ. From the formula 〈v, ϕ†(w)〉 = 〈ϕ(v), w〉 we can derive a number of properties of the hermitian adjoint:

(ϕ + ψ)† = ϕ† + ψ†        (ϕ ◦ ψ)† = ψ† ◦ ϕ†
(λϕ)† = λ̄ϕ†               ϕ†† = ϕ
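With respect to orthonormal bases the adjoint is computed by the conjugate transpose (compare Proposition A.15 below); a quick NumPy check of the defining property, added here as an illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Matrix of a map phi: C^3 -> C^2 w.r.t. the standard (orthonormal) bases,
# column-vector convention.  Its hermitian adjoint is the conjugate transpose.
A = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))
A_dag = A.conj().T

v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
w = rng.standard_normal(2) + 1j * rng.standard_normal(2)

# <v, phi_dagger(w)> = <phi(v), w>, with np.vdot conjugating its first argument.
assert np.isclose(np.vdot(v, A_dag @ w), np.vdot(A @ v, w))
```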

Proposition A.15. If A is the matrix associated to ϕ : V → W (with respect to orthonormal bases of V and W), then the conjugate transpose Āᵀ (the transpose of the entrywise complex conjugate of A) is the matrix associated to ϕ†.

Proof. Exercise.


Index

abelian group, 13
abelianization, 20, 41, 44
action, 3
    linear, 3
    representation induced by, 5
    of Sn on Young tableaus, 25
adjunction, 44
    between Hom and tensor, 8, 44, 56
    between restriction and induction, 33
averaging
    inner products, 11
    linear maps, 10
    vectors, 9, 17
basis, 50
    for class functions, 16
    for direct sum, 52
    for dual space, 51
    for space of linear maps, 50
    for tensor product, 53
bilinear, 7, 53
category, 6, 40
center, 43
change of group, 8
change of vector space, 8
character, 14
character table, 21
class function, 16
coinduced representation, 45
commutator, 20
commutator subgroup, 20
composition, 40
conjugacy, 16, 18, 25
constructions of representations, 6
contravariant functor, 42
coset, 35
covariant functor, 42
cyclic group, 4
cyclic property of trace, 14
decomposition into irreducibles, 11
decomposition into primitive idempotents, 24
dihedral group, 4, 21
dimension formula, 17
direct sum, 11
    character of, 14
    of linear maps, 52
    of representations, 6
    of vector spaces, 51
disjoint idempotents, 24
disjoint representations, 38
dual map, 51
dual representation, 7
    character of, 14
dual space, 51
eigenvalue, 9, 12, 17
equivalence, 6, 12
equivalent representations, 6, 19, 25
equivalent tableaus, 26
forgetful functor, 41, 44, 45
Frobenius reciprocity, 34, 45
functor, 41
group action, see action
group representation, see representation
group ring, 23, 32, 47
hermitian adjoint, 58
Hom-tensor adjunction, 8, 44, 56
homomorphism of modules, 32
homomorphism of rings, 32
ideal, 23, 24, 32
idempotent, 23, 24
identity morphism, 40
induced representation, 34, 36, 37, 45
    character of, 37
induction, see induced representation
inner product, 11, 16, 57
intertwiner, 5
    obtained by averaging, 10
    dimension of space of, 17
    in terms of idempotents, 24
    from induced representation, 34
    between irreducible representations, 12
    section of, 13
    from tensor product, 7
invariant inner product, 11
invariant subspace, 8, 9, 11, 23
invariant vector, 9, 17
irreducible representation, 9
    of abelian group, 13
    character of, 16, 19
    decomposition into, 11
    in terms of idempotents, 24, 25
    for induced representations, 37–39
    intertwiner between, 12
    number of, 18
    number of occurrences, 19
    inside regular representation, 13, 19
    sum of squares of dimensions, 20
    of symmetric group, 25
isomorphism of representations, 6
Kronecker product, 55
left adjoint, 44
Mackey's irreducibility criterion, 38
matrix, 50
module, 32
monoidal natural transformation, 46
morphism, 40
natural transformation, 42, 45
non-degenerate, 57
object, 40
one-dimensional representation, 20
order on Young tableaus, 29
orthogonal complement, 11, 23
orthonormal basis, 57
    for class functions, 16
primitive idempotent, 24, 26
regular representation, 5, 18, 47
    character of, 19
    decomposition of, 19
    as group ring, 23
    irreducible representations in, 13
representation, 3
    induced by group action, 5
    on space of linear maps, 7, 10, 42
restriction, 32, 33, 38, 42, 45
right adjoint, 44
ring, 23, 32
Schur's Lemma, 12, 17
Schur's orthogonality relations, 16
section, 13
sesquilinear, 57
skew-symmetric, 57
standard representation
    of cyclic group, 4
    of dihedral group, 4
    of symmetric group, 5
subrepresentation, see invariant subspace
symmetric group, 3, 5, 25
Tannaka duality, 47
tensor product
    character of, 14
    of linear maps, 55
    of modules, 32
    of representations, 7, 42
    of vector spaces, 53
    of vectors, 54
tetrahedron group, 4
trace, 14
transitivity of induction, 35
transpose matrix, 51
trivial representation, 5
vector space of linear maps, 50
Young diagram, 25
Young tableau, 25
