Page 1:

Latent Space Analysis: SVD and Topic Models

Eric Xing

Lecture 22, December 3, 2015

Machine Learning

10-701, Fall 2015

Reading: Tutorial on Topic Model @ ACL12

© Eric Xing @ CMU, 2006-2015

Page 2:

We are inundated with data …

Humans cannot afford to deal with (e.g., search, browse, or measure similarity across) a huge number of text and media documents

We need computers to help out …

(from images.google.cn)

Page 3:

A task: say we want to have a mapping ..., so that we can:

Compare similarity
Classify contents
Cluster/group/categorize docs
Distill semantics and perspectives
...

Page 4:

Representation:

Each document is a vector in the word space. Ignore the order of words in a document; only counts matter.

This is a high-dimensional and sparse representation: not efficient for text processing tasks (e.g., search, document classification, or similarity measures) and not effective for browsing.

As for the Arabian and Palestinean voices that are against the current negotiations and the so-called peace process, they are not against peace per se, but rather for their well-founded predictions that Israel would NOT give an inch of the West bank (and most probably the same for Golan Heights) back to the Arabs. An 18 months of "negotiations" in Madrid, and Washington proved these predictions. Now many will jump on me saying why are you blaming israelis for no-result negotiations. I would say why would the Arabs stall the negotiations, what do they have to loose ?

Bag of Words Representation (key words extracted from the document above): Arabian, negotiations, against, peace, Israel, Arabs, blaming
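To make the representation concrete, here is a minimal Python/numpy sketch (the three toy documents are made up for illustration) that builds a term-document count matrix, discarding word order entirely:

```python
import numpy as np

# Toy corpus (hypothetical): word order will be thrown away, only counts kept.
docs = [
    "peace negotiations israel arabs peace",
    "bank money loan bank money",
    "river stream bank river bank",
]

# Build the vocabulary (the "word space").
vocab = sorted({w for d in docs for w in d.split()})
word_index = {w: i for i, w in enumerate(vocab)}

# Term-document count matrix X: rows = words, columns = documents.
X = np.zeros((len(vocab), len(docs)), dtype=int)
for j, d in enumerate(docs):
    for w in d.split():
        X[word_index[w], j] += 1

print(vocab)
print(X)   # each column is a sparse, high-dimensional count vector for one document
```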

Page 5:

Subspace analysis

Clustering: (0,1) matrix
LSI/NMF: "arbitrary" matrices
Topic Models: stochastic matrix
Sparse coding: "arbitrary" sparse matrices

X (m x n, the term-document matrix) = T (m x k: cluster/topic/basis distributions, i.e., the subspace) * (k x k: a priori weights) * D^T (k x n: memberships/coordinates)

Page 6:

An example:

Page 7:

Principal Component Analysis

The new variables/dimensions:
Are linear combinations of the original ones
Are uncorrelated with one another (orthogonal in the original dimension space)
Capture as much of the original variance in the data as possible
Are called Principal Components

They are orthogonal directions of greatest variance in the data: projections along PC1 discriminate the data most along any one axis.

The first principal component is the direction of greatest variability (covariance) in the data. The second is the next orthogonal (uncorrelated) direction of greatest variability: first remove all the variability along the first component, then find the next direction of greatest variability. And so on ...

[Figure: data points plotted against Original Variable A and Original Variable B, with the orthogonal directions PC 1 and PC 2 overlaid]

Page 8:

Computing the Components

The projection of a vector x onto an axis (dimension) u is u^T x. The direction of greatest variability is the one in which the average square of the projection is greatest:

Maximize u^T X X^T u  subject to  u^T u = 1

Construct the Lagrangian u^T X X^T u - λ u^T u and set the vector of partial derivatives to zero:

X X^T u - λ u = (X X^T - λ I) u = 0

Since u ≠ 0, u must be an eigenvector of X X^T with eigenvalue λ; the largest such λ is the principal eigenvalue of the correlation matrix C = X X^T.

The eigenvalue denotes the amount of variability captured along that dimension.
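A minimal numpy sketch of this computation on toy data (I assume the columns of X are mean-centered samples): the principal directions come out as eigenvectors of X X^T, and the eigenvalues give the captured variance.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 200))          # 5 variables, 200 samples (toy data)
X = X - X.mean(axis=1, keepdims=True)  # center each variable

C = X @ X.T / X.shape[1]               # sample covariance (proportional to XX^T)
eigvals, eigvecs = np.linalg.eigh(C)   # symmetric matrix -> use eigh

# Sort by decreasing eigenvalue: u1 is the direction of greatest variance.
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

u1 = eigvecs[:, 0]
print(np.allclose(u1 @ u1, 1.0))       # orthonormal: u^T u = 1
print(eigvals)                         # variance captured along each PC

proj = u1 @ X                          # projection of every sample onto u1 (u^T x)
print(np.allclose(proj.var(), eigvals[0]))  # variance along PC1 equals lambda_1
```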

Page 9:

Computing the Components (cont'd)

Similarly for the next axis, and so on. The new axes are the eigenvectors of the matrix of correlations of the original variables, which captures the similarities of the original variables based on how the data samples project onto them.

Geometrically: centering followed by rotation, i.e., a linear transformation.

Page 10:

Eigenvalues & Eigenvectors

For symmetric matrices, eigenvectors for distinct eigenvalues are orthogonal: if S v_{1,2} = λ_{1,2} v_{1,2} and λ_1 ≠ λ_2, then v_1 · v_2 = 0.

All eigenvalues of a real symmetric matrix are real: if |S - λI| = 0 and S = S^T, then λ ∈ R.

All eigenvalues of a positive semidefinite matrix are non-negative: if w^T S w ≥ 0 for all w ∈ R^n and S v = λ v, then λ ≥ 0.

Page 11:

Eigen/diagonal Decomposition

Let S be a square matrix with m linearly independent eigenvectors (a "non-defective" matrix).

Theorem: there exists an eigen decomposition S = U Λ U^{-1}, where Λ is diagonal (cf. the matrix diagonalization theorem).

The columns of U are the eigenvectors of S, and the diagonal elements of Λ are the eigenvalues of S. The decomposition is unique for distinct eigenvalues.
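For instance, a quick numpy check of the theorem on a small symmetric (hence non-defective) toy matrix:

```python
import numpy as np

S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])        # symmetric toy matrix

lam, U = np.linalg.eigh(S)             # columns of U are eigenvectors of S
Lam = np.diag(lam)                     # diagonal matrix of eigenvalues

# S = U Lam U^{-1}; for symmetric S, U is orthogonal, so U^{-1} = U^T.
print(np.allclose(S, U @ Lam @ U.T))   # True
```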

Page 12:

PCs, Variance and Least-Squares

The first PC retains the greatest amount of variation in the sample.

The kth PC retains the kth greatest fraction of the variation in the sample

The kth largest eigenvalue of the correlation matrix C is the variance in the sample along the kth PC

The least-squares view: PCs are a series of linear least squares fits to a sample, each orthogonal to all previous ones

Page 13:

The Corpora Matrix

X =
          Doc 1   Doc 2   Doc 3   ...
Word 1      3       0       0     ...
Word 2      0       8       1     ...
Word 3      0       1       3     ...
Word 4      2       0       0     ...
Word 5     12       0       0     ...
...         0       0       0     ...

Page 14:

Singular Value Decomposition

For an m x n matrix A of rank r there exists a factorization (Singular Value Decomposition = SVD):

A = U Σ V^T

where U is m x m, Σ is m x n, and V is n x n. The columns of U are orthogonal eigenvectors of A A^T, and the columns of V are orthogonal eigenvectors of A^T A. Σ = diag(σ_1, ..., σ_r), where the σ_i = √λ_i are the singular values, and the eigenvalues λ_1, ..., λ_r of A A^T are also the eigenvalues of A^T A.
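A small numpy sketch of these relationships on a random toy matrix: the squared singular values coincide with the nonzero eigenvalues of A A^T and A^T A.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 4))                 # m x n toy matrix

U, s, Vt = np.linalg.svd(A)                 # A = U diag(s) V^T
print(np.allclose(A, U[:, :4] * s @ Vt))    # reconstruct A from the thin SVD

# Squared singular values match the eigenvalues of A A^T and A^T A.
eig_AAt = np.sort(np.linalg.eigvalsh(A @ A.T))[::-1]
eig_AtA = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]
print(np.allclose(s**2, eig_AAt[:4]))
print(np.allclose(s**2, eig_AtA[:4]))
```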

Page 15:

SVD and PCA

The first root is called the principal eigenvalue, which has an associated orthonormal (u^T u = 1) eigenvector u. Subsequent roots are ordered such that λ_1 > λ_2 > ... > λ_M, with rank(D) non-zero values. The eigenvectors form an orthonormal basis, i.e., u_i^T u_j = δ_ij.

The eigenvalue decomposition of X X^T is U Σ U^T, where U = [u_1, u_2, ..., u_M] and Σ = diag[λ_1, λ_2, ..., λ_M]. Similarly, the eigenvalue decomposition of X^T X is V Σ V^T.

The SVD is closely related to the above: X = U Σ^{1/2} V^T, where U holds the left singular vectors, V the right singular vectors, and the singular values are the square roots of the eigenvalues.

Page 16:

[Figure: bar chart of variance (%) captured by PC1 through PC10, y-axis from 0 to 25]

How Many PCs?

For n original dimensions, the sample covariance matrix is n x n and has up to n eigenvectors, so there are n PCs. Where does dimensionality reduction come from? We can ignore the components of lesser significance. You do lose some information, but if the eigenvalues are small, you don't lose much:

n dimensions in the original data
calculate n eigenvectors and eigenvalues
choose only the first p eigenvectors, based on their eigenvalues
the final data set has only p dimensions
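One common heuristic for choosing p, sketched below; the 90% coverage threshold and the eigenvalue list are arbitrary illustrations, not values fixed by the slide.

```python
import numpy as np

def choose_p(eigvals, coverage=0.90):
    """Smallest p whose leading eigenvalues explain `coverage` of the total variance."""
    lam = np.sort(np.asarray(eigvals))[::-1]
    ratio = np.cumsum(lam) / lam.sum()
    return int(np.searchsorted(ratio, coverage) + 1)

eigvals = [24.0, 18.0, 9.0, 5.0, 2.0, 1.0, 0.5, 0.3, 0.1, 0.1]  # variance per PC
print(choose_p(eigvals))   # -> 4 here: PC1-PC4 already cover >90% of the variance
```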

Page 17:

[Figure: results falling within a .40 threshold, plotted against K, the number of singular values used]

Page 18:

Summary: Latent Semantic Indexing

X (m x n, the term-document matrix) ≈ T (m x k) * (k x k a priori weights) * D^T (k x n); equivalently, X ≈ Σ_{k=1}^{K} σ_k w_k d_k^T, a sum over the top K singular values/directions.

LSI does not define a properly normalized probability distribution over observed and latent entities, and therefore does not support probabilistic reasoning under uncertainty or data fusion.

(Deerwester et al., 1990)
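Since LSI boils down to a truncated SVD of X, here is a minimal sketch under that reading (toy counts, K = 2 chosen arbitrarily) that also compares documents in the latent space instead of the sparse word space:

```python
import numpy as np

# Toy term-document count matrix X (words x docs), as on the earlier slide.
X = np.array([[3, 0, 0, 2],
              [0, 8, 1, 0],
              [0, 1, 3, 0],
              [2, 0, 0, 1],
              [12, 0, 0, 9]], dtype=float)

K = 2                                   # number of latent dimensions to keep
U, s, Vt = np.linalg.svd(X, full_matrices=False)
U_k, s_k, Vt_k = U[:, :K], s[:K], Vt[:K, :]

X_k = U_k * s_k @ Vt_k                  # rank-K approximation of X
print(np.linalg.norm(X - X_k))          # reconstruction error from dropping small sigmas

doc_coords = (s_k[:, None] * Vt_k).T    # each row: a document in the K-dim latent space

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cos(doc_coords[0], doc_coords[3]))  # docs 1 and 4 share words -> high similarity
print(cos(doc_coords[0], doc_coords[1]))  # docs 1 and 2 share none -> low similarity
```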

Page 19:

Connecting Probability Models to Data

Probabilistic Model → Real World Data via P(Data | Parameters) (the generative model); Real World Data → Probabilistic Model via P(Parameters | Data) (inference).

Page 20:

Latent Semantic Structure in GM

Latent structure ℓ generates the observed words w.

Distribution over words: P(w) = Σ_ℓ P(w, ℓ)

Inferring the latent structure: P(ℓ | w) = P(w | ℓ) P(ℓ) / P(w)

Page 21:

How to Model Semantics? Q: What is it about? A: Mainly MT, with syntax, some learning

A Hierarchical Phrase-Based Model for Statistical Machine Translation

We present a statistical phrase-based translation model that uses hierarchical phrases—phrases that contain sub-phrases. The model is formally a synchronous context-free grammar but is learned from a bitext without any syntactic information. Thus it can be seen as a shift to the formal machinery of syntax-based translation systems without any linguistic commitment. In our experiments using BLEU as a metric, the hierarchical phrase-based model achieves a relative improvement of 7.5% over Pharaoh, a state-of-the-art phrase-based system.

Topics (unigram distributions over the vocabulary):
MT: Source, Target, SMT, Alignment, Score, BLEU
Syntax: Parse, Tree, Noun, Phrase, Grammar, CFG
Learning: likelihood, EM, Hidden, Parameters, Estimation, argMax

Admixing proportion over the topics (MT, Syntax, Learning): 0.6, 0.3, 0.1

Topic Models

Page 22:

Why is this Useful? Q: What is it about? A: Mainly MT, with syntax, some learning

[The same example abstract and MT / Syntax / Learning topic decomposition (admixing proportions 0.6, 0.3, 0.1) as on the previous slide]

Q: Give me similar documents? A structured way of browsing the collection.

Other tasks: dimensionality reduction (TF-IDF vs. topic mixing proportion), classification, clustering, and more ...

Page 23:

Words in Contexts

“It was a nice shot. ”

Page 24:

Words in Contexts (cont'd)

"... the opposition Labor Party fared even worse, with a predicted 35 seats, seven less than last election."

Page 25:

TOPIC 1

TOPIC 2

DOCUMENT 2: river2 stream2 bank2 stream2 bank2

money1 loan1 river2 stream2 loan1 bank2 river2 bank2

bank1 stream2 river2 loan1 bank2 stream2 bank2 money1

loan1 river2 stream2 bank2 stream2 bank2 money1 river2

stream2 loan1 bank2 river2 bank2 money1 bank1 stream2

river2 bank2 stream2 bank2 money1

DOCUMENT 1: money1 bank1 bank1 loan1 river2 stream2

bank1 money1 river2 bank1 money1 bank1 loan1 money1

stream2 bank1 money1 bank1 bank1 loan1 river2 stream2

bank1 money1 river2 bank1 money1 bank1 loan1 bank1

money1 stream2

Each word above carries a superscript (1 or 2) indicating which mixture component (topic) generated it. The mixture components are distributions over words; the admixing weight vector (roughly .8 / .2 for DOCUMENT 1, which is dominated by topic 1, and .3 / .7 for DOCUMENT 2, which is dominated by topic 2) represents all components' contributions to a document.

Bayesian approach: use priors. Admixture weights ~ Dirichlet(·); mixture components ~ Dirichlet(·).

A possible generative process of a document
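A minimal sketch of this generative process, assuming two made-up topics over a tiny vocabulary (the word probabilities and Dirichlet parameter below are invented for illustration, not the values behind the slide):

```python
import numpy as np

rng = np.random.default_rng(42)
vocab = ["money", "loan", "bank", "river", "stream"]

# Mixture components: each topic is a distribution over the vocabulary.
topic1 = np.array([0.4, 0.3, 0.3, 0.0, 0.0])     # "finance" topic
topic2 = np.array([0.0, 0.0, 0.3, 0.4, 0.3])     # "geography" topic
topics = np.vstack([topic1, topic2])

# Bayesian approach: draw the admixing weights from a Dirichlet prior.
theta = rng.dirichlet([1.0, 1.0])                # admixing weight vector over the two topics

def generate_document(theta, topics, n_words=20):
    words = []
    for _ in range(n_words):
        z = rng.choice(len(topics), p=theta)     # pick a topic for this word
        w = rng.choice(len(vocab), p=topics[z])  # pick a word from that topic
        words.append(f"{vocab[w]}{z + 1}")       # superscript-style topic label
    return " ".join(words)

print(theta)
print(generate_document(theta, topics))
```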

Page 26:

Probabilistic LSI (Hofmann, 1999)

[Plate diagram: for each of M documents d, each of its N words w_n has a topic indicator z_n, with k topic-word distributions]

p(d, w_{1:N}) = p(d) Π_{n=1}^{N} Σ_{z_n} p(w_n | z_n) p(z_n | d)

Page 27:

Probabilistic LSI: a "generative" model

It models each word in a document as a sample from a mixture model. Each word is generated from a single topic; different words in the document may be generated from different topics. A topic is characterized by a distribution over words. Each document is represented as a list of admixing proportions for the components (i.e., a topic vector θ).

[Plate diagram repeated from the previous slide: d → z_n → w_n over N words and M documents, with k topic-word distributions]

Page 28:

Latent Dirichlet Allocation (Blei, Ng and Jordan, 2003)

[Plate diagram: per-document topic vector θ → per-word topic indicator z_n → word w_n, over N words and M documents, with K topic-word distributions β_k and Dirichlet priors on both θ and β]

Essentially a Bayesian pLSI:

p(w_{1:N}) = ∫∫ p(θ) p(β) [ Π_{n=1}^{N} Σ_{z_n} p(z_n | θ) p(w_n | z_n, β) ] dθ dβ

Page 29:

LDA generative model

It models each word in a document as a sample from a mixture model. Each word is generated from a single topic; different words in the document may be generated from different topics. A topic is characterized by a distribution over words. Each document is represented as a list of admixing proportions for the components (i.e., a topic vector). The topic vectors and the word rates each follow a Dirichlet prior --- essentially a Bayesian pLSI.

[Plate diagram as on the previous slide]

Page 30:

Topic Models = Mixed Membership Models = Admixture

[Plate diagram: prior → θ → z → w, with topic-word distributions β_{1:K}; N_d words per document, N documents]

Generating a document:
- Draw θ from the prior
- For each word n:
  - Draw z_n from multinomial(θ)
  - Draw w_n | z_n, β_{1:K} from multinomial(β_{z_n})
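A hedged sketch of this generative process for a whole corpus, with both the topic vectors θ and the topics β drawn from Dirichlet priors (vocabulary size, topic count, and hyperparameter values are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
V, K, M, N_d = 50, 3, 4, 30          # vocab size, topics, documents, words per doc
alpha, eta = 0.5, 0.1                # Dirichlet hyperparameters (illustrative values)

beta = rng.dirichlet(eta * np.ones(V), size=K)   # K topic-word distributions beta_{1:K}

corpus = []
for d in range(M):
    theta = rng.dirichlet(alpha * np.ones(K))    # draw theta from the prior
    doc = []
    for n in range(N_d):
        z_n = rng.choice(K, p=theta)             # z_n ~ multinomial(theta)
        w_n = rng.choice(V, p=beta[z_n])         # w_n | z_n ~ multinomial(beta_{z_n})
        doc.append((w_n, z_n))
    corpus.append(doc)

print(corpus[0][:5])   # first few (word id, topic id) pairs of document 0
```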

Which prior to use?

Page 31:

Choices of Priors

Dirichlet (LDA) (Blei et al. 2003): a conjugate prior means efficient inference, but it can only capture variations in each topic's intensity independently.

Logistic Normal (CTM = LoNTAM) (Blei & Lafferty 2005; Ahmed & Xing 2006): captures the intuition that some topics are highly correlated and can rise up in intensity together; not a conjugate prior, which implies hard inference.

Nested CRP (Blei et al. 2005): defines a hierarchy on topics ...

Page 32:

Generative Semantics of LoNTAM

[Plate diagram: (μ, Σ) → γ/θ → z → w, with topic-word distributions β_{1:K}; N_d words per document, N documents]

Generating a document:
- Draw γ ~ N_{K-1}(μ, Σ), set γ_K = 0, and map to the simplex: θ_i = exp(γ_i - C(γ)), where C(γ) = log(1 + Σ_{i=1}^{K-1} e^{γ_i}) is the log partition function (normalization constant)
- For each word n:
  - Draw z_n from multinomial(θ)
  - Draw w_n | z_n, β_{1:K} from multinomial(β_{z_n})
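A small sketch of the LoNTAM draw of θ described above (the mean and covariance values are arbitrary illustrations): sample γ from a (K-1)-dimensional Gaussian, set γ_K = 0, and push it through the log-partition normalization to land on the simplex.

```python
import numpy as np

rng = np.random.default_rng(7)
K = 4
mu = np.zeros(K - 1)
Sigma = np.array([[1.0, 0.8, 0.0],
                  [0.8, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])    # correlated topics can rise in intensity together

gamma = rng.multivariate_normal(mu, Sigma)   # gamma ~ N_{K-1}(mu, Sigma)
gamma = np.append(gamma, 0.0)                # gamma_K = 0
C = np.log(np.sum(np.exp(gamma)))            # log partition: log(1 + sum_{i<K} e^{gamma_i})
theta = np.exp(gamma - C)                    # theta_i = exp(gamma_i - C(gamma))

print(theta, theta.sum())                    # a point on the K-simplex (sums to 1)
```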

Page 33:

Outcomes from a topic model: the "topics" in a corpus.

There is no name for each "topic"; you need to name it yourself. There is no objective measure of good/bad. The topics typically shown are the "good" ones; there are many trivial, meaningless, or redundant ones, so you need to manually prune the results. How many topics? ...

Page 34:

Outcomes from a topic model: the "topic vector" of each doc.

This creates an embedding of docs in a "topic space". There is no ground truth with which to measure the quality of inference, but it is possible to define an "objective" measure of goodness, such as classification error, retrieval of similar docs, or clustering of documents. There is no consensus, however, on whether these tasks capture the true value of topic models ...

Page 35:

Outcomes from a topic model: the per-word topic indicator z.

Not very useful under the bag-of-words representation, because word ordering is lost. But it is possible to define simple probabilistic linguistic constraints (e.g., bi-grams) over z and get potentially interesting results [Griffiths, Steyvers, Blei, & Tenenbaum, 2004].

Page 36:

Outcomes from a topic model: the topic graph S (when using CTM).

Kind of interesting for understanding and visualizing large corpora [David Blei, MLSS09].

Page 37:

Outcomes from a topic model: topic change trends [David Blei, MLSS09].

Page 38:

The Big Picture

Topic Discovery: Unstructured Collection → Structured Topic Network

Dimensionality Reduction: Word Simplex → Topic Space (e.g., also a simplex)

[Figure: documents as points (x) in the word simplex spanned by w1, w2, ..., wn are mapped to points in the topic space spanned by topics T1, T2, ..., Tk]

Page 39:

Computation on LDA

Inference: given a document D
Posterior: P(Θ | μ, Σ, β, D)
Evaluation: P(D | μ, Σ, β)

Learning: given a collection of documents {D_i}
Parameter estimation: (μ*, Σ*, β*) = argmax_{μ,Σ,β} Σ_i log P(D_i | μ, Σ, β)

Page 40:

A possible query: p(θ_n | D)? p(z_{n,m} | D)?

Closed-form solution?

p(θ_n | D) = p(θ_n, D) / p(D), which requires the marginal likelihood

p(D) = ∫∫ [ Π_n ( Π_m Σ_{z_{n,m}} p(x_{n,m} | z_{n,m}, β) p(z_{n,m} | θ_n) ) p(θ_n) ] p(β | G) dθ_{1:n} dβ

The sum in the denominator is over T^n terms, and we must integrate over n k-dimensional topic vectors.

Exact Bayesian inference on LDA is intractable.

Page 41:

Approximate Inference

Variational inference:
Mean field approximation (Blei et al.)
Expectation propagation (Minka et al.)
Variational 2nd-order Taylor approximation (Ahmed and Xing)

Markov Chain Monte Carlo:
Gibbs sampling (Griffiths et al.)

Page 42:

Collapsed Gibbs sampling (Tom Griffiths & Mark Steyvers)

Integrate out θ and β. For the variables z = z_1, z_2, ..., z_n, draw z_i^(t+1) from P(z_i | z_{-i}, w), where

z_{-i} = z_1^(t+1), z_2^(t+1), ..., z_{i-1}^(t+1), z_{i+1}^(t), ..., z_n^(t)

Page 43:

Gibbs sampling

We need the full conditional distributions for the variables we sample. Since we only sample z, for each candidate topic j we need:
the number of times word w is assigned to topic j, and
the number of times topic j is used in document d
(both counted with the current position excluded).
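A hedged numpy sketch of one such update, using the standard Griffiths-Steyvers collapsed form of P(z_i = j | z_{-i}, w); the helper name, array layout, and symmetric hyperparameters alpha and beta_h are my own illustrative choices, not from the slides.

```python
import numpy as np

def resample_z(i, w, d, z, n_wt, n_dt, n_t, alpha, beta_h, rng):
    """Resample topic assignment z[i] for word w[i] in document d[i].

    n_wt[v, j]: times word v is assigned to topic j   (W x T)
    n_dt[m, j]: times topic j is used in document m   (D x T)
    n_t[j]:     total words assigned to topic j       (T,)
    """
    wi, di, old = w[i], d[i], z[i]
    # Remove the current assignment from the counts (the "-i" in z_{-i}).
    n_wt[wi, old] -= 1
    n_dt[di, old] -= 1
    n_t[old] -= 1

    W = n_wt.shape[0]
    # Full conditional P(z_i = j | z_{-i}, w), up to a constant in j.
    p = (n_wt[wi] + beta_h) / (n_t + W * beta_h) * (n_dt[di] + alpha)
    p /= p.sum()

    new = rng.choice(len(p), p=p)
    # Add the new assignment back into the counts.
    n_wt[wi, new] += 1
    n_dt[di, new] += 1
    n_t[new] += 1
    z[i] = new
```

Sweeping this update over i = 1, ..., n and iterating many times (as the next slides illustrate) produces samples of the topic assignments z.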

Page 44:

Gibbs sampling: iteration 1

 i   w_i            d_i   z_i
 1   MATHEMATICS     1     2
 2   KNOWLEDGE       1     2
 3   RESEARCH        1     1
 4   WORK            1     2
 5   MATHEMATICS     1     1
 6   RESEARCH        1     2
 7   WORK            1     2
 8   SCIENTIFIC      1     1
 9   MATHEMATICS     1     2
10   WORK            1     1
11   SCIENTIFIC      2     1
12   KNOWLEDGE       2     1
...  ...            ...   ...
50   JOY             5     2

Page 45:

Gibbs sampling: iteration 2 begins. The table above is kept, but the z_i column is resampled one entry at a time; z_1 is the first to be redrawn (marked "?").

Page 46:

Gibbs sampling (cont'd): the full conditional P(z_1 | z_{-1}, w) is formed from the word-topic and document-topic count matrices, with position 1 excluded.

Page 47:

Gibbs sampling (cont'd): a new value of z_1 is drawn from this conditional.

Page 48:

Gibbs sampling (cont'd): z_1 is set to 2, and z_2 is resampled next.

Page 49:

Gibbs sampling (cont'd): z_2 is set to 1, and z_3 is resampled next.

Page 50:

Gibbs sampling (cont'd): z_3 is set to 1, and z_4 is resampled next.

Page 51:

Gibbs sampling (cont'd): z_4 is set to 2, and z_5 is resampled next.

Page 52:

Gibbs sampling: after sweeping all positions, iteration 2 gives z = (2, 1, 1, 2, 2, 2, 2, 1, 2, 2, 1, 2, ..., 1); after 1000 iterations, z = (2, 2, 2, 1, 2, 2, 2, 1, 2, 2, 2, 2, ..., 1).

Page 53:

Learning a TM

Maximum likelihood estimation:

(α, β_1, β_2, ..., β_K) = argmax Σ_i log P(D_i | α, β_1, β_2, ..., β_K)

We need statistics on the topic-specific word assignments (due to z), the topic vector distribution (due to θ), etc.; e.g., the estimate for topic k is built from such statistics. These are hidden variables, and therefore we need an EM algorithm (also known as data augmentation, or DA, in the Monte Carlo paradigm). This is a "reduce" step in a parallel implementation.
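As a sketch of the kind of per-topic statistic meant here (assuming hard or sampled assignments; the function name and the optional smoothing term are illustrative, not from the slides), the topic-k word distribution is estimated by normalizing the counts of words assigned to topic k, which is exactly the "reduce"-style aggregation mentioned above:

```python
import numpy as np

def estimate_beta(word_ids, topic_ids, V, K, smoothing=0.0):
    """Estimate the K topic-word distributions from word/topic assignment pairs."""
    counts = np.zeros((K, V)) + smoothing
    for w, k in zip(word_ids, topic_ids):
        counts[k, w] += 1                      # aggregate: a "reduce" over the corpus
    return counts / counts.sum(axis=1, keepdims=True)

# Toy usage: 8 word tokens with their sampled (or expected) topic assignments.
word_ids  = [0, 2, 2, 1, 4, 0, 3, 2]
topic_ids = [0, 0, 1, 0, 1, 0, 1, 1]
beta_hat = estimate_beta(word_ids, topic_ids, V=5, K=2)
print(beta_hat)                                # each row sums to 1
```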

Page 54:

Conclusion

GM-based topic models are cool: flexible, modular, interactive.

There are many ways of implementing topic models: unsupervised or supervised.

Efficient inference/learning algorithms: GMF with a Laplace approximation for non-conjugate distributions; MCMC.

Many applications: word-sense disambiguation, image understanding, network inference, ...
