Vector Semantics: Natural Language Processing, Lecture 17. Adapted from Jurafsky and Martin, v3
Transcript
  • Vector Semantics

    Natural Language Processing, Lecture 17

    Adapted from Jurafsky and Martin, v3

  • Why vector models of meaning? Computing the similarity between words

    “fast” is similar to “rapid”
    “tall” is similar to “height”

    Question answering:
    Q: “How tall is Mt. Everest?”
    Candidate A: “The official height of Mount Everest is 29,029 feet”

    2

  • Word similarity for plagiarism detection

  • Word similarity for historical linguistics: semantic change over time

    4

    Kulkarni, Al-Rfou, Perozzi, Skiena 2015; Sagi, Kaufmann, Clark 2013

    [Figure: plot of “dog”, “deer”, “hound” over time; y-axis 0-45]

  • Problems with thesaurus-based meaning

    • We don’t have a thesaurus for every language
    • We can’t have a thesaurus for every year
      • For historical linguistics, we need to compare word meanings in year t to year t+1
    • Thesauruses have problems with recall
      • Many words and phrases are missing
      • Thesauri work less well for verbs, adjectives

  • Distributional models of meaning = vector-space models of meaning = vector semantics

    Intuitions: Zellig Harris (1954):
    • “oculist and eye-doctor … occur in almost the same environments”
    • “If A and B have almost identical environments we say that they are synonyms.”

    Firth (1957):
    • “You shall know a word by the company it keeps!”

    6

  • Intuition of distributional word similarity

    • Nida example: Suppose I asked you, what is tesgüino?
      A bottle of tesgüino is on the table
      Everybody likes tesgüino
      Tesgüino makes you drunk
      We make tesgüino out of corn.
    • From context words humans can guess tesgüino means an alcoholic beverage like beer

    • Intuition for the algorithm: two words are similar if they have similar word contexts.

  • Four kinds of vector models

    Sparse vector representations:
    1. Mutual-information weighted word co-occurrence matrices

    Dense vector representations:
    2. Singular value decomposition (and Latent Semantic Analysis)
    3. Neural-network-inspired models (skip-grams, CBOW)
    4. Brown clusters

    8

  • Shared intuition

    • Model the meaning of a word by “embedding” it in a vector space.
    • The meaning of a word is a vector of numbers.
    • Vector models are also called “embeddings”.

    • Contrast: in many computational linguistic applications, word meaning is represented by a vocabulary index (“word number 545”).

    • Old philosophy joke:
      Q: What’s the meaning of life?
      A: LIFE′

    9

  • Vector Semantics

    Words and co-occurrence vectors

  • Co-occurrence Matrices

    • We represent how often a word occurs in a document
      • Term-document matrix
    • Or how often a word occurs with another word
      • Term-term matrix (or word-word co-occurrence matrix, or word-context matrix)

  • Term-document matrix

    • Each cell: the count of word w in document d
    • Each document is a count vector in ℕ^|V|: a column below

    12

                   As You Like It   Twelfth Night   Julius Caesar   Henry V
    battle               1                1                8           15
    soldier              2                2               12           36
    fool                37               58                1            5
    clown                6              117                0            0

  • Similarity in term-document matrices

    Two documents are similar if their vectors are similar

    13

                   As You Like It   Twelfth Night   Julius Caesar   Henry V
    battle               1                1                8           15
    soldier              2                2               12           36
    fool                37               58                1            5
    clown                6              117                0            0

  • The words in a term-document matrix

    • Each word is a count vector in ℕ^D: a row below

    14

                   As You Like It   Twelfth Night   Julius Caesar   Henry V
    battle               1                1                8           15
    soldier              2                2               12           36
    fool                37               58                1            5
    clown                6              117                0            0

  • The words in a term-document matrix

    • Two words are similar if their vectors are similar

    15

                   As You Like It   Twelfth Night   Julius Caesar   Henry V
    battle               1                1                8           15
    soldier              2                2               12           36
    fool                37               58                1            5
    clown                6              117                0            0
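    As a quick illustration (not part of the lecture), the counts above can be held in a small numpy array, where each play is a column vector and each word is a row vector:

```python
import numpy as np

# Term-document counts from the Shakespeare example above
# (rows = words, columns = plays).
words = ["battle", "soldier", "fool", "clown"]
docs = ["As You Like It", "Twelfth Night", "Julius Caesar", "Henry V"]
X = np.array([
    [ 1,   1,  8, 15],   # battle
    [ 2,   2, 12, 36],   # soldier
    [37,  58,  1,  5],   # fool
    [ 6, 117,  0,  0],   # clown
])

print(X[:, docs.index("Twelfth Night")])   # a document vector: [  1   2  58 117]
print(X[words.index("fool"), :])           # a word vector:     [37 58  1  5]
```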

  • The word-word or word-context matrix

    • Instead of entire documents, use smaller contexts
      • Paragraph
      • Window of 4 words
    • A word is now defined by a vector over counts of context words
    • Instead of each vector being of length D, each vector is now of length |V|
    • The word-word matrix is |V| × |V|

  • Word-word matrix: sample contexts of ±7 words

    17

                   aardvark   computer   data   pinch   result   sugar   …
    apricot            0          0        0      1        0       1
    pineapple          0          0        0      1        0       1
    digital            0          2        1      0        1       0
    information        0          1        6      0        4       0

  • Word-word matrix

    • We showed only a 4 × 6 excerpt, but the real matrix is 50,000 × 50,000
      • So it’s very sparse: most values are 0.
      • That’s OK, since there are lots of efficient algorithms for sparse matrices.
    • The size of the window depends on your goals
      • The shorter the windows, the more syntactic the representation (±1-3: very syntactic)
      • The longer the windows, the more semantic the representation (±4-10: more semantic)
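    A minimal sketch (not the lecture’s code) of building such window-based counts from a token stream, using a nested dict as a naturally sparse representation:

```python
from collections import defaultdict

def cooccurrence_counts(tokens, window=4):
    """Count word-context co-occurrences within a +/- `window` token window."""
    counts = defaultdict(lambda: defaultdict(int))
    for i, w in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                counts[w][tokens[j]] += 1
    return counts

tokens = "we make tesgüino out of corn".split()
print(dict(cooccurrence_counts(tokens, window=2)["tesgüino"]))
# {'we': 1, 'make': 1, 'out': 1, 'of': 1}
```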

  • 2 kinds of co-occurrence between 2 words

    • First-order co-occurrence (syntagmatic association):
      • They are typically nearby each other.
      • wrote is a first-order associate of book or poem.
    • Second-order co-occurrence (paradigmatic association):
      • They have similar neighbors.
      • wrote is a second-order associate of words like said or remarked.

    19

    (Schütze and Pedersen, 1993)

  • Vector Semantics

    Positive Pointwise Mutual Information (PPMI)

  • Problem with raw counts

    • Raw word frequency is not a great measure of association between words
      • It’s very skewed: “the” and “of” are very frequent, but maybe not the most discriminative
    • We’d rather have a measure that asks whether a context word is particularly informative about the target word.
      • Positive Pointwise Mutual Information (PPMI)

    21

  • Pointwise Mutual Information

    Pointwise mutual information: Do events x and y co-occur more than if they were independent?

    PMI between two words: (Church & Hanks 1989) Do words x and y co-occur more than if they were independent?

    \mathrm{PMI}(x, y) = \log_2 \frac{P(x,y)}{P(x)\,P(y)}

  • Positive Pointwise Mutual Information

    • PPMI replaces negative PMI values with zero:

      \mathrm{PPMI}(x, y) = \max\!\left(\log_2 \frac{P(x,y)}{P(x)\,P(y)},\; 0\right)

  • Computing PPMI on a term-context matrix

    • Matrix F with W rows (words) and C columns (contexts); f_ij is the number of times w_i occurs in context c_j

    24

    p_{ij} = \frac{f_{ij}}{\sum_{i=1}^{W}\sum_{j=1}^{C} f_{ij}}
    \qquad
    p_{i*} = \frac{\sum_{j=1}^{C} f_{ij}}{\sum_{i=1}^{W}\sum_{j=1}^{C} f_{ij}}
    \qquad
    p_{*j} = \frac{\sum_{i=1}^{W} f_{ij}}{\sum_{i=1}^{W}\sum_{j=1}^{C} f_{ij}}

    \mathrm{pmi}_{ij} = \log_2 \frac{p_{ij}}{p_{i*}\,p_{*j}}
    \qquad
    \mathrm{ppmi}_{ij} = \begin{cases} \mathrm{pmi}_{ij} & \text{if } \mathrm{pmi}_{ij} > 0 \\ 0 & \text{otherwise} \end{cases}
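    These formulas translate almost directly into a few lines of numpy. The sketch below (an illustration assuming a small dense count matrix; real vocabularies need sparse matrices) reproduces the PPMI values computed on the next slides:

```python
import numpy as np

def ppmi(F):
    """Positive PMI from a word-by-context count matrix F (rows = words)."""
    F = np.asarray(F, dtype=float)
    total = F.sum()
    p_ij = F / total                              # joint probabilities
    p_i = F.sum(axis=1, keepdims=True) / total    # row marginals p(w)
    p_j = F.sum(axis=0, keepdims=True) / total    # column marginals p(c)
    with np.errstate(divide="ignore"):
        pmi = np.log2(p_ij / (p_i * p_j))         # -inf where the count is 0
    return np.maximum(pmi, 0)                     # clip negatives (and -inf) to 0

# Counts for apricot, pineapple, digital, information (rows) against
# computer, data, pinch, result, sugar (columns), from the slides.
F = [[0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1],
     [2, 1, 0, 1, 0],
     [1, 6, 0, 4, 0]]
print(np.round(ppmi(F), 2))   # e.g. ppmi(information, data) ~ 0.57
```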

  • p(w=information, c=data) = 6/19 = .32
    p(w=information) = 11/19 = .58
    p(c=data) = 7/19 = .37

    25

    p(w,context) and p(w):
                   computer   data   pinch   result   sugar    p(w)
    apricot          0.00     0.00    0.05    0.00     0.05    0.11
    pineapple        0.00     0.00    0.05    0.00     0.05    0.11
    digital          0.11     0.05    0.00    0.05     0.00    0.21
    information      0.05     0.32    0.00    0.21     0.00    0.58

    p(context)       0.16     0.37    0.11    0.26     0.11

    p_{ij} = \frac{f_{ij}}{\sum_{i=1}^{W}\sum_{j=1}^{C} f_{ij}}
    \qquad
    p(w_i) = \frac{\sum_{j=1}^{C} f_{ij}}{N}
    \qquad
    p(c_j) = \frac{\sum_{i=1}^{W} f_{ij}}{N}

  • 26

    • pmi(information, data) = log2( .32 / (.37 × .58) ) = .58  (.57 using full precision)

      \mathrm{pmi}_{ij} = \log_2 \frac{p_{ij}}{p_{i*}\,p_{*j}}

    p(w,context) and p(w):
                   computer   data   pinch   result   sugar    p(w)
    apricot          0.00     0.00    0.05    0.00     0.05    0.11
    pineapple        0.00     0.00    0.05    0.00     0.05    0.11
    digital          0.11     0.05    0.00    0.05     0.00    0.21
    information      0.05     0.32    0.00    0.21     0.00    0.58
    p(context)       0.16     0.37    0.11    0.26     0.11

    PPMI(w,context):
                   computer   data   pinch   result   sugar
    apricot            -        -     2.25      -      2.25
    pineapple          -        -     2.25      -      2.25
    digital          1.66     0.00      -      0.00      -
    information      0.00     0.57      -      0.47      -

  • Weighting PMI

    • PMI is biased toward infrequent events
      • Very rare words have very high PMI values
    • Two solutions:
      • Give rare words slightly higher probabilities
      • Use add-one smoothing (which has a similar effect)

    27

  • Weighting PMI: giving rare context words slightly higher probability

    • Raise the context counts to a power α (commonly α = 0.75) when computing the context probability:

      \mathrm{PMI}_\alpha(w, c) = \log_2 \frac{P(w,c)}{P(w)\,P_\alpha(c)},
      \qquad
      P_\alpha(c) = \frac{\mathrm{count}(c)^\alpha}{\sum_{c'} \mathrm{count}(c')^\alpha}

    28

  • 29

    Use Laplace (add-k) smoothing. Add-2 smoothed count(w,context):

                   computer   data   pinch   result   sugar
    apricot            2        2      3       2        3
    pineapple          2        2      3       2        3
    digital            4        3      2       3        2
    information        3        8      2       6        2

    p(w,context) [add-2] and p(w):
                   computer   data   pinch   result   sugar    p(w)
    apricot          0.03     0.03    0.05    0.03     0.05    0.20
    pineapple        0.03     0.03    0.05    0.03     0.05    0.20
    digital          0.07     0.05    0.03    0.05     0.03    0.24
    information      0.05     0.14    0.03    0.10     0.03    0.36
    p(context)       0.19     0.25    0.17    0.22     0.17

  • PPMI versus add-2 smoothed PPMI

    30

    PPMI(w,context) [add-2]:
                   computer   data   pinch   result   sugar
    apricot          0.00     0.00    0.56    0.00     0.56
    pineapple        0.00     0.00    0.56    0.00     0.56
    digital          0.62     0.00    0.00    0.00     0.00
    information      0.00     0.58    0.00    0.37     0.00

    PPMI(w,context):
                   computer   data   pinch   result   sugar
    apricot            -        -     2.25      -      2.25
    pineapple          -        -     2.25      -      2.25
    digital          1.66     0.00      -      0.00      -
    information      0.00     0.57      -      0.47      -
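    A minimal, self-contained sketch of the add-k variant on the same toy counts; its output can be compared with the add-2 table above:

```python
import numpy as np

def ppmi_add_k(F, k=2):
    """PPMI after add-k (Laplace) smoothing of the raw counts."""
    F = np.asarray(F, dtype=float) + k            # add k to every cell
    p_ij = F / F.sum()
    p_i = F.sum(axis=1, keepdims=True) / F.sum()
    p_j = F.sum(axis=0, keepdims=True) / F.sum()
    return np.maximum(np.log2(p_ij / (p_i * p_j)), 0)   # clip negative PMI to 0

F = [[0, 0, 1, 0, 1],       # apricot
     [0, 0, 1, 0, 1],       # pineapple
     [2, 1, 0, 1, 0],       # digital
     [1, 6, 0, 4, 0]]       # information
print(np.round(ppmi_add_k(F, k=2), 2))   # e.g. digital-computer ~ 0.62
```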

  • Vector Semantics

    Measuring similarity: the cosine

  • Measuring similarity

    • Given 2 target words v and w, we’ll need a way to measure their similarity.
    • Most measures of vector similarity are based on the dot product (inner product) from linear algebra
      • High when two vectors have large values in the same dimensions.
      • Low (in fact 0) for orthogonal vectors with zeros in complementary distribution.

  • Problem with dot product

    • The dot product is larger if the vector is longer. Vector length:

      |\vec{v}| = \sqrt{\sum_{i=1}^{N} v_i^2}

    • Vectors are longer if they have higher values in each dimension
    • That means more frequent words will have higher dot products
    • That’s bad: we don’t want a similarity metric to be sensitive to word frequency

  • Solution: cosine

    • Just divide the dot product by the lengths of the two vectors!

    • This turns out to be the cosine of the angle between them!

    34

  • Cosine for computing similarity

    \cos(\vec{v}, \vec{w}) = \frac{\vec{v} \cdot \vec{w}}{|\vec{v}|\,|\vec{w}|}
    = \frac{\vec{v}}{|\vec{v}|} \cdot \frac{\vec{w}}{|\vec{w}|}
    = \frac{\sum_{i=1}^{N} v_i w_i}{\sqrt{\sum_{i=1}^{N} v_i^2}\,\sqrt{\sum_{i=1}^{N} w_i^2}}

    (the dot product, normalized by vector lengths; equivalently, the dot product of unit vectors)

    v_i is the PPMI value for word v in context i; w_i is the PPMI value for word w in context i.
    cos(v, w) is the cosine similarity of v and w.

    Sec. 6.3
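    A direct transcription of the formula (a small sketch, assuming neither vector is all zeros):

```python
import numpy as np

def cosine(v, w):
    """Cosine similarity: the dot product divided by the two vector lengths."""
    v, w = np.asarray(v, dtype=float), np.asarray(w, dtype=float)
    return v.dot(w) / (np.linalg.norm(v) * np.linalg.norm(w))
```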

  • Cosine as a similarity metric

    • −1: vectors point in opposite directions
    • +1: vectors point in the same direction
    • 0: vectors are orthogonal

    • Raw frequency or PPMI values are non-negative, so the cosine ranges from 0 to 1

    36

  •              large   data   computer
    apricot        2       0        0
    digital        0       1        2
    information    1       6        1

    37

    Which pair of words is more similar?

    \cos(\vec{v}, \vec{w}) = \frac{\vec{v} \cdot \vec{w}}{|\vec{v}|\,|\vec{w}|}
    = \frac{\sum_{i=1}^{N} v_i w_i}{\sqrt{\sum_{i=1}^{N} v_i^2}\,\sqrt{\sum_{i=1}^{N} w_i^2}}

    cosine(apricot, information) = (2 + 0 + 0) / (√(4 + 0 + 0) · √(1 + 36 + 1)) = 2 / (2√38) ≈ .16
    cosine(digital, information) = (0 + 6 + 2) / (√(0 + 1 + 4) · √(1 + 36 + 1)) = 8 / (√5 · √38) ≈ .58
    cosine(apricot, digital)     = (0 + 0 + 0) / (√(4 + 0 + 0) · √(0 + 1 + 4)) = 0
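    Checking the arithmetic with numpy (a small self-contained sketch using the counts above):

```python
import numpy as np

# Rows from the table above; dimensions are (large, data, computer).
apricot     = np.array([2, 0, 0])
digital     = np.array([0, 1, 2])
information = np.array([1, 6, 1])

cos = lambda v, w: v @ w / (np.linalg.norm(v) * np.linalg.norm(w))
print(round(cos(apricot, information), 2))   # 0.16
print(round(cos(digital, information), 2))   # 0.58
print(round(cos(apricot, digital), 2))       # 0.0
```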

  • Visualizing vectors and angles

    [Figure: vectors for apricot, digital, and information plotted in two dimensions (Dimension 1: ‘large’, Dimension 2: ‘data’)]

    38

                   large   data
    apricot          2       0
    digital          0       1
    information      1       6

  • Clustering vectors to visualize similarity in co-occurrence matrices

    Rohde, Gonnerman, Plaut Modeling Word Meaning Using Lexical Co-Occurrence

    [Figure 8: Multidimensional scaling for three noun classes (body parts such as HEAD, HAND, FOOT; animals such as DOG, CAT, PUPPY; places such as AMERICA, EUROPE, CHINA).]

    [Figure 9: Hierarchical clustering for three noun classes using distances based on vector correlations.]

    39  Rohde et al. (2006)

  • Other possible similarity measures

  • Vector Semantics

    Adding syntax

  • Using syntax to define a word’s context

    • Zellig Harris (1968): “The meaning of entities, and the meaning of grammatical relations among them, is related to the restriction of combinations of these entities relative to other entities”
    • Two words are similar if they have similar syntactic contexts
      • Duty and responsibility have similar syntactic distribution:
        • Modified by adjectives: additional, administrative, assumed, collective, congressional, constitutional …
        • Objects of verbs: assert, assign, assume, attend to, avoid, become, breach …

  • Co-occurrence vectors based on syntactic dependencies

    • Each dimension: a context word in one of R grammatical relations
      • e.g., subject-of “absorb”
    • Instead of a vector of |V| features, a vector of R|V|
    • Example: counts for the word fell

    Dekang Lin, 1998. “Automatic Retrieval and Clustering of Similar Words”

  • Syntactic dependencies for dimensions

    • Alternative (Padó and Lapata 2007):
      • Instead of having a |V| × R|V| matrix, have a |V| × |V| matrix
      • But the co-occurrence counts aren’t just counts of words in a window; they are counts of words that occur in one of R dependencies (subject, object, etc.)
      • So M(“cell”, “absorb”) = count(subj(cell, absorb)) + count(obj(cell, absorb)) + count(pobj(cell, absorb)), etc.

    44

  • PMI applied to dependency relations

    • “Drink it” is more common than “drink wine”
    • But “wine” is a better “drinkable” thing than “it”

    Sorted by count:

    Object of “drink”   Count    PMI
    it                    3      1.3
    anything              3      5.2
    wine                  2      9.3
    tea                   2     11.8
    liquid                2     10.5

    Hindle, Don. 1990. Noun Classification from Predicate-Argument Structure. ACL

    Sorted by PMI:

    Object of “drink”   Count    PMI
    tea                   2     11.8
    liquid                2     10.5
    wine                  2      9.3
    anything              3      5.2
    it                    3      1.3

  • Alternative to PPMI for measuring association

    • tf-idf (that’s a hyphen, not a minus sign)
    • The combination of two factors:
      • Term frequency (Luhn 1957): the frequency of the word (can be logged)
      • Inverse document frequency (IDF) (Sparck Jones 1972):

        \mathrm{idf}_i = \log\!\left(\frac{N}{\mathrm{df}_i}\right)

        • N is the total number of documents
        • df_i = “document frequency of word i” = number of documents containing word i
    • The tf-idf weight of word i in document j:

      w_{ij} = \mathrm{tf}_{ij} \times \mathrm{idf}_i
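    A minimal sketch of this weighting applied to a term-document count matrix (assuming every term occurs in at least one document; many IR systems also log the term frequency and smooth the idf):

```python
import numpy as np

def tfidf(X):
    """tf-idf weighting of a term-document count matrix X (rows = terms)."""
    X = np.asarray(X, dtype=float)
    N = X.shape[1]                      # number of documents
    df = (X > 0).sum(axis=1)            # documents containing each term
    idf = np.log(N / df)                # idf_i = log(N / df_i)
    return X * idf[:, np.newaxis]       # w_ij = tf_ij * idf_i

# Shakespeare term-document counts from earlier.
X = [[ 1,   1,  8, 15],   # battle
     [ 2,   2, 12, 36],   # soldier
     [37,  58,  1,  5],   # fool
     [ 6, 117,  0,  0]]   # clown
print(np.round(tfidf(X), 2))   # terms in every document get weight 0
```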

  • tf-idf is not generally used for word-word similarity

    • But it is by far the most common weighting when we are considering the relationship of words to documents

    47

  • Vector Semantics

    Dense Vectors

  • Sparse versus dense vectors

    • PPMI vectors are
      • long (length |V| = 20,000 to 50,000)
      • sparse (most elements are zero)
    • Alternative: learn vectors which are
      • short (length 200-1000)
      • dense (most elements are non-zero)

    49

  • Sparse versus dense vectors

    • Why dense vectors?
      • Short vectors may be easier to use as features in machine learning (fewer weights to tune)
      • Dense vectors may generalize better than storing explicit counts
      • They may do better at capturing synonymy:
        • car and automobile are synonyms, but they are represented as distinct dimensions; this fails to capture the similarity between a word with car as a neighbor and a word with automobile as a neighbor

    50

  • Three methods for getting short dense vectors

    • Singular Value Decomposition (SVD)
      • A special case of this is called LSA (Latent Semantic Analysis)
    • “Neural Language Model”-inspired predictive models
      • skip-grams and CBOW

    • Brown clustering

    51

  • Vector Semantics

    Dense Vectors via SVD

  • Intuition

    • Approximate an N-dimensional dataset using fewer dimensions
      • By first rotating the axes into a new space
      • In which the highest-order dimension captures the most variance in the original dataset
      • And the next dimension captures the next most variance, etc.
    • Many such (related) methods:
      • PCA (principal components analysis)
      • Factor Analysis
      • SVD

    53

  • 54

    Dimensionality reduction

  • Singular Value Decomposition

    55

    Any rectangular w × c matrix X equals the product of 3 matrices:

    W: rows corresponding to the original rows, but the m columns represent dimensions in a new latent space, such that
      • the m column vectors are orthogonal to each other
      • the columns are ordered by the amount of variance in the dataset each new dimension accounts for

    S: diagonal m × m matrix of singular values expressing the importance of each dimension

    C: columns corresponding to the original columns, but the m rows correspond to the singular values

  • Singular Value Decompositon

    56  Landauer and Dumais 1997

  • SVD applied to the term-document matrix: Latent Semantic Analysis

    • Instead of keeping all m dimensions, we just keep the top k singular values. Let’s say 300.
    • The result is a least-squares approximation to the original X
    • But instead of multiplying the factors back together, we’ll just make use of W
      • Each row of W is a k-dimensional vector representing word w

    57

    Deerwester et al. (1988)

  • LSA more details

    • 300 dimensions are commonly used
    • The cells are commonly weighted by a product of two weights
      • Local weight: log term frequency
      • Global weight: either idf or an entropy measure

    58

  • Let’s return to PPMI word-word matrices

    • Can we apply SVD to them?

    59

  • SVD applied to term-term matrix

    60 (I’m simplifying here by assuming the matrix has rank |V|)

  • Truncated SVD on term-term matrix

    61

  • Truncated SVD produces embeddings

    62

    • Each row of the W matrix is a k-dimensional representation of each word w
    • k might range from 50 to 1000
    • Generally we keep the top k dimensions, but some experiments suggest that getting rid of the top 1 dimension or even the top 50 dimensions is helpful (Lapesa and Evert 2014).
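    A minimal numpy sketch of truncated SVD on a (PPMI-weighted) word-word matrix; whether and how to scale by the singular values varies across papers, so the scaling here is just one reasonable choice:

```python
import numpy as np

def svd_embeddings(X, k=2):
    """Keep the top-k left singular vectors (scaled by singular values) as embeddings."""
    W, S, Ct = np.linalg.svd(np.asarray(X, dtype=float), full_matrices=False)
    return W[:, :k] * S[:k]          # one k-dimensional row per word

# Toy example: the 4 x 5 count matrix from the earlier slides.
X = [[0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1],
     [2, 1, 0, 1, 0],
     [1, 6, 0, 4, 0]]
emb = svd_embeddings(X, k=2)
print(np.round(emb, 2))              # rows: apricot, pineapple, digital, information
```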

  • Embeddings versus sparse vectors

    • Dense SVD embeddings sometimes work better than sparse PPMI matrices at tasks like word similarity
      • Denoising: low-order dimensions may represent unimportant information
      • Truncation may help the models generalize better to unseen data.
      • Having a smaller number of dimensions may make it easier for classifiers to properly weight the dimensions for the task.
      • Dense models may do better at capturing higher-order co-occurrence.

    63

  • Vector Semantics

    Embeddings inspired by neural language models:

    skip-grams and CBOW

  • Prediction-based models: an alternative way to get dense vectors

    • Skip-gram (Mikolov et al. 2013a), CBOW (Mikolov et al. 2013b)
      • Learn embeddings as part of the process of word prediction.
      • Train a neural network to predict neighboring words
        • Inspired by neural net language models.
      • In so doing, learn dense embeddings for the words in the training corpus.
    • Advantages:
      • Fast, easy to train (much faster than SVD)
      • Available online in the word2vec package
        • Including sets of pretrained embeddings!

    65

  • Skip-grams

    • Predict each neighboring word
      • in a context window of 2C words
      • from the current word.
    • So for C = 2, we are given word w_t and predicting these 4 words: w_{t-2}, w_{t-1}, w_{t+1}, w_{t+2}

    66

  • Skip-grams learn 2 embeddings for each w

    Input embedding v, in the input matrix W
      • Column i of the input matrix W is the 1 × d embedding v_i for word i in the vocabulary.

    Output embedding v′, in the output matrix W′
      • Row i of the output matrix W′ is a d × 1 vector embedding v′_i for word i in the vocabulary.

    67

  • Setup

    • Walking through the corpus, pointing at word w(t), whose index in the vocabulary is j, so we’ll call it w_j (1 < j < |V|).
    • Let’s predict w(t+1), whose index in the vocabulary is k (1 < k < |V|). Hence our task is to compute P(w_k | w_j).

    68

  • One-hot vectors

    • A vector of length |V|
      • 1 for the target word and 0 for the other words
    • So if “popsicle” is vocabulary word 5, the one-hot vector is
      [0, 0, 0, 0, 1, 0, 0, 0, 0, ……, 0]

    69

  • 70

    Skip-gram

  • 71

    Skip-gram
    h = v_j
    o = W′h

  • 72

    Skip-gram
    h = v_j
    o = W′h
    o_k = v′_k · h = v′_k · v_j

  • Turning outputs into probabilities

    • o_k = v′_k · v_j
    • We use the softmax to turn the scores into probabilities

    73
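    A minimal sketch of this step, assuming (unlike the slides, which store input embeddings as columns) that each row of W_in and W_out holds one word’s embedding:

```python
import numpy as np

def skipgram_probs(W_in, W_out, j):
    """P(w_k | w_j) for every k under a toy skip-gram model."""
    h = W_in[j]                      # h = v_j, the input embedding of w_j
    o = W_out @ h                    # o_k = v'_k . v_j for every output word k
    e = np.exp(o - o.max())          # softmax, shifted for numerical stability
    return e / e.sum()

rng = np.random.default_rng(0)
V, d = 6, 4                          # toy vocabulary size and embedding dimension
W_in, W_out = rng.normal(size=(V, d)), rng.normal(size=(V, d))
print(np.round(skipgram_probs(W_in, W_out, j=2), 3))   # sums to 1
```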

  • Embeddings from W and W’

    • Since we have two embeddings, v_j and v′_j, for each word w_j, we can either:
      • Just use v_j
      • Sum them
      • Concatenate them to make a double-length embedding

    74

  • But wait; how do we learn the embeddings?

    75

  • Relation between skip-grams and PMI!

    • If we multiply W W′ᵀ, we get a |V| × |V| matrix M, with each entry m_ij corresponding to some association between input word i and output word j
    • Levy and Goldberg (2014b) show that skip-gram reaches its optimum just when this matrix is a shifted version of PMI:

      W W'^{T} = M^{\mathrm{PMI}} - \log k

    • So skip-gram is implicitly factoring a shifted version of the PMI matrix into the two embedding matrices.

    76

  • CBOW (Continuous Bag of Words)

    77

  • Properties of embeddings

    78

    • Nearest words to some embeddings (Mikolov et al. 2013)

  • Embeddings capture relational meaning!

    vector(‘king’) − vector(‘man’) + vector(‘woman’) ≈ vector(‘queen’)
    vector(‘Paris’) − vector(‘France’) + vector(‘Italy’) ≈ vector(‘Rome’)

    79
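    This analogy arithmetic amounts to “find the nearest word (by cosine) to b − a + c”. The embedding dict and its tiny hand-made vectors below are illustrative placeholders, not real trained embeddings:

```python
import numpy as np

def analogy(a, b, c, emb):
    """Return the word whose vector is closest (by cosine) to emb[b] - emb[a] + emb[c]."""
    target = emb[b] - emb[a] + emb[c]
    best, best_sim = None, -np.inf
    for w, v in emb.items():
        if w in (a, b, c):
            continue
        sim = v @ target / (np.linalg.norm(v) * np.linalg.norm(target))
        if sim > best_sim:
            best, best_sim = w, sim
    return best

# Toy vectors; dimensions are loosely (royalty, gender, unused).
emb = {
    "king":  np.array([0.9,  0.8, 0.0]),
    "queen": np.array([0.9, -0.8, 0.0]),
    "man":   np.array([0.1,  0.8, 0.0]),
    "woman": np.array([0.1, -0.8, 0.0]),
}
print(analogy("man", "king", "woman", emb))   # expected: "queen"
```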

  • Can I train embeddings on all of Wikipedia?

    Good embeddings need lots of (appropriate) data.
    But there are pretrained models: word2vec, GloVe.
    But there’s more: BERT (and ELMo) give context-dependent word vectors.

    “Things are always better with BERT” (or with the thing better than BERT)

    80

  • Vector Semantics

    Brown clustering

  • Brown clustering

    • An agglomerative clustering algorithm that clusters words based on which words precede or follow them
    • These word clusters can be turned into a kind of vector
    • We’ll give a very brief sketch here.

    82

  • Brown clustering algorithm

    • Each word is initially assigned to its own cluster.
    • We now consider merging each pair of clusters. The highest-quality merge is chosen.
      • Quality = merging two words that have similar probabilities of preceding and following words
      • (More technically, quality = the smallest decrease in the likelihood of the corpus according to a class-based language model)
    • Clustering proceeds until all words are in one big cluster.

    83

  • Brown Clusters as vectors

    • By tracing the order in which clusters are merged, the model builds a binary tree from bottom to top.
    • Each word is represented by a binary string = the path from the root to its leaf
    • Each intermediate node is a cluster
    • Chairman is 0010, “months” = 01, and verbs = 1

    84

    Brown Algorithm

    • Words are merged according to contextual similarity
    • Clusters are equivalent to bit-string prefixes
    • Prefix length determines the granularity of the clustering

    [Figure: part of a Brown clustering binary tree over words such as walk, run, sprint, chairman, CEO, November, October, president, with 0/1 bit strings labeling the paths to each cluster.]

  • Brown cluster examples

    85

  • Class-based language model

    • Suppose each word was in some class ci:

    86

    19.7 Brown Clustering

    Figure 19.15: Vector offsets showing relational properties of the vector space, shown by projecting vectors onto two dimensions using PCA. In the left panel, ‘king’ − ‘man’ + ‘woman’ is close to ‘queen’. In the right, we see the way offsets seem to capture grammatical number. (Mikolov et al., 2013b)

    Brown clustering (Brown et al., 1992) is an agglomerative clustering algorithm for deriving vector representations of words by clustering words based on their associations with the preceding or following words.

    The algorithm makes use of the class-based language model (Brown et al., 1992), a model in which each word w ∈ V belongs to a class c ∈ C with a probability P(w|c). Class-based LMs assign a probability to a pair of words w_{i−1} and w_i by modeling the transition between classes rather than between words:

      P(w_i | w_{i-1}) = P(c_i | c_{i-1}) \, P(w_i | c_i)                          (19.32)

    The class-based LM can be used to assign a probability to an entire corpus given a particular clustering C as follows:

      P(\mathrm{corpus} | C) = \prod_{i=1}^{n} P(c_i | c_{i-1}) \, P(w_i | c_i)    (19.33)

    Class-based language models are generally not used as a language model for applications like machine translation or speech recognition because they don’t work as well as standard n-gram or neural language models. But they are an important component in Brown clustering.

    Brown clustering is a hierarchical clustering algorithm. Let’s consider a naive (albeit inefficient) version of the algorithm:

    1. Each word is initially assigned to its own cluster.
    2. We now consider merging each pair of clusters. The pair whose merger results in the smallest decrease in the likelihood of the corpus (according to the class-based language model) is merged.
    3. Clustering proceeds until all words are in one big cluster.

    Two words are thus most likely to be clustered if they have similar probabilities for preceding and following words, leading to more coherent clusters. The result is that words will be merged if they are contextually similar.

    By tracing the order in which clusters are merged, the model builds a binary tree from bottom to top, in which the leaves are the words in the vocabulary, and each intermediate node in the tree represents the cluster that is formed by merging its children. Fig. 19.16 shows a schematic view of a part of a tree.
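    Equation (19.32) is easy to sketch in code. The toy class assignments and probability tables below are hypothetical placeholders; a real Brown clustering run would estimate them from corpus counts:

```python
import math

def class_lm_logprob(words, word2class, p_class_trans, p_word_given_class):
    """Log probability of a word sequence under a class-based LM:
    P(w_i | w_{i-1}) = P(c_i | c_{i-1}) * P(w_i | c_i)."""
    logp = 0.0
    for prev, cur in zip(words, words[1:]):
        c_prev, c_cur = word2class[prev], word2class[cur]
        logp += math.log(p_class_trans[(c_prev, c_cur)])
        logp += math.log(p_word_given_class[(cur, c_cur)])
    return logp

# Hypothetical toy tables.
word2class = {"the": "DET", "dog": "N", "cat": "N", "runs": "V"}
p_class_trans = {("DET", "N"): 0.9, ("N", "V"): 0.6}
p_word_given_class = {("dog", "N"): 0.4, ("cat", "N"): 0.3, ("runs", "V"): 0.2}

print(class_lm_logprob(["the", "dog", "runs"],
                       word2class, p_class_trans, p_word_given_class))
```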

  • Vector Semantics

    Evaluating similarity

  • Evaluating similarity

    • Extrinsic (task-based, end-to-end) evaluation:
      • Question answering
      • Spell checking
      • Essay grading
    • Intrinsic evaluation:
      • Correlation between algorithm and human word similarity ratings (see the sketch below)
        • WordSim353: 353 noun pairs rated 0-10, e.g. sim(plane, car) = 5.77
      • Taking TOEFL multiple-choice vocabulary tests
        • Levied is closest in meaning to: imposed, believed, requested, correlated
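    For the intrinsic evaluation, the number usually reported is a Spearman rank correlation between the model’s cosines and the human ratings. A small sketch follows; the word pairs beyond the slide’s plane/car example, their ratings, and the random vectors are illustrative placeholders:

```python
import numpy as np
from scipy.stats import spearmanr

# Placeholder "embeddings": random 50-dimensional vectors for a few words.
rng = np.random.default_rng(1)
vocab = ["plane", "car", "cup", "coffee", "king", "queen"]
emb = {w: v for w, v in zip(vocab, rng.normal(size=(len(vocab), 50)))}

# (word1, word2, human rating); only the first rating comes from the slide.
pairs = [("plane", "car", 5.77), ("cup", "coffee", 6.6), ("king", "queen", 8.6)]

def cos(v, w):
    return v @ w / (np.linalg.norm(v) * np.linalg.norm(w))

model = [cos(emb[a], emb[b]) for a, b, _ in pairs]
human = [r for _, _, r in pairs]
rho, _ = spearmanr(model, human)        # rank correlation with human judgments
print(round(rho, 2))
```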

  • Summary

    • Distributional (vector) models of meaning
      • Sparse (PPMI-weighted word-word co-occurrence matrices)
      • Dense:
        • Word-word SVD, 50-2000 dimensions
        • Skip-grams and CBOW (pretrained: word2vec, GloVe, BERT)
        • Brown clusters, 5-20 binary dimensions

    89


