CSE 291 - Advanced Statistical Natural Language Processing
Nishant D. Gurnani

May 23, 2017

1 / 30

Outline

Motivation

Sequence to Sequence Model

Experiments

Related Work

Discussion

2 / 30

Outline

Motivation

Sequence to Sequence Model

Experiments

Related Work

Discussion

3 / 30

Motivation

Deep neural networks (DNNs) are powerful models that work well whenever large labeled training sets are available

Drawbacks:

- Need inputs and outputs to be vectors of fixed dimensionality

- Consequently, cannot map sequences to sequences

- Significant limitation, since many important problems (machine translation, image caption generation) are best expressed with sequences whose lengths are not known a priori

Goal: a general sequence to sequence neural network

4 / 30

Recurrent Neural Networks

Given a sequence of inputs (x_1, ..., x_T), a standard RNN computes a sequence of outputs (y_1, ..., y_T) by iterating the following equations:

h_t = sigm(W^{hx} x_t + W^{hh} h_{t-1})

y_t = W^{yh} h_t
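
To make the recurrence concrete, here is a minimal NumPy sketch of one forward pass of this vanilla RNN; the dimensions and random weights are purely illustrative, not taken from the slides.

```python
import numpy as np

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))

def rnn_forward(xs, W_hx, W_hh, W_yh, h0):
    """Iterate h_t = sigm(W^hx x_t + W^hh h_{t-1}), y_t = W^yh h_t."""
    h, ys = h0, []
    for x in xs:
        h = sigm(W_hx @ x + W_hh @ h)   # new hidden state overwrites the old one
        ys.append(W_yh @ h)             # output read off the current hidden state
    return ys, h

# Illustrative sizes: 4-dim inputs, 8-dim hidden state, 3-dim outputs.
rng = np.random.default_rng(0)
W_hx, W_hh, W_yh = rng.normal(size=(8, 4)), rng.normal(size=(8, 8)), rng.normal(size=(3, 8))
xs = [rng.normal(size=4) for _ in range(5)]
ys, h_T = rnn_forward(xs, W_hx, W_hh, W_yh, h0=np.zeros(8))
```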

5 / 30

RNN Drawbacks

- Have a one-to-one correspondence between the inputs and outputs

- Have trouble learning “long-term dependencies”

  – vanishing gradient problem
  – exploding gradient problem
  – Hochreiter (1991); Bengio et al. (1994)

6 / 30

RNN Drawbacks

- Have a one-to-one correspondence between the inputs and outputs

- Have trouble learning “long-term dependencies”

  – vanishing gradient problem → LSTM
  – exploding gradient problem → gradient clipping
  – Hochreiter (1991); Bengio et al. (1994)

7 / 30

Long Short-Term Memory (LSTM)

- Hochreiter and Schmidhuber (1997)

- An RNN architecture that is good at long-term dependencies

- Has almost no vanishing gradients

Key Insights:

- RNNs overwrite the hidden state
- LSTMs add to the hidden state (see the sketch below)

  – compute a delta to the hidden state, which we then add to it
  – addition has nice gradients
  – results in the LSTM being good at noticing long-range correlations
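
To see that additive update concretely, here is a minimal NumPy sketch of one LSTM step; the exact gate layout is the standard textbook formulation, an assumption rather than anything specified in these slides.

```python
import numpy as np

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step; W (4H x D), U (4H x H), b (4H,) stack the four gates."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b        # all gate pre-activations at once
    i = sigm(z[0:H])                  # input gate
    f = sigm(z[H:2 * H])              # forget gate
    o = sigm(z[2 * H:3 * H])          # output gate
    g = np.tanh(z[3 * H:4 * H])       # candidate update (the "delta")
    c = f * c_prev + i * g            # additive update of the cell state
    h = o * np.tanh(c)
    return h, c
```

The key line is `c = f * c_prev + i * g`: the new cell state is the old one plus a gated delta, so gradients flow through a sum instead of being repeatedly squashed the way they are in the vanilla RNN update.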

8 / 30

Outline

Motivation

Sequence to Sequence Model

Experiments

Related Work

Discussion

9 / 30

Main Idea

- Neural networks are excellent at learning very complicated functions

- “Coerce” a neural network to read one sequence and produce another

- Learning should take care of the rest

10 / 30

Model

11 / 30

LSTM hidden state

- The LSTM needs to read the entire input sequence, and then produce the target sequence “from memory”

- The input sequence is stored by a single LSTM hidden state

- So the hidden state must be large (encode-then-decode flow sketched below)
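
A rough sketch of that encode-then-decode control flow, reusing the `lstm_step` sketched earlier; note the paper actually uses two different deep LSTMs (one to read the input, one to produce the output) and beam-search decoding, whereas this single-cell, greedy version only illustrates where the one big hidden state sits.

```python
import numpy as np

def encode_decode(src_ids, embed, params, out_proj, eos_id, max_len=50):
    """Encode the whole source into (h, c), then decode a translation from it."""
    H = params["U"].shape[1]
    h, c = np.zeros(H), np.zeros(H)

    for tok in src_ids:                      # read the entire input sequence
        h, c = lstm_step(embed[tok], h, c, params["W"], params["U"], params["b"])

    out, prev = [], eos_id                   # produce the target "from memory",
    for _ in range(max_len):                 # feeding each prediction back in
        h, c = lstm_step(embed[prev], h, c, params["W"], params["U"], params["b"])
        prev = int(np.argmax(out_proj @ h))  # greedy choice of the next word
        if prev == eos_id:
            break
        out.append(prev)
    return out
```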

12 / 30

Deep model with large hidden state

13 / 30

Similar Work

- Kalchbrenner and Blunsom (2013), Recurrent Continuous Translation Models → convolutional encoder, recurrent decoder

- Cho et al. (2014), Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation → recurrent encoder, recurrent decoder

- Bahdanau et al. (2014 arXiv version), Neural Machine Translation by Jointly Learning to Align and Translate → recurrent encoder, recurrent decoder + attention

14 / 30

Outline

Motivation

Sequence to Sequence Model

Experiments

Related Work

Discussion

15 / 30

Dataset

- WMT’14 English to French

- 12M sentences

- 348M French words

- 304M English words

- Train on 30% of the training data, which is a clean “selected” subset

- Choose this subset because of the public availability of a tokenized training and test set, together with 1000-best lists from the baseline SMT

16 / 30

Training

We define a distribution over output sequences given input sequences and maximize the log probability of a correct translation T given the source sentence S.

Training Objective:

1/|S| Σ_{(T,S) ∈ S} log p(T | S)    (the sum and |S| are over the training set S of (source, target) pairs)

Once training is complete, we produce translations by finding the most likely translation according to the LSTM:

T̂ = argmax_T p(T | S)

Searching for the most likely translation is done using a simple left-to-right beam search decoder
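
As a worked sketch of the objective, assuming a hypothetical `token_log_probs(S, T)` that returns the model's per-token log-probabilities log p(t_i | t_1..t_{i-1}, S) for a (source, target) pair:

```python
def training_objective(pairs, token_log_probs):
    """Average log p(T | S) over the training pairs (higher is better)."""
    total = 0.0
    for S, T in pairs:
        # log p(T | S) factorizes into a sum of per-token log-probabilities
        total += sum(token_log_probs(S, T))
    return total / len(pairs)
```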

17 / 30

Decoding

- Since there are exponentially many sentences, how do we find the sentence with the highest probability?

- Search problem: use a simple greedy beam search

Decoding in a nutshell (sketched below)

– proceed left to right

– maintain N partial translations

– expand each translation with possible next words

– discard all but the top N new partial translations
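
A minimal sketch of that left-to-right beam search, assuming a hypothetical `next_log_probs(prefix)` that returns a log-probability for every candidate next word given the partial translation (and, implicitly, the encoded source); end-of-sentence handling is simplified.

```python
def beam_search(next_log_probs, vocab, eos, beam_size=2, max_len=50):
    """Keep the beam_size best partial translations, expand, prune, repeat."""
    beams = [([], 0.0)]                                # (partial translation, log-prob)
    finished = []
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            logps = next_log_probs(prefix)             # word -> log p(word | prefix)
            for word in vocab:                         # expand with possible next words
                candidates.append((prefix + [word], score + logps[word]))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for prefix, score in candidates[:beam_size]:   # discard all but the top N
            (finished if prefix[-1] == eos else beams).append((prefix, score))
        if not beams:                                  # every kept hypothesis has ended
            break
    return max(finished + beams, key=lambda c: c[1])[0]
```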

18 / 30

Experimental Setup

19 / 30

Learning Parameters

For a change, the learning parameters are fairly simple and straightforward:

- batch size = 128

- learning rate = 0.7 / batch size

- initialize weights uniformly between -0.08 and 0.08

- norm of the gradient is clipped to 5 (sketched below)

- learning rate is halved every 0.5 epochs after 5 epochs

- no momentum
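
A sketch of how the gradient clipping and the late learning-rate halving above might be implemented; the function names and signatures are illustrative rather than taken from the paper.

```python
import numpy as np

def clip_by_global_norm(grads, max_norm=5.0):
    """Rescale the whole gradient if its global L2 norm exceeds max_norm."""
    norm = np.sqrt(sum(np.sum(g * g) for g in grads))
    scale = max_norm / norm if norm > max_norm else 1.0
    return [g * scale for g in grads]

def learning_rate(base_lr, epoch, start=5.0, halve_every=0.5):
    """Fixed rate for the first `start` epochs, then halved every `halve_every` epochs."""
    if epoch <= start:
        return base_lr
    return base_lr * 0.5 ** int((epoch - start) / halve_every)
```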

20 / 30

Reversing Source Sentences

- Authors find that the LSTM learns much better when the source sentences are reversed (sketch below)

- Results in test BLEU scores of decoded translations increasing from 25.9 to 30.6

- Retroactively provide an explanation suggesting that doing so introduces many short-term dependencies in the data that make the optimization problem much easier
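
The preprocessing itself is tiny: reverse only the source side of each training pair and leave the target in its natural order (sketch, with made-up example sentences).

```python
def reverse_source(pair):
    """Reverse the source sentence; the target translation is left untouched."""
    src, tgt = pair
    return src[::-1], tgt

src = "the cat sat on the mat".split()
tgt = "le chat s'est assis sur le tapis".split()
print(reverse_source((src, tgt)))
# (['mat', 'the', 'on', 'sat', 'cat', 'the'],
#  ['le', 'chat', "s'est", 'assis', 'sur', 'le', 'tapis'])
```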

21 / 30

Experiments

22 / 30

Experiments

23 / 30

2-dimensional PCA projection

24 / 30

2-dimensional PCA projection

25 / 30

Performance as a function of sentence length

26 / 30

Performance on sentences with progressively more rare words

27 / 30

Related Work

- Bahdanau et al., ICLR 2015, Neural Machine Translation by Jointly Learning to Align and Translate

- Lee et al., TACL 2017, Fully Character-Level Neural Machine Translation without Explicit Segmentation

- Wu et al., arXiv 2016, Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation

28 / 30

Related Work

- Ranzato et al., 2015, Sequence Level Training with Recurrent Neural Networks

- Luong et al., ICLR 2016, Multi-task Sequence to Sequence Learning

- Wiseman and Rush, EMNLP 2016, Sequence-to-Sequence Learning as Beam-Search Optimization

29 / 30

Discussion FAQ

Q: What happens when the encoder and decoder models have different numbers of hidden layers? Is there a constraint that they need to have the same number?

Q: Why didn’t the authors try deep bidirectional LSTMs?

Q: Does reversing the order of words in source sentences have any linguistic rationale?

Q: For long sentences and bigger depth, doesn’t the model complexity increase beyond the expressive capability of the model?

Q: Can the learned sentence representations from a language pair (e.g. English to French) be used to train an LSTM decoder for another target language (e.g. English to Spanish)?

30 / 30

