Assessing the Impact of Frame Semantics on Textual Entailment
Authors: Aljoscha Burchardt, Marco Pennacchiotti, Stefan Thater, Manfred Pinkal
Saarland University, Germany. In Natural Language Engineering 1(1), pp. 1-25
As (mis-)interpreted by Peter Clark
The Textual Entailment Task
T: A flood drowned 11 people.
H: 11 people drowned in a flood.
Syntactically, the players have moved: from a syntactic point of view, T and H differ.
But semantically, the players are still the same: from a semantic point of view, T and H are identical.
So we want to identify and match on the semantic, not the syntactic, level. Hence the need for "frame semantics":
(syntax) X drown Y → (semantics) cause* drown victim
(syntax) X drown in Y → (semantics) victim drown in cause
T: flood drown people
H: people drown in flood
* if X is inanimate (otherwise role is killer)
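The normalization above can be sketched in code. This is a minimal, hypothetical illustration (the function, role names, and the fixed "inanimate" set are invented for this example, following the slide's cause/killer distinction): both syntactic patterns map to the same semantic structure.

```python
# Hypothetical sketch: map both "X drown Y" and "X drown in Y" parses to one
# frame-role structure, so syntactically different T and H compare equal.

def drown_frame(subject, obj=None, pp_in=None):
    """Role names (cause, victim, killer) follow the slide's example;
    the animacy check is simplified to a small hand-coded set."""
    inanimate = {"flood", "avalanche", "fire"}
    if obj is not None:                       # pattern: "X drown Y"
        cause_role = "cause" if subject in inanimate else "killer"
        return {"frame": "Drowning", cause_role: subject, "victim": obj}
    cause_role = "cause" if pp_in in inanimate else "killer"
    return {"frame": "Drowning", cause_role: pp_in, "victim": subject}

t = drown_frame("flood", obj="people")       # T: flood drown people
h = drown_frame("people", pp_in="flood")     # H: people drown in flood
print(t == h)                                # same semantic structure
```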
Frame Semantic Resources
PropBank: thematic roles (arg0, arg1, …):
arg0 search arg1 for arg2 ("Mary searched the room for the ring")
arg0 search for arg2 ("Fred searched for the ring")
BUT roles are verb-specific (and names are overloaded):
arg0 seek arg1 ("Mary sought the ring")
No guarantee arg0 means the same in different verbs
Note: thematic roles like "agent" are necessarily verb-specific. Fred sold a car to John. John bought a car from Fred. Thematic roles: Fred and John are both agents. Case/semantic roles: Fred is the seller, John is the buyer.
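The contrast can be made concrete with a toy data sketch (the role labels and frame names below are hand-written illustrations, not real PropBank or FrameNet entries): PropBank numbers roles per verb, so arg0 picks out different participants in "sell" and "buy", while shared frame roles line up across both verbs.

```python
# Toy illustration of verb-specific vs. shared roles (invented data).
# "Fred sold a car to John." / "John bought a car from Fred."

propbank = {
    "sell": {"arg0": "Fred", "arg1": "a car", "arg2": "John"},
    "buy":  {"arg0": "John", "arg1": "a car", "arg2": "Fred"},
}
# Same event, but arg0 points at different people:
print(propbank["sell"]["arg0"], "vs", propbank["buy"]["arg0"])

framenet_style = {
    "sell": {"seller": "Fred", "goods": "a car", "buyer": "John"},
    "buy":  {"seller": "Fred", "goods": "a car", "buyer": "John"},
}
# Shared role names agree across the two verbs:
print(framenet_style["sell"]["seller"] == framenet_style["buy"]["seller"])
```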
Frame Semantic Resources
FrameNet: Semantic roles are shared among verbs
several verbs map to the same Frame
Frames organized in a taxonomy
Roles organized in a taxonomy
Doesn’t contain subcategorization templates for semantic role labeling (e.g., causer kill victim)
But does contain role-labeled examples, from which semantic role labeling algorithms can be learned
Example Frame in FrameNet
So: it seems FrameNet should really help!
Even more, FrameNet has (limited) inferential connections
T: Wyniemko, now 54 and living in Rochester Hills, was arrested and tried in 1994 for a rape in Clinton Township.
H: Wyniemko was accused of rape.
But, limited success in practice
PropBank used by several systems, including the RTE3 winner, but it is unclear how much PropBank contributed
FrameNet used in SALSA (Burchardt and Frank): Shalmaneser + Detour for Semantic Role Labeling (SRL)
(Detour boosts SRL when training examples are missing)
SALSA: find matching semantic roles, then see if the role fillers match
Machine learning approach: for a set of known matching fillers, (i) compute features, (ii) learn which weighted sum of features implies a match
But SALSA didn’t do significantly better than simple lexical overlap
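The "weighted sum of features" matcher described above can be sketched as a tiny perceptron. This is a minimal stand-in, not SALSA's actual implementation: the features, training pairs, and threshold below are all invented for illustration.

```python
# Sketch of a learned filler matcher: weighted sum of features over a
# (T-filler, H-filler) pair, thresholded to predict "match". All data invented.

def features(a, b):
    return [
        1.0 if a == b else 0.0,                          # exact string match
        1.0 if a.lower() == b.lower() else 0.0,          # case-insensitive match
        1.0 if a[0].lower() == b[0].lower() else 0.0,    # same initial letter
    ]

# (filler in T, filler in H, is_match) training pairs
data = [("flood", "flood", 1), ("flood", "Flood", 1),
        ("flood", "people", 0), ("soldiers", "Soldiers", 1),
        ("Japan", "cars", 0)]

w = [0.0, 0.0, 0.0]
for _ in range(20):                                      # perceptron updates
    for a, b, y in data:
        x = features(a, b)
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0.5 else 0
        for i in range(len(w)):
            w[i] += 0.1 * (y - pred) * x[i]

def match(a, b):
    return sum(wi * xi for wi, xi in zip(w, features(a, b))) > 0.5
```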
Possible reasons for “failure”
Poor coverage of FrameNet
Decision of applicable Frame is poor
Semantic Role Labeling is poor
Role filler matching is poor
How to distinguish between these? Create FATE, an annotated RTE corpus
Only annotate the “relevant” parts of the sentences
FATE
Annotated RTE2 corpus (400 positive, 400 negative examples)
Good interannotator agreement
~2 months of work to create
4488 frames, 9512 roles annotated in the corpus
includes 373 (8%) Unknown_Frame, 1% Unknown_Role
→ FrameNet coverage is good for this data!
Still, not always clear-cut:
“Cars exported by Japan increased”
Annotator: EXPORT; Shalmaneser: SENDING (SENDING is still plausible)
1. How do automatic and manual annotation compare?
Does SALSA pick the right frame?
When it picks the right frame, does it assign the right roles?
When it picks the right frame and role, does it get the right filler (i.e., the same head noun as the gold standard)?
Fred sold the book on the shelf to Mary
(seller: Fred; goods: the book on the shelf; buyer: Mary)
Frame: Commercial_Transaction
Results
If H is entailed by T, then we expect:
1. The Frame for H to also be in T (more often)
2. The Frame’s roles used in H to also be in T (more often)
3. The role fillers in H to match those in T (more often)
These may also be true if H isn’t entailed by T, BUT presumably with lower probability.
Also: compare with simple word overlap.
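The word-overlap baseline can be sketched very simply, e.g., as the proportion of H's content words that also occur in T. This is a common formulation; the tokenization and stop list below are illustrative, not the paper's exact setup.

```python
# Simple lexical-overlap baseline (illustrative): fraction of H's content
# words that also appear in T. High overlap -> predict entailment.

def word_overlap(t, h):
    stop = {"a", "an", "the", "in", "of", "by", "to"}
    t_words = {w.strip(".,").lower() for w in t.split()} - stop
    h_words = {w.strip(".,").lower() for w in h.split()} - stop
    return len(h_words & t_words) / len(h_words)

t = "A flood drowned 11 people."
h = "11 people drowned in a flood."
print(word_overlap(t, h))   # full overlap despite different syntax
```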
Expectation 1 (the Frame for H is also in T): Yes…
(Note: the low difference here reflects that T and H typically talk about the same thing)
…but not much more than word overlap…
(Not really surprising, as frames are picked based on words)
Also, the frame hierarchy doesn’t help much here.
Expectation 2 (the Frame’s roles used in H are also in T), for pairs which have a Frame in common between T and H: again, the low difference suggests that the roles talked about in T and H are usually the same.
Expectation 3 (the role fillers in H match those in T): some difficult cases:
T: An avalanche has struck a popular skiing resort in Australia, killing at least 11 people.
H: Humans died in an avalanche.
T: Virtual reality is used to train surgeons, pilots, astronauts, police officers, first-responders, and soldiers.
H: Soldiers are trained using virtual reality.
(in both T and H, “soldiers” fills the student role)
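The hard filler cases above ("Humans" vs. "people") defeat exact head-noun comparison. A sketch of the fix, with a small hand-coded synonym/hypernym table standing in for a resource like WordNet (the table entries are invented for illustration):

```python
# Filler matching beyond exact head nouns, using a toy hypernym/synonym table
# (a stand-in for WordNet-style semantic matching; entries are invented).

related = {
    "people": {"humans", "persons"},
    "soldiers": {"people", "humans"},
}

def fillers_match(t_head, h_head):
    a, b = t_head.lower(), h_head.lower()
    return a == b or b in related.get(a, set()) or a in related.get(b, set())

print(fillers_match("people", "Humans"))   # matches, though the strings differ
print(fillers_match("Japan", "cars"))      # no relation -> no match
```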
Also, even if we had perfect frame, role, and filler matching, entailment does not always follow, e.g., under negation or modality.
Conclusions
1. FrameNet’s coverage is good
2. Frame Semantic Analysis (frame/role/filler selection) is mediocre
3. Simple lexical overlap at the frame level doesn’t outperform simple lexical overlap at the syntactic level
4. Need better modeling:
wider context (negation, modalities)
role filler matching (semantic matching, e.g., WordNet)
more knowledge in FrameNet, e.g., implications (kill → die, arrest → accuse)
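The implications suggested in point 4 can be sketched as frame rewrite rules with explicit role mappings. The rule format, and the role names for the arrest case, are invented for this sketch; FrameNet itself encodes such links as frame-to-frame relations.

```python
# Sketch of frame-level implications (e.g., Killing -> Death) as rewrite
# rules with role mappings. Rule format and some role names are invented.

rules = {
    # source frame -> (target frame, {source role: target role})
    "Killing": ("Death", {"victim": "protagonist", "cause": "cause"}),
    "Arrest":  ("Accusation", {"suspect": "accused"}),
}

def apply_rule(instance):
    frame = instance["frame"]
    if frame not in rules:
        return instance
    target, role_map = rules[frame]
    out = {"frame": target}
    for role, filler in instance.items():
        if role != "frame" and role in role_map:
            out[role_map[role]] = filler
    return out

t = {"frame": "Killing", "cause": "avalanche", "victim": "people"}
print(apply_rule(t))   # Killing(avalanche, people) entails Death(people)
```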
(Extra slides)
The Textual Entailment Task: More complex example
Again, need to match semantic roles. Again, the need for “frame semantics”:
(syntax) X kill Y → (semantics) cause kill victim
(syntax) X died in Y → (semantics) protagonist died in cause
ALSO: protagonist isa victim, Killing → Death
T: An avalanche has struck a popular skiing resort in Australia, killing at least 11 people.
H: Humans died in an avalanche.
T: avalanche kill people
H: human die in avalanche