Lecture 5: Anaphora Resolution (continued)
Designing Test-Beds for General Anaphora Resolution
Work done in collaboration with:
Oana Postolache, [email protected]
University of Saarland, Germany
DAARC’04
Evaluation of a minimal AR system

[Figure: pipeline input → RE-extractor → AR-engine → output. The RE-extractor produces the markable set re-test; a manually annotated counterpart re-gold is also available. The AR-engine produces the coreference annotations re-test-coref-test, re-gold-coref-gold and re-gold-coref-test.]

Three evaluations, each scored with P, R, F:
• test the RE-extractor (re-test against re-gold)
• test the whole system globally (re-test-coref-test against re-gold-coref-gold)
• test only the AR-engine (re-gold-coref-test against re-gold-coref-gold)
The Orwell corpus
• Chapters 1, 2, 3 and 5 from George Orwell's "Nineteen Eighty-Four"
• Automatic detection of markables:
  – POS-tagging
  – FDG parser
  – markable = any construction dominated by a noun/pronoun
  – detection of head and lemma (given)
  – deletion of relative clauses
The Penn Treebank corpus
• 7 files from the WSJ
• Extraction of markables from the PTB-style constituency trees:
  – Collins' rules to extract the head
  – a WordNet script for the lemma
  – dependency links between words
Dimensions of corpora

                                   Orwell     PTB
  No. of sentences                    983     281
  No. of words                      19520    6244
  No. of REs                         5474    1853
  Average no. of REs per sentence     5.5     6.6
  Pronouns                           1902     136
  No. of DEs                         2768    1546
Markables
• Generally conformant with MUC-7 and ACE criteria
• Differences:
  – relative clauses are not included
  – each term of an apposition is taken separately ([Big Brother], [the primal traitor])
  – conjoined expressions are annotated individually ([John] and [Mary], [hills] and [mountains])
  – modifying nouns appearing in noun-noun modification are not marked separately ([glass doors], [prison food], [the junk bond market])
Markables

What do we mark?
• noun phrases
  – definite (the principle, the flying object)
  – indefinite (a book, a future star)
  – undetermined (sole guardian of truth)
  – names (Winston Smith, The Ministry of Love)
  – dates (April)
  – currency expressions ($40)
  – percentages (48%)
• pronouns
  – personal (I, you, he, him, she, her, it, they, them)
  – possessive (his, her, hers, its, their, theirs)
  – reflexive (himself, herself, itself, themselves)
  – demonstrative (this, that, these, those)
  – wh-pronouns, when they replace an entity (which, who, whom, whose, that)
• numerals
  – when they refer to entities (four of them, the first, the second)
Our model: primary attributes
• Lexical & morphological:
  – lemma, number, POS, headForm
• Syntactic:
  – synt-role, dependency-link, npText, includedNPs, isDefinite, isIndefinite, predNameOf
• Semantic:
  – isMaleName, isFemaleName, isFamilyName, isPerson, HeSheItThey
• Positional:
  – offset, sentenceID
Our model: knowledge sources
• For each attribute there is a knowledge source that fetches the value using:
  – the POS tagger output
  – the FDG structure
  – large name databases
  – the WordNet hierarchy
  – punctuation
Knowledge sources - HeSheItThey
• HeSheItThey = [Phe, Pshe, Pit, Pthey]
  – for pronouns: straightforward
  – for NPs:
    • n = # synsets of the head
    • f = # synsets which are hyponyms of <female>
    • m = # synsets which are hyponyms of <male>
    • p = # synsets which are hyponyms of <person>
    If the NP is plural: Phe = 0, Pshe = 0, Pit = 0, Pthey = 1
    Else: Phe = (m + ½(p − m)) / n, Pshe = (f + ½(p − f)) / n, Pit = (n − p) / n, Pthey = 0
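To make the knowledge source concrete, here is a minimal sketch of how these values could be computed over WordNet. It assumes NLTK's WordNet interface; the anchor synsets (female.n.02, male.n.02, person.n.01) and the fallback value for heads absent from WordNet are illustrative assumptions, not part of the original system.

```python
# A minimal sketch of the HeSheItThey knowledge source over WordNet,
# assuming NLTK's interface (nltk.download('wordnet') is needed once).
from nltk.corpus import wordnet as wn

FEMALE = wn.synset('female.n.02')   # assumed anchor: female person
MALE = wn.synset('male.n.02')       # assumed anchor: male person
PERSON = wn.synset('person.n.01')   # assumed anchor: person

def is_hyponym_of(synset, ancestor):
    """True if `ancestor` is the synset itself or one of its hypernyms."""
    return synset == ancestor or ancestor in synset.closure(lambda s: s.hypernyms())

def he_she_it_they(head_lemma, plural=False):
    """Return [Phe, Pshe, Pit, Pthey] for an NP head, per the formulas above."""
    if plural:
        return [0.0, 0.0, 0.0, 1.0]
    synsets = wn.synsets(head_lemma, pos=wn.NOUN)
    n = len(synsets)
    if n == 0:
        return [0.0, 0.0, 1.0, 0.0]   # assumed fallback: treat unknown heads as 'it'
    f = sum(is_hyponym_of(s, FEMALE) for s in synsets)
    m = sum(is_hyponym_of(s, MALE) for s in synsets)
    p = sum(is_hyponym_of(s, PERSON) for s in synsets)
    return [(m + 0.5 * (p - m)) / n,
            (f + 0.5 * (p - f)) / n,
            (n - p) / n,
            0.0]
```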
Our model: rules
• Demolishing rules:
  – IncludingRule: prohibits coreference between nested REs
• Certifying rules:
  – PredNameRule
  – ProperNameRule
  – whRule
• Promoting/demoting rules:
  – HeSheItTheyRule
  – RoleRule
  – NumberRule
  – LemmaRule
  – PersonRule
  – SynonymyRule
  – HypernymyRule
  – WordnetChainRule
Our model: the whRules (example)
• Rules for detecting the antecedent of a wh-pronoun:
  – Case 1:
    I saw [a blond boy] who was playing in the garden.
  – Case 2:
    [The colour of the chair] which was underneath the table…
    [The atmosphere of happiness] which she carried with her.
Our model: domain of referential accessibility
• Linear
Evaluation of the RE-extractor

[Figure: the RE-extractor is run on the input; each test markable is paired with a gold markable and the comparison is scored with P, R, F.]

When does a gold-test pair of markables match?
• head matching (HM): if they have the same head

               P      R      F
  PTB-HM      0.90   0.95   0.92
  Orwell-HM   0.85   0.94   0.89
Evaluation of the RE-extractor

When does a gold-test pair of markables match?
• partial matching (PM): if they have the same head and the mutual overlap is higher than 50% of the longest span, i.e. l / l1 > 0.5, where l1 is the length of the longer of the two spans and l the length of their overlap

               P      R      F
  PTB-PM      0.78   0.82   0.79
  Orwell-PM   0.74   0.80   0.76
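Both matching criteria are straightforward to state as code. A minimal sketch, assuming markables are represented as inclusive token spans with a head lemma (the Markable type and offset convention are illustrative assumptions):

```python
from typing import NamedTuple

class Markable(NamedTuple):
    head: str    # lemma of the head word
    start: int   # first token offset (inclusive)
    end: int     # last token offset (inclusive)

def head_match(gold: Markable, test: Markable) -> bool:
    """HM: the two markables have the same head."""
    return gold.head == test.head

def partial_match(gold: Markable, test: Markable) -> bool:
    """PM: same head, and mutual overlap above 50% of the longest span."""
    if gold.head != test.head:
        return False
    l = min(gold.end, test.end) - max(gold.start, test.start) + 1  # overlap length
    l1 = max(gold.end - gold.start, test.end - test.start) + 1     # longest span
    return l / l1 > 0.5
```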
Evaluation of the AR-engine
• Same set of markables (on the "identity of head" criterion)
  – For each anaphor in the gold:
    • If it belongs to a chain that doesn't contain any other anaphor, we look in the test set to see if it belongs to a similar trivial chain:
      – if yes, it gets the value ci = 1;
      – otherwise, it gets the value ci = 0.

[Figure: a gold trivial chain for anaphor i and its test counterpart, illustrating the cases ci = 1 and ci = 0.]
Evaluation of the AR-engine (continued)
  – For each anaphor in the gold:
    • If the anaphor belongs to a chain containing n other anaphors, we look in the test set and count how many of these n anaphors belong to the chain of the corresponding test anaphor (call this number m). The ratio m/n is the value ci assigned to the current anaphor.
  – Then we sum this value over all anaphors and divide by the number of anaphors: SR = Σ ci / N

[Figure: a gold chain whose other anaphors receive the values 0, 1, 1 in the test; ci = 2/3.]
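A sketch of this success rate as code. Representing each annotation as a mapping from an anaphor id to the set of anaphor ids in its chain (including itself) is an assumption for illustration; both annotations are taken over the same markables, as the slide requires.

```python
def success_rate(gold_chains, test_chains):
    """Average over gold anaphors of m/n: n = other anaphors in the gold
    chain, m = how many of them also sit in the current anaphor's test
    chain. Anaphors in trivial gold chains score 1 only when their test
    chain is trivial too."""
    total = 0.0
    for anaphor, gold_chain in gold_chains.items():
        others = gold_chain - {anaphor}
        test_chain = test_chains.get(anaphor, frozenset())
        if not others:                     # trivial chain in the gold
            total += 1.0 if test_chain <= {anaphor} else 0.0
        else:
            total += len(others & test_chain) / len(others)
    return total / len(gold_chains)
```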
Evaluation of the AR-engine working on coreferences

[Figure: the AR-engine is run on the gold markables (re-gold); the resulting re-gold-coref-test is compared against re-gold-coref-gold.]

            SR     MUC F
  PTB       0.69   0.75
  Orwell    0.66   0.72
Evaluation of the whole system
• Possibly different sets of markables, identified on the "identity of head" criterion, as found in gold and test, with possibly different spans
  – same global formula, but the contribution of each markable is factored by its mutual overlapping score, mosi = b/a, showing the test versus gold overlap of markables (a = length of the gold span, b = length of the part covered by the test markable)
  – example: if the other anaphors of a gold chain receive the mos-weighted values 0, 0.7 and 0.5 in the test, then ci = (0.7 + 0.5)/3 = 1.2/3
  – R = Σ ci / Ng
  – "misses" (failures to find certain markables) influence R
  – "false alarms" (markables erroneously introduced in the test) influence P

[Figure: gold and test chains with mos-weighted anaphor values, including a miss and a false alarm.]
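A sketch of the mos-weighted variant, extending success_rate above: each matched anaphor now contributes its mutual overlapping score instead of 1, and gold anaphors with no test counterpart (misses) contribute 0. Normalising by the number of gold anaphors gives R, as on the slide; how the trivial-chain case is weighted, and a symmetric computation over test anaphors for P, are assumptions.

```python
def weighted_recall(gold_chains, test_chains, mos):
    """R = sum_i ci / Ng; mos maps each anaphor id to its mutual
    overlapping score in [0, 1] (0 when the markable was missed)."""
    total = 0.0
    for anaphor, gold_chain in gold_chains.items():
        others = gold_chain - {anaphor}
        test_chain = test_chains.get(anaphor)
        if test_chain is None:             # miss: lowers R
            continue
        if not others:                     # trivial-chain case (assumed weighting)
            ci = mos.get(anaphor, 0.0) if test_chain <= {anaphor} else 0.0
        else:
            ci = sum(mos.get(a, 0.0) for a in others & test_chain) / len(others)
        total += ci
    return total / len(gold_chains)
```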
Evaluation of the whole system

[Figure: the full pipeline (RE-extractor followed by AR-engine); re-test-coref-test is compared against re-gold-coref-gold.]

            SR
  PTB       0.61
  Orwell    0.55
Comments
• The RE-extractor module gives better results on PTB than on Orwell
  – human syntactic annotation versus automatic FDG structure detection
• The AR module gives slightly better results on PTB than on Orwell
  – news (finance) versus belles-lettres
  – heads: in PTB, extracted by rules relying on the human syntactic annotation; in Orwell, extracted by rules relying on the FDG results
• Difficult to compare with other approaches/authors
  – apparently we are in the upper range
  – BUT: not the same corpus, not the same evaluation metric
Transferring Coreference Chains through Word Alignment
In collaboration with Oana Postolache and Constantin Orăsan
[email protected] [email protected]
University of Saarland, Germany
University of Wolverhampton, United Kingdom
LREC’06
The goal
• Automatic annotation of coreference chains for languages with sparser resources (Romanian).
• Difficulties:
  En: John broke his arm recently. It hurts badly.
  Ro: Ion şi-a rupt braţul recent. Îl doare rău.
  (the English pronouns "his" and "it" correspond to the Romanian clitics "şi-" and "îl", so referential expressions do not align word for word)
English-Romanian parallel corpus
• George Orwell's novel "1984"
  – 6,411 sentences
  – the English version is in the process of being manually annotated with coreference chains (half of the corpus is done so far)
• Experimental data: three parts from the first chapter
  – ~13K words
  – 638 sentences
  – the Romanian version manually annotated for evaluation purposes
Coreference information annotated
• Conformant with MUC-7 and ACE 2003.
• Referential expressions are:
  – noun phrases: definite, indefinite & undetermined
  – proper names
  – pronouns and wh-pronouns
  – numerals
• The REs include only restrictive clauses
• Each term of an apposition is taken separately
• Conjoined expressions are taken individually
• Noun premodifiers are not marked
Experiment: Automatic word alignment
• We used the Romanian-English aligner COWAL (Tufiş et al., 2006)
• Performance: 83.30% F-measure
• Ranked first out of 37 systems at the ACL'05 shared task on word alignment
Experiment 1: Extraction of the Romanian REs
• For an English RE with words e1, e2, …, en, we extract the Romanian set of words r1, r2, …, rm, in surface order
• Heads are transferred through the alignment from English to Romanian (1:n)
• We consider the Romanian RE to be the span of words between r1 and rm
Experiment 1: Extraction of the Romanian REs
Four situations:
1. An English RE has a corresponding Romanian RE with ONE head.
2. An English RE has a corresponding Romanian RE with MORE THAN ONE head.
3. An English RE has a corresponding Romanian RE with NO head.
4. An English RE has NO corresponding Romanian RE.

Only REs conforming to 1 and 2 are considered. The head of the Romanian RE is taken as the leftmost head whose POS is noun, pronoun or numeral.
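A sketch of the projection step under these rules, assuming the alignment is given as (English index, Romanian index) links and Romanian POS tags are available; all names and tag values here are illustrative assumptions.

```python
def project_re(en_re_indices, en_head_index, alignment, ro_pos):
    """Project an English RE onto Romanian through word alignment.

    en_re_indices : word indices of the English RE
    en_head_index : index of the English head word
    alignment     : set of (en_index, ro_index) links
    ro_pos        : dict mapping Romanian word index to its POS tag

    Returns (ro_span, ro_head), or None when the RE falls under
    situations 3 or 4 (no valid head / no aligned words)."""
    ro_words = sorted({ro for en, ro in alignment if en in en_re_indices})
    if not ro_words:
        return None                        # situation 4: no corresponding RE
    ro_heads = sorted({ro for en, ro in alignment if en == en_head_index})
    # keep the leftmost transferred head that is a noun, pronoun or numeral
    ro_heads = [ro for ro in ro_heads if ro_pos.get(ro) in ("NOUN", "PRON", "NUM")]
    if not ro_heads:
        return None                        # situation 3: no head
    span = (ro_words[0], ro_words[-1])     # the span of words between r1 and rm
    return span, ro_heads[0]
```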
Experiment 2: Coreference chains transfer
• As the English REs are clustered in chains referring to the same entity, and we have the corresponding Romanian REs, we simply "import" the clustering; see the sketch below.
• As not all English REs have a corresponding Romanian RE, the number of clusters may differ between English and Romanian.
• There are also differences between the lengths of corresponding clusters.
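The transfer itself reduces to re-labelling: every projected Romanian RE inherits the chain id of its English source. A minimal sketch, assuming en_chains maps English RE ids to chain ids and projection maps the English RE ids that survived Experiment 1 to their Romanian REs:

```python
from collections import defaultdict

def transfer_chains(en_chains, projection):
    """Import the English clustering onto the projected Romanian REs.
    Chains whose REs were all lost in projection disappear entirely,
    which is why the number of clusters may differ between languages."""
    ro_chains = defaultdict(list)
    for en_re, chain_id in en_chains.items():
        if en_re in projection:            # only situations 1 and 2 survive
            ro_chains[chain_id].append(projection[en_re])
    return dict(ro_chains)
```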
Evaluation
• Transferred data (system) is compared against gold-standard data (manual) for Romanian.
Evaluation of the RE heads
• We only consider the heads of the system REs and the heads of the gold standard REs.
Precision 95.7
Recall 69.5
F-measure 80.5
Evaluation of the RE spans (1/2)
• All REs
• The overlap between a system RE and a gold-standard RE (the Dice coefficient over their word sets):

  Overlap = 2 * #(wordsSystemRE ∩ wordsGoldRE) / (#wordsSystemRE + #wordsGoldRE)

  Precision   86.8
  Recall      63.0
  F-measure   73.0
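As code, the overlap measure is a direct rendering of the formula; representing each RE as a set of words is the only assumption.

```python
def span_overlap(system_words, gold_words):
    """Dice overlap between the word sets of a system RE and a gold RE."""
    system_words, gold_words = set(system_words), set(gold_words)
    return 2 * len(system_words & gold_words) / (len(system_words) + len(gold_words))
```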
Evaluation of the RE spans (2/2)
• The previous numbers also reflect the penalties for REs missing from the system output or for wrong REs (errors already counted in the heads evaluation).
• Only correct system REs:
  – system REs with a correct head, measured against the corresponding gold REs

  Span overlap   90.7
Evaluation of coreference chains
• All system REs against the gold REs:
  MUC F-score       63.88
  B-cubed F-score   82.60
• The correct system REs against the gold REs:
  MUC F-score       87.72
  B-cubed F-score   93.14
Error analysis (1/3): Incorrect detection of Romanian REs
• Wrong alignment
• English adjectives/adverbs/verbs translated into Romanian as nouns:
  En: naturally sanguine face
  Ro: faţă sangvină de la natură (face sanguine from the nature)
• Choices of the Romanian translator:
  En: The actual writing would be easy
  Ro: Scrisul în sine era o treabă uşoară (The writing itself was an easy job)
• English noun premodifiers translated into Romanian as prepositional-phrase postmodifiers or possessives:
  En: a forced labour camp
  Ro: un lagăr de muncă silnică (a camp of forced labour)
Error analysis (2/3): Errors in the span overlap
• Wrong alignment
• Triggered by the choice of translation:
  En: [Someone with a comb and a piece of toilet paper] was trying to keep tune with the music.
  Ro: [Cineva se străduia, cu un pieptene şi o bucată de hârtie igienică], să ţină isonul muzicii.
  ([Someone was trying, with a comb and a piece of toilet paper], to keep tune with the music.)
Error analysis (3/3): Incorrect detection of coreference chains
• Errors due to translation choice:
  En: The sky was a harsh blue.
  (a predicative noun - subject relationship)
  Ro: Cerul era de un albastru strident.
  (The sky was of a harsh blue.)
Conclusions
• What and why?
  – An automatic method for projecting coreference chains in parallel corpora
  – To augment the scarce resources of coreference information
  – A preprocessing step prior to manual correction in the annotation effort
• How good?
  – RE heads: high precision (> 95%) but lower recall (~70%)
  – Coreference chains: relatively high F-measure (> 90%) for correct REs
Our anaphoric… references
• Cristea, D., Dima, G.E. (2001): An integrating framework for anaphora resolution. In Information Science and Technology, Romanian Academy Publishing House, Bucharest, vol. 4, no. 3-4, pp. 273-291.
• Cristea, D., Postolache, O.-D., Dima, G.E., Barbu, C. (2002): AR-Engine - a framework for unrestricted co-reference resolution. In Proceedings of the Third International Conference on Language Resources and Evaluation, LREC-2002, Las Palmas, Spain.
• Cristea, D., Dima, G.E., Postolache, O.-D., Mitkov, R. (2002): Handling complex anaphora resolution cases. In Proceedings of the Discourse Anaphora and Anaphor Resolution Colloquium, Lisbon, September 18-20, 2002.
• Postolache, O., Cristea, D. (2004): Designing Test-beds for General Anaphora Resolution. In Proceedings of the Discourse Anaphora and Anaphor Resolution Colloquium, DAARC-2004, São Miguel, Portugal.
• Cristea, D., Postolache, O.-D. (2005): How to deal with wicked anaphora. In António Branco, Tony McEnery and Ruslan Mitkov (eds.): Anaphora Processing: Linguistic, Cognitive and Computational Modelling, Current Issues in Linguistic Theory, John Benjamins Publishing Company, ISBN 90-272-4777-3 (Eur) / 1-58811-621-2 (USA).
• Postolache, O., Cristea, D., Orăsan, C. (2006): Transferring Coreference Chains through Word Alignment. In Proceedings of LREC-2006, Genoa, May 2006.
Contest on AR @ 6th DAARC
March 2007, Lagos, Portugal

Call to be issued soon…
• On English only, 4 tracks:
  – with markables given, resolve only pronouns
  – with markables given, resolve all anaphors
  – no markables given, resolve only pronouns
  – no markables given, resolve all anaphors