
Hang Cui et al. NUS at TREC-13 QA Main Task 1/20

National University of Singapore at the TREC-13 Question Answering Main Task

Hang Cui, Keya Li, Renxu Sun, Tat-Seng Chua, Min-Yen Kan

{cuihang, likeya, sunrenxu, chuats, kanmy}@comp.nus.edu.sg


Hang Cui et al. NUS at TREC-13 QA Main Task 2/20

System Architecture

[Architecture diagram: Question Analysis; Topic Analysis and Document Retrieval; Passage Retrieval using Query Expansion with Google snippets; Answer Extraction using Approximate Dependency Relation Matching; Definition Generation with Soft Patterns]


Hang Cui et al. NUS at TREC-13 QA Main Task 3/20

What’s New This Year

• Approximate matching of grammatical dependency relations for answer extraction

• Soft matching patterns for identifying definition sentences.
– See [Cui et al., 2004a] and [Cui et al., 2004b]

• Exploiting definitions to answer factoid and list questions.


Hang Cui et al. NUS at TREC-13 QA Main Task 4/20

Outline

• System architecture
• New Features in TREC-13 QA Main Task
– Approximate Dependency Relation Matching for Answer Extraction
– Soft Matching Patterns for Definition Generation
– Definition Sentences in Answering Topically-Related Factoid/List Questions

• Conclusion


Hang Cui et al. NUS at TREC-13 QA Main Task 5/20

Dependency Relation Matching in QA

• Why do we need to consider dependency relations?
– An upper bound of 70% for answer extraction (Light et al., 2001)
• Many NEs of the same type appear close to each other.
– Some questions don’t have NE-type targets.
• E.g., What does AARP stand for?

• Tried before
– The PIQASso and MIT systems have applied dependency relations in QA.
– However:
• Poor performance due to low recall.
• Used exact matching of relations to extract answers directly.


Hang Cui et al. NUS at TREC-13 QA Main Task 6/20

Extracting Dependency Relation Triples

• Minipar-based (Lin, 1998) dependency parsing

• Relation triple: two anchor words and their relationship
– E.g., <“desk”, complement, “on”> for “on the desk”.
• Relation path: the path of relations between two words (see the sketch below)
– E.g., <“desk”, mod, complement, “floor”> for “on the desk at the fourth floor”.
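A minimal representation sketch (Python; the class names are illustrative, not the actual NUS system’s data structures) of how relation triples and relation paths from a Minipar parse might be stored:

from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class RelationTriple:
    # Two anchor words and the grammatical relation linking them,
    # e.g. <"desk", complement, "on"> for "on the desk".
    word1: str
    relation: str
    word2: str

@dataclass(frozen=True)
class RelationPath:
    # The sequence of relation labels on the parse-tree path between two
    # anchor words, e.g. ("mod", "complement") between "desk" and "floor"
    # in "on the desk at the fourth floor".
    word1: str
    word2: str
    relations: Tuple[str, ...]

    def __len__(self) -> int:
        return len(self.relations)

triple = RelationTriple("desk", "complement", "on")
path = RelationPath("desk", "floor", ("mod", "complement"))
print(triple, len(path))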


Hang Cui et al. NUS at TREC-13 QA Main Task 7/20

Examples of relation triples

Q: What American revolutionary general turned over West Point to the British?

q1) General sub obj West Point

q2) West Point mod pcomp-n British

A: …… Benedict Arnold’s plot to surrender West Point to the British ……

s1) Benedict Arnold poss s sobj West Point

s2) West Point mod pcomp-n British

• So, in most cases, correct answers can’t be extracted by exact match of relations.


Hang Cui et al. NUS at TREC-13 QA Main Task 8/20

Learning Relation Similarity

• We need a measure to find the similarity between two different paths.

• Adopt a statistical method to learn similarity from past QA pairs.

• Training data preparation
– Around 1,000 factoid question–answer pairs from the past two years’ TREC QA tasks.
– Extract all relation paths between all non-trivial words
• 2,557 path pairs.
– Align the paths according to identical anchor nodes.
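As a concrete illustration, a minimal sketch (Python) of the counting this preparation enables: each training item is assumed to be a pair of aligned question/answer relation paths given as tuples of Minipar relation labels, and counting one co-occurrence per relation pair is an assumption, not a detail stated on the slide.

from collections import Counter
from itertools import product

def count_relation_cooccurrences(aligned_path_pairs):
    # Count single-relation occurrences and question/answer relation
    # co-occurrences over aligned path pairs (assumed counting scheme).
    q_counts, a_counts, joint_counts = Counter(), Counter(), Counter()
    for q_path, a_path in aligned_path_pairs:
        for q_rel, a_rel in product(q_path, a_path):
            q_counts[q_rel] += 1
            a_counts[a_rel] += 1
            joint_counts[(q_rel, a_rel)] += 1
    return q_counts, a_counts, joint_counts

# Toy example with two aligned path pairs
pairs = [(("whn", "i"), ("s", "pcomp-n")),
         (("i",), ("pcomp-n",))]
q_counts, a_counts, joint_counts = count_relation_cooccurrences(pairs)
print(joint_counts[("i", "pcomp-n")])   # -> 2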


Hang Cui et al. NUS at TREC-13 QA Main Task 9/20

Using Mutual Information to Measure Relation Co-occurrence

• Two relations’ similarity is measured by their co-occurrences in the question and answer paths.
• Variation of mutual information (MI):
– α: reciprocal of the length sum of the two relation paths
• to discount the score of two relations appearing in long paths.

MI(Rel_Q, Rel_A) = α · log10 [ f(Rel_Q, Rel_A) / ( f(Rel_Q) · f(Rel_A) ) ]

Relation-1   Relation-2   Similarity
whn          pcomp-n      0.43
whn          i            0.42
i            pcomp-n      0.39
i            s            0.37
pred         mod          0.37
appo         vrel         0.35
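A sketch (Python) of this MI-style similarity: it assumes the counters produced by the previous sketch, and the use of relative frequencies in the ratio is an assumption where the slide is not explicit.

import math
from collections import Counter

def relation_similarity(q_rel, a_rel, q_counts, a_counts, joint_counts,
                        q_path_len, a_path_len):
    # MI-style similarity between a question relation and an answer relation,
    # discounted by the summed length of the two relation paths.
    joint = joint_counts.get((q_rel, a_rel), 0)
    if joint == 0:
        return 0.0
    p_joint = joint / sum(joint_counts.values())
    p_q = q_counts[q_rel] / sum(q_counts.values())
    p_a = a_counts[a_rel] / sum(a_counts.values())
    alpha = 1.0 / (q_path_len + a_path_len)   # reciprocal of the length sum
    return alpha * math.log10(p_joint / (p_q * p_a))

# Toy counts, for illustration only
q_c = Counter({"i": 3, "whn": 1})
a_c = Counter({"pcomp-n": 3, "s": 1})
j_c = Counter({("i", "pcomp-n"): 3, ("whn", "s"): 1})
print(relation_similarity("i", "pcomp-n", q_c, a_c, j_c, 2, 2))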


Hang Cui et al. NUS at TREC-13 QA Main Task 10/20

Measuring Path Similarity – 1

• We adopt two methods to compute path similarity, using different relation alignment methods.
• Option 1: ignore the words of the relations along the given paths – Total Path Matching.
– A path consists of only a list of relations: no relation context (anchor words) is considered.
– Relation alignment by permutation of all possibilities.
– Adopt IBM’s Model 1 for statistical translation:

Sim(P_Q, P_A) = ( 1 / ( len(P_Q) · len(P_A) ) ) · Σ_i Σ_j MI( Rel_Q(i), Rel_A(j) )
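A sketch (Python) of Option 1 under these assumptions: the pairwise relation similarity (for instance the MI measure sketched earlier) is passed in as rel_sim, and normalising by both path lengths reflects one reading of the reconstructed formula above rather than a confirmed detail.

def total_path_similarity(q_path, a_path, rel_sim):
    # Option 1 (Total Path Matching): align every question relation with
    # every answer relation, ignoring anchor words, and normalise by the
    # lengths of both paths (assumed normalisation).
    if not q_path or not a_path:
        return 0.0
    score = sum(rel_sim(q_rel, a_rel) for q_rel in q_path for a_rel in a_path)
    return score / (len(q_path) * len(a_path))

# Toy pairwise similarity standing in for the learned MI table
toy_sim = lambda q, a: {("i", "pcomp-n"): 0.39, ("i", "s"): 0.37}.get((q, a), 0.0)
print(total_path_similarity(("i", "mod"), ("pcomp-n", "s"), toy_sim))   # -> 0.19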


Hang Cui et al. NUS at TREC-13 QA Main Task 11/20

Measuring Path Similarity – 2

• Option 2: consider the words of the relations along a path – Triple Matching.
– A path consists of a list of relations and their words.
• Requires a match of relation context (anchor words).
• Only those relations with matched words count.
– More strict matching in relation alignment.

Sim(P_Q, P_A) = ( 1 / N_M ) · Σ_(i,j)∈M MI( Rel_Q(i), Rel_A(j) ), where M is the set of relation pairs with matched anchor words and N_M = |M|
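A sketch (Python) of Option 2 under the same assumptions: paths are given as (word1, relation, word2) triples, only relation pairs that share an anchor word contribute, and normalising by the number of matched pairs is one reading of the reconstructed formula.

def triple_match_similarity(q_triples, a_triples, rel_sim):
    # Option 2 (Triple Matching): only relations whose anchor words match
    # are aligned; the score is normalised by the number of matched pairs.
    matched = [(q, a)
               for q in q_triples
               for a in a_triples
               if {q[0], q[2]} & {a[0], a[2]}]   # shared anchor word
    if not matched:
        return 0.0
    score = sum(rel_sim(q[1], a[1]) for q, a in matched)
    return score / len(matched)

toy_sim = lambda q, a: {("mod", "mod"): 1.0, ("sub", "s"): 0.3}.get((q, a), 0.0)
q_triples = [("West Point", "mod", "British"), ("general", "sub", "West Point")]
a_triples = [("West Point", "mod", "British"), ("Benedict Arnold", "s", "West Point")]
print(triple_match_similarity(q_triples, a_triples, toy_sim))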


Hang Cui et al. NUS at TREC-13 QA Main Task 12/20

Selecting Answer Strings Statistically

• Use the top 50 ranked sentences from the passage retrieval module for answer extraction.

• Evaluate the path similarity for relation paths between the question target or answer candidate and other question terms.

• Non-NE questions: evaluate all noun/verb phrases.

Weight(Ans) = Σ_paths Sim( P_Q(Ans, *), P_A(Ans, *) )

where P_Q(Ans, *) and P_A(Ans, *) are the relation paths linking the question target (in the question) and the answer candidate (in the sentence) to each of the other question terms.
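A sketch (Python) of this weighting step: the paths from the question target and from the answer candidate to each shared question term are assumed to have been pre-computed by the parsing step, and path_sim stands for either of the two measures above with the relation similarity already bound in.

def weight_candidate(q_paths_by_term, cand_paths_by_term, path_sim):
    # Sum the path similarities over all question terms connected to both
    # the question target and the answer candidate.
    weight = 0.0
    for term, q_path in q_paths_by_term.items():
        a_path = cand_paths_by_term.get(term)
        if a_path is not None:
            weight += path_sim(q_path, a_path)
    return weight

# Toy example with one shared question term and a stand-in path similarity
toy_path_sim = lambda p1, p2: 0.19
q_paths = {"British": ("mod", "pcomp-n")}
cand_paths = {"British": ("mod", "pcomp-n")}
print(weight_candidate(q_paths, cand_paths, toy_path_sim))   # -> 0.19
# Candidates (all NPs/VPs for non-NE questions) from the top 50 sentences
# are then ranked by this weight.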


Hang Cui et al. NUS at TREC-13 QA Main Task 13/20

Discussions on Evaluation Results

• The use of approximate relation matching outperforms our previous answer extraction technique.
– 22% improvement over all questions.
– 45% improvement on non-NE questions (69 out of 230 questions).
• The two path similarity measures do not make an obvious difference.
– Total Path Matching performs slightly better than Triple Matching.
– Minipar cannot resolve long-distance dependencies well, either.

                                 Baseline   NUSCHUA1   NUSCHUA2
Overall average accuracy         0.51       0.62       0.60
Questions w/ NE-typed targets    0.68       0.78       0.75
Questions w/o NE-typed targets   0.29       0.42       0.41


Hang Cui et al. NUS at TREC-13 QA Main Task 14/20

Outline

• System architecture
• New Experiments in TREC-13 QA Main Task
– Approximate Dependency Relation Matching for Answer Extraction
– Soft Matching Patterns for Definition Generation
– Definition Sentences in Answering Topically-Related Factoid/List Questions

• Conclusion


Hang Cui et al. NUS at TREC-13 QA Main Task 15/20

Question Typing and Passage Retrieval for Factoid/List Q’s

• Question typing
– Leveraging our past question typology and rule-based question typing module.
– Offline tagging of the whole TREC corpus using our rule-based named entity tagger.
• Passage retrieval – on two sources:
– Topic-relevant document set by the document retrieval module: NUSCHUA1 and 2.
– Definition sentences for a specific topic by the definition generation module: NUSCHUA3

• Question-specific wrappers on definitions.


Hang Cui et al. NUS at TREC-13 QA Main Task 16/20

Exploiting Definition Sentences to Answer Factoid/List Questions

• Conduct passage retrieval for factoid/list questions on the definition sentences about the topic.
– Much more efficient due to the smaller search space.
– Average accuracy of 0.50, lower than that over all topic-related documents.
• Due to low recall – the imposed cut-off for selecting definition sentences (naïve use of definitions).
• Some sentences needed for answering factoid/list questions are not definition sentences.


Hang Cui et al. NUS at TREC-13 QA Main Task 17/20

Exploiting Definitions from External Knowledge

• Pre-compiled wrappers for extracting specific fields of information for list questions (see the sketch below).
– Works, product names, and person titles.
– From both generated definition sentences and existing definitions: cross-validation.
– Achieves an F-measure of 0.81 on 8 list questions about works.
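For illustration only, a minimal hypothetical wrapper of this kind (Python): a hand-written pattern that pulls quoted work titles out of definition sentences. The pattern and field are placeholders, not the actual NUS wrappers.

import re

# Matches titles in straight or curly quotes, e.g. "The Old Man and the Sea"
TITLE_PATTERN = re.compile(r'"([^"]+)"|“([^”]+)”')

def extract_work_titles(definition_sentences):
    # Collect candidate work titles from a list of definition sentences.
    titles = set()
    for sentence in definition_sentences:
        for match in TITLE_PATTERN.finditer(sentence):
            titles.add(match.group(1) or match.group(2))
    return sorted(titles)

print(extract_work_titles(
    ['Hemingway wrote "The Old Man and the Sea" in 1952.',
     'He is best known for “A Farewell to Arms”.']))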


Hang Cui et al. NUS at TREC-13 QA Main Task 18/20

Outline

• System architecture
• New Experiments in TREC-13 QA Main Task
– Approximate Dependency Relation Matching for Answer Extraction
– Soft Matching Patterns for Definition Generation
– Definition Sentences in Answering Topically-Related Factoid/List Questions

• Conclusion


Hang Cui et al. NUS at TREC-13 QA Main Task 19/20

Conclusion

• Approximate relation matching for answer extraction
– Still has a hard time dealing with difficult questions.
• Dependency relation alignment problem – words often cannot be matched due to linguistic variations.
• Semantic matching of words/phrases is needed alongside relation matching.

• More effective use of topic-related sentences in answering factoid/list questions.


Hang Cui et al. NUS at TREC-13 QA Main Task 20/20

Q & A

Thanks!


Hang Cui et al. NUS at TREC-13 QA Main Task 21/20

A Question Example

• Topic #14: Horus
– Q1: Horus is the god of what?

1. Osiris, the god of the underworld, his wife, Isis, the goddess of fertility, and their son, Horus, were worshiped by ancient Egyptians.

2. The mummified hawk probably was dedicated to one of several gods associated with falcons, such as the sky god Horus, the war god Montu and the sun god Re.

3. The stolen pieces included stones from the entrances of tombs and a statue of the god Horus, who was half-man, half-falcon.

– No explicit question target.
– Relying on keyword matching or density-based answer extraction may lead to a wrong answer.