Learning User Interaction Models for Predicting Web Search Result Preference. Eugene Agichtein et al., Microsoft Research. SIGIR ’06.


Page 1: Learning User Interaction Models  for  Predicting Web Search Result Preference

Learning User Interaction Models for Predicting Web Search Result Preference

Eugene Agichtein et al.

Microsoft Research

SIGIR ’06

Page 2: Learning User Interaction Models  for  Predicting Web Search Result Preference

Objective

• Provide a rich set of features for representing user behavior
  – Query-text
  – Browsing
  – Clickthrough

• Aggregate the various features
  – RankNet (pairwise training sketch below)
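A minimal sketch of a RankNet-style pairwise objective for aggregating the features, assuming a simple linear scorer and an illustrative three-feature vector; the paper trains a neural network with RankNet, so this only shows the pairwise cross-entropy idea, not the authors' implementation.

```python
import numpy as np

def ranknet_pair_loss(w, x_a, x_b):
    """Cross-entropy loss for the preference 'A should rank above B'."""
    s_diff = np.dot(w, x_a) - np.dot(w, x_b)   # score difference s_A - s_B
    return np.log1p(np.exp(-s_diff))           # -log sigmoid(s_A - s_B)

def ranknet_pair_grad(w, x_a, x_b):
    """Gradient of the pairwise loss with respect to the weights."""
    s_diff = np.dot(w, x_a) - np.dot(w, x_b)
    return -(x_a - x_b) / (1.0 + np.exp(s_diff))

# One gradient step on a single labeled preference pair.
w = np.zeros(3)                        # e.g. [text score, dwell time, click freq] (illustrative)
x_a = np.array([0.8, 30.0, 0.4])       # features of the preferred result
x_b = np.array([0.5, 5.0, 0.1])        # features of the other result
w -= 0.01 * ranknet_pair_grad(w, x_a, x_b)
```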

Page 3: Learning User Interaction Models  for  Predicting Web Search Result Preference

Browsing feature

• Related work

• The amount of reading time can predict
  – interest level in news articles
  – ratings in recommender systems

• The amount of scrolling on a page also has a strong relationship with interest

Page 4: Learning User Interaction Models  for  Predicting Web Search Result Preference

Browsing feature

• How are browsing features collected?
  – Obtain the information via opt-in client-side instrumentation (illustrative event record below)
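A hypothetical sketch of the kind of record an opt-in client-side logger might emit; all field names are assumptions for illustration, not the schema used by the authors.

```python
from dataclasses import dataclass

@dataclass
class BrowsingEvent:
    user_id: str          # anonymized opt-in user (hypothetical field)
    query: str            # query that produced the result page
    url: str              # page the user visited
    event_type: str       # e.g. "click", "scroll", "page_exit"
    timestamp: float      # seconds since epoch
    dwell_seconds: float  # time spent on the page so far

event = BrowsingEvent("u42", "sigir 2006", "http://example.org/paper",
                      "page_exit", 1.15e9, 42.0)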

Page 5: Learning User Interaction Models  for  Predicting Web Search Result Preference

Browsing feature

• Dwell time

Page 6: Learning User Interaction Models  for  Predicting Web Search Result Preference

Browsing feature

• Average & deviation of the dwell time (feature sketch below)

• Properties of the click event
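A sketch of the "average & deviation" idea for dwell time: compare each visit's dwell time with the average dwell time seen for that query, so unusually long or short visits stand out. The grouping key and data layout are assumptions for illustration.

```python
from collections import defaultdict
from statistics import mean

def dwell_time_features(visits):
    """visits: list of (query, url, dwell_seconds) tuples."""
    by_query = defaultdict(list)
    for query, _, dwell in visits:
        by_query[query].append(dwell)
    query_avg = {q: mean(ds) for q, ds in by_query.items()}

    features = {}
    for query, url, dwell in visits:
        features[(query, url)] = {
            "dwell_time": dwell,
            "dwell_time_deviation": dwell - query_avg[query],
        }
    return features

feats = dwell_time_features([("q1", "a.com", 40.0), ("q1", "b.com", 5.0)])
# a.com deviates +17.5 s from the query average, b.com deviates -17.5 s
```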

Page 7: Learning User Interaction Models  for  Predicting Web Search Result Preference

Clickthrough feature

• 1. Clicked vs. unclicked
  – Skip Above (SA)
  – Skip Next (SN)
  – (pair-generation sketch after this list)

• Advantage
  – Proposes preference pairs

• Disadvantage
  – Inconsistency
  – Noisiness of individual clicks
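A sketch of the Skip Above (SA) and Skip Next (SN) heuristics named above: given the ranked results and the set of clicked positions, emit "clicked preferred over unclicked" pairs. The paper's exact pair-generation details may differ; this only illustrates the two strategies.

```python
def skip_above_pairs(results, clicked_positions):
    """Clicked result preferred over every unclicked result ranked above it."""
    pairs = []
    for c in clicked_positions:
        for above in range(c):
            if above not in clicked_positions:
                pairs.append((results[c], results[above]))  # (preferred, skipped)
    return pairs

def skip_next_pairs(results, clicked_positions):
    """Clicked result preferred over the unclicked result ranked just below it."""
    pairs = []
    for c in clicked_positions:
        nxt = c + 1
        if nxt < len(results) and nxt not in clicked_positions:
            pairs.append((results[c], results[nxt]))
    return pairs

results = ["r1", "r2", "r3", "r4"]
print(skip_above_pairs(results, {2}))  # [('r3', 'r1'), ('r3', 'r2')]
print(skip_next_pairs(results, {2}))   # [('r3', 'r4')]
```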

Page 8: Learning User Interaction Models  for  Predicting Web Search Result Preference

Clickthrough feature

• 2. Position bias
  – Clicks are biased toward top-ranked results, so raw click frequency must be corrected (sketch below)
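A sketch of correcting raw clickthrough for position bias by subtracting the background click rate expected at each rank. The background rates below are made-up numbers, and the paper's clickthrough-deviation features may be computed differently; this only illustrates the idea.

```python
# Illustrative background click rates by rank (made-up numbers); top
# positions attract clicks regardless of relevance.
EXPECTED_CLICK_RATE = {1: 0.40, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05}

def click_deviation(observed_click_rate, position):
    """Observed click frequency minus the rate expected at that position."""
    return observed_click_rate - EXPECTED_CLICK_RATE.get(position, 0.03)

# A 20% click rate at rank 5 yields a larger deviation (stronger relevance
# signal) than a 45% click rate at rank 1.
print(click_deviation(0.20, 5))   # ~0.15
print(click_deviation(0.45, 1))   # ~0.05
```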

Page 9: Learning User Interaction Models  for  Predicting Web Search Result Preference

Clickthrough feature

Page 10: Learning User Interaction Models  for  Predicting Web Search Result Preference

Clickthrough feature

Page 11: Learning User Interaction Models  for  Predicting Web Search Result Preference

Clickthrough feature

• Disadvantage of SA & SN
  – Users may click on irrelevant pages

Page 12: Learning User Interaction Models  for  Predicting Web Search Result Preference

Clickthrough feature

• Disadvantage of SA & SN
  – Users often click on only some of the relevant pages

Page 13: Learning User Interaction Models  for  Predicting Web Search Result Preference

Clickthrough feature

• 3. Features for learning

Page 14: Learning User Interaction Models  for  Predicting Web Search Result Preference

Feature set

Page 15: Learning User Interaction Models  for  Predicting Web Search Result Preference

Feature set

Page 16: Learning User Interaction Models  for  Predicting Web Search Result Preference

Evaluation

• Dataset
  – Random sample of 3,500 queries and their top 10 results
  – Rated manually on a 6-point scale
  – 75% training, 25% testing
  – Convert ratings into pairwise judgments, removing tied pairs (conversion sketch below)
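A sketch of turning the manual 6-point ratings into pairwise judgments and dropping ties, as described above; the data layout is an assumption for illustration.

```python
from itertools import combinations

def to_pairwise_judgments(rated_results):
    """rated_results: list of (url, rating) for one query's top results."""
    pairs = []
    for (url_a, r_a), (url_b, r_b) in combinations(rated_results, 2):
        if r_a == r_b:
            continue                      # remove tied pairs
        preferred, other = (url_a, url_b) if r_a > r_b else (url_b, url_a)
        pairs.append((preferred, other))
    return pairs

print(to_pairwise_judgments([("a.com", 5), ("b.com", 2), ("c.com", 2)]))
# [('a.com', 'b.com'), ('a.com', 'c.com')]  -- the b.com/c.com tie is dropped
```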

Page 17: Learning User Interaction Models  for  Predicting Web Search Result Preference

Evaluation

• Pairwise judgment

• Input
  – UrlA, UrlB

• Output
  – Positive: rel(UrlA) > rel(UrlB)
  – Negative: rel(UrlA) ≤ rel(UrlB)

• Measurement
  – Average query precision & recall (computation sketch below)
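One reasonable reading of "average query precision & recall" over preference pairs, sketched below: per query, precision is the fraction of predicted pairs that agree with the labeled pairs and recall is the fraction of labeled pairs recovered, and both are then averaged over queries. This may not match the paper's exact procedure.

```python
def pairwise_precision_recall(predicted, labeled):
    """predicted, labeled: sets of (preferred_url, other_url) pairs for one query."""
    correct = len(predicted & labeled)
    precision = correct / len(predicted) if predicted else 0.0
    recall = correct / len(labeled) if labeled else 0.0
    return precision, recall

def average_over_queries(per_query):
    """per_query: list of (predicted_pairs, labeled_pairs), one entry per query."""
    scores = [pairwise_precision_recall(p, l) for p, l in per_query]
    n = len(scores)
    return sum(p for p, _ in scores) / n, sum(r for _, r in scores) / n
```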

Page 18: Learning User Interaction Models  for  Predicting Web Search Result Preference

Evaluation

• 1. Current
  – Original ranking from the search engine

• 2. Heuristic rules without parameters
  – SA, SA+N

• 3. Heuristic rules with parameters
  – CD, CDiff, CD + CDiff

• 4. Supervised learning
  – RankNet

Page 19: Learning User Interaction Models  for  Predicting Web Search Result Preference

Evaluation

Page 20: Learning User Interaction Models  for  Predicting Web Search Result Preference

Evaluation

Page 21: Learning User Interaction Models  for  Predicting Web Search Result Preference

Evaluation

Page 22: Learning User Interaction Models  for  Predicting Web Search Result Preference

Conclusion

• Recall is not an important measurement here

• Heuristic rules
  – Very low recall and low precision

• Feature set
  – Browsing features have higher precision

Page 23: Learning User Interaction Models  for  Predicting Web Search Result Preference

Discussion

• Is the user interaction model better than the search engine's ranking?
  – Small coverage
  – Only pairwise judgments

• Given the same training data, which is better: a traditional ranking algorithm or user-interaction features?

• Which feature is more useful?