These slides are from a tutorial at the 5th ACM International Conference on Recommender Systems (RecSys 2011). Recommender systems aim to provide users with products or content that satisfy the users' stated or inferred needs. The primary evaluation measures for recommender systems emphasize either the perceived relevance of the recommendations or the actions associated with those recommendations (e.g., purchases or clicks). Unfortunately, this transactional emphasis neglects how users interact with recommendations in the context of information seeking tasks. The effectiveness of this interaction determines the user's experience beyond a single transaction. This tutorial explores the role of recommendations as part of a conversation between the user and an information seeking system. The tutorial does not require any special background in interfaces or usability, and will focus on practical techniques to make recommender systems most effective for users.
Recommendations as a Conversation with the User
Daniel Tunkelang
Principal Data Scientist at LinkedIn
Introductions
Let’s talk about how we talk with machines…
Clifford Nass’s secret:
1) Find a conclusion by a social science researcher.
2) Change
“People do X when interacting with other people.”
to
“People do X when interacting with a computer.”
3) Profit!
Let’s work on our relationship.
Core Message
Recommendations are a conversation with the user.
1) Consider asking vs. guessing.
2) Ask good questions.
3) It's ok to make mistakes…
if you have a good explanation
and adapt to feedback.
Our goal:
http://www.wilsoninfo.com/computerclipart.shtml
Overview
1) Theory
2) Examples
3) Action Items
1) Theory
Pragmatics: the Study of Conversation
Paul Grice
Grice’s Maxims of Conversation
Maxim 1: Quality
Maxim 2: Quantity
Maxim 3: Relation
Maxim 4: Manner
H. P. Grice, "Logic and conversation” [1975]
Maxim 1: Quality
Quality: Above All, the Truth
Xiao, B. and Benbasat, I., "Product-Related Deception in E-Commerce: A Theoretical Perspective," MIS Quarterly 35(1), pp. 169–195 [2011]
Don’t Lie
1) Don’t use “recommended” when you really mean
“sponsored” or “excess inventory”.
2) Optimize for the user’s utility.
3) Apply a standard of evidence (quality, quantity) that
you believe in.
Maxim 2: Quantity
Right Amount of Information
1) Exchange small units of information.
2) If recommendations supplement other content,
consider overall cognitive load.
3) Provide short, meaningful explanations.
Maxim 3: Relation
Relevant to the User
1) Offer value to the user.
2) Respect task context.
3) Don’t be obnoxious.
Maxim 4: Manner
Clear to the User
1) Eschew obfuscation.
2) Avoid ambiguity.
3) Be brief.
4) Be orderly.
Another Perspective
Gary Marchionini
Human-Computer Information Retrieval
Empower people to explore large-scale information
but demand that
people also take responsibility for this control
by expending cognitive and physical energy.
Marchionini, G., “Toward Human-Computer Information Retrieval” [2006]
Principles of HCIR
1) Do more than deliver relevant information:
facilitate sensemaking.
2) Increase user responsibility and control:
require and reward effort.
3) Adapt to increasingly knowledgeable users over time.
4) Be engaging and fun to use!
Facilitate Sensemaking
Require and Reward Effort
http://www.posterenvy.com/catalog/ask_why.jpg
Adapt to User Knowledge
Be Engaging!
http://bluenile.com/
Applying the theory to…
1) Personalized Recommendations
2) Social Recommendations
3) Item Recommendations
Personalized Recommendations
1) Be transparent about model so users gain insight.
2) Allow users to modify models to correct mistakes.
3) Solicit just enough information to provide value.
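These three points can be sketched in code. The following is a hypothetical, minimal model of a transparent interest profile; all names (`InterestProfile`, the topic weights) are illustrative, not taken from any real system. The point is that a simple linear scorer makes every recommendation attributable to weights the user can see and correct.

```python
# Hypothetical sketch: a transparent, user-editable interest profile.
# All names and data are illustrative.

from dataclasses import dataclass, field


@dataclass
class InterestProfile:
    """User-visible model: topic -> weight, always inspectable and editable."""
    weights: dict = field(default_factory=dict)

    def explain(self):
        # Show the user exactly what drives their recommendations.
        return sorted(self.weights.items(), key=lambda kv: -kv[1])

    def correct(self, topic, weight):
        # Let the user fix a mistaken inference directly.
        if weight <= 0:
            self.weights.pop(topic, None)  # "stop recommending this"
        else:
            self.weights[topic] = weight

    def score(self, item_topics):
        # Linear scorer: every score decomposes into visible topic weights.
        return sum(self.weights.get(t, 0.0) for t in item_topics)


profile = InterestProfile({"jazz": 0.9, "polka": 0.7})
profile.correct("polka", 0)             # user removes a wrong inference
print(profile.score({"jazz", "rock"}))  # 0.9 — entirely from "jazz"
```

Because the model is just visible weights, "be transparent" and "allow correction" cost one method each; soliciting more signal only needs to add one more weight.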
Social Recommendations
1) Identify the right set of similar users.
2) Allow users to manipulate the social lens.
3) Accommodate users who break your model.
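A minimal sketch of the "social lens" idea: standard user-user collaborative filtering, except the user, not the system, chooses which neighbors count. The data and names here are invented for illustration.

```python
# Hypothetical sketch of a user-adjustable "social lens" for
# user-user collaborative filtering. Ratings are illustrative.

ratings = {
    "ann":   {"dune": 5, "alien": 4},
    "bob":   {"dune": 4, "heat": 5},
    "carol": {"heat": 1, "fargo": 4},
}


def recommend(user, lens, k=2):
    """Score unseen items by averaging ratings from the user's chosen lens."""
    seen = set(ratings[user])
    pooled = {}
    for neighbor in lens:                      # only user-selected neighbors
        for item, r in ratings[neighbor].items():
            if item not in seen:
                pooled.setdefault(item, []).append(r)
    avg = {item: sum(rs) / len(rs) for item, rs in pooled.items()}
    return sorted(avg, key=avg.get, reverse=True)[:k]


print(recommend("ann", lens=["bob"]))           # ['heat']
print(recommend("ann", lens=["bob", "carol"]))  # ['fargo', 'heat']
```

Widening the lens from `["bob"]` to `["bob", "carol"]` reorders the results, which is exactly the manipulation the slide calls for: the user sees how the choice of "similar users" changes what is recommended.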
Item Recommendations
1) Explain recommendations to users.
2) Watch out for non-sequiturs (e.g., diapers -> beer).
3) Play well with user-controlled filtering and sorting.
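All three points fit in one small sketch: item-to-item recommendations that carry their own explanation, with a support threshold that filters out non-sequitur associations. The co-purchase counts and the `min_support` parameter are illustrative assumptions, not real data.

```python
# Hypothetical sketch: co-purchase recommendations that explain themselves.
# Counts and thresholds are illustrative.

# Co-purchase counts: (bought, also_bought) -> count.
co_counts = {
    ("diapers", "beer"): 40,
    ("diapers", "wipes"): 120,
    ("camera", "sd_card"): 90,
}


def recommend(basket, min_support=50):
    """Return (item, count, explanation) triples for items in the basket.

    min_support filters weak associations that read as non-sequiturs."""
    recs = []
    for (bought, also), count in co_counts.items():
        if bought in basket and count >= min_support:
            recs.append((also, count, f"because you bought {bought}"))
    return sorted(recs, key=lambda r: -r[1])


for item, count, why in recommend({"diapers"}):
    print(item, why)   # wipes is shown; beer is filtered as weak evidence
```

Because each result carries its count and explanation, downstream user-controlled sorting and filtering can operate on those fields directly instead of on an opaque score.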
2) Examples
Initial User Experience
“It just takes 2 minutes…”
Asking Before Guessing
Let’s try some answers:
Uh oh…
Expressing my gustibus…
New Star Trek = Yes; New Star Wars = No
Testing my patience…
Bring on the quality!
And continue the conversation.
Learning from Netflix
1) Ask the user for help up front. But not too much help.
2) Pay attention to what the user tells you!
3) Give users value early and often.
75% of Netflix views result from recommendations
Initial User Experience
Seed with an artist…
Or track or genre.
Goo Goo G'joob!
Ease user into recommendation space…
And go wild!
Shared Product: Personalized Stream
Positive and Negative Feedback
Learning from Pandora
1) Get meaningful input from user in one step.
2) Explain recommendations to users.
3) Solicit feedback and act on it immediately.
My home page…
Explanations and Humility
Explain What and Why
Recommendations as a Starting Point
Learning from Amazon
1) Show the factors that drive your conclusions.
2) Distinguish different kinds of recommendations.
3) Combine recommendations with user control.
Amazon: 35% of sales result from recommendations
3) Action Items
Increase explainability.
Explanations can be even more important than the recommendations themselves.
Herlocker et al., “Explaining collaborative filtering recommendations” [2000]
Sinha and Swearingen, “The role of transparency in recommender systems” [2002]
Tintarev and Masthoff, “Effective explanations of recommendations: User-centered design” [2007]
(via Òscar Celma’s book, Music Recommendation and Discovery: The Long Tail, Long Fail, and Long Play in the Digital Music Space)
Some models are more explainable than others.
1) Consider decision trees and rule-based systems.
2) Avoid using latent, unlabeled features.
3) If the model is opaque, use examples as surrogates.
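Point 3 can be sketched concretely: when a model recommends from latent, unlabeled vectors, explain the recommendation by showing the nearest items the user already liked. The item vectors and names below are invented for illustration; only the "examples as surrogates" pattern is the point.

```python
# Hypothetical sketch of "examples as surrogates" for an opaque model.
# Latent vectors are illustrative stand-ins for real model output.

import math


def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm


# Latent item vectors from some opaque model (e.g., matrix factorization).
item_vecs = {
    "blade_runner": [0.9, 0.1],
    "alien":        [0.8, 0.2],
    "notting_hill": [0.1, 0.9],
}


def surrogate_explanation(recommended, liked_items, k=1):
    """Explain `recommended` via its k most similar items the user liked."""
    nearest = sorted(
        liked_items,
        key=lambda i: cosine(item_vecs[recommended], item_vecs[i]),
        reverse=True,
    )
    return f"Recommended because you liked {', '.join(nearest[:k])}"


print(surrogate_explanation("blade_runner", ["alien", "notting_hill"]))
# Recommended because you liked alien
```

The latent dimensions never reach the user; the explanation is stated entirely in terms of items the user already knows.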
Make a good first impression.
Your user’s first experience is critical.
Use popularity as a default if it makes sense.
Solicit one valuable piece of information as quickly and painlessly as possible.
“Do you like the taste of beer?”
http://blog.okcupid.com/index.php/the-best-questions-for-first-dates/
Design feedback into your system.
You can make mistakes, if users can easily fix them.
Challenging if models use offline computation.
Respond instantly; generalize as quickly as possible.
Agarwal and Chen, “Machine Learning for Large Scale Recommender Systems” [ICML 2011 Tutorial]
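The "respond instantly even when the model is offline" idea can be sketched as a thin online adjustment layer over batch scores. The scores and the boost/suppress values below are illustrative assumptions, not from any real system.

```python
# Hypothetical sketch: instant feedback via an online adjustment layer
# on top of offline (batch-computed) scores. All values are illustrative.

offline_scores = {"song_a": 0.8, "song_b": 0.7, "song_c": 0.6}  # batch model
adjustments = {}                                                # online layer


def feedback(item, thumbs_up):
    # Apply the user's signal immediately; don't wait for retraining.
    adjustments[item] = 0.5 if thumbs_up else -1.0


def ranked():
    # Combine slow offline scores with fast online adjustments.
    scores = {i: s + adjustments.get(i, 0.0) for i, s in offline_scores.items()}
    return sorted(scores, key=scores.get, reverse=True)


feedback("song_a", thumbs_up=False)   # user thumbs-down the top item
print(ranked())  # song_a drops immediately: ['song_b', 'song_c', 'song_a']
```

Generalizing the signal (demoting items similar to the rejected one) can then happen asynchronously at the next offline retraining, without blocking the instant response.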
Integrate recommendations with search.
Recommend next steps, not just items.
In a task context, recommendations are just another source of information scent.
Be careful in integrating offline recommendations with online features like search and navigation.
Pirolli, Information Foraging Theory: Adaptive Interaction with Information [2007]
Summary
Recommendations are a conversation with the user.
1) Consider asking vs. guessing.
2) Ask good questions.
3) It's ok to make mistakes…
if you have a good explanation
and adapt to feedback.