
Page 1: Preference Elicitation Interface

Tutorial on Preference Elicitation and Interface Design

by Xin Xia (xinxia3) and Xiaoyu Meng (xmeng16)

Prepared as an assignment for CS410: Text Information Systems in Spring 2016

Page 2: Preference Elicitation Interface


Introduction

Page 3: Preference Elicitation Interface

Background

• Recommender systems:
  – Seek to predict the 'rating' or 'preference' that a user would give to an item

• Collaborative filtering:
  – A widely used approach in recommender systems

• Assumptions:
  – People who agreed in the past will agree in the future
  – They will like the same kinds of items they liked in the past

Page 4: Preference Elicitation Interface

One Problem of Collaborative Filtering

• Cold-start problem:
  – When a new user comes along
  – The recommender system knows nothing about him/her

• Use preference elicitation systems to gather enough information:
  – Ask users questions to learn their preferences
  – To make good recommendations
  – To minimize users' effort

Page 5: Preference Elicitation Interface

How to Design Good Preference Elicitation Systems?

• Design efficient and user-friendly interfaces
• Choose proper questions to ask a new user in order to get enough information

Page 6: Preference Elicitation Interface

Design Space of User Interfaces

Page 7: Preference Elicitation Interface

Some Facts about Users

• Why do people rate online?
  – To be responsible
  – To keep a record for future reference
  – To help the system improve

• People do not understand rating scales well
• People avoid the extremely good or bad ends of the scale, and this tendency can be noise for the system

Page 8: Preference Elicitation Interface

How Do Users State Their Preferences?

• Two methods: rating vs. ranking
• Ranking is better when the goal is to choose an item
• Rating is better when the goal is to categorize items
• Ranking is more consistent
• Rating is less demanding and does not require a lot of concentration

Page 9: Preference Elicitation Interface

How Interfaces Can Affect Users' Opinions of Items

• Possible problems [10]:
  – Altered opinions can provide less accurate preference information and lead to less accurate predictions
  – Altered opinions can make it hard to evaluate recommender systems
  – People with bad intentions can take advantage of this to mislead users into unusual ratings

Page 10: Preference Elicitation Interface

How Interfaces Can Affect Users' Opinions of Items

• Experiments with MovieLens:
  – Re-rate movies while showing predictions to users
  – Manipulate predictions for unrated movies
  – Re-rate movies with different scales

• Results:
  – Users are consistent in the re-rating part
  – Users enjoy "finer-grained" scales the best
  – Recommender systems do affect users' ratings
  – Predictions shown don't change users' opinions
  – Systems are self-correcting
  – Users are sensitive to the manipulated predictions

• So, designing efficient and user-friendly interfaces is important!

Page 11: Preference Elicitation Interface

One Design Example: Occasionally Connected Recommender System

• A different scenario from today's life
• It foresees what mobile movie-related apps look like nowadays
• Key ideas of the design [9]:
  – Trust
  – Transparent logic
  – Details, including images and ratings
  – Allowing users to refine recommendations

Page 12: Preference Elicitation Interface

How to Make the Rating Process More Efficient?

• List (ranking type)
• Binary (ranking type)
• Stars plus history (rating type)
• Users like stars plus history the most [7], and they do not want direct comparison

Page 13: Preference Elicitation Interface

How to Make the Rating Process More Consistent?

• Tag and exemplar interfaces [8]
• The exemplar interface has the lowest root mean square error (RMSE), the lowest minimum RMSE, and the least amount of natural noise

Page 14: Preference Elicitation Interface

Approaches to Choosing Proper Examples

Page 15: Preference Elicitation Interface

Several Combined Strategies

• Pop*Ent:
  – A combination of popularity and entropy (see the sketch below)

• Item-Item strategy:
  – Before the user has rated any presented movie, use any other strategy to select movies
  – Then the recommender calculates the similarity between items to select other items
  – Update the list of similar movies as the user rates more movies
  – Disadvantage: a user seeing an item is correlated with liking that item [3]

[3] Rashid, A. M., Albert, I., Cosley, D., Lam, S. K., McNee, S. M., Konstan, J. A., & Riedl, J. (2002, January). Getting to know you: learning new user preferences in recommender systems. In Proceedings of the 7th International Conference on Intelligent User Interfaces (pp. 127-134). ACM.
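To make the Pop*Ent idea concrete, here is a minimal sketch (assuming a hypothetical `item_ratings` mapping of item ids to rating lists) that scores each item by the product of its log-popularity and the Shannon entropy of its rating distribution; the exact weighting used in [3] may differ.

```python
import math
from collections import Counter

def entropy(ratings):
    """Shannon entropy (base 2) of an item's rating distribution."""
    counts = Counter(ratings)
    total = len(ratings)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def pop_ent(item_ratings):
    """Rank items by log(popularity) * entropy; higher = more informative to ask about."""
    scores = {}
    for item, ratings in item_ratings.items():
        if ratings:
            scores[item] = math.log(len(ratings) + 1) * entropy(ratings)
    return sorted(scores, key=scores.get, reverse=True)

# Toy data: a widely rated, divisive movie beats a uniformly loved niche film.
items = {
    "blockbuster": [1, 5, 1, 5, 3, 5, 1, 4, 2, 5],
    "niche_gem":   [5, 5, 5],
}
print(pop_ent(items))  # ['blockbuster', 'niche_gem']
```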

Page 16: Preference Elicitation Interface

Improved Strategies Based on Entropy

• Entropy0: entropy considering missing values
  – Treats the missing evaluations as a separate category of evaluation
  – Disadvantage: favors frequently-rated items too much
  – Improvement: assign less weight to the category of missing evaluations

• Harmonic mean of Entropy and Logarithm of Frequency (HELF)
  – Combines entropy (variability) and frequency (popularity)
  – Uses a harmonic mean, the same way the F-measure combines precision and recall scores
  – Uses a logarithm to transform the exponential-like curve of rating frequency into a linear-like curve (see the sketch below)

• Information Gain through Clustered Neighbors (IGCN)
  – Takes the user's rating history into account
  – Repeatedly computes the information gain of items
  – The rating data comes only from the users who match the new user's profile best so far
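A minimal sketch of the HELF score: normalize the log-frequency and the entropy of each item to [0, 1], then take their harmonic mean. The normalization constants and data format here are illustrative assumptions rather than the exact definitions in [2].

```python
import math
from collections import Counter

def norm_entropy(ratings, scale=5):
    """Shannon entropy of the rating distribution, normalized to [0, 1]."""
    counts = Counter(ratings)
    total = len(ratings)
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return h / math.log2(scale)  # max entropy on a 5-point scale is log2(5)

def helf(item_ratings, num_users):
    """Harmonic mean of normalized log-frequency and normalized entropy (assumed forms)."""
    scores = {}
    for item, ratings in item_ratings.items():
        lf = math.log(len(ratings)) / math.log(num_users) if len(ratings) > 1 else 0.0
        h = norm_entropy(ratings)
        scores[item] = 2 * lf * h / (lf + h) if (lf + h) > 0 else 0.0
    return scores

items = {
    "popular_but_bland": [4, 4, 4, 4, 4, 4, 4, 4],  # high frequency, zero entropy
    "rare_but_divisive": [1, 5],                     # low frequency, high entropy
    "popular_divisive":  [1, 5, 1, 5, 2, 4, 1, 5],   # high on both: best to ask about
}
scores = helf(items, num_users=100)
print(max(scores, key=scores.get))  # popular_divisive
```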

Page 17: Preference Elicitation Interface

Improved Strategy: "Group of Items" [4]

• Generate the groups:
  – Employ a spectral clustering algorithm
• Describe the groups:
  – Pick representative tags first, then find relevant items
• Preference elicitation step:
  – The user allocates a fixed total of points to one or more movie groups
• The recommender system recommends items based on the user's "favorite" cluster:
  – Find the users who have the highest ratings for the items in that "favorite" cluster
  – Generate a pseudo rating profile (see the sketch below)

Page 18: Preference Elicitation Interface

Stimulate Users' Preferences by Examples

• Tweaking:
  – Let users state preferences with respect to a current example
• Example:
  – "Look for an apartment similar to this, but with a better ambience."
• The user states his/her "target choice" by navigating the current best option to something even better
• Application:
  – FindMe systems

[1] Pu, P., & Chen, L. (2009). User-involved preference elicitation for product search and recommender systems. AI Magazine, 29(4), 93.

Page 19: Preference Elicitation Interface

Stimulate Users' Preferences by Examples

• Explicit preference model:
  – Maintain an explicit model
  – By critiquing examples, the user states additional preferences, and the system accumulates these preferences in the model (see the sketch below)
• Advantages:
  – Avoids recommending products that have already been declined by the user
  – The system can suggest products whose preferences are still missing from the stated model

[1] Pu, P., & Chen, L. (2009). User-involved preference elicitation for product search and recommender systems. AI Magazine, 29(4), 93.
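A toy sketch of such an explicit preference model: each critique adds a constraint, declined items are remembered, and candidates are re-ranked by how many stated constraints they satisfy. The class and constraint format are hypothetical; [1] describes the interaction pattern, not this code.

```python
class PreferenceModel:
    """Accumulates preferences stated through critiques of shown examples (toy model)."""

    def __init__(self):
        self.constraints = []   # list of (attribute, predicate) pairs
        self.declined = set()   # item ids the user has explicitly declined

    def critique(self, attribute, predicate):
        """Record a critique such as ('price', lambda p: p < 1500)."""
        self.constraints.append((attribute, predicate))

    def decline(self, item_id):
        self.declined.add(item_id)

    def rank(self, catalog):
        """Rank candidates by number of satisfied constraints; skip declined items."""
        def score(item):
            return sum(pred(item[attr]) for attr, pred in self.constraints)
        candidates = [i for i in catalog if i["id"] not in self.declined]
        return sorted(candidates, key=score, reverse=True)

# Usage: the user critiques a shown apartment ("cheaper", "better ambience").
model = PreferenceModel()
model.critique("price", lambda p: p < 1500)
model.critique("ambience", lambda a: a >= 4)
model.decline("apt-7")
catalog = [
    {"id": "apt-7", "price": 1400, "ambience": 5},
    {"id": "apt-2", "price": 1300, "ambience": 4},
    {"id": "apt-9", "price": 1800, "ambience": 2},
]
print([i["id"] for i in model.rank(catalog)])  # ['apt-2', 'apt-9']
```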

Page 20: Preference Elicitation Interface

Preference Revision: Dealing with Preference Conflicts

• Partial constraint satisfaction techniques:
  – Show partially satisfied results, with the compromises clearly explained to the user
• Browsing-based interaction techniques:
  – Disadvantages:
    • The set of matching products can suddenly become empty once the user enters all the preferences
    • The user cannot know which attributes are in conflict
• Partial constraint satisfaction techniques are better for revising preferences (see the sketch below)

[1] Pu, P., & Chen, L. (2009). User-involved preference elicitation for product search and recommender systems. AI Magazine, 29(4), 93.
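A minimal sketch of the partial-constraint-satisfaction idea: instead of hard filtering, which can leave zero matches, score every item by the constraints it satisfies and report which ones each result compromises. The constraint representation is an assumption for illustration.

```python
def partial_match(catalog, constraints):
    """Rank items by satisfied constraints and explain the compromises.

    constraints: dict attribute -> predicate (hypothetical format).
    Returns (item, violated_attributes) pairs, best matches first, so the
    user can see exactly which preferences each result compromises.
    """
    results = []
    for item in catalog:
        violated = [attr for attr, pred in constraints.items() if not pred(item[attr])]
        results.append((item, violated))
    return sorted(results, key=lambda pair: len(pair[1]))

constraints = {
    "price":    lambda p: p <= 1000,
    "bedrooms": lambda b: b >= 2,
    "parking":  lambda v: v,
}
catalog = [
    {"id": "A", "price": 950,  "bedrooms": 1, "parking": True},
    {"id": "B", "price": 1200, "bedrooms": 2, "parking": True},
]
for item, violated in partial_match(catalog, constraints):
    print(item["id"], "compromises:", violated or "none")
# A compromises: ['bedrooms']
# B compromises: ['price']
```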

Page 21: Preference Elicitation Interface

Trade-off Assistance

• When a user considers an item to be the final option, trade-off assistance can help him/her reach higher decision accuracy by giving the user several trade-off alternatives
  – Two types:
    • System-proposed trade-off support
    • User-motivated trade-off method
  – Advantage: higher decision accuracy with less cognitive effort

[1] Pu, P., & Chen, L. (2009). User-involved preference elicitation for product search and recommender systems. AI Magazine, 29(4), 93.

Page 22: Preference Elicitation Interface


Evaluation

Page 23: Preference Elicitation Interface

How Much Information Is in Ratings?

• A preference bits framework can reduce noise in ratings and help evaluate how much preference information ratings and predictions hold [6] (see the sketch below)

[6] Kluver, D., Nguyen, T. T., Ekstrand, M., Sen, S., & Riedl, J. (2012, September). How many bits per rating? In Proceedings of the Sixth ACM Conference on Recommender Systems (pp. 99-106). ACM.
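As a toy illustration (not the estimator actually used in [6]), one can view a rating as the output of a noisy channel and measure bits per rating as the mutual information between a user's underlying preference and the rating they give:

```python
import math
from collections import Counter

def bits_per_rating(pairs):
    """Mutual information (bits) between true preference and observed rating.

    pairs: list of (preference, rating) observations (toy data below).
    A toy estimate of how much a rating tells us about preference,
    in the spirit of [6], not the paper's actual estimator.
    """
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(p for p, _ in pairs)
    py = Counter(r for _, r in pairs)
    mi = 0.0
    for (x, y), c in joint.items():
        pxy = c / n
        mi += pxy * math.log2(pxy / ((px[x] / n) * (py[y] / n)))
    return mi

# Noiseless ratings carry full preference information; noisy ones carry less.
noiseless = [("like", 5), ("like", 5), ("dislike", 1), ("dislike", 1)]
noisy     = [("like", 5), ("like", 1), ("dislike", 1), ("dislike", 1)]
print(bits_per_rating(noiseless))  # 1.0 bit
print(bits_per_rating(noisy))      # ~0.31 bits
```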

Page 24: Preference Elicitation Interface

Preference Bits Per Rating Increases as the Size of the Rating Scale Increases

• There is a sweet spot in scale size
• Comparison between a 5-point scale and a 2-point scale
• Quality and quantity trade-off

Page 25: Preference Elicitation Interface

Preference Bits vs. Other Metrics

• Preference bits is scale-free
• Preference bits, mean absolute error (MAE), and root mean square error (RMSE) provide similar evaluations (see the sketch below)
• Preference bits has different characteristics and can tell the difference between scales more easily
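For reference, MAE and RMSE are the two classical accuracy metrics mentioned above; a minimal implementation:

```python
import math

def mae(predicted, actual):
    """Mean absolute error: average magnitude of prediction errors."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

def rmse(predicted, actual):
    """Root mean square error: like MAE, but penalizes large errors more."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

predicted = [3.5, 4.0, 2.0, 5.0]
actual    = [4,   4,   1,   3  ]
print(mae(predicted, actual))   # 0.875
print(rmse(predicted, actual))  # ~1.146
```

Both depend on the units of the rating scale, which is one reason a scale-free metric such as preference bits compares different scales more directly.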

Page 26: Preference Elicitation Interface

Evaluation of Example-Choosing Strategies

• User effort:
  – How hard was it to sign up?
• Recommendation accuracy:
  – How well can the system make recommendations to the user?
• Why choose these two properties:
  – They are easy to measure both offline and online

[2] Rashid, A. M., Karypis, G., & Riedl, J. (2008). Learning preferences of new users in recommender systems: an information theoretic approach. ACM SIGKDD Explorations Newsletter, 10(2), 90-100.

Page 27: Preference Elicitation Interface

Future Work

• How to improve prediction accuracy will remain one of the main interests in the study of recommender systems
• Better investigation of the system's need for diverse ratings across all items
• How to balance user effort and recommendation accuracy
• Using users' information from social networking platforms such as Facebook and Twitter to acquire their preferences
• More meaningful scales and better algorithms need to be discovered
• Opinion expression and representation interfaces still have various aspects that haven't been discussed

Page 28: Preference Elicitation Interface

References

[1] Pu, P., & Chen, L. (2009). User-involved preference elicitation for product search and recommender systems. AI Magazine, 29(4), 93.

[2] Rashid, A. M., Karypis, G., & Riedl, J. (2008). Learning preferences of new users in recommender systems: an information theoretic approach. ACM SIGKDD Explorations Newsletter, 10(2), 90-100.

[3] Rashid, A. M., Albert, I., Cosley, D., Lam, S. K., McNee, S. M., Konstan, J. A., & Riedl, J. (2002, January). Getting to know you: learning new user preferences in recommender systems. In Proceedings of the 7th International Conference on Intelligent User Interfaces (pp. 127-134). ACM.

[4] Chang, S., Harper, F. M., & Terveen, L. (2015). Using groups of items for preference elicitation in recommender systems. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing (CSCW '15). ACM, New York, NY.

[5] McNee, S. M., Lam, S. K., Konstan, J. A., & Riedl, J. (2003). Interfaces for eliciting new user preferences in recommender systems. In User Modeling 2003 (pp. 178-187). Springer Berlin Heidelberg.

[6] Kluver, D., Nguyen, T. T., Ekstrand, M., Sen, S., & Riedl, J. (2012, September). How many bits per rating? In Proceedings of the Sixth ACM Conference on Recommender Systems (pp. 99-106). ACM.

[7] Nobarany, S., Oram, L., Rajendran, V. K., Chen, C. H., McGrenere, J., & Munzner, T. (2012, May). The design space of opinion measurement interfaces: exploring recall support for rating and ranking. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 2035-2044). ACM.

[8] Nguyen, T. T., Kluver, D., Wang, T. Y., Hui, P. M., Ekstrand, M. D., Willemsen, M. C., & Riedl, J. (2013, October). Rating support interfaces to improve user experience and recommender accuracy. In Proceedings of the 7th ACM Conference on Recommender Systems (pp. 149-156). ACM.

[9] Miller, B. N., Albert, I., Lam, S. K., Konstan, J. A., & Riedl, J. (2003, January). MovieLens unplugged: experiences with an occasionally connected recommender system. In Proceedings of the 8th International Conference on Intelligent User Interfaces (pp. 263-266). ACM.

[10] Cosley, D., Lam, S. K., Albert, I., Konstan, J. A., & Riedl, J. (2003, April). Is seeing believing? How recommender system interfaces affect users' opinions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 585-592). ACM.