Recommendation Engines: An Introduction
Tom Rampley

Slide 1: Recommendation Engines: An Introduction (Tom Rampley)

Slide 2: A Brief History of Recommendation Engines
- Today: Recommenders become core products. In addition to Amazon, companies like Pandora, Stitch Fix, and Google (because what is a search engine other than a document recommender?) make recommendations a core value-add of their services.
- 2000: Amazon joins the party. The introduction and vast success of the Amazon recommendation engine in the early 2000s led to wide acceptance of the technology as a way of increasing sales.
- 1992: Recommenders are older than you might think. GroupLens becomes the first widely used recommendation engine.

Slide 3: What Does a Recommender Do?
- Recommendation engines use algorithms of varying complexity to suggest items based upon historical information, such as item ratings or content and past user behavior/purchase history.
- Recommenders typically use some form of collaborative filtering.

Slide 4: Collaborative Filtering
- The name: "collaborative" because the algorithm takes the choices of many users into account to make a recommendation, relying on similarity of user taste; "filtering" because the preferences of other users are used to filter out the items most likely to be of interest to the current user.
- Collaborative filtering algorithms include: k-nearest neighbors, cosine similarity, Pearson correlation, Bayesian belief nets, Markov decision processes, latent semantic indexing methods, and association rule learning.

Slide 5: Cosine Similarity Example
- Let's walk through an example of a simple collaborative filtering algorithm, namely cosine similarity.
- Cosine similarity can be used to find similar items or similar individuals. In this case, we'll be trying to identify individuals with similar taste.
- Imagine individual ratings on a set of items as a [user, item] matrix. You can then treat the ratings of each individual as an N-dimensional vector of ratings on items: {r_1, r_2, ..., r_N}.
- The similarity of two vectors (two individuals' ratings) can be computed as the cosine of the angle between them: cos(θ) = (A · B) / (‖A‖ ‖B‖).
- The closer the cosine is to 1, the more alike the two individuals' ratings are.

Slide 6: Cosine Similarity Example Continued
- Let's say we have the following matrix of users and ratings of TV shows:

              True Blood  CSI  JAG  Star Trek  Castle  The Wire  Twin Peaks
    Bob                5    2    1          4       3         2           5
    Mary               4    4    2          1       3         1           2
    Jim                1    1    5          2       5         2           3
    George             3    4    3          5       5         4           3
    Jennifer           5    2    4          2       4         1           0
    Natalie            0    5    0          4       4         1           4
    Robin              5    5    0          0       4         2           2

- And we encounter a new user, James, who has only seen and rated 5 of these 7 shows:

              True Blood  CSI  JAG  Star Trek  Castle
    James              5    5    3          1       0

- Of the two remaining shows, which one should we recommend to James?

Slide 7: Cosine Similarity Example Continued
- To find out, we'll see who James is most similar to among the folks who have rated all the shows, by calculating the cosine similarity between each individual's ratings and James's ratings on the 5 shows they have in common:

              Cosine similarity with James
    Bob                 0.73
    Mary                0.89
    Jim                 0.47
    George              0.69
    Jennifer            0.78
    Natalie             0.50
    Robin               0.79

- It seems that Mary is the closest to James in terms of show ratings among the group.
- Of the two remaining shows, The Wire and Twin Peaks, Mary slightly preferred Twin Peaks, so that is what we recommend to James.
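The arithmetic behind this table is easy to reproduce. Below is a minimal Python sketch (not from the original deck) that recomputes each viewer's cosine similarity with James using only the five shows James has rated; the ratings are the ones in the Slide 6 matrix.

```python
import math

# Ratings on the five shows James has rated, in this order:
# True Blood, CSI, JAG, Star Trek, Castle (taken from the Slide 6 matrix)
ratings = {
    "Bob":      [5, 2, 1, 4, 3],
    "Mary":     [4, 4, 2, 1, 3],
    "Jim":      [1, 1, 5, 2, 5],
    "George":   [3, 4, 3, 5, 5],
    "Jennifer": [5, 2, 4, 2, 4],
    "Natalie":  [0, 5, 0, 4, 4],
    "Robin":    [5, 5, 0, 0, 4],
}
james = [5, 5, 3, 1, 0]

def cosine_similarity(a, b):
    """cos(theta) = (a . b) / (||a|| * ||b||); closer to 1 means more alike."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

similarities = {name: cosine_similarity(james, vec) for name, vec in ratings.items()}
for name, sim in sorted(similarities.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:9s} {sim:.2f}")   # Mary comes out on top at ~0.89
```

Transposing the matrix, so that items are the vectors and users are the dimensions, turns the same routine into the item-to-item similarity described on the next slide.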
Slide 8: Collaborative Filtering Continued
- This simple cosine similarity example can be extended to extremely large datasets with hundreds or thousands of dimensions.
- You can also compute item-to-item similarity by treating the items as the vectors for which you're computing similarity, and the users as the dimensions. This allows for recommending similar items to a user after they've made a purchase.
- Amazon uses a variant of this algorithm; it is an example of item-to-item collaborative filtering.

Slide 9: Adding ROI to the Equation: An Example with Naïve Bayes
- When recommending products, some may generate more margin for the firm than others.
- Some algorithms can take cost into account when making recommendations.
- Naïve Bayes is a commonly used classifier that allows the marginal value of a product sale to be included in the recommendation decision.

Slide 10: Naïve Bayes
- Bayes' theorem tells us the probability of our beliefs being true given prior beliefs and evidence.
- Naïve Bayes is a classifier that uses Bayes' theorem (with simplifying assumptions) to generate the probability of an instance belonging to a class.
- Class likelihood can be combined with the payoff per sale to find the recommendation with the highest expected payoff.

Slide 11: Naïve Bayes Continued
- How does the NB algorithm generate class probabilities, and how can we use its output to maximize expected payoff?
- Let's say we want to figure out which of two products to recommend to a customer.
- Each product generates a different amount of profit for our firm per unit sold.
- We know the target customer's past purchasing behavior, and we know the past purchasing behavior of twelve other customers who have bought one of the two potential recommendation products.
- Let's represent our knowledge as a series of matrices and vectors.

Slide 12: Naïve Bayes Continued

Slide 13: Naïve Bayes Continued
- NB uses (independent) probabilities of events to generate class probabilities.
- Using Bayes' theorem (and ignoring the scaling constant), the probability of a customer with past purchase history x_1, ..., x_i (a vector of past purchases) buying item j is proportional to P(x_1, ..., x_i | C_j) * P(C_j).
- Here P(C_j) is the frequency with which item j appears in the training data, and P(x_1, ..., x_i | C_j) is the product of the individual P(x_i | C_j) terms over all i items in the purchase history.
- That P(x_1, ..., x_i | C_j) = Π_i P(x_i | C_j) depends on the assumption of conditional independence between past purchases.

Slide 14: Naïve Bayes Continued
- In our example, we can calculate the following probabilities:

Slide 15: Naïve Bayes Continued
- Now that we can calculate P(x_1, ..., x_i | C_j) * P(C_j) for all instances, let's figure out the most likely boat purchase for Eric:

                 Toys        Games  Candy     Books         Boat
    Eric bought  Squirt Gun  Life   Snickers  Harry Potter  ?

                 P(boat)  P(Squirt Gun | boat)  P(Life | boat)  P(Snickers | boat)  P(Harry Potter | boat)  Product
    Sailboat     6/12     3/12                  2/12            2/12                3/12                    0.00086806
    Speedboat    6/12     1/12                  2/12            1/12                1/12                    0.00004823

- These probabilities may seem very low, but recall that we left out the scaling constant in Bayes' theorem, since we're only interested in the relative probabilities of the two outcomes.

Slide 16: Naïve Bayes Continued
- So it seems like the sailboat is a slam dunk to recommend; it's much more likely (18 times!) that Eric will buy it than the speedboat.
- But let's consider a scenario: say our hypothetical firm generates $20 of profit whenever a customer buys a speedboat, but only $1 when they buy a sailboat (outboard motors are apparently very high margin).
- In that case, it would make more sense to recommend the speedboat, because the expected payoff from the speedboat recommendation would be about 11% greater ($20/$1 * 0.0000048/0.00087) than the expected payoff from the sailboat recommendation.
- This logic can be applied to any number of products by multiplying the set of purchase probabilities by the set of purchase payoffs and taking the item with the maximum value as the recommendation.
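As a rough illustration (the deck itself contains no code), the Python sketch below recomputes the two unnormalized Naïve Bayes scores from Slide 15 and then weights them by the per-unit profits assumed on Slide 16, recommending the item with the highest expected payoff.

```python
from functools import reduce

# Unnormalized Naive Bayes score: P(x_1, ..., x_i | C_j) * P(C_j),
# using the fractions from Slide 15 (the scaling constant is ignored, as on the slide).
def nb_score(prior, conditionals):
    return reduce(lambda acc, p: acc * p, conditionals, prior)

scores = {
    # prior, then P(Squirt Gun|boat), P(Life|boat), P(Snickers|boat), P(Harry Potter|boat)
    "sailboat":  nb_score(6 / 12, [3 / 12, 2 / 12, 2 / 12, 3 / 12]),   # ~0.00086806
    "speedboat": nb_score(6 / 12, [1 / 12, 2 / 12, 1 / 12, 1 / 12]),   # ~0.00004823
}

# Profit per unit sold, from the Slide 16 scenario.
profit = {"sailboat": 1.0, "speedboat": 20.0}

# Expected payoff = purchase probability * payoff; recommend the maximum.
expected = {item: scores[item] * profit[item] for item in scores}
recommendation = max(expected, key=expected.get)
print(scores)          # relative purchase probabilities
print(expected)        # sailboat ~0.00087, speedboat ~0.00096
print(recommendation)  # 'speedboat' once margin is taken into account
```

With the $20 versus $1 margins, the speedboat's expected payoff comes out roughly 11% higher even though its purchase probability is about 18 times lower.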
Slide 17: Challenges
- While recommendation algorithms are in many cases relatively simple as machine learning goes, there are a few difficult problems that all recommenders must deal with:
- Cold start problem: how do you make recommendations to someone for whom you have very little or no data?
- Data sparsity: with millions of items for sale, most customers have bought very few individual items.
- Grey and black sheep problem: some people have very idiosyncratic taste, and making recommendations to them is extremely difficult because they don't behave like other customers.

Slide 18: Dealing With Cold Start
- Cold start is typically only a problem in the very early stages of a user-system interaction.
- Requiring new users to create a profile can mitigate the problem to a certain extent, by making early recommendations contingent upon the supplied personal data.
- A recommender system can also start out using item-item recommendations based upon the first items a user buys, and gradually change over to a person-person system as it learns the user's taste.

Slide 19: Dealing With Data Sparsity
- Data sparsity can be dealt with primarily by two methods: data imputation and latent factor methods.
- Data imputation typically uses an algorithm like cosine similarity to impute the rating of an item based upon the ratings of similar users.
- Latent factor methods typically use some sort of matrix decomposition to reduce the rank of the large, sparse matrix while simultaneously adding ratings for unrated items based upon latent factors.

Slide 20: Dealing With Data Sparsity
- Techniques like principal component analysis and singular value decomposition allow for the creation of low-rank approximations to sparse matrices with relatively little loss of information (a brief sketch follows at the end of this section).

Slide 21: Dealing With Sheep of Varying Darkness
- To a large extent, these cases are unavoidable.
- Post-purchase feedback on recommended items, as well as the purchase rate of recommended items, can be used to learn even very idiosyncratic preferences, but this takes longer than for a typical user.
- Grey and black sheep are doubly troublesome because their odd tendencies can also weaken the engine's ability to make recommendations to the broad population of white sheep.
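To make the latent factor idea from the data sparsity slides concrete, here is a minimal Python sketch of the low-rank approximation step, assuming NumPy is available; the ratings matrix and the choice of rank k = 2 are invented for illustration and are not from the deck.

```python
import numpy as np

# A small, sparse-ish user x item ratings matrix (0 = unrated); the values are made up.
R = np.array([
    [5, 4, 0, 1, 0],
    [4, 0, 0, 1, 1],
    [1, 1, 0, 5, 4],
    [0, 1, 5, 4, 0],
    [0, 0, 4, 0, 5],
], dtype=float)

# Truncated SVD: keep only the k largest singular values to form a low-rank approximation.
k = 2
U, s, Vt = np.linalg.svd(R, full_matrices=False)
R_approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# The dense approximation now contains estimates even where R had zeros (unrated items),
# which can be read as imputed ratings driven by the k latent factors.
print(np.round(R_approx, 2))
```

A full recommender would fit the factors only to observed ratings rather than treating the zeros as true values, but the truncated SVD captures the basic idea of compressing a sparse matrix into a few latent factors.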