DESCRIPTION
Algorithmically matching items to users in a given context is essential for the success and profitability of large-scale recommender systems in content optimization, computational advertising, search, shopping, movie recommendation, and many more. The objective is to maximize some utility of interest (e.g., total revenue, total engagement) over a long time horizon. This is a bandit problem, since there is positive utility in displaying items that may have low mean but high variance. A key challenge in such bandit problems is the curse of dimensionality. Bandit problems are also difficult when responses are observed with considerable delay (e.g., return visits, confirmation of a purchase). One approach is to optimize multiple competing objectives in the short term to achieve the best long-term performance. For instance, in serving content to users on a website, one may want to optimize some combination of clicks and downstream advertising revenue in the short term to maximize revenue in the long run. In this talk, I will discuss some of the technical challenges by focusing on a concrete application: content optimization on the Yahoo! front page. I will also briefly discuss response prediction techniques for serving ads on the RightMedia Ad Exchange.

Bio: Deepak Agarwal is a statistician at Yahoo! who is interested in developing statistical and machine learning methods to enhance the performance of large-scale recommender systems. Deepak and his collaborators significantly improved article recommendation on several Yahoo! websites, most notably on the Yahoo! front page (a 200+% improvement in click rates). He also works closely with teams in computational advertising to deploy elaborate statistical models on the RightMedia Ad Exchange, yet another large-scale recommender system. He currently serves as associate editor for the Journal of the American Statistical Association (JASA) and IEEE Transactions on Knowledge and Data Engineering (TKDE).
Recommender Systems: The Art and Science of Matching Items to Users
Deepak Agarwal ([email protected])
LinkedIn, 7th July, 2011
2Deepak Agarwal @LinkedIn’11
Recommender Systems
Serve the “right” item to users in an automated fashion to optimize long-term business objectives
3Deepak Agarwal @LinkedIn’11
Content Optimization: Match articles to users
4Deepak Agarwal @LinkedIn’11
Advertising: Recommend Ads on Pages
Display/graphical ads and contextual advertising
5Deepak Agarwal @LinkedIn’11
Shopping: Recommend Related Items to buy
6Deepak Agarwal @LinkedIn’11
Recommend Movies
7Deepak Agarwal @LinkedIn’11
Recommend People
8Deepak Agarwal @LinkedIn’11
Problem Definition
USER
Item inventory: articles, web pages, ads, …
• Construct an automated algorithm to select item(s) to show
• Get feedback (click, time spent, rating, buy, …)
• Refine the parameters of the algorithm
• Repeat (a large number of times)
• Optimize metric(s) of interest (total clicks, total revenue, …)
• Low marginal cost per serve; efficient and intelligent systems can provide significant improvements
• Example applications: content, movies, advertising, shopping, …
• Context: page, previous item viewed, …
9Deepak Agarwal @LinkedIn’11
Data Mining → Clever Algorithms
• So much data: can we process it all, and process it fast?
• Ideally, we want to learn every user-item interaction
– The number of things to learn increases with data size
– The dynamic nature exacerbates the problem
– We want to learn things quickly in order to react fast
10Deepak Agarwal @LinkedIn’11
Simple Approach: Segment Users/Items
Matrix view: rows are user segments i, columns are items/item segments j.
• Estimate the CTR of items in each user segment: CTRij = clicksij / viewsij
• Serve the most popular item in each segment
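The segment-based approach above can be sketched in a few lines of Python; the event tuples and segment names are hypothetical:

```python
from collections import defaultdict

def estimate_segment_ctrs(events):
    """events: iterable of (segment, item, clicked) tuples.
    Returns CTR_ij = clicks_ij / views_ij per (segment, item) cell."""
    clicks = defaultdict(int)
    views = defaultdict(int)
    for seg, item, clicked in events:
        views[(seg, item)] += 1
        clicks[(seg, item)] += int(clicked)
    return {key: clicks[key] / views[key] for key in views}

def most_popular(ctrs, segment):
    """Serve the highest-CTR item within a user segment."""
    candidates = {item: ctr for (seg, item), ctr in ctrs.items() if seg == segment}
    return max(candidates, key=candidates.get)
```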
11Deepak Agarwal @LinkedIn’11
Example Application: Yahoo! front page
Recommend the most popular article in slot F1 (out of 30-40 editorially programmed candidates)
Can collect data every 5 minutes
Should be simple, just count clicks and views, right?
Not quite!
(Figure: the Today module on the front page, with article slots F1-F4, next to the NEWS section.)
12Deepak Agarwal @LinkedIn’11
Simple algorithm we began with
• Initialize the CTR of every new article to some high number
– This ensures a new article has a chance of being shown
• Show the highest-CTR article (randomly breaking ties) for each user visit in the next 5 minutes
• Re-compute the global article CTRs after 5 minutes
• Show the new most popular article for the next 5 minutes
• Keep updating article popularity over time
• Quite intuitive. It did not work! Performance was bad. Why?
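A minimal sketch of this first algorithm, with hypothetical click/view counts (the 5-minute scheduling loop itself is omitted):

```python
import random

def pick_article(ctr):
    """Show the highest-CTR article, randomly breaking ties."""
    best = max(ctr.values())
    return random.choice([a for a, c in ctr.items() if c == best])

def recompute(ctr, clicks, views, new_articles, optimistic=1.0):
    """Every 5 minutes: optimistic init for new articles, then global CTRs."""
    for a in new_articles:
        ctr.setdefault(a, optimistic)   # new articles start with a high CTR
    for a, v in views.items():
        if v > 0:
            ctr[a] = clicks.get(a, 0) / v
    return ctr
```

The optimistic initialization gives fresh articles a chance, exactly as the slide describes; the bias problem discussed next is what makes this simple loop fail.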
13Deepak Agarwal @LinkedIn’11
Bias in the data: Article CTR decays over time
• This is what an article's CTR curve looked like
• We were computing CTR by accumulating clicks and views
– Missing decay dynamics? We added a dynamic growth model using a Kalman filter
– The new model tracked decay very well, but performance was still bad
• And the plot thickens, my dear Watson!
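The slide mentions tracking the decaying CTR with a Kalman filter. Below is a simplified scalar random-walk Kalman tracker, not the actual dynamic growth model used in the talk, with made-up noise variances:

```python
def kalman_track(observations, q=1e-5, r=1e-4):
    """Scalar random-walk Kalman filter over per-interval CTR observations.
    q: state (drift) variance, r: observation noise variance (illustrative)."""
    est, var = observations[0], 1.0
    path = [est]
    for z in observations[1:]:
        var += q                 # predict: the CTR drifts between intervals
        k = var / (var + r)      # Kalman gain
        est += k * (z - est)     # correct with the new observation
        var *= (1 - k)
        path.append(est)
    return path
```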
14Deepak Agarwal @LinkedIn’11
Explanation of decay: Repeat exposure
• User Fatigue → CTR Decay
15Deepak Agarwal @LinkedIn’11
Clues to solve the mystery
• Users seeing an article for the first time have a higher CTR; those already exposed have a lower one. But we were using the same CTR estimate for all!
• Other sources of bias? How do we adjust for them?
• A simple idea to remove bias
– Display articles at random to a small, randomly chosen population
• Call this the random bucket
• Randomization removes bias in the data (C.S. Peirce, 1877; R.A. Fisher, 1935)
• Some other observations
– Sticking with an article for the full 5 minutes degraded performance; many bad articles were displayed too many times
– Reaction time to display good articles was slower
16Deepak Agarwal @LinkedIn’11
CTR of same article with/without randomization
(Figure: in the serving bucket, the article's CTR curve shows decay; in the random bucket, the same article's CTR mainly shows time-of-day variation.)
17Deepak Agarwal @LinkedIn’11
CTR of articles in Random bucket
• We can track the unbiased CTR here, but it is dynamic; simply counting clicks and views still won't work well.
18Deepak Agarwal @LinkedIn’11
New algorithm
• Create a small random bucket that selects one of the K existing articles uniformly at random for each user visit
• Learn unbiased article popularity from the random-bucket data by tracking it through a non-linear Kalman filter
• Serve the most popular article in the serving bucket
• Override rules: don't show an article to a user after a few previous exposures; other rules (diversity, voice), …
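A sketch of the serving logic, with a hypothetical exposure-based override rule (bucket names, article ids, and the exposure cap are all illustrative):

```python
import random

def serve(bucket, articles, popularity, exposures, max_exposures=2):
    """Random bucket: uniform choice (gives unbiased data).
    Serving bucket: most popular article.
    Override rule: skip articles this user has seen too often."""
    eligible = [a for a in articles if exposures.get(a, 0) < max_exposures]
    if bucket == "random":
        return random.choice(eligible)
    return max(eligible, key=lambda a: popularity[a])
```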
19Deepak Agarwal @LinkedIn’11
Other advantages
• The random bucket ensures a continuous flow of data for all articles; we quickly discard bad articles and converge to the best one
• This saved the day; the project was a success!
– Initial click lift: 40% (Agarwal et al., NIPS 2008)
– After 3 years it is 200+% (fully deployed on the Yahoo! front page and elsewhere on Yahoo!), and we are still improving the system
20Deepak Agarwal @LinkedIn’11
More Details
• Agarwal, Chen, Elango, Ramakrishnan, Motgi, Roy, Zachariah. Online models for Content Optimization, NIPS 2008
• Agarwal, Chen, Elango. Spatio-Temporal Models for Estimating Click-through Rate, WWW 2009
21Deepak Agarwal @LinkedIn’11
Lessons learnt
• It is OK to start with simple models that learn a few things, but beware of the biases inherent in your data
– Example of things going wrong when learning article popularity:
• Data used from 5am-8am PST, served from 10am-1pm PST
• A bad idea if an article is popular on the East Coast but not on the West
• Randomization is a friend; use it when you can. Updating the models fast may also reduce the bias
– User visit patterns close in time are similar
• What if we can't afford complete randomization?
– Learn how to gamble
22Deepak Agarwal @LinkedIn’11
Why learn how to gamble?
• Consider a slot machine with two arms with unknown payoff probabilities p1 > p2
• The gambler has 1000 plays; what is the best way to experiment (to maximize total expected reward)?
• This is called the "bandit" problem and has been studied for a long time
• Optimal solution: play the arm that has the maximum potential of being good
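One classic way to "play the arm with maximum potential" is an upper-confidence-bound rule; the talk does not prescribe UCB1 specifically, so this is just an illustration:

```python
import math

def ucb1(pulls, rewards, t):
    """Pick the arm with the highest upper confidence bound (UCB1):
    empirical mean plus an exploration bonus that shrinks with pulls."""
    def score(arm):
        n = pulls[arm]
        if n == 0:
            return float("inf")  # try every arm at least once
        return rewards[arm] / n + math.sqrt(2 * math.log(t) / n)
    return max(pulls, key=score)
```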
23Deepak Agarwal @LinkedIn’11
Recommender Problems: Bandits?
• Two items: item 1 CTR = 2/100; item 2 CTR = 250/10000
– Greedy: show item 2 to all; not a good idea
– Item 1's CTR estimate is noisy; the item could potentially be better
• Invest in item 1 for better overall performance on average
• This is also referred to as the explore/exploit problem
– Exploit what is known to be good; explore what is potentially good
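Thompson sampling is one standard explore/exploit scheme that captures this intuition (not necessarily the scheme used in the talk): sample each item's CTR from its Beta posterior and show the argmax. Using the slide's counts:

```python
import random

def thompson_pick(stats, rng=random):
    """stats: item -> (clicks, views). Sample a CTR from each item's
    Beta(clicks+1, views-clicks+1) posterior; show the largest draw.
    Noisy items (like item 1 with 2/100) keep getting explored."""
    draws = {item: rng.betavariate(c + 1, v - c + 1)
             for item, (c, v) in stats.items()}
    return max(draws, key=draws.get)
```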
(Figure: probability-density curves of the CTR estimates for Article 1 and Article 2; x-axis CTR, y-axis probability density.)
24Deepak Agarwal @LinkedIn’11
Bayes optimal solution in the next 5 minutes: 2 articles, 1 uncertain
(Figure: optimal allocation to the uncertain article as a function of the uncertainty in its CTR, measured in pseudo #views.)
25Deepak Agarwal @LinkedIn’11
More Details on the Bayes Optimal Solution
• Agarwal, Chen, Elango. Explore-Exploit Schemes for Web Content Optimization, ICDM 2009 – (Best Research Paper Award)
26Deepak Agarwal @LinkedIn’11
Recommender Problems: bandits in a casino
• Items are arms of bandits; ratings/CTRs are unknown payoffs
– The goal is to converge to the best-CTR item quickly
– But this assumes one size fits all (no personalization)
• Personalization
– Each user is a separate bandit
– Hundreds of millions of bandits (a huge casino)
• Rich literature (several tutorials on the topic)
– Broadly: clever/adaptive randomization
– Our random bucket is one solution, often a good one in practice
27Deepak Agarwal @LinkedIn’11
Back to the number of things to learn (curse of dimensionality)
• Pros of learning things at granular resolutions
– Better estimates of affinities at the event level
• ("ad 77 has high CTR on publisher 88" instead of "ad 77 has good CTR on sports publishers")
– Bias becomes less problematic
• The more we chop, the less prone we are to aggregating dissimilar things, and the less biased our estimates
• Challenges
– Too much sparsity to learn everything at granular resolutions
• We don't have that much traffic; e.g., many ads are never even shown on many publishers
– Explore/exploit helps, but we cannot do that much experimentation
– In advertising, response rates (conversion, click) are very low, which further exacerbates the problem
28Deepak Agarwal @LinkedIn’11
Solution: Go granular but with back-off
• Too little data at the granular level; we need to borrow from coarse resolutions with abundant data (smoothing, shrinkage)
• Example back-off hierarchy, with counts shown as clicks/views:
– Node 1 (pub-id=88, ad-id=77, zip=Palo Alto): 0/5
– Node 11 (Palo Alto): 2/200
– Node 12 (pub-id=88, adv-id=9): 40/1000
– Node 111 (Bay Area): 400/10000
– Node 121 (adv-id=9): 200/5000
CTR(1) = w1(0/5) + w11(2/200) + w12(40/1000) + w121(200/5000) + w111(400/10000)
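The weighted back-off estimate can be computed directly from the node counts; the uniform weights below are purely illustrative (in practice the weights are learned):

```python
def backoff_ctr(nodes, weights):
    """nodes: name -> (clicks, views); weights: name -> w, summing to 1.
    The granular estimate borrows from its coarser ancestors."""
    return sum(w * nodes[n][0] / nodes[n][1] for n, w in weights.items())

# The slide's counts for node 1 and its four ancestors:
nodes = {"1": (0, 5), "11": (2, 200), "12": (40, 1000),
         "111": (400, 10000), "121": (200, 5000)}
weights = {n: 0.2 for n in nodes}   # illustrative uniform weights
```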
29Deepak Agarwal @LinkedIn’11
Sometimes too much data at granular level
• Node 1 (pub-id=88, ad-id=80, zip=Arizona): 100/50000
• Ancestors (node 11: Arizona; node 12: pub-id=88, adv-id=8; …)
• No need to back off: CTR(1) = 100/50000
30Deepak Agarwal @LinkedIn’11
How much to borrow from ancestors?
• How do we learn the weights when there is little data?
• It depends on the heterogeneity in the CTRs of the small cells
– Ancestors whose child nodes have similar CTRs are more credible
• E.g., if all zip codes in the Bay Area have similar CTRs, more weight is given to the Bay Area node
– Pool similar cells; separate dissimilar ones
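One simple way to make an ancestor's weight depend on the heterogeneity of its children, as an illustration only (the scale parameter tau is made up):

```python
from statistics import mean, pvariance

def parent_weight(child_ctrs, tau=1e-4):
    """More weight goes to the parent when its children have similar
    CTRs (low variance among children); tau sets the scale."""
    return tau / (tau + pvariance(child_ctrs))

def shrink(child_ctr, sibling_ctrs):
    """Shrink a child's CTR toward the parent's (mean of children)."""
    w = parent_weight(sibling_ctrs)
    return w * mean(sibling_ctrs) + (1 - w) * child_ctr
```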
(Diagram: Bay Area node with children Palo Alto, Mtn View, Los Gatos.)
31Deepak Agarwal @LinkedIn’11
Crucial issue
• Obtain grouping structures to perform effective back-off. BUT:
• How do we detect such groupings when dealing with high-dimensional data?
– Billions/trillions of possible attribute combinations
• Statistical modeling to the rescue
– Art and science; requires experience
– It is important to understand the business, the problem, and the data
33Deepak Agarwal @LinkedIn’11
TWO EXAMPLES OF LEARNING GRANULAR MODELS WITH BACK-OFF
34Deepak Agarwal @LinkedIn’11
Online Advertising: Matching ads to opportunities
(Diagram: advertisers supply ads to an ad network; for each opportunity, a user visiting a publisher's page, the network picks the best ads. Examples: Yahoo!, Google, MSN, ad exchanges (networks of "networks"), …)
35Deepak Agarwal @LinkedIn’11
How to Select “Best” ads
(Same diagram, extended with the selection criterion: a statistical model estimates response rates (click, conversion, ad view) from observed clicks and conversions; combined with advertiser bids in an auction, the network selects argmax f(bid, rate).)
36Deepak Agarwal @LinkedIn’11
The Ad Exchange: Unified Marketplace
Transparency and value
(Diagram: a publisher has an ad impression to sell and runs an auction. Bidders include direct advertisers ($0.50, $0.60), AdSense, and Ad.com; a $0.75 bid placed via a network becomes a $0.45 bid after the network's share. The $0.65 bid wins.)
37Deepak Agarwal @LinkedIn’11
Advertising example
• f(bid, rate): the rate is unknown and needs to be estimated
• Goal: maximize revenue and advertiser ROI
• High-dimensional rate estimation
– The response is obtained through interactions among a few heavy-tailed categorical variables (publisher, user, and ad)
– #levels: could be millions, and changes over time
• Model the response as F(y | i, u, j), where i = publisher, u = user, j = ad
38Deepak Agarwal @LinkedIn’11
Data
• Features available for both opportunity and ad– Publisher: Publisher content type – User: demographics, geo,…– Ad: Industry, text/video, text (if any)
• Hierarchically organized– Publisher hierarchy: URL → Domain → Publisher type– Geo hierarchy for users– Ad hierarchy: Ad → Campaign → Advertiser
• Past empirical analysis (Agarwal et al, KDD 2007)– Hierarchical grouping provides homogeneity in rates– Here, groupings available through domain knowledge
39Deepak Agarwal @LinkedIn’11
Model Setup
For publisher i, user u, and ad j, with feature vectors xi, xu, xj:
• Baseline: B(xi, xu, xj)
• Residual correction at the (i, j) level: λij
• Expected successes: Eij = Σu B(xi, xu, xj)
• Observed successes: Sij ~ Poisson(Eij λij)
• The MLE (Sij / Eij) does not work well
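A standard fix for the unstable MLE is to shrink it with a Gamma prior on the rate; this is a generic illustration, not necessarily the exact prior used in the model:

```python
def smoothed_rate(S, E, alpha=1.0, beta=1.0):
    """Posterior mean of lambda under a Gamma(alpha, beta) prior with
    S ~ Poisson(E * lambda): shrinks the raw MLE S/E toward alpha/beta.
    With little data (small E) the prior dominates; with lots of data
    the estimate approaches the MLE."""
    return (alpha + S) / (beta + E)
```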
40Deepak Agarwal @LinkedIn’11
Hierarchical Smoothing of residuals
• Assuming two hierarchies (publisher and advertiser)
(Diagram: publisher hierarchy pub-class → pub-id; advertiser hierarchy advertiser → campaign → ad-id, with conv-id; each cell (i, j) carries (Sij, Eij, λij).)
41Deepak Agarwal @LinkedIn’11
Back-off Model
(Diagram: the same two hierarchies; each cell (i, j) with (Sij, Eij, λij) has 7 neighbors: 3 from the publisher side ("blues") and 4 from the advertiser side ("greens").)
λij = g1 g2 g3 g4 b1 b2 b3
• Back-off is through parameter sharing: the blues and greens are neighbors of several cells
42Deepak Agarwal @LinkedIn’11
Ad- exchange (RightMedia)
• Advertisers participate in different ways
– CPM (pay per ad view)
– CPC (pay per click)
– CPA (pay per conversion)
• To conduct an auction, normalize across pricing types by computing eCPM (expected CPM)
• Click-based eCPM = click-rate * CPC
• Conversion-based eCPM = conv-rate * CPA
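The normalization can be sketched as follows (prices and rates are hypothetical; any scaling constant common to all pricing types cancels in the argmax):

```python
def ecpm(pricing_type, price, rate):
    """Per the talk: click-based eCPM = click-rate * CPC;
    conversion-based eCPM = conv-rate * CPA."""
    if pricing_type in ("CPC", "CPA"):
        return rate * price   # rate = click-rate for CPC, conv-rate for CPA
    raise ValueError(pricing_type)

def run_auction(ads):
    """ads: id -> (pricing_type, price, estimated rate); pick argmax eCPM."""
    return max(ads, key=lambda a: ecpm(*ads[a]))
```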
43Deepak Agarwal @LinkedIn’11
Data
• Two kinds of conversion rates
– Post-click conv-rate = click-rate * conv/click
– Post-view conv-rate = conv/ad-view
• Three response-rate models: click-rate (CLICK), conv/click (PCC), and post-view conv/view (PVC)
44Deepak Agarwal @LinkedIn’11
Datasets : Right-Media
• CLICK: ~90B training events, ~100M parameters
• Post-click conversion (PCC): ~0.5B training events, ~81M parameters
• Post-view conversion (PVC): ~7B events, ~6M parameters
– The cookie gets augmented with a pixel; a conversion is triggered when the user visits the landing page
• Features
– Age, gender, ad size, pub-class, user fatigue
– 2 hierarchies (publisher and advertiser)
• Two baselines
– pub-id x ad-id [FINE] (no hierarchical information)
– pub-id x advertiser [COARSE] (collapsed cells)
45Deepak Agarwal @LinkedIn’11
Accuracy: Average test log-likelihood
46Deepak Agarwal @LinkedIn’11
More Details
• Agarwal, Kota, Agrawal, Khanna: Estimating Rates of Rare Events with Multiple Hierarchies through Scalable Log-linear Models, KDD 2010
47Deepak Agarwal @LinkedIn’11
Back to Yahoo! front page
• Recommend articles: image, title, summary, links to other pages
• For each user visit, pick 4 out of a pool of K (slots 1-4)
• Routes traffic to other pages
48Deepak Agarwal @LinkedIn’11
DATA
• User i with user features xi (demographics, browse history, search history, …)
• Article j with item features xj (keywords, content categories, …)
• The algorithm selects which article to show on each visit; for each pair (i, j) we observe a response yij (rating or click/no-click)
49Deepak Agarwal @LinkedIn’11
Bipartite Graph completion problem
(Diagram: an observed bipartite graph of users and articles with click/no-click edges, to be completed into a predicted-CTR graph.)
50Deepak Agarwal @LinkedIn’11
Factor Model to estimate CTR at granular levels
For user i and article j, with user popularity αi, item popularity βj, and r-dimensional latent factors ui, vj:
CTRij = pij = (1 + exp(-(αi + βj + Σk=1..r uik vjk)))^-1
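The CTR formula is a sigmoid of the sum of the popularity terms and the latent-factor inner product; the inputs below are hypothetical:

```python
import math

def factor_ctr(alpha_i, beta_j, u_i, v_j):
    """CTR_ij = sigmoid(alpha_i + beta_j + u_i . v_j): user popularity,
    item popularity, plus an r-dimensional latent affinity term."""
    s = alpha_i + beta_j + sum(a * b for a, b in zip(u_i, v_j))
    return 1.0 / (1.0 + math.exp(-s))
```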
51Deepak Agarwal @LinkedIn’11
Estimating granular latent factors via back-off
• If a user/item has high degree, good factor estimates are available; otherwise we need back-off
• Back-off: we use user/item features through regressions. E.g., with binary features Age=old, Geo=Mtn-View, Int=Ski:
uik = G1k 1(Agei = old) + G2k 1(Geoi = Mtn-View) + G3k 1(Inti = Ski)
• Weights for 2^3 = 8 different fallbacks using only 3 parameters
52Deepak Agarwal @LinkedIn’11
Estimates with back-off
• For a new user/article, factor estimates are based on features alone:
u_new = G x_new^user,  v_new = D x_new^item
• For an old user/article, the factor estimate is a linear combination of the regression and the user's "ratings":
E(ui | Rest) = (I + Σ_{j∈Ni} vj vj')^{-1} (G xi^user + Σ_{j∈Ni} Rij vj)
where Ni is the set of articles user i responded to and Rij is a residual derived from the response yij.
53Deepak Agarwal @LinkedIn’11
Estimating the back-off Regression function
Maximize over the regression parameters (G, D):
∫ f(Data | {ui}, {vj}) g({ui} | G) g({vj} | D) d{ui} d{vj}
• The integral cannot be computed in closed form; it is approximated by Monte Carlo using Gibbs sampling
54Deepak Agarwal @LinkedIn’11
Data Example
• 2M binary observations by 30K heavy users on 4K articles
– Heavy user: at least 30 visits to the portal in the last 5 months
• Article features
– Editorially labeled category information (~50 binary features)
• User features
– Demographics, browse behavior (~1K features)
• Training/test split by timestamp of events (75/25)
• Methods
– Factor model with regression, no online updates
– Factor model with regression + online updates
– Online model based on user-user similarity (Online-UU)
– Online probabilistic latent semantic indexing (Online-PLSI)
55Deepak Agarwal @LinkedIn’11
ROC curve
(Figure: ROC curves for the factor model with regression + online updates and the factor model with regression only.)
56Deepak Agarwal @LinkedIn’11
More Details
• Agarwal and Chen: Regression Based Latent Factor Models, KDD 2009
57Deepak Agarwal @LinkedIn’11
Computation
• Both models run on Hadoop, scalable to large datasets
• For the factor models, also working on online EM – Collaboration with Andrew Cron, Duke University
58Deepak Agarwal @LinkedIn’11
MULTI-OBJECTIVES: BEYOND CLICKS
59Deepak Agarwal @LinkedIn’11
Post-click utilities
(Diagram: clicks on front-page links route users to properties such as Sports, News, OMG, and Finance, influencing the downstream supply distribution. Downstream pages show editorial content from the recommender and ads from the ad server: premium display (guaranteed) and Network Plus (non-guaranteed). Downstream engagement is measured as time spent.)
60Deepak Agarwal @LinkedIn’11
Serving Content on Front Page: Click Shaping
• What do we want to optimize?
• Usual: maximize clicks (maximize downstream supply from the front page)
• But consider the following
– Article 1: CTR = 5%, utility per click = 5
– Article 2: CTR = 4.9%, utility per click = 10
• By promoting article 2, we lose a small number of clicks per 100 visits but gain utility
• If we do this for a large number of visits, we lose some clicks but obtain significant gains in utility
– E.g., lose 5% relative CTR, gain 20% in utility (revenue, engagement, etc.)
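Working through the slide's numbers over 100 visits:

```python
def expected_utility(ctr, util_per_click, visits=100):
    """Expected utility over `visits` visits: expected clicks * utility per click."""
    return ctr * visits * util_per_click

u1 = expected_utility(0.05, 5)     # article 1
u2 = expected_utility(0.049, 10)   # article 2: fewer clicks, more utility
```

Per 100 visits, article 1 yields 25 utils and article 2 yields 49; promoting article 2 sacrifices 0.1 expected clicks for roughly double the utility.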
61Deepak Agarwal @LinkedIn’11
How are clicks being shaped?
(Figure: changes in the supply distribution, before vs. after click shaping, across Yahoo! properties such as autos, finance, health, hotjobs, movies, music, news, omg, real estate, rivals, shine, shopping, sports, tech, travel, tv, video, videogames, buzz, gmy.news, and others; the changes range roughly from -10% to +10%.)
SHAPING can happen with respect to multiple downstream metrics (like engagement, revenue,…)
62Deepak Agarwal @LinkedIn’11
Multi-Objective Optimization
• n articles A1, …, An, each mapped to one of K properties (news, finance, omg, …)
• m user segments S1, …, Sm
• pij: CTR of user segment i on article j (known)
• dij: time duration of segment i on article j (known)
• xij: serving proportions, the decision variables
63Deepak Agarwal @LinkedIn’11
Multi-Objective Program
Scalarization
Goal Programming
64Deepak Agarwal @LinkedIn’11
Pareto-optimal solution (more in KDD 2011)
65Deepak Agarwal @LinkedIn’11
More Details
• Agarwal, Chen, Elango, Wang: Click Shaping to Optimize Multiple Objectives, KDD 2011 (forthcoming)
66Deepak Agarwal @LinkedIn’11
Can we do it with Advertising Revenue?
• Yes, but we need to be careful
– Interventions can cause undesirable long-term impact
– It requires communication between two complex distributed systems
– Display advertising at Yahoo! is also sold as long-term guaranteed contracts
• We intervene to change supply when a contract is at risk of under-delivering
• Research to be shared in the future
67Deepak Agarwal @LinkedIn’11
Summary
• Simple models that learn a few parameters are fine to begin with, BUT beware of bias in the data
– Small amounts of randomization + fast model updates help
• Clever randomization using explore/exploit techniques
• Granular models are more effective, but we need good statistical algorithms to provide back-off estimates
• Considering multi-objective optimization is often important
68Deepak Agarwal @LinkedIn’11
A modeling strategy
(Diagram:)
• Offline models (logistic regression, GBDT, …): coarse, slow-changing components
• Feature engineering. Content: IR, clustering, taxonomy, entities, …; user profiles: clicks, views, social, community, …
• Online models: fine-resolution corrections (item/user level, quick updates), initialized by the offline models
• Explore/exploit: adaptive sampling
69Deepak Agarwal @LinkedIn’11
Indexing for fast retrieval at runtime
• Retrieving the top k when the item inventory is large, within a few milliseconds, can be challenging for complex models
• Current work (joint with Maxim Gurevich)
– Approximate the model with an index-friendly synthetic model
– The index-friendly model retrieves the top K very fast; a second-stage evaluation on the top K retrieves the top k (K > k)
– Research to be shared in a forthcoming paper
70Deepak Agarwal @LinkedIn’11
Collaborators
• Bee-Chung Chen (Yahoo! Research, CA)
• Pradheep Elango (Yahoo! Labs, CA)
• Liang Zhang (Yahoo! Labs, CA)
• Nagaraj Kota (Yahoo! Labs, India)
• Xuanhui Wang (Yahoo! Labs, CA)
• Rajiv Khanna (Yahoo! Labs, India)
• Andrew Cron (Duke University)
• Engineering & product teams (CA, India)
• Special thanks to the Yahoo! Labs senior leadership for their support: Andrei Broder, Preston McAfee, Prabhakar Raghavan, Raghu Ramakrishnan