Evan Estola, Lead Machine Learning Engineer, Meetup at MLconf SEA - 5/20/16


When Recommendation Systems Go Bad

Evan Estola
5/20/16

About Me

Evan Estola
Lead Machine Learning Engineer @ Meetup
evan@meetup.com
@estola

We want a world full of real, local community.
Women’s Veterans Meetup, San Antonio, TX

Recommendation Systems: Collaborative Filtering

Recommendation Systems: Rating Prediction

Netflix prize
How many stars would user X give movie Y?
Boring
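Boring or not, the framing is concrete: learn latent factors for users and movies from observed ratings, then predict the missing stars. A minimal sketch with plain NumPy and a made-up ratings matrix (all data and hyperparameters here are illustrative, not from the talk or the Netflix prize):

```python
import numpy as np

# Toy ratings matrix: rows = users, cols = movies, 0 = unrated (made-up data).
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

k = 2                                            # latent factors
rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(R.shape[0], k))  # user factors
M = rng.normal(scale=0.1, size=(R.shape[1], k))  # movie factors

lr, reg = 0.01, 0.02
for _ in range(5000):
    for u, m in zip(*R.nonzero()):               # SGD over observed ratings only
        err = R[u, m] - U[u] @ M[m]
        U[u] += lr * (err * M[m] - reg * U[u])
        M[m] += lr * (err * U[u] - reg * M[m])

# "How many stars would user 0 give movie 2?"
print(round(float(U[0] @ M[2]), 1))
```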

Recommendation Systems: Learning To Rank

Active area of research
Use an ML model to solve a ranking problem
Pointwise: Logistic Regression on a binary label, use the output for ranking
Listwise: optimize the entire list

Performance Metrics:
Mean Average Precision
P@K
Discounted Cumulative Gain
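In the pointwise recipe you score each candidate with a classifier, sort by score, and judge the resulting list with these metrics. A sketch of P@K and DCG under that framing (the scores and labels below are made up):

```python
import numpy as np

def precision_at_k(relevance, k):
    """Fraction of the top-k ranked items that are relevant (binary labels)."""
    return float(np.mean(relevance[:k]))

def dcg_at_k(relevance, k):
    """Discounted Cumulative Gain: gain at rank i is discounted by log2(i + 1)."""
    rel = np.asarray(relevance[:k], dtype=float)
    discounts = np.log2(np.arange(2, rel.size + 2))  # ranks 1..k -> log2(2)..log2(k+1)
    return float(np.sum(rel / discounts))

# Pointwise: classifier scores induce the ordering; evaluate that ordering.
scores    = [0.9, 0.7, 0.4, 0.2, 0.1]   # model outputs, already sorted descending
relevance = [1,   0,   1,   0,   1]     # true binary labels in score order

print(precision_at_k(relevance, 3))     # 2 of top 3 relevant -> 0.667
print(dcg_at_k(relevance, 3))           # 1/log2(2) + 0 + 1/log2(4) = 1.5
```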

Data Science impacts lives

Ads you see
Apps you download
Friends’ Activity/Facebook feed
News you’re exposed to
If a product is available
If you can get a ride
Price you pay for things
Admittance into college
Job openings you find/get
If you can get a loan

You just wanted a kitchen scale; now Amazon thinks you’re a drug dealer

Ego

Member/customer/user first
Focus on building the best product, not on being the most clever data scientist

Much harder to spin a positive user story than a story about how smart you are

“Black-sounding” names 25% more likely to be served an ad suggesting a criminal record

Ethics

We have accepted that Machine Learning can seem creepy; how do we prevent it from becoming immoral?

We have an ethical obligation to not teach machines to be prejudiced.

Data Ethics

Awareness

Tell your friends
Tell your coworkers
Tell your boss

Identify groups that could be negatively impacted by your work

Make a choice
Take a stand

Interpretable Models

For simple problems, simple solutions are often worth a small concession in performance

Inspectable models make it easier to debug problems in data collection, feature engineering, etc.

Only include features that work the way you want

Don’t include feature interactions that you don’t want

Logistic Regression

StraightDistanceFeature(-0.0311f),
ChapterZipScore(0.0250f),
RsvpCountFeature(0.0207f),
AgeUnmatchFeature(-1.5876f),
GenderUnmatchFeature(-3.0459f),
StateMatchFeature(0.4931f),
CountryMatchFeature(0.5735f),
FacebookFriendsFeature(1.9617f),
SecondDegreeFacebookFriendsFeature(0.1594f),
ApproxAgeUnmatchFeature(-0.2986f),
SensitiveUnmatchFeature(-0.1937f),
KeywordTopicScoreFeatureNoSuppressed(4.2432f),
TopicScoreBucketFeatureNoSuppressed(1.4469f, 0.257f, 10f),
TopicScoreBucketFeatureSuppressed(0.2595f, 0.099f, 10f),
ExtendedTopicsBucketFeatureNoSuppressed(1.6203f, 1.091f, 10f),
ChapterRelatedTopicsBucketFeatureNoSuppressed(0.1702f, 0.252f, 0.641f),
ChapterRelatedTopicsBucketFeatureNoSuppressed(0.4983f, 0.641f, 10f),
DoneChapterTopicsFeatureNoSuppressed(3.3367f)
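The point of a list like this is that it is readable: each weight says exactly how much a feature pushes the score, so you can check that every feature works the way you want. A minimal scikit-learn sketch of that inspection loop (the feature names and data are hypothetical, not Meetup's production model):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features in the spirit of the list above.
feature_names = ["distance", "rsvp_count", "gender_unmatch", "topic_score"]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(feature_names)))
# Synthetic label: joins driven by topic match, suppressed by distance/unmatch.
y = (2.0 * X[:, 3] - 0.5 * X[:, 0] - 3.0 * X[:, 2] + rng.normal(size=500)) > 0

model = LogisticRegression().fit(X, y)

# One readable weight per feature: easy to spot a wrong sign or magnitude
# before it ships.
for name, weight in zip(feature_names, model.coef_[0]):
    print(f"{name:15s} {weight:+.4f}")
```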

Feature Engineering and Interactions

● Good Feature:
○ Join! You’re interested in Tech x Meetup is about Tech

● Good Feature:
○ Don’t join! Group is intended only for Women x You are a Man

● Bad Feature:
○ Don’t join! Group is mostly Men x You are a Woman

● Horrible Feature:
○ Don’t join! Meetup is about Tech x You are a Woman

Meetup is not interested in propagating gender stereotypes
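One way to enforce that in code is to build interaction features only from an explicit whitelist, so an unwanted cross like topic x gender structurally cannot enter the model. A sketch under that assumption (all names here are illustrative):

```python
# Only whitelisted pairs ever become interaction features; a cross like
# ("meetup_topic", "member_gender") simply cannot sneak in.
ALLOWED_INTERACTIONS = [
    ("member_topic_interest", "meetup_topic"),       # good: Tech x Tech
    ("group_gender_restriction", "member_gender"),   # good: respects group intent
]

def interaction_features(raw: dict) -> dict:
    feats = {}
    for a, b in ALLOWED_INTERACTIONS:
        if a in raw and b in raw:
            feats[f"{a}_x_{b}"] = float(raw[a]) * float(raw[b])
    return feats

member_event = {
    "member_topic_interest": 1.0,
    "meetup_topic": 1.0,
    "member_gender": 1.0,
    "group_gender_restriction": 0.0,
}
print(interaction_features(member_event))
```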

Ensemble Models and Data Segregation

Ensemble Models: Combine outputs of several classifiers for increased accuracy

If you have features that are useful but you’re worried about interactions (and your model builds them automatically), use ensemble modeling to restrict those features to separate models.

Ensemble Model, Data Segregation

Model 1 data*: Interests, Searches, Friends, Location -> Model 1 Prediction
Model 2 data*: Gender, Friends, Location -> Model 2 Prediction
Final model data: Model 1 Prediction, Model 2 Prediction -> Final Prediction
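A sketch of that wiring in scikit-learn: each sub-model sees only its own slice of the data, and the final model sees only their predictions, so cross-slice feature interactions are structurally impossible (features and data here are stand-ins):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
X_interests = rng.normal(size=(n, 4))  # stand-in: interests, searches, friends, location
X_sensitive = rng.normal(size=(n, 3))  # stand-in: gender, friends, location
y = rng.integers(0, 2, size=n)

# Each sub-model only ever sees its own feature set.
m1 = LogisticRegression().fit(X_interests, y)
m2 = LogisticRegression().fit(X_sensitive, y)

# The final model sees only the two predictions, never the raw features,
# so it cannot learn an interaction across the segregated sets.
X_final = np.column_stack([
    m1.predict_proba(X_interests)[:, 1],
    m2.predict_proba(X_sensitive)[:, 1],
])
final = LogisticRegression().fit(X_final, y)
print(final.predict_proba(X_final[:3])[:, 1])
```

In a real pipeline the final model would be fit on held-out sub-model predictions (stacking) to avoid leakage; the point here is only the segregation.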

Fake profiles, track ads
Ad: career coaching for “200k+” executive jobs
Male group: 1852 impressions
Female group: 318 impressions

Diversity Controlled Testing

CMU: AdFisher
Crawls ads with simulated user profiles

Same technique can work to find bias in your own models!
Generate test data: randomize the sensitive feature in a real data set
Run the model
Evaluate for unacceptable biased treatment
Must identify what features are sensitive and what outcomes are unwanted
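A sketch of that test: shuffle the sensitive column so it no longer carries real signal, re-score, and measure how far the predictions move. This assumes a fitted model with predict_proba and a NumPy feature matrix; the column index is whatever marks your sensitive feature:

```python
import numpy as np

def bias_probe(model, X, sensitive_col, n_trials=100, seed=0):
    """Randomize one sensitive feature and measure how much predictions shift.

    A large average shift means the model's output depends on the sensitive
    feature, directly or via proxies it has learned to reconstruct.
    """
    rng = np.random.default_rng(seed)
    base = model.predict_proba(X)[:, 1]
    shifts = []
    for _ in range(n_trials):
        X_rand = X.copy()
        col = X_rand[:, sensitive_col].copy()
        rng.shuffle(col)                       # break the real correlation
        X_rand[:, sensitive_col] = col
        shifts.append(np.abs(model.predict_proba(X_rand)[:, 1] - base).mean())
    return float(np.mean(shifts))

# Usage (your own fitted model and test matrix; the index 2 is an assumption):
# print(bias_probe(my_model, X_test, sensitive_col=2))
```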

● Twitter bot
● “Garbage in, garbage out”
● Responsibility?

“In the span of 15 hours Tay referred to feminism as a "cult" and a "cancer," as well as noting "gender equality = feminism" and "i love feminism now." Tweeting "Bruce Jenner" at the bot got similar mixed response, ranging from "caitlyn jenner is a hero & is a stunning, beautiful woman!" to the transphobic "caitlyn jenner isn't a real woman yet she won woman of the year?"”

Tay.ai

Diverse test data

Outliers can matter
The real world is messy
Some people will mess with you
Some people look/act different than you

Defense
Diversity
Design

You know racist computers are a bad idea

Don’t let your company invent racist computers