
Page 1: Crowd Centrality

CROWD CENTRALITY

David Karger Sewoong Oh Devavrat Shah

MIT and UIUC

Page 2: Crowd Centrality

CROWDSOURCING

Page 3: Crowd Centrality

CROWDSOURCING

$30 million to land on the moon
$0.05 for image labeling, data entry, transcription

Page 4: Crowd Centrality

MICRO-TASK CROWDSOURCING

Page 5: Crowd Centrality

MICRO-TASK CROWDSOURCING

Which door is the women’s restroom?

Worker answers: Right, Left, Left

Page 6: Crowd Centrality

MICRO-TASK CROWDSOURCING

Task: find cancerous tumor cells

                          Throughput        Cost      Reliability
  Undergrad intern        200 images/hr     $15/hr    90%
  MTurk (single label)    4000 images/hr    $15/hr    65%
  MTurk (mult. labels)    500 images/hr     $15/hr    90%

Page 7: Crowd Centrality

THE PROBLEM

Goal: reliably estimate the task answers at minimal cost

Operational questions:
  Task assignment
  Inferring the “answers”

Page 8: Crowd Centrality

TASK ASSIGNMENT

Random (l, r)-regular bipartite graphs between tasks and batches of workers:

  Locally tree-like, which permits sharp analysis
  Good expanders, which give a high signal-to-noise ratio

[Figure: bipartite assignment graph between tasks and batches]
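For concreteness, here is a minimal sketch of drawing such an assignment graph with the configuration model (the function name and dense edge-list output are illustrative, not from the talk; multi-edges, which the configuration model allows, are kept for simplicity):

    import numpy as np

    def random_regular_bipartite(n_tasks, n_workers, l, seed=0):
        # Each task gets l answers; worker degree r = n_tasks * l / n_workers
        # must be an integer for the stub counts to balance.
        assert (n_tasks * l) % n_workers == 0, "degrees must balance"
        rng = np.random.default_rng(seed)
        task_stubs = np.repeat(np.arange(n_tasks), l)
        worker_stubs = np.repeat(np.arange(n_workers), n_tasks * l // n_workers)
        rng.shuffle(worker_stubs)                    # random matching of stubs
        return list(zip(task_stubs, worker_stubs))   # (task, worker) edges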

Page 9: Crowd Centrality

MODELING THE CROWD

Binary tasks: ti ∈ {+1, -1}
Worker reliability: pj = P(Aij = ti), where Aij ∈ {+1, -1} is worker j's answer to task i (e.g., a row of the answer matrix: + - + - +)

Necessary assumption: we know μ = E[2pj - 1] > 0, i.e., the crowd is on average better than random; otherwise the labels are unidentifiable up to a global sign flip.
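A small simulator makes the model concrete; this is a sketch under the assumptions above (the names are mine, and drawing l distinct workers per task uniformly is used as a simple stand-in for the regular assignment graph):

    import numpy as np

    def simulate_crowd(n_tasks, n_workers, l, p_sampler, seed=0):
        rng = np.random.default_rng(seed)
        t = rng.choice([-1, 1], size=n_tasks)     # true binary labels ti
        p = p_sampler(rng, n_workers)             # worker reliabilities pj
        A = np.zeros((n_tasks, n_workers), dtype=int)
        for i in range(n_tasks):
            for j in rng.choice(n_workers, size=l, replace=False):
                # Worker j answers task i correctly with probability pj.
                A[i, j] = t[i] if rng.random() < p[j] else -t[i]
        return t, p, A

    # Example: spammer-hammer crowd, pj uniform over {0.5, 1.0}.
    t, p, A = simulate_crowd(100, 100, l=5,
                             p_sampler=lambda rng, m: rng.choice([0.5, 1.0], size=m))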

Page 10: Crowd Centrality

INFERENCE PROBLEM

Majority: t̂i = sign( Σj Aij )

Oracle (knows the pj): weighted majority with the log-odds weights that are MAP-optimal for known reliabilities, t̂i = sign( Σj log(pj/(1-pj)) Aij )

[Figure: task ti with answers +, +, - from workers of reliability p1 ... p5]

Page 11: Crowd Centrality

INFERENCE PROBLEM

Majority: t̂i = sign( Σj Aij )

Oracle: weighted majority with weights from the true pj

Our approach: weighted majority with learned weights, t̂i = sign( Σj Wj Aij ), where the Wj are estimated from the answers themselves
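In code, the two baselines might look as follows (a sketch assuming the answer matrix uses 0 for unassigned pairs, as in the simulator above):

    import numpy as np

    def majority_estimate(A):
        # Unweighted majority vote over each task's answers.
        return np.sign(A.sum(axis=1))

    def oracle_estimate(A, p):
        # Weighted majority with log-odds weights from the true pj;
        # clip to avoid infinite weights at pj = 0 or 1.
        pc = np.clip(p, 1e-6, 1 - 1e-6)
        return np.sign(A @ np.log(pc / (1 - pc)))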

Page 12: Crowd Centrality

PREVIEW OF RESULTS

Distribution of {pj}: observed to follow a Beta distribution (Holmes '10; Ryker et al. '10)

EM algorithm: Dawid, Skene '79; Sheng, Provost, Ipeirotis '10
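For reference, a compact sketch of EM for the simpler one-coin version of that model (Dawid-Skene '79 fits full confusion matrices; this simplification and its names are illustrative only):

    import numpy as np

    def em_one_coin(A, n_iter=50):
        # A: (n_tasks, n_workers) in {+1, -1, 0}; 0 marks unassigned pairs.
        # Returns soft labels P(ti = +1) and estimated reliabilities pj.
        pos = (A == 1).astype(float)
        neg = (A == -1).astype(float)
        n_answers = np.maximum((pos + neg).sum(axis=0), 1.0)
        mu = 0.5 + 0.1 * np.sign(A.sum(axis=1))   # init near majority vote
        for _ in range(n_iter):
            # M-step: pj = expected fraction of worker j's answers that match.
            p = (pos.T @ mu + neg.T @ (1.0 - mu)) / n_answers
            p = np.clip(p, 1e-6, 1.0 - 1e-6)
            # E-step: posterior log-odds of ti = +1 under a uniform label prior.
            logit = (pos - neg) @ np.log(p / (1.0 - p))
            mu = 1.0 / (1.0 + np.exp(-logit))
        return mu, p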

Page 13: Crowd Centrality

PREVIEW OF RESULTS

Page 14: Crowd Centrality

ITERATIVE INFERENCE

Iteratively learn the worker weights and the task answers:
  Message passing, with O(# edges) operations per iteration
  Approximates the MAP estimate
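A minimal dense-matrix sketch of the iteration (variable names are mine; worker messages start from random Gaussians and each per-edge sum excludes the receiving edge, matching the message-passing description above):

    import numpy as np

    def kos_estimate(A, n_iter=20, seed=0):
        # A: (n_tasks, n_workers) in {+1, -1, 0}; 0 marks unassigned pairs.
        rng = np.random.default_rng(seed)
        mask = (A != 0)
        y = rng.normal(1.0, 1.0, size=A.shape) * mask   # worker -> task messages
        for _ in range(n_iter):
            # Task -> worker: x_{i->j} = sum over j' != j of A_{ij'} y_{j'->i}
            x = ((A * y).sum(axis=1, keepdims=True) - A * y) * mask
            # Worker -> task: y_{j->i} = sum over i' != i of A_{i'j} x_{i'->j}
            y = ((A * x).sum(axis=0, keepdims=True) - A * x) * mask
        # Final estimate: weighted majority of answers with learned messages.
        return np.sign((A * y).sum(axis=1))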

Page 15: Crowd Centrality

EXPERIMENTS: AMAZON MTURK

Learning similarities, recommendations, searching, …


Page 17: Crowd Centrality

EXPERIMENTS: AMAZON MTURK

Page 18: Crowd Centrality

TASK ASSIGNMENT: WHY RANDOM GRAPH

Page 19: Crowd Centrality

KEY METRIC: QUALITY OF CROWD

Crowd quality parameter: q = E[(2pj - 1)^2]

  If pj = 1 for all j, then q = 1
  If pj = 0.5 for all j, then q = 0

Note that q is different from μ^2 = (E[2pj - 1])^2; when all pj ≥ 1/2, q ≤ μ ≤ √q.

Theorem (Karger-Oh-Shah). Let n tasks be assigned to n workers as per an (l,l) random regular graph, and let lq > √2. Then, for all n large enough (i.e., n = Ω(l^{O(log(1/q))} e^{lq})), after O(log(1/q)) iterations of the algorithm,

  Perror ≤ exp(-lq/16)
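As a worked example, under the Beta reliability model cited on the earlier slide, q has a closed form (the parameter values below are hypothetical):

    def crowd_quality_beta(a, b):
        # q = E[(2p-1)^2] = 4 E[p^2] - 4 E[p] + 1 for p ~ Beta(a, b).
        m1 = a / (a + b)                            # E[p]
        m2 = a * (a + 1) / ((a + b) * (a + b + 1))  # E[p^2]
        return 4 * m2 - 4 * m1 + 1

    print(crowd_quality_beta(4, 2))  # ~0.24: a moderately reliable crowd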

Page 20: Crowd Centrality

HOW GOOD IS THIS ?

To achieve a target Perror ≤ ε, we need a per-task budget of l = Θ((1/q) log(1/ε)), and this is minimax optimal.

Under majority voting (with any choice of graph), the per-task budget required is l = Ω((1/q^2) log(1/ε)).

There is no significant gain from knowing side information (golden questions, reputation, …).
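A back-of-the-envelope comparison of the two scalings (the hidden constants in Θ(·) and Ω(·) are unspecified, so c = 1 below is purely illustrative):

    import math

    def budget_iterative(q, eps, c=1.0):
        return c * (1 / q) * math.log(1 / eps)       # Theta((1/q) log(1/eps))

    def budget_majority(q, eps, c=1.0):
        return c * (1 / q**2) * math.log(1 / eps)    # Omega((1/q^2) log(1/eps))

    # Example: quality q = 0.1, target error 1e-3.
    print(budget_iterative(0.1, 1e-3))  # ~69 answers per task
    print(budget_majority(0.1, 1e-3))   # ~691 answers per task: 1/q times more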

Page 21: Crowd Centrality

ADAPTIVE TASK ASSIGNMENT: DOES IT HELP ?

Theorem (Karger-Oh-Shah). Given any adaptive algorithm, let Δ be the average number of workers required per task to achieve the desired Perror ≤ ε. Then there exists {pj} with quality q so that Δ = Ω((1/q) log(1/ε)).

Since the non-adaptive scheme already achieves this order, the gain through adaptivity is limited.

Page 22: Crowd Centrality

WHICH CROWD TO EMPLOY

Page 23: Crowd Centrality

BEYOND BINARY TASKS

Tasks: ti ∈ {1, …, K}
Workers: answer correctly with probability pj

Assume pj ≥ 0.5 for all j, and let q be the quality of {pj}.

The results for binary tasks extend to this setting: to achieve Perror ≤ ε, the number of workers required per task scales as O((1/q) log(1/ε) + (1/q) log K).

Page 24: Crowd Centrality

BEYOND BINARY TASKS

Converting to K-1 binary problems, each with quality ≥ q. For each x, 1 < x ≤ K:

  Aij(x) = +1 if Aij ≥ x, and -1 otherwise
  ti(x) = +1 if ti ≥ x, and -1 otherwise

Then the corresponding quality satisfies q(x) ≥ q. Using the result for the binary problem, Perror(x) ≤ exp(-lq/16). Therefore, by the union bound over the K-1 sub-problems,

  Perror ≤ Perror(2) + … + Perror(K) ≤ K exp(-lq/16)
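The reduction is mechanical; a sketch (names mine; the combiner ti = 1 + #{x : ti(x) = +1} recovers ti exactly when the per-threshold estimates are consistent):

    import numpy as np

    def estimate_kary(A, K, binary_solver):
        # A: (n_tasks, n_workers) with entries in {1, ..., K}; 0 = unassigned.
        t_hat = np.ones(A.shape[0], dtype=int)
        for x in range(2, K + 1):
            # Binary sub-problem at threshold x: Aij(x) = +1 iff Aij >= x.
            A_x = np.where(A == 0, 0, np.where(A >= x, 1, -1))
            t_hat += (binary_solver(A_x) > 0).astype(int)
        return t_hat

Here binary_solver can be any estimator for the binary problem, e.g., the message-passing sketch given earlier.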

Page 25: Crowd Centrality

WHY ALGORITHM WORKS?

MAP estimation. Put a prior on the reliabilities {pj}: let f(p) be a density over [0,1]. Given the answers A = [Aij], the posterior over the labels t is

  P(t | A) ∝ Π_j ∫ Π_{i ∈ ∂j} p^{I(Aij = ti)} (1-p)^{I(Aij ≠ ti)} f(p) dp

Belief propagation (max-product) for MAP with the Haldane prior (pj is 0 or 1 with equal probability) runs, at iteration k+1, for all task-worker pairs (i,j):

  X_{i→j} = Σ_{j' ∈ ∂i \ j} A_{ij'} Y_{j'→i},   Y_{j→i} = Σ_{i' ∈ ∂j \ i} A_{i'j} X_{i'→j}

where Xi / Yj represent the log-likelihood ratio for ti / pj = +1 vs -1.

This is exactly the same as our algorithm! And our random task-assignment graph is locally tree-like. That is, our algorithm is effectively MAP under the Haldane prior.

Page 26: Crowd Centrality

WHY ALGORITHM WORKS?

A minor variation of this algorithm drops the per-edge exclusion, so the messages no longer depend on the receiving edge:

  Ti^next = Tij^next = Σ_{j'} Wj' Aij'
  Wj^next = Wij^next = Σ_{i'} Ti' Ai'j

Then T^next = A A^T T. So (subject to this modification) our algorithm is computing the left singular vector of A corresponding to the largest singular value.

So why does computing a rank-1 approximation of A make sense?
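A sketch of this spectral variant via power iteration (a singular vector's sign is arbitrary, so the final orientation step, aligning with the majority vote, is an added assumption, not part of the talk):

    import numpy as np

    def spectral_estimate(A, n_iter=100, seed=0):
        rng = np.random.default_rng(seed)
        t = rng.normal(size=A.shape[0])
        for _ in range(n_iter):
            t = A @ (A.T @ t)             # T <- A A^T T (power iteration)
            t /= np.linalg.norm(t)        # normalize for numerical stability
        if A.sum(axis=1) @ t < 0:         # orient using the majority direction
            t = -t
        return np.sign(t)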

Page 27: Crowd Centrality

WHY ALGORITHM WORKS?

Random graph + probabilistic model:

  E[Aij] = (ti pj - ti (1-pj)) l/n = ti (2pj - 1) l/n, so E[A] = t (2p - 1)^T l/n

That is, E[A] is a rank-1 matrix, and t is the left singular vector of E[A]. If A ≈ E[A], then computing the left singular vector of A makes sense.

Building upon Friedman-Kahn-Szemeredi '89, the singular vector of A provides a reasonable approximation: Perror = O(1/(lq)) (Ghosh, Kale, McAfee '12).

For a sharper result, we use belief propagation.

Page 28: Crowd Centrality

CONCLUDING REMARKS

Budget-optimal micro-task crowdsourcing via:
  Random regular task-allocation graph
  Belief propagation

Key messages:
  All that matters is the quality of the crowd
  Worker reputation is not useful for non-adaptive tasks
  Adaptation does not help, due to the fleeting nature of workers
  Reputation + worker identity are needed for adaptation to be effective
  The inference algorithm can be useful for assigning reputation
  The model of binary tasks is equivalent to K-ary tasks (via the threshold reduction)

Page 29: Crowd Centrality

ON THAT NOTE…