
Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)


Page 1: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Data-Intensive Computing for Text Analysis CS395T / INF385T / LIN386M

University of Texas at Austin, Fall 2011

Lecture 7 October 6, 2011

Matt Lease

School of Information

University of Texas at Austin

ml at ischool dot utexas dot edu

Jason Baldridge

Department of Linguistics

University of Texas at Austin

Jasonbaldridge at gmail dot com


Page 2: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Acknowledgments

Course design and slides based on Jimmy Lin’s cloud computing courses at the University of Maryland, College Park

Some figures courtesy of the following excellent Hadoop books (order yours today!)

• Chuck Lam’s Hadoop In Action (2010)

• Tom White’s Hadoop: The Definitive Guide, 2nd Edition (2010)


Page 3: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Today’s Agenda

• Hadoop Counters
• Graph Processing in MapReduce
  – Representing/Encoding Graphs
    • Adjacency matrices vs. lists
  – Example: Single-Source Shortest Path
  – Example: PageRank
• Themes
  – No shared memory → redundant computation
    • More computational capability overcomes less efficiency
  – Iterate MapReduce computations until convergence
  – Use a non-MapReduce driver for over-arching control
    • Not just for pre- and post-processing
    • Opportunity for global synchronization between iterations
• In-class exercise


Page 4: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Hadoop Counters

Page 5: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

[Figure: see Lam p. 98, White pp. 226-227]


Page 6: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Hadoop Counters & Global State

• Hadoop’s Counters provide its only means for sharing/modifying global distributed state
  – Built-in safeguards for distributed modification, e.g. two tasks trying to increment a counter simultaneously
  – Lightweight: only 8 bytes (a long) per counter
  – Limited control: create, read, and increment only; no destroy, arbitrary set, or decrement
• Advertised use: progress tracking and logging
• To what extent might we “abuse” counters for tracking/updating interesting shared state?

White p. 172

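To make the create/read/increment lifecycle concrete, here is a minimal sketch using the new (org.apache.hadoop.mapreduce) API. The LineStats class, the LONG_LINES counter, and the 80-character threshold are illustrative, not from the slides:

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class LineStats {
  // Each enum constant becomes a named counter, aggregated across all tasks.
  public enum MyCounters { LONG_LINES }

  public static class LineMapper
      extends Mapper<LongWritable, Text, Text, NullWritable> {
    @Override
    protected void map(LongWritable offset, Text line, Context context)
        throws IOException, InterruptedException {
      if (line.getLength() > 80) {
        // Increment is the only mutation allowed; the framework safely
        // merges simultaneous updates from different tasks.
        context.getCounter(MyCounters.LONG_LINES).increment(1);
      }
      context.write(line, NullWritable.get());
    }
  }

  // In the driver, counter values become definitive once the job completes:
  //   job.waitForCompletion(true);
  //   long n = job.getCounters().findCounter(MyCounters.LONG_LINES).getValue();
}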

Page 7: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

How high (and precisely) can you count?

• How precise?
  – Integer representation only
  – To approximate fractional values, scale and truncate (Lin & Dyer p. 99)
• How high?
  – “8-byte integers” (Lin & Dyer p. 99): really only one byte?
  – Old API: org.apache.hadoop.mapred.Counters, with long getCounter(…) and incrCounter(…, long amount)
  – New API: org.apache.hadoop.mapreduce.Counter, with long getValue() and increment(long incr)
• How many?
  – Old API: static int MAX_COUNTER_LIMIT (next slide…)
  – New API: ???? (int countCounters())


Page 8: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)


Page 9: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)


Page 10: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

White p. 173, 227-231

• incrCounter(…)

• getCounters(…)

• getCounter(…)

• findCounter(…)

• http://developer.yahoo.com/hadoop/tutorial/module5.html#metrics


Page 11: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Counters and Global State

“Counter values are definitive only once a job has successfully completed” – White p. 227

What about while a job is running?
• If a task reports progress, it sets a flag to indicate that a status change should be sent to the TaskTracker
  – The flag is checked in a separate thread every 3s and, if set, the TaskTracker is notified
  – What about counter updates?
• The TaskTracker sends heartbeats to the JobTracker (at least every 5s), which include the status of all tasks being run by the TaskTracker
  – Counters (which can be relatively large) are sent less frequently
• The JobClient receives the latest status by polling the JobTracker every 1s
• Clients can call JobClient’s getJob() to obtain a RunningJob instance with the latest status information (at the time of the call?)

White p. 172

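As a hedged sketch of the polling path described above, a client using the old (org.apache.hadoop.mapred) API might watch a counter while the job runs. The job ID comes in as a command-line placeholder, the MyCounters.RECORDS enum is hypothetical, and the values read this way are only snapshots, not definitive until completion:

import org.apache.hadoop.mapred.Counters;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobID;
import org.apache.hadoop.mapred.RunningJob;

public class CounterWatcher {
  // Hypothetical counter; must match the enum the running job increments.
  public enum MyCounters { RECORDS }

  public static void main(String[] args) throws Exception {
    JobClient client = new JobClient(new JobConf());
    RunningJob running = client.getJob(JobID.forName(args[0]));  // e.g. "job_..."
    while (!running.isComplete()) {
      Counters snapshot = running.getCounters();   // latest polled status, may lag
      System.out.println("records so far: " + snapshot.getCounter(MyCounters.RECORDS));
      Thread.sleep(1000);                          // mirrors the ~1s JobClient polling
    }
  }
}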

Page 12: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Representing Graphs

Page 13: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

What’s a graph?

Graphs are ubiquitous:
• The Web (pages and hyperlink structure)
• Computer networks (computers and connections)
• Highways and railroads (cities and roads/tracks)
• Social networks

G = (V, E), where:
• V: the set of vertices (nodes)
• E: the set of edges (links)
• Either/both may contain additional information
  – e.g. edge weights (cost, time, distance)
  – e.g. node values (PageRank)

Graph types:
• Directed vs. undirected
• Cyclic vs. acyclic

Page 14: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Some Graph Problems

• Finding shortest paths: routing Internet traffic and UPS trucks
• Finding minimum spanning trees: telcos laying down fiber
• Finding max flow: airline scheduling
• Identifying “special” nodes and communities: breaking up terrorist cells, tracking the spread of avian flu
• Bipartite matching: Monster.com, Match.com
• And of course... PageRank

Page 15: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Graphs and MapReduce

MapReduce graph processing typically involves:
• Performing computations at each node, e.g. using node features, edge features, and local link structure
• Propagating computations: “traversing” the graph

Key questions:
• How do you represent graph data in MapReduce?
• How do you traverse a graph in MapReduce?

Page 16: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Graph Representation

How do we encode graph structure suitably for:
• computation?
• propagation?

Two common approaches:
• Adjacency matrix
• Adjacency list

[Figure: example directed graph over nodes 1–4, used on the following slides]

Page 17: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Adjacency Matrices

Represent a graph as an |V| x |V| square matrix M
• M_jk = w ⇒ directed edge of weight w from node j to node k
  – w = 0 ⇒ no edge exists
  – M_ii: the main diagonal gives self-loop weights from node i to itself
• If undirected, use only the upper triangle of the matrix (symmetry)

     1 2 3 4
  1  0 1 0 1
  2  1 0 1 1
  3  1 0 0 0
  4  1 0 1 0

[Figure: the four-node directed graph encoded by this matrix]

Page 18: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Adjacency Matrices: Critique

Advantages:
• Amenable to mathematical manipulation
• Easy iteration for computation over out-links and in-links
  – M_j*: row j gives all out-links from node j
  – M_*k: column k gives all in-links to node k

Disadvantages:
• Sparsity: wasted computations, wasted space

Page 19: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Adjacency Lists

Take adjacency matrices… and throw away all the zeros

Hmm… look familiar…?

1: 2, 4
2: 1, 3, 4
3: 1
4: 1, 3

     1 2 3 4
  1  0 1 0 1
  2  1 0 1 1
  3  1 0 0 0
  4  1 0 1 0
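For MapReduce input, an adjacency list like the one above is commonly serialized one node per line, e.g. a tab-separated node id followed by a comma-separated neighbor list. The exact format here is an illustrative assumption, not prescribed by the slides; a minimal parsing sketch:

public class AdjacencyList {
  // Input lines, one node per line, e.g.:
  //   1<TAB>2,4
  //   2<TAB>1,3,4
  //   3<TAB>1
  //   4<TAB>1,3
  static int[] parseNeighbors(String line) {
    String[] parts = line.split("\t");
    if (parts.length < 2 || parts[1].isEmpty()) {
      return new int[0];                        // node with no out-links
    }
    String[] toks = parts[1].split(",");
    int[] neighbors = new int[toks.length];
    for (int i = 0; i < toks.length; i++) {
      neighbors[i] = Integer.parseInt(toks[i].trim());
    }
    return neighbors;
  }
}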

Page 20: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Inverted Index: Boolean Retrieval

Doc 1: one fish, two fish
Doc 2: red fish, blue fish
Doc 3: cat in the hat
Doc 4: green eggs and ham

[Figure: the term-document incidence matrix for these four documents, and the corresponding postings lists (e.g. fish → 1, 2; blue → 2; cat → 3; egg → 4): structurally the same “throw away the zeros” move as an adjacency list]

Page 21: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Adjacency Lists: Critique

Vs. adjacency matrix:
• Sparsity: more compact, fewer wasted computations
• Easy to compute over out-links
• What about computation over in-links?

1: 2, 4
2: 1, 3, 4
3: 1
4: 1, 3

     1 2 3 4
  1  0 1 0 1
  2  1 0 1 1
  3  1 0 0 0
  4  1 0 1 0

Page 22: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Single Source Shortest Path

Page 23: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Problem

Find the shortest path from a source node to one or more target nodes
• “Shortest” may mean lowest weight or cost, etc.

Classic approach: Dijkstra’s Algorithm
• Maintain a global priority queue over all (node, distance) pairs
• Sort the queue by the minimum distance to reach each node from the source node
• Initialization: distance to source node = 0, all others = ∞
• Visit nodes in order of (monotonically) increasing path length
  – Whenever a node is visited, no shorter path to it exists
• As each node is visited:
  – update its neighbours in the queue
  – remove the node from the queue

Page 24: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Edsger W. Dijkstra

May 11, 1930 – August 6, 2002

Received the 1972 Turing Award

Schlumberger Centennial Chair of Computer Science at

UT Austin (1984-2000)

http://en.wikipedia.org/wiki/Dijkstra’s_algorithm

Wikipedia has a nice animation of it in action

Page 25: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Dijkstra’s Algorithm

Maintain a global priority queue over all (node, distance) pairs, sorted by the minimum distance to reach each node from the source node

Initialization:
• distance to source node = 0
• distance to all other nodes = ∞

While the queue is not empty, visit the next node (the node with the shortest path length in the queue):
• Output its distance if desired
• Update the distance to each of its neighbours in the queue
• Remove it from the queue

Page 26: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Dijkstra’s Algorithm Example

[Figure: example graph from CLR with edge weights 10, 5, 2, 3, 2, 1, 9, 7, 4, 6; the source starts at distance 0, all other nodes at ∞]

Page 27: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Dijkstra’s Algorithm Example

[Figure: example from CLR; after visiting the source, tentative distances are 0, 10, 5]

Page 28: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Dijkstra’s Algorithm Example

[Figure: example from CLR; tentative distances now 0, 8, 5, 14, 7]

Page 29: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Dijkstra’s Algorithm Example

[Figure: example from CLR; tentative distances now 0, 8, 5, 13, 7]

Page 30: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Dijkstra’s Algorithm Example

[Figure: example from CLR; tentative distances now 0, 8, 5, 9, 7]

Page 31: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Dijkstra’s Algorithm Example

[Figure: example from CLR; final shortest-path distances 0, 8, 5, 9, 7]

Page 32: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Problem

Find the shortest path from a source node to one or more target nodes
• “Shortest” may mean lowest weight or cost, etc.

Classic approach: Dijkstra’s Algorithm

Page 33: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Problem

Find the shortest path from a source node to one or more target nodes
• “Shortest” may mean lowest weight or cost, etc.

Classic approach: Dijkstra’s Algorithm

MapReduce approach: Parallel Breadth-First Search (BFS)

Page 34: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Finding the Shortest Path

Assume an unweighted graph (for now…)

General inductive approach:
• Initialization:
  – DISTANCETO(source s) = 0
  – For any node n directly connected to s: DISTANCETO(n) = 1
  – DISTANCETO(any other node p) = ∞
• For each iteration, for every node n and every neighbor m ∈ M(n):
  DISTANCETO(m) = 1 + min over all n with m ∈ M(n) of DISTANCETO(n)

[Figure: node n with in-neighbours m1, m2, m3 at distances d1, d2, d3 from source s; DISTANCETO(n) = 1 + min(d1, d2, d3)]

Page 35: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Visualizing Parallel BFS

[Figure: example graph with nodes n0–n9; each iteration of parallel BFS advances the frontier one hop outward from the source n0]

Page 36: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

From Intuition to Algorithm

Representation:
• Key: node n
• Value: d (distance from start); also the adjacency list (list of nodes reachable from n)
• Initialization: d = ∞ for all nodes except the start node

Mapper:
• ∀ m ∈ adjacency list: emit (m, d + 1)

Sort/Shuffle:
• Groups distances by reachable nodes

Reducer:
• Selects the minimum-distance path for each reachable node
• Additional bookkeeping is needed to keep track of the actual path

Page 37: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

BFS Pseudo-Code

What type should we use for the values?
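One possible answer, sketched below: make the value a Text that is either a full node record ("N|distance|adjacency") or a bare distance message, so the mapper can pass the graph structure along while emitting tentative distances. This is a minimal sketch of the parallel BFS just described, not the slides' own code; the record format and the INF sentinel are illustrative assumptions, and it assumes SequenceFile input of (IntWritable nodeId, Text record) where every emitted destination id also appears as a node record:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class ParallelBFS {
  // "Infinity" sentinel chosen so that INF + 1 cannot overflow.
  public static final long INF = Long.MAX_VALUE / 2;

  // Node records look like "N|<distance>|m1,m2,..."; distance messages are bare longs.
  public static class BFSMapper extends Mapper<IntWritable, Text, IntWritable, Text> {
    @Override
    protected void map(IntWritable nid, Text value, Context ctx)
        throws IOException, InterruptedException {
      ctx.write(nid, value);                               // pass graph structure along
      String[] f = value.toString().split("\\|", 3);
      long d = Long.parseLong(f[1]);
      if (d < INF && !f[2].isEmpty()) {                    // only expand reached nodes
        for (String m : f[2].split(",")) {
          ctx.write(new IntWritable(Integer.parseInt(m)),
                    new Text(Long.toString(d + 1)));       // tentative distance via nid
        }
      }
    }
  }

  public static class BFSReducer extends Reducer<IntWritable, Text, IntWritable, Text> {
    @Override
    protected void reduce(IntWritable nid, Iterable<Text> values, Context ctx)
        throws IOException, InterruptedException {
      String[] node = null;
      long dmin = INF;
      for (Text v : values) {
        String s = v.toString();
        if (s.startsWith("N|")) node = s.split("\\|", 3);  // the structure record
        else dmin = Math.min(dmin, Long.parseLong(s));     // a distance message
      }
      if (dmin < Long.parseLong(node[1])) {                // shorter path discovered
        node[1] = Long.toString(dmin);
        ctx.getCounter("BFS", "Updated").increment(1);     // stopping signal for the driver
      }
      ctx.write(nid, new Text(node[0] + "|" + node[1] + "|" + node[2]));
    }
  }
}

The driver (sketched later) reruns this job until the "Updated" counter reads zero.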

Page 38: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Multiple Iterations Needed

Each iteration advances the “frontier” by one hop

Subsequent iterations find more reachable nodes

Multiple iterations are needed to explore entire graph

Preserving graph structure

Problem: Where did the adjacency list go?

Solution: the mapper emits (n, adjacency list) as well

Page 39: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Stopping Criterion

How many iterations are needed?

Convince yourself: when a node is first “discovered”,

we’ve found the shortest path

Now answer the question...

Six degrees of separation?

Practicalities of implementation in MapReduce

Page 40: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Comparison to Dijkstra

Dijkstra’s algorithm is more efficient

At any step it only pursues edges from the minimum-cost path

inside the frontier

MapReduce explores all paths in parallel

Lots of “waste”

Useful work is only done at the “frontier”

Why can’t we do better using MapReduce?

Page 41: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Weighted Edges

Now consider non-unit, positive edge weights

Why can’t edge weights be negative?

Adjacency list now includes a weight w for each edge

In the mapper, emit (m, d + w) instead of (m, d + 1) for each node m, where w is the weight of the edge from n to m

Is that all?
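In the Text-record sketch from earlier, this is a one-line change in the mapper's inner loop, if each adjacency entry carries its weight as "m:w" (again an illustrative format):

// Adjacency entries now "neighborId:weight", e.g. "N|0|2:7,4:3".
for (String e : f[2].split(",")) {
  String[] mw = e.split(":");
  long w = Long.parseLong(mw[1]);                      // edge weight
  ctx.write(new IntWritable(Integer.parseInt(mw[0])),
            new Text(Long.toString(d + w)));           // d + w instead of d + 1
}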

Page 42: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Stopping Criterion

How many iterations are needed in parallel BFS (positive

edge weight case)?

Convince yourself: when a node is first “discovered”,

we’ve found the shortest path

Page 43: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Additional Complexities

[Figure: search frontier around source s containing nodes p, q, r; one edge of weight 10 leaves s directly, while a chain of weight-1 edges through n1…n9 later yields a shorter path, so with weighted edges a node's first-discovered distance need not be final]

Page 44: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Stopping Criterion

How many iterations are needed in parallel BFS (positive

edge weight case)?

Practicalities of implementation in MapReduce

Unrelated to stopping… where have we seen min/max before?

Page 45: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

In General: Graphs and MapReduce

Graph algorithms typically involve:
• Performing computations at each node, based on node features, edge features, and local link structure
• Propagating computations: “traversing” the graph

Generic recipe:
• Represent graphs as adjacency lists
• Perform local computations in the mapper
• Pass along partial results via outlinks, keyed by destination node
• Perform aggregation in the reducer on the inlinks to a node
• Iterate until convergence, controlled by an external “driver” (a minimal sketch follows this list)
• Don’t forget to pass the graph structure between iterations
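A minimal sketch of such a driver for the BFS job above, assuming the reducer increments the "BFS"/"Updated" counter whenever it lowers a distance. Each completed job is a global synchronization point at which the driver can safely read counters:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

public class BFSDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    String input = args[0];                     // initial graph (SequenceFile)
    long updated;
    int iter = 0;
    do {
      Job job = new Job(conf, "parallel-bfs-" + iter);
      job.setJarByClass(BFSDriver.class);
      job.setMapperClass(ParallelBFS.BFSMapper.class);
      job.setReducerClass(ParallelBFS.BFSReducer.class);
      job.setOutputKeyClass(IntWritable.class);
      job.setOutputValueClass(Text.class);
      job.setInputFormatClass(SequenceFileInputFormat.class);
      job.setOutputFormatClass(SequenceFileOutputFormat.class);
      FileInputFormat.addInputPath(job, new Path(input));
      String output = args[1] + "/iter-" + iter;
      FileOutputFormat.setOutputPath(job, new Path(output));
      if (!job.waitForCompletion(true)) System.exit(1);
      // Global synchronization: counters are definitive once the job completes.
      updated = job.getCounters().findCounter("BFS", "Updated").getValue();
      input = output;                           // this iteration's output feeds the next
      iter++;
    } while (updated > 0);                      // stop once no distance changed
  }
}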

Page 46: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

PageRank

Page 47: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Random Walks Over the Web

Random surfer model

User starts at a random Web page

User randomly clicks on links, surfing from page to page

PageRank

Characterizes the amount of time spent on any given page

Mathematically, a probability distribution over pages

PageRank captures notions of page importance

Correspondence to human intuition?

One of thousands of features used in web search

Note: query-independent

Page 48: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

PageRank: Defined

Given page x with inlinks t1…tn, where:
• C(t) is the out-degree of t
• α is the probability of a random jump
• N is the total number of nodes in the graph

$$PR(x) = \alpha \left( \frac{1}{N} \right) + (1 - \alpha) \sum_{i=1}^{n} \frac{PR(t_i)}{C(t_i)}$$

[Figure: page X with inlinks t1, t2, …, tn]

Page 49: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Computing PageRank

Properties of PageRank

Can be computed iteratively

Effects at each iteration are local

Sketch of algorithm:

Start with seed PRi values

Each page distributes PRi “credit” to all pages it links to

Each target page adds up “credit” from multiple in-bound links to

compute PRi+1

Iterate until values converge

Page 50: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Simplified PageRank

First, tackle the simple case:

No random jump factor

No dangling links

Then, factor in these complexities…

Why do we need the random jump?

Where do dangling links come from?

Page 51: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Sample PageRank Iteration (1)

[Figure: five-node example graph, all nodes starting at PR = 0.2. Each node sends PR / out-degree along each outlink (e.g. 0.1 on each edge of a two-outlink node). After iteration 1: n1 = 0.066, n2 = 0.166, n3 = 0.166, n4 = 0.3, n5 = 0.3]
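Checking iteration 1 by hand against the simplified (no-jump) update, using the adjacency lists from the “PageRank in MapReduce” slide (n1 → {n2, n4}, n2 → {n3, n5}, n3 → {n4}, n4 → {n5}, n5 → {n1, n2, n3}), with every node starting at 0.2:

$$\begin{aligned}
PR(n_1) &= \tfrac{0.2}{3} \approx 0.066 \\
PR(n_2) &= \tfrac{0.2}{2} + \tfrac{0.2}{3} \approx 0.166 \\
PR(n_3) &= \tfrac{0.2}{2} + \tfrac{0.2}{3} \approx 0.166 \\
PR(n_4) &= \tfrac{0.2}{2} + \tfrac{0.2}{1} = 0.3 \\
PR(n_5) &= \tfrac{0.2}{2} + \tfrac{0.2}{1} = 0.3
\end{aligned}$$

These match the figure’s post-iteration values.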

Page 52: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Sample PageRank Iteration (2)

[Figure: the same graph. After iteration 2: n1 = 0.1, n2 = 0.133, n3 = 0.183, n4 = 0.2, n5 = 0.383]

Page 53: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

PageRank in MapReduce

Example graph (adjacency lists): n1 [n2, n4], n2 [n3, n5], n3 [n4], n4 [n5], n5 [n1, n2, n3]

[Figure: Map emits each node’s structure plus its PageRank mass keyed by destination (n2, n4, n3, n5, …); the shuffle groups messages by destination node; Reduce sums the incoming mass and reattaches each node’s adjacency list]

Page 54: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

PageRank Pseudo-Code
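The slide’s pseudocode image did not survive extraction; below is a hedged Java sketch of the simplified algorithm (no jump factor, no dangling nodes), reusing the illustrative Text record format from the BFS sketch, with node records "N|<rank>|outlinks" and mass messages as bare doubles:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class SimplePageRank {
  public static class PRMapper extends Mapper<IntWritable, Text, IntWritable, Text> {
    @Override
    protected void map(IntWritable nid, Text value, Context ctx)
        throws IOException, InterruptedException {
      ctx.write(nid, value);                                 // pass structure along
      String[] f = value.toString().split("\\|", 3);
      double p = Double.parseDouble(f[1]);
      String[] adj = f[2].split(",");                        // simplified case: no dangling
      for (String m : adj) {                                 // nodes, so adj is non-empty
        ctx.write(new IntWritable(Integer.parseInt(m)),
                  new Text(Double.toString(p / adj.length))); // divide credit among outlinks
      }
    }
  }

  public static class PRReducer extends Reducer<IntWritable, Text, IntWritable, Text> {
    @Override
    protected void reduce(IntWritable nid, Iterable<Text> values, Context ctx)
        throws IOException, InterruptedException {
      String node = null;
      double sum = 0.0;
      for (Text v : values) {
        String s = v.toString();
        if (s.startsWith("N|")) node = s;                    // the structure record
        else sum += Double.parseDouble(s);                   // accumulate in-bound mass
      }
      String[] f = node.split("\\|", 3);
      ctx.write(nid, new Text("N|" + sum + "|" + f[2]));     // updated rank, same links
    }
  }
}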

Page 55: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Complete PageRank

Two additional complexities

What is the proper treatment of dangling nodes?

How do we factor in the random jump factor?

Solution:

Second pass to redistribute “missing PageRank mass” and

account for random jumps

p is PageRank value from before, p' is updated PageRank value

|G| is the number of nodes in the graph

m is the missing PageRank mass

How to perform bookkeeping for dangling nodes?

How to implement this 2nd pass in Hadoop?

$$p' = \alpha \left( \frac{1}{|G|} \right) + (1 - \alpha) \left( \frac{m}{|G|} + p \right)$$
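A minimal sketch of applying this update to one node in the second (map-only) pass; alpha, the node count |G|, and the missing mass m are assumed to be computed in pass one (e.g. via a counter) and handed to the mappers through the job configuration:

// p' combines the random jump with the node's share of the missing mass m.
static double redistribute(double p, double alpha, long numNodes, double missingMass) {
  return alpha * (1.0 / numNodes)
       + (1.0 - alpha) * (missingMass / numNodes + p);
}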

Page 56: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

PageRank Convergence

Alternative convergence criteria

Iterate until PageRank values don’t change

Iterate until PageRank rankings don’t change

Fixed number of iterations

Convergence for web graphs?

Page 57: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Local Aggregation

Use combiners

BFS uses min, PageRank uses sum

• associative and commutative

In-mapper combining design pattern also applicable

Opportunity for aggregation when mapper sees multiple nodes

with out-links to same destination node

How do we maximize opportunities for local aggregation?

Partition the dataset into clusters with many internal and few

external links

Chicken-and-egg problem: don’t we need MapReduce to do this?

• Use cheap heuristics

• e.g. social network: zip code or school

• e.g. for web: language or domain name

• etc.

[Figure: nodes m1, m2, m3 processed by one mapper all link to the same destination node n; their partial contributions d1, d2, d3 can be combined locally before the shuffle]
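A hedged sketch of such a combiner for the PageRank job above: it pre-sums mass messages bound for the same node and passes any structure record through untouched. Summing partially is safe precisely because sum is associative and commutative, and a combiner may run zero or more times:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class PRCombiner extends Reducer<IntWritable, Text, IntWritable, Text> {
  @Override
  protected void reduce(IntWritable nid, Iterable<Text> values, Context ctx)
      throws IOException, InterruptedException {
    double sum = 0.0;
    boolean sawMass = false;
    for (Text v : values) {
      String s = v.toString();
      if (s.startsWith("N|")) ctx.write(nid, new Text(s));  // pass structure through
      else { sum += Double.parseDouble(s); sawMass = true; }
    }
    if (sawMass) ctx.write(nid, new Text(Double.toString(sum)));  // one partial sum
  }
}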

Page 58: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

Limitations of MapReduce

• The amount of intermediate data (to shuffle) is proportional to the number of edges in the graph
• We have considered sparse graphs (i.e. with few edges), minimizing such intermediate data
• For dense graphs with O(n^2) edges, runtime would be dominated by copying intermediate data
• Consequently, MapReduce algorithms are often impractical on large, dense graphs
• But isn’t data-intensive computing exactly what MapReduce is supposed to help us with??

See Lin and Dyer, p. 101

Page 59: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

In-class Exercise:

All Pairs PBFS

Page 60: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

1: class Mapper
2:   method Map( Node N )
3:     d = N.Distance
4:     Emit( N.id, N )                      // pass along graph structure
5:     for all nid m in N.AdjacencyList do
6:       Emit( m, d + 1 )                   // emit distances to reachable nodes

1: class Reducer
2:   method Reduce( nid m, [d1, d2, ...] )
3:     dmin = ∞
4:     Node M = null
5:     for all d in [d1, d2, ...] do
6:       if IsNode(d) then
7:         M = d                            // recover graph structure
8:       else if d < dmin then
9:         dmin = d                         // shortest distance seen so far
10:    if dmin < M.Distance then M.Distance = dmin
11:    Emit( M )

1: class Mapper
2:   method Map( sid s, Node N )
3:     d = N[s].Distance
4:     Emit( Pair(sid, N.id), N )
5:     for all nid m in N.AdjacencyList do
6:       Emit( Pair(sid, m), d + 1 )

1: class Reducer
2:   method Reduce( Pair(sid s, nid m), [d1, d2, ...] )
3:     dmin = ∞
4:     M = null
5:     for all d in [d1, d2, ...] do
6:       if IsNode(d) then
7:         M = d
8:       else if d < dmin then
9:         dmin = d
10:    if dmin < M[s].Distance then M[s].Distance = dmin
11:    Emit( M )

Page 61: Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)

1: class Mapper
2:   method Map( sid s, Node N )
3:     d = N[s].Distance
4:     Emit( Pair(sid, N.id), N )
5:     for all nid m in N.AdjacencyList do
6:       Emit( Pair(sid, m), d + 1 )

1: class Reducer
2:   method Reduce( Pair(sid s, nid m), [d1, d2, ...] )
3:     dmin = ∞
4:     M = null
5:     for all d in [d1, d2, ...] do
6:       if IsNode(d) then
7:         M = d
8:       else if d < dmin then
9:         dmin = d
10:    if dmin < M[s].Distance then M[s].Distance = dmin
11:    Emit( M )

1: class Mapper
2:   method Map( sid s, Node N )
3:     d = N[s].Distance
4:     if sid = 0 then
5:       Emit( Pair(sid, N.id), N )         // emit graph structure only once
6:     for all nid m in N.AdjacencyList do
7:       Emit( Pair(sid, m), d + 1 )

Partition: all pairs with the same 2nd component (nid) go to the same reducer
KeyComp: order by sid, then nid; sort sid = 0 first

1: class Reducer
2:   M = null                               // instance state, kept across Reduce calls
3:   method Reduce( Pair(sid s, nid m), [d1, d2, ...] )
4:     dmin = ∞
5:     for all d in [d1, d2, ...] do
6:       if IsNode(d) then
7:         M = d
8:       else if d < dmin then
9:         dmin = d
10:    if dmin < M[s].Distance then M[s].Distance = dmin
11:    Emit( M )