
Embracing the Data Deluge: Data-Intensive Computing for the Masses


Page 1: Embracing the Data Deluge: Data-Intensive Computing for the Masses

Embracing the Data Deluge: Data-Intensive Computing for the Masses

Jimmy Lin
University of Maryland

Tuesday, July 13, 2010

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States. See http://creativecommons.org/licenses/by-nc-sa/3.0/us/ for details

Page 2: Embracing the Data Deluge: Data-Intensive Computing for the Masses

Introduction
We live in a world of large data… Staying relevant requires embracing it!

In text processing…
Emergence and dominance of empirical, data-driven research
Constant danger: uninteresting conclusions on “toy” datasets (or, experiments taking forever)

In the natural sciences…
Emergence of the 4th Paradigm: data-intensive eScience

Difficult computer science problems!
How do we practically scale to large datasets?
Case study in text processing: statistical machine translation
Case study in bioinformatics: DNA sequence alignment

Page 3: Embracing the Data Deluge: Data-Intensive Computing for the Masses

How much data?
Google processes 20 PB a day (2008)
Wayback Machine has 3 PB + 100 TB/month (3/2009)
eBay has 6.5 PB of user data + 50 TB/day (5/2009)
Facebook has 36 PB of user data + 80-90 TB/day (6/2010)
CERN’s LHC: 15 PB a year (any day now)
LSST: 6-10 PB a year (~2015)

640K ought to be enough for anybody.

Page 4: Embracing the Data Deluge: Data-Intensive Computing for the Masses

No data like more data!

(Banko and Brill, ACL 2001) (Brants et al., EMNLP 2007)

s/knowledge/data/g;

How do we get here if we’re not Google?

Page 5: Embracing the Data Deluge: Data-Intensive Computing for the Masses

cheap commodity clusters (or utility computing)
+ simple, distributed programming models
= data-intensive computing for the masses!

Source: flickr (turtlemom_nancy/2046347762)

Why is this different?

Path to data nirvana?

Page 6: Embracing the Data Deluge: Data-Intensive Computing for the Masses

Parallel computing is hard!

Message Passing and Shared Memory

[Diagram: processors P1 through P5 exchanging messages (message passing) vs. processors P1 through P5 attached to a single shared memory]

Different programming models

Different programming constructs: mutexes, condition variables, barriers, …; masters/slaves, producers/consumers, work queues, …

Fundamental issues: scheduling, data distribution, synchronization, inter-process communication, robustness, fault tolerance, …

Common problems: livelock, deadlock, data starvation, priority inversion, …; dining philosophers, sleeping barbers, cigarette smokers, …

Architectural issues: Flynn’s taxonomy (SIMD, MIMD, etc.), network topology, bisection bandwidth, UMA vs. NUMA, cache coherence

The reality: programmer shoulders the burden of managing concurrency… (I want my students developing new machine learning algorithms, not debugging race conditions)

Page 7: Embracing the Data Deluge: Data-Intensive Computing for the Masses

Source: Ricardo Guimarães Herrmann

Page 8: Embracing the Data Deluge: Data-Intensive Computing for the Masses

Source: MIT Open Courseware

Page 9: Embracing the Data Deluge: Data-Intensive Computing for the Masses
Page 10: Embracing the Data Deluge: Data-Intensive Computing for the Masses

Source: NY Times (6/14/2006)

The datacenter is the computer!

Page 11: Embracing the Data Deluge: Data-Intensive Computing for the Masses

MapReduce

Page 12: Embracing the Data Deluge: Data-Intensive Computing for the Masses

MapReduce
Functional programming meets distributed processing

Independent per-record processing in parallel Aggregation of intermediate results to generate final output

Programmers specify two functions:
map (k, v) → <k’, v’>*
reduce (k’, v’) → <k’, v’>*
All values with the same key are sent to the same reducer

The execution framework handles everything else…
Handles scheduling
Handles data management, transport, etc.
Handles synchronization
Handles errors and faults
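
To make the map/reduce contract above concrete, here is a minimal word-count sketch in plain Python (not from the original slides; the in-memory “shuffle” and the function names are purely illustrative):

```python
from collections import defaultdict

def map_fn(doc_id, text):
    """map (k, v) -> <k', v'>*: emit (word, 1) for every word in the document."""
    for word in text.split():
        yield (word, 1)

def reduce_fn(word, counts):
    """reduce (k', v') -> <k', v'>*: sum the partial counts for one word."""
    yield (word, sum(counts))

def run_job(documents):
    # Map phase: apply map_fn independently to every input record.
    intermediate = defaultdict(list)
    for doc_id, text in documents.items():
        for key, value in map_fn(doc_id, text):
            intermediate[key].append(value)   # shuffle/sort: group values by key
    # Reduce phase: all values with the same key go to the same reduce call.
    output = {}
    for key, values in intermediate.items():
        for out_key, out_value in reduce_fn(key, values):
            output[out_key] = out_value
    return output

if __name__ == "__main__":
    docs = {0: "it was the best of times", 1: "it was the worst of times"}
    print(run_job(docs))
    # {'it': 2, 'was': 2, 'the': 2, 'best': 1, 'of': 2, 'times': 2, 'worst': 1}
```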

Page 13: Embracing the Data Deluge: Data-Intensive Computing for the Masses

[Diagram: input records (k1, v1) … (k6, v6) are processed by four parallel map tasks, which emit intermediate pairs such as (a, 1), (b, 2), (c, 3), (c, 6), (a, 5), (c, 2), (b, 7), (c, 8); the shuffle and sort stage aggregates values by key, giving a → [1, 5], b → [2, 7], c → [2, 3, 6, 8]; three reduce tasks then produce the final outputs (r1, s1), (r2, s2), (r3, s3).]

Page 14: Embracing the Data Deluge: Data-Intensive Computing for the Masses

[Diagram adapted from Dean and Ghemawat (OSDI 2004): the user program (1) submits the job to the master, which (2) schedules map and reduce tasks onto workers; map workers (3) read the input splits (split 0 through split 4) and (4) write intermediate files to local disk; reduce workers (5) remotely read the intermediate data and (6) write the output files (output file 0, output file 1).]

(I want my students developing new machine learning algorithms, not debugging race conditions)

Page 15: Embracing the Data Deluge: Data-Intensive Computing for the Masses

MapReduce Implementations
Google has a proprietary implementation in C++
Bindings in Java, Python

Hadoop is an open-source implementation in Java
Development led by Yahoo, used in production
Now an Apache project
Rapidly expanding software ecosystem

Lots of custom research implementations
For GPUs, cell processors, etc.

Page 16: Embracing the Data Deluge: Data-Intensive Computing for the Masses

Case Study #1: Statistical Machine Translation

Chris Dyer (Linguistics Ph.D., 2010)

Page 17: Embracing the Data Deluge: Data-Intensive Computing for the Masses

Statistical Machine Translation

[Diagram: the training data consists of parallel sentences (e.g. “vi la mesa pequeña” / “i saw the small table”) and target-language text (e.g. “he sat at the table”, “the service was good”). Word alignment and phrase extraction over the parallel sentences produce the translation model (phrase pairs such as (vi, i saw), (la mesa pequeña, the small table), …), while the target-language text trains the language model. The decoder combines the two to turn the foreign input sentence “maria no daba una bofetada a la bruja verde” into the English output sentence “mary did not slap the green witch”.]
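
To make the phrase-extraction step concrete, here is a minimal single-sentence sketch of the standard consistency check for harvesting phrase pairs from a word alignment (an illustration only; the toy alignment links below are assumed, and the real pipeline distributes this computation with MapReduce):

```python
def extract_phrase_pairs(src, tgt, alignment, max_len=4):
    """Extract phrase pairs consistent with a word alignment.
    alignment: set of (i, j) pairs linking src[i] to tgt[j]."""
    pairs = []
    for i1 in range(len(src)):
        for i2 in range(i1, min(len(src), i1 + max_len)):
            # Target positions aligned to any source word in [i1, i2]
            tgt_pos = [j for (i, j) in alignment if i1 <= i <= i2]
            if not tgt_pos:
                continue
            j1, j2 = min(tgt_pos), max(tgt_pos)
            # Consistency: no link from outside the source span into [j1, j2]
            if any(j1 <= j <= j2 and not (i1 <= i <= i2) for (i, j) in alignment):
                continue
            pairs.append((" ".join(src[i1:i2 + 1]), " ".join(tgt[j1:j2 + 1])))
    return pairs

# Toy sentence pair from the slide; the alignment links are assumed for illustration.
src = "vi la mesa pequeña".split()
tgt = "i saw the small table".split()
links = {(0, 0), (0, 1), (1, 2), (2, 4), (3, 3)}   # vi-i saw, la-the, mesa-table, pequeña-small
for s, t in extract_phrase_pairs(src, tgt, links):
    print(f"({s}, {t})")
# Among the output: (vi, i saw) and (la mesa pequeña, the small table)
```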

Page 18: Embracing the Data Deluge: Data-Intensive Computing for the Masses

Translation as a Tiling Problem

[Diagram: candidate phrase translations for spans of “Maria no dio una bofetada a la bruja verde” include Mary; not, did not, no, did not give; give a slap, slap, a slap; to the, to, the, by; the witch, green witch. The decoder tiles the source sentence with compatible phrases, producing “Mary did not slap the green witch”.]
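
As a toy illustration of the tiling idea (not a real decoder, which searches over many tilings and reorderings and scores them with translation and language model probabilities), the sketch below greedily covers the source sentence left to right using a hand-built phrase table drawn from the options above:

```python
# Toy phrase table assembled by hand from the tiling options on the slide.
phrase_table = {
    ("maria",): "mary",
    ("no",): "did not",
    ("dio", "una", "bofetada"): "slap",
    ("dio", "una", "bofetada", "a"): "slap",
    ("a", "la"): "to the",
    ("la",): "the",
    ("bruja", "verde"): "green witch",
}

def greedy_tile(words):
    """Cover the source left to right, always taking the longest matching phrase.
    A real decoder searches over many tilings and scores them with its models."""
    i, output = 0, []
    while i < len(words):
        for length in range(len(words) - i, 0, -1):      # longest match first
            src_phrase = tuple(words[i:i + length])
            if src_phrase in phrase_table:
                output.append(phrase_table[src_phrase])
                i += length
                break
        else:
            output.append(words[i])   # untranslatable word: copy it through
            i += 1
    return " ".join(output)

print(greedy_tile("maria no dio una bofetada a la bruja verde".split()))
# -> "mary did not slap the green witch"
```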

Page 19: Embracing the Data Deluge: Data-Intensive Computing for the Masses

The Data Bottleneck

“Every time I fire a linguist, the performance of our … system goes up.” – Fred Jelinek

Page 20: Embracing the Data Deluge: Data-Intensive Computing for the Masses

[Diagram: the same statistical machine translation pipeline as on page 17 (training data → word alignment → phrase extraction → translation model; target-language text → language model; both feed the decoder), with the word alignment and phrase extraction stages singled out.]

We’ve built MapReduce implementations of these two components!

Page 21: Embracing the Data Deluge: Data-Intensive Computing for the Masses

HMM Alignment: Giza

Single-core commodity server

Page 22: Embracing the Data Deluge: Data-Intensive Computing for the Masses

HMM Alignment: MapReduce

Single-core commodity server

38-processor cluster

Page 23: Embracing the Data Deluge: Data-Intensive Computing for the Masses

HMM Alignment: MapReduce

38-processor cluster

1/38 of single-core commodity server

Page 24: Embracing the Data Deluge: Data-Intensive Computing for the Masses

What’s the point?
The optimally-parallelized version doesn’t exist!

MapReduce occupies a sweet spot in the design space for a large class of problems:
Fast… in terms of running time + scaling characteristics
Easy… in terms of programming effort
Cheap… in terms of hardware costs

Chris Dyer, Aaron Cordova, Alex Mont, and Jimmy Lin. Fast, Easy, and Cheap: Construction of Statistical Machine Translation Models with MapReduce. Proceedings of the Third Workshop on Statistical Machine Translation at ACL 2008

Page 25: Embracing the Data Deluge: Data-Intensive Computing for the Masses

Case Study #2: DNA Sequence Alignment

Michael Schatz (Computer Science Ph.D., 2010)

Page 26: Embracing the Data Deluge: Data-Intensive Computing for the Masses

Strangely-Formatted Manuscript
Dickens: A Tale of Two Cities

Text written on a long spool

It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, …

Page 27: Embracing the Data Deluge: Data-Intensive Computing for the Masses

… With Duplicates
Dickens: A Tale of Two Cities

“Backup” on four more copies

It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, …

It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, …

It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, …

It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, …

It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, …

Page 28: Embracing the Data Deluge: Data-Intensive Computing for the Masses

Shredded Book Reconstruction
Dickens accidentally shreds the manuscript
How can he reconstruct the text?
5 copies x 138,656 words / 5 words per fragment = 138k fragments
The short fragments from every copy are mixed together
Some fragments are identical

[Figure: each of the five copies is cut into overlapping five-word fragments, such as “It was the best of”, “of times, it was the”, “times, it was the worst”, “the age of wisdom, it”, …, and the fragments from all copies are mixed together.]

Page 29: Embracing the Data Deluge: Data-Intensive Computing for the Masses

Greedy Assembly

[Figure: fragments are greedily chained by their four-word overlaps, e.g. “It was the best of” → “was the best of times,” → “the best of times, it” → “best of times, it was” → “of times, it was the” → …; at “of times, it was the” the next fragment could be either “times, it was the worst” or “times, it was the age”.]

The repeated sequences make the correct reconstruction ambiguous!

Alternative: model sequence reconstruction as a graph problem…

Page 30: Embracing the Data Deluge: Data-Intensive Computing for the Masses

de Bruijn Graph Construction
Dk = (V, E)
V = All length-k subfragments (k < l)
E = Directed edges between consecutive subfragments
(Nodes overlap by k-1 words)

Locally constructed graph reveals the global structure
Overlaps between sequences implicitly computed

Original fragment: “It was the best of” → directed edge from node “It was the best” to node “was the best of”

(de Bruijn, 1946; Idury and Waterman, 1995; Pevzner, Tang, and Waterman, 2001)
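
A minimal sketch of this construction on word fragments (an illustration of the idea only, not Schatz's implementation, which builds the graph over DNA k-mers with MapReduce):

```python
from collections import defaultdict

def build_de_bruijn(fragments, k):
    """Nodes are length-k subfragments; a directed edge joins the two
    consecutive length-k windows of each fragment (they overlap by k-1 words)."""
    edges = defaultdict(set)
    for frag in fragments:
        words = frag.split()
        for i in range(len(words) - k):
            src = tuple(words[i:i + k])
            dst = tuple(words[i + 1:i + k + 1])
            edges[src].add(dst)   # identical windows collapse to one node,
    return edges                  # so overlaps are computed implicitly

fragments = [
    "It was the best of",
    "was the best of times,",
    "the best of times, it",
]
graph = build_de_bruijn(fragments, k=4)
for src, dsts in graph.items():
    for dst in dsts:
        print(" ".join(src), "->", " ".join(dst))
# "It was the best -> was the best of", etc.
```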

Page 31: Embracing the Data Deluge: Data-Intensive Computing for the Masses

de Bruijn Graph Assembly

the age of foolishness

It was the best

best of times, it

was the best of

the best of times,

of times, it was

times, it was the

it was the worst

was the worst of

worst of times, it

the worst of times,

it was the age

was the age of

the age of wisdom,

age of wisdom, it

of wisdom, it was

wisdom, it was the

A unique Eulerian tour of the graph reconstructs the original text
If a unique tour does not exist, try to simplify the graph as much as possible
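
For intuition, here is a small single-machine sketch of finding an Eulerian path with Hierholzer's algorithm and stitching the visited nodes back into text (illustrative only; the node labels below are assumed, and the distributed assembler works very differently):

```python
from collections import defaultdict

def eulerian_path(edges):
    """Hierholzer's algorithm for a directed Eulerian path.
    edges: dict mapping node -> list of successor nodes (multi-edges allowed)."""
    out = {v: list(nbrs) for v, nbrs in edges.items()}
    indeg = defaultdict(int)
    for v, nbrs in edges.items():
        for w in nbrs:
            indeg[w] += 1
    # Start where out-degree exceeds in-degree by one (or anywhere on a cycle).
    start = next((v for v in out if len(out[v]) - indeg[v] == 1), next(iter(out)))
    stack, path = [start], []
    while stack:
        v = stack[-1]
        if out.get(v):
            stack.append(out[v].pop())     # follow an unused edge
        else:
            path.append(stack.pop())       # dead end: record node in the tour
    return path[::-1]

# Toy graph: each node is a 4-word window, edges overlap by 3 words.
edges = {
    ("It", "was", "the", "best"): [("was", "the", "best", "of")],
    ("was", "the", "best", "of"): [("the", "best", "of", "times,")],
    ("the", "best", "of", "times,"): [("best", "of", "times,", "it")],
}
tour = eulerian_path(edges)
# Stitch the tour back into text: keep the first node, then the last word of each node.
print(" ".join(tour[0]) + " " + " ".join(node[-1] for node in tour[1:]))
# -> "It was the best of times, it"
```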

Page 32: Embracing the Data Deluge: Data-Intensive Computing for the Masses

de Bruijn Graph Assembly

the age of foolishness

It was the best of times, it

of times, it was the

it was the worst of times, it

it was the age of

the age of wisdom, it was the

A unique Eulerian tour of the graph reconstructs the original text
If a unique tour does not exist, try to simplify the graph as much as possible

Page 33: Embracing the Data Deluge: Data-Intensive Computing for the Masses

[Figure: an unknown subject genome goes through a sequencer, which produces a very large set of short, overlapping reads such as ATGCTTACTATGCGGGCCCCTT; assembly must reconstruct the genome from these reads.]

Human genome: 3 Gbp
A few billion short reads (~100 GB compressed data)

Present solutions: large shared-memory machines or clusters with high-speed interconnects

Can we get by with MapReduce on cheap commodity clusters?

Page 34: Embracing the Data Deluge: Data-Intensive Computing for the Masses

Graph Compression

Challenges
– Nodes stored on different machines
– Nodes can only access direct neighbors

Randomized Solution
– Randomly assign H / T to each compressible node
– Compress H → T links
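
A single-machine sketch of the coin-flip idea on a simple chain (assumed details for illustration; in the real setting each round is a MapReduce job and a node sees only its direct neighbors):

```python
import random

def compress_round(chain):
    """One round of randomized compression on a simple chain of node labels.
    Every node flips heads or tails; an H node absorbs the T node that follows it,
    so each node only needs its own flip and its successor's flip."""
    flips = {node: random.choice("HT") for node in chain}
    merged, i = [], 0
    while i < len(chain):
        if i + 1 < len(chain) and flips[chain[i]] == "H" and flips[chain[i + 1]] == "T":
            merged.append(chain[i] + "+" + chain[i + 1])   # compress the H -> T link
            i += 2
        else:
            merged.append(chain[i])
            i += 1
    return merged

chain = [f"n{i}" for i in range(10)]
for round_no in range(1, 21):          # a handful of rounds suffices in expectation
    chain = compress_round(chain)
    print(f"round {round_no}: {len(chain)} nodes")
    if len(chain) == 1:
        break
```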

Page 35: Embracing the Data Deluge: Data-Intensive Computing for the Masses

Fast Graph Compression

Initial Graph: 42 nodes

Page 36: Embracing the Data Deluge: Data-Intensive Computing for the Masses

Fast Graph Compression

Round 1: 26 nodes (38% savings)

Page 37: Embracing the Data Deluge: Data-Intensive Computing for the Masses

Fast Graph Compression

Round 2: 15 nodes (64% savings)

Page 38: Embracing the Data Deluge: Data-Intensive Computing for the Masses

Fast Graph Compression

Round 3: 6 nodes (86% savings)

Page 39: Embracing the Data Deluge: Data-Intensive Computing for the Masses

Fast Graph Compression

Round 4: 5 nodes (88% savings)

Page 40: Embracing the Data Deluge: Data-Intensive Computing for the Masses

Contrail De Novo Assembly of the Human Genome

Genome: African male NA18507 (SRA000271, Bentley et al., 2008)

Input: 3.5B 36bp reads, 210bp insert (~40x coverage)

Graph size after each stage (N = number of nodes, Max = longest node):

Initial: N > 7 B, Max 27 bp
Compressed: N > 1 B, Max 303 bp
Clip Tips: N 5.0 M, Max 14,007 bp
Pop Bubbles: N 4.2 M, Max 20,594 bp

[Diagrams: clipping a short dead-end tip B′ that branches off the path through A and B; popping a bubble where parallel nodes B and B′ both connect A to C]

Assembly of Large Genomes with Cloud Computing. Schatz MC, Sommer D, Kelley D, Pop M, et al. In Preparation.

Page 41: Embracing the Data Deluge: Data-Intensive Computing for the Masses

Source: flickr (fatboyke/2918399820)

Page 42: Embracing the Data Deluge: Data-Intensive Computing for the Masses

Source: flickr (60in3/2338247189)

Page 43: Embracing the Data Deluge: Data-Intensive Computing for the Masses

Best thing since sliced bread?
Distributed programming models:
MapReduce is the first
Definitely not the only
And probably not even the best
Alternatives: Pig, Dryad/DryadLINQ, Pregel, etc.

It’s all about the right level of abstraction
The von Neumann architecture won’t cut it anymore

Separating the what from how
Developer specifies the computation that needs to be performed
Execution framework handles actual execution
Framework hides system-level details from the developers

Page 44: Embracing the Data Deluge: Data-Intensive Computing for the Masses

Source: NY Times (6/14/2006)

The datacenter is the computer!
What are the appropriate abstractions for the datacenter computer?

Page 45: Embracing the Data Deluge: Data-Intensive Computing for the Masses

Source: flickr (infidelic/3008675635)

Page 46: Embracing the Data Deluge: Data-Intensive Computing for the Masses

Source: Wikipedia (Tide)

Commoditization of large-data processing capabilities allows us to ride the rising tide!

Page 47: Embracing the Data Deluge: Data-Intensive Computing for the Masses

Questions?