Presentation given at the TDWI Executive Summit 2009 in San Diego, California.
Amr Awadallah, CTO, Cloudera, Inc.
August 5, 2009
How Hadoop Revolutionized Data
Warehousing at Yahoo and Facebook
Outline
• Problems We Wanted to Solve
• What is Hadoop?
• HDFS and MapReduce
• Access Languages for Hadoop
• Hadoop vs RDBMSes
• Conclusion
Our Older Systems Limited Raw Data Access
[Diagram: Instrumentation → Collection feeds a storage farm for unstructured data (20 TB/day, mostly append) and, via an ETL grid, an RDBMS (200 GB/day) serving BI/reports. Ad hoc queries and data mining against the raw data go unserved ("non-consumption"), and the filer heads are a bottleneck.]
We Needed To Be More Agile (part 1)

• Data Errors and Reprocessing
– We encountered data errors that required reprocessing, sometimes long after the fact. "Tape data" was cost-prohibitive to reprocess, so we needed to keep raw data online for long periods.

• Conformation Loss
– Converting data from its raw format into conformed dimensions loses some information. We needed access to the original data to recover that information whenever needed (e.g., a new browser user agent).

• Shrinking ETL Window
– The storage filers for raw data became a significant bottleneck, since large amounts of data had to be copied to the ETL grid for processing (e.g., 30 hours to process one day's worth of data).

• Ad Hoc Queries on Raw Data
– We wanted to run ad hoc queries against the original raw event data, but the storage filers could only store data, not compute on it.
We Needed To Be More Agile (part 2)

• Data Model Agility: Schema-on-Read vs. Schema-on-Write
– We wanted to access data even before it had a schema; frequently a new product or feature would launch, but we couldn't build its dashboards because its schemas weren't defined yet.
– Schema-on-Read costs more machine time (due to read overhead), but it let us evolve in an agile way, then materialize to relational data marts once the data model stabilized.

• Consolidated Repository and Ubiquitous Access
– We wanted to eliminate borders and have a single repository where anybody can store, join, and process any of our data.

• Beyond Reporting (Data-as-Product)
– Last but not least, we wanted to process the data in ways that feed directly into the product and business (e.g., email spam filtering, ad targeting, collaborative filtering, multimedia processing).
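The schema-on-read idea above can be sketched in a few lines of Python. This is an illustrative toy, not Hadoop code; the log format and field names here are hypothetical:

```python
import csv
import io

# Raw event lines land in storage with no predeclared schema.
RAW_LOGS = "2009-08-05\tsearch\tMozilla/5.0\n2009-08-05\tclick\tNewBrowser/1.0\n"

def read_events(raw, schema):
    """Apply a schema at read time: each reader projects whichever
    fields it understands today, without a table migration."""
    for row in csv.reader(io.StringIO(raw), delimiter="\t"):
        yield dict(zip(schema, row))

# A new field (user_agent) becomes queryable by just extending the
# schema list -- the raw data is never reloaded or converted.
events = list(read_events(RAW_LOGS, ["date", "action", "user_agent"]))
```

With schema-on-write, the second record's unfamiliar user agent would have had to wait for a schema change before it could be loaded at all.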
The Solution: A Store-Compute Grid
[Diagram: Instrumentation → Collection feeds a single storage + computation grid (mostly append). The grid runs ETL and aggregations and serves ad hoc queries and data mining directly; an RDBMS fed from the grid backs interactive apps, while "batch" apps run on the grid itself.]
What is Hadoop?
• A scalable fault-tolerant grid operating system for data storage and processing
• Its scalability comes from the marriage of:
– HDFS: self-healing, high-bandwidth clustered storage
– MapReduce: fault-tolerant distributed processing
• Operates on unstructured and structured data
• A large and active ecosystem (many developers and additions like HBase, Hive, Pig, …)
• Open source under the friendly Apache License
• http://wiki.apache.org/hadoop/
Hadoop History

• 2002–2004: Doug Cutting and Mike Cafarella started working on Nutch (a web-scale crawler-based search system)
• 2003–2004: Google publishes the GFS and MapReduce papers
• 2004: Cutting adds DFS & MapReduce support to Nutch
• 2006: Yahoo! hires Cutting; Hadoop spins out of Nutch
• 2007: NY Times converts 4 TB of archives over 100 EC2s
• 2008: Web-scale deployments at Yahoo!, Facebook, Last.fm
• April 2008: Fastest sort of a TB, 3.5 minutes over 910 nodes
• May 2009:
– Fastest sort of a TB, 62 seconds over 1,460 nodes
– Sorted a PB in 16.25 hours over 3,658 nodes
– 100s of deployments worldwide (http://wiki.apache.org/hadoop/PoweredBy)
• June 2009: Hadoop Summit 2009 – 750 attendees
Hadoop Design Axioms
1. System Shall Manage and Heal Itself
2. Performance Shall Scale Linearly
3. Compute Should Move to Data
4. Simple Core, Modular and Extensible
HDFS: Hadoop Distributed File System

• Block Size = 64 MB
• Replication Factor = 3
• Cost/GB is a few ¢/month vs. $/month
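The raw-capacity arithmetic behind those defaults is easy to check. A minimal sketch (block size and replication are both configurable in HDFS; these are just the defaults quoted above):

```python
BLOCK_SIZE = 64 * 1024 * 1024   # 64 MB default block size
REPLICATION = 3                 # default replication factor

def hdfs_usage(file_size_bytes):
    """Return (number of blocks, total raw bytes consumed on the
    cluster) for a file of the given logical size."""
    blocks = -(-file_size_bytes // BLOCK_SIZE)  # ceiling division
    return blocks, file_size_bytes * REPLICATION

# A 1 GB file occupies 16 blocks and 3 GB of raw disk cluster-wide.
blocks, raw_bytes = hdfs_usage(1024 * 1024 * 1024)
```

The large block size keeps the Name Node's metadata small and favors long sequential scans, which is exactly the access pattern MapReduce generates.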
MapReduce: Distributed Processing
MapReduce Example for Word Count
[Diagram: the input is divided into N splits; M map tasks each consume (docid, text) records and emit (word, count) pairs; the shuffle phase sorts the pairs by word and routes them to R reduce tasks, each of which sums the counts and writes an output file of (sorted word, total count). Example input: "To Be Or Not To Be?"; partial counts for "Be" of 5, 12, 7, and 6 from different mappers combine to 30.]

Map(in_key, in_value) => list of (out_key, intermediate_value)
Reduce(out_key, list of intermediate_values) => out_value(s)
cat *.txt | mapper.pl | sort | reducer.pl > out.txt
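The mapper.pl / reducer.pl stages in that Unix pipeline could equally be written in Python. Below is a local, pure-Python simulation of map | sort | reduce for word count; it mimics the streaming contract (mapper emits key-value pairs, reducer sees them grouped by key) rather than using the actual Hadoop streaming API:

```python
from itertools import groupby

def mapper(lines):
    """Emit (word, 1) for every word, like mapper.pl in the pipe above."""
    for line in lines:
        for word in line.split():
            yield word, 1

def reducer(pairs):
    """Sum counts per word; assumes pairs arrive sorted by word,
    which is what the shuffle (or `sort`) guarantees."""
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

# Local simulation of: cat | mapper | sort | reducer
text = ["to be or not to be"]
counts = dict(reducer(sorted(mapper(text))))
```

In a real cluster the `sorted()` call is replaced by Hadoop's distributed shuffle, and many mapper and reducer processes run in parallel over HDFS blocks.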
Hadoop Is More Than Just Analytics/BI

• Building the Web Search Index
• Processing News/Content Feeds
• Content/Ad Targeting Optimization
• Fraud Detection and Fighting Email Spam
• Facebook Lexicon: Trends of words on walls
• Collaborative Filtering (you might like)
• Batch Video/Image Transcoding
• Gene Sequence Alignment
Apache Hadoop Ecosystem
[Diagram: the stack, bottom to top: HDFS (Hadoop Distributed File System); HBase (key-value store); MapReduce (job scheduling/execution system); Pig (data flow) and Hive (SQL); BI reporting and ETL tools on top. Avro (serialization) and ZooKeeper (coordination) run alongside the whole stack, and Sqoop connects Hadoop to RDBMSes.]
Hadoop Development Languages

• Java MapReduce
– Gives the most flexibility and performance, but with a potentially longer development cycle
• Streaming MapReduce
– Allows you to develop in any language of your choice, but with slightly slower performance
• Pig
– A relatively new data-flow language contributed by Yahoo, suitable for ETL-like workloads (procedural multi-stage jobs)
• Hive
– A SQL warehouse on top of MapReduce (contributed by Facebook). It has two main components:
1. A meta-store which keeps the schema for files, and
2. An interpreter which converts the SQL query into MapReduce jobs
Hive Features
• A subset of SQL covering the most common statements
• Agile data types: Array, Map, Struct, and JSON objects
• User Defined Functions and Aggregates
• Regular Expression support
• MapReduce support
• JDBC support
• Partitions and Buckets (for performance optimization)
• In The Works: Indices, Columnar Storage, Views, MicroStrategy compatibility, Explode/Collect
• More details: http://wiki.apache.org/hadoop/Hive
Hadoop vs. Relational Databases

Relational Databases:
• An ACID database system
• Stores tables (schema)
• Stores 100s of terabytes
• Processes 10s of TB/query
• Transactional consistency
• Looks up rows using indexes
• Mostly queries
• Interactive response

Hadoop:
• A data grid operating system
• Stores files (unstructured)
• Stores 10s of petabytes
• Processes 10s of PB/job
• Weak consistency
• Scans all blocks in all files
• Queries & data processing
• Batch response (>1 sec)

Use The Right Tool For The Right Job
Hadoop Criticisms (part 1)

• Hadoop MapReduce requires rocket scientists
– Hadoop has the benefit of both worlds: the simplicity of SQL and the power of Java (or any other language, for that matter)
• Hadoop is not very efficient hardware-wise
– Hadoop optimizes for scalability, stability, and flexibility rather than squeezing every tiny bit of hardware performance
– It is more cost-efficient to add "pizza box" servers to gain performance than to hire more engineers to manage, configure, and optimize the system, or to pay 10x the hardware cost in software
• Hadoop can't do quick random lookups
– HBase enables low-latency key-value pair lookups (though no fast joins)
• Hadoop doesn't support updates/inserts/deletes
– Not for multi-row transactions, but HBase enables transactions with row-level consistency semantics
Hadoop Criticisms (part 2)
• Hadoop isn't highly available
– Though Hadoop rarely loses data, it can suffer downtime if the master NameNode goes down. This issue is currently being addressed, and there are HW/OS/VM solutions for it
• Hadoop can't be backed up/recovered quickly
– HDFS, like other file systems, can copy files very quickly. It also has utilities to copy data between HDFS clusters
• Hadoop doesn't have security
– Hadoop has Unix-style user/group permissions, and the community is working on improving its security model
• Hadoop can't talk to other systems
– Hadoop can talk to BI tools using JDBC, to RDBMSes using Sqoop, and to other systems using FUSE, WebDAV & FTP
Conclusion
Hadoop is a data grid operating system that augments current BI systems and improves their agility by providing an economically scalable solution for storing and processing large amounts of unstructured data over long periods of time.
Contact Information

• If you have further questions or comments:

Amr Awadallah
CTO, Cloudera
[email protected]
twitter.com/awadallah
twitter.com/cloudera
APPENDIX
Hadoop High-Level Architecture
• Name Node: maintains the mapping of file blocks to Data Node slaves
• Job Tracker: schedules jobs across Task Tracker slaves
• Data Node: stores and serves blocks of data
• Task Tracker: runs tasks (work units) within a job; shares a physical node with a Data Node
• Hadoop Client: contacts the Name Node for data, or the Job Tracker to submit jobs
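The Name Node's role above can be modeled as a simple metadata lookup. This is a toy illustration of the idea, not the real RPC protocol; the file path, block IDs, and node names are invented:

```python
# Toy model of Name Node metadata: file -> blocks -> replica locations.
namenode = {
    "/logs/2009-08-05.txt": {
        "blk_001": ["datanode1", "datanode4", "datanode7"],
        "blk_002": ["datanode2", "datanode5", "datanode8"],
    }
}

def locate_blocks(path):
    """A client asks the Name Node where a file's blocks live,
    then streams block data directly from the Data Nodes."""
    return namenode[path]

locations = locate_blocks("/logs/2009-08-05.txt")
```

Because clients only consult the Name Node for metadata and read the blocks themselves from Data Nodes, the bulk data path never flows through the master.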