Big Data and Cloud Computing


Cloud and Big Data

Farzad Nozarian (fnozarian@aut.ac.ir)

Amirkabir University of Technology

With the help of Dr. Amir H. Payberah (amir@sics.se)

Big Data Analytics Stack

Hadoop Big Data Analytics Stack

Spark Big Data Analytics Stack

Big Data - File systems

• Traditional file systems are not well-designed for large-scale data processing systems.

• Efficiency has a higher priority than other features, e.g., directory service.

• The massive size of the data means it tends to be stored across multiple machines in a distributed way.

• HDFS/GFS, Amazon S3, ...

Big Data - Database

• Relational Database Management Systems (RDBMSs) were not designed to be distributed.

• NoSQL databases relax one or more of the ACID properties: BASE

• Different data models: key/value, column-family, graph, document.

• HBase/BigTable, Dynamo, Scalaris, Cassandra, MongoDB, Voldemort, Riak, Neo4J, ...

Big Data - Resource Management

• Different frameworks require different computing resources.

• Large organizations need the ability to share data and resources between multiple frameworks.

• Resource management systems share the resources of a cluster between multiple frameworks while providing resource isolation.

• Mesos, YARN, Quincy, ...

Big Data - Execution Engine

• Scalable and fault-tolerant parallel data processing on clusters of unreliable machines.

• Data-parallel programming model for clusters of commodity machines.

• MapReduce, Spark, Stratosphere, Dryad, Hyracks, ...

Big Data - Query/Scripting Language

• Low-level programming of execution engines, e.g., MapReduce, is not easy for end users.

• A high-level language is needed to improve the query capabilities of execution engines.

• It translates user-defined functions into the low-level API of the execution engine.

• Pig, Hive, Shark, Meteor, DryadLINQ, SCOPE, ...

Big Data - Stream Processing

• Provides users with fresh and low-latency results.

• Database Management Systems (DBMS) vs. Data Stream Management Systems (DSMS)

• Storm, S4, SEEP, D-Stream, Naiad, ...

Big Data - Graph Processing

• Many problems are expressed using graphs: they have sparse computational dependencies and require multiple iterations to converge.

• Data-parallel frameworks, such as MapReduce, are not ideal for these problems: they are slow.

• Graph processing frameworks are optimized for graph-based problems.

• Pregel, Giraph, GraphX, GraphLab, PowerGraph, GraphChi, ...

Big Data - Machine Learning

• Implementing and consuming machine learning techniques at scale are difficult tasks for developers and end users.

• There exist platforms that address it by providing scalable machine learning and data mining libraries.

• Mahout, MLBase, SystemML, Ricardo, Presto, ...

Big Data - Configuration and Synchronization Service

• A means to synchronize distributed applications’ accesses to shared resources.

• Allows distributed processes to coordinate with each other.

• Zookeeper, Chubby, ...

Hadoop Ecosystem

Hadoop Ecosystem-HDFS

• A foundational component of the Hadoop ecosystem is the Hadoop Distributed File System (HDFS).

• HDFS is the mechanism by which a large amount of data can be distributed over a cluster of computers, and data is written once, but read many times for analytics.

• It provides the foundation for other tools, such as HBase.

Hadoop Ecosystem-MapReduce

• Hadoop’s main execution framework is MapReduce, a programming model for distributed, parallel data processing that breaks jobs into map phases and reduce phases.

• Developers write MapReduce jobs for Hadoop, using data stored in HDFS for fast data access.

• Because of the way MapReduce works, Hadoop brings the processing to the data in a parallel fashion, resulting in fast processing.

Hadoop Ecosystem-HBase

• A column-oriented NoSQL database built on top of HDFS, HBase is used for fast read/write access to large amounts of data.

• HBase uses Zookeeper for its management to ensure that all of its components are up and running.

Hadoop Ecosystem-Zookeeper

• Zookeeper is Hadoop’s distributed coordination service.

• Designed to run over a cluster of machines, it is a highly available service used for the management of Hadoop operations, and many components of Hadoop depend on it.

Hadoop Ecosystem-Oozie

• A scalable workflow system

• Oozie is integrated into the Hadoop stack, and is used to coordinate execution of multiple MapReduce jobs.

• It is capable of managing a significant amount of complexity, basing execution on external events that include timing and presence of required data.

Hadoop Ecosystem-Pig

• An abstraction over the complexity of MapReduce programming

• The Pig platform includes an execution environment and a scripting language (Pig Latin) used to analyze Hadoop data sets.

• Its compiler translates Pig Latin into sequences of MapReduce programs.

Hadoop Ecosystem-Hive

• An SQL-like, high-level language used to run queries on data stored in Hadoop.

• Hive enables developers not familiar with MapReduce to write data queries that are translated into MapReduce jobs in Hadoop.

Hadoop Ecosystem-Sqoop

• A connectivity tool for moving data between relational databases and data warehouses and Hadoop.

• Sqoop leverages the database to describe the schema for the imported/exported data, and MapReduce for parallelization and fault tolerance.

Hadoop Ecosystem-Flume

• A distributed, reliable, and highly available service for efficiently collecting, aggregating, and moving large amounts of data from individual machines to HDFS.

• Provides streaming data flows.

• Allows data to be moved from multiple machines within an enterprise into Hadoop.

Beyond the core components

• Whirr — This is a set of libraries that allows users to easily spin-up Hadoop clusters on top of Amazon EC2, Rackspace, or any virtual infrastructure.

• Mahout — This is a machine-learning and data-mining library that provides MapReduce implementations for popular algorithms used for clustering, regression testing, and statistical modeling.

• BigTop — This is a formal process and framework for packaging and interoperability testing of Hadoop’s sub-projects and related components.

• Ambari — This is a project aimed at simplifying Hadoop management by providing support for provisioning, managing, and monitoring Hadoop clusters.

Storing Data in Hadoop

HDFS - HBase

HDFS-Architecture

• The HDFS design is based on the design of the Google File System (GFS).

• To be able to store a very large amount of data (terabytes or petabytes)

• HDFS is designed to spread the data across a large number of machines, and to support much larger file sizes compared to distributed filesystems such as NFS.

• HDFS uses data replication

• To better integrate with Hadoop’s MapReduce, HDFS allows data to be read and processed locally.

HDFS-Architecture

• HDFS is implemented as a block-structured file system.

HDFS-Using HDFS Files

• User applications access the HDFS file system using an HDFS client

• ACCESSING HDFS

• FileSystem (FS) shell

• HDFS Java APIs

HDFS-Using HDFS Files
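
To make the Java API route concrete, the following is a minimal sketch of an HDFS client that writes a small file and reads it back. The path /tmp/example.txt and the default Configuration are illustrative assumptions, not values from the original slides.

// Minimal HDFS Java API sketch: write a small file once, then read it back.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsClientExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();   // picks up core-site.xml / hdfs-site.xml
    FileSystem fs = FileSystem.get(conf);       // client handle to the configured file system

    Path file = new Path("/tmp/example.txt");   // hypothetical HDFS path
    try (FSDataOutputStream out = fs.create(file, true)) {          // write once
      out.write("hello HDFS\n".getBytes(StandardCharsets.UTF_8));
    }

    try (BufferedReader in = new BufferedReader(
            new InputStreamReader(fs.open(file), StandardCharsets.UTF_8))) {
      System.out.println(in.readLine());        // read many times
    }
  }
}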

HBase-Architecture

• HBase is a distributed, versioned, column-oriented, multidimensional storage system, designed for high performance and high availability.

• HBase is an open source implementation of Google’s BigTable architecture.

• Similar to traditional relational database management systems (RDBMSs), data in HBase is organized in tables.

• Unlike RDBMSs, however, HBase supports a very loose schema definition, and does not provide any joins, query language, or SQL.

• The main focus of HBase is on Create, Read, Update, and Delete (CRUD) operations on wide sparse tables.

• HBase leverages HDFS for its persistent data storage.
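
To illustrate the CRUD focus, the following is a minimal sketch using the HBase Java client API; the table name "users", the column family "info", and the row key "user-1" are hypothetical examples, not part of the original material.

// Minimal HBase client sketch: connect, then Put, Get, and Delete a single row.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseCrudExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();  // Zookeeper quorum from hbase-site.xml
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("users"))) {

      // Create/Update: values live in a column family; the schema is otherwise loose.
      Put put = new Put(Bytes.toBytes("user-1"));
      put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("Alice"));
      table.put(put);

      // Read a single row and extract one cell.
      Result result = table.get(new Get(Bytes.toBytes("user-1")));
      byte[] name = result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"));
      System.out.println(Bytes.toString(name));

      // Delete the row.
      table.delete(new Delete(Bytes.toBytes("user-1")));
    }
  }
}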

Processing Data with MapReduce

MapReduce-Roadmap

• Understanding MapReduce fundamentals

• Getting to know MapReduce application execution

• Understanding MapReduce application design

MapReduce-Getting to Know

• MapReduce is a framework for executing highly parallelizable and distributable algorithms across huge data sets using a large number of commodity computers.

• Inspired by these concepts, it was introduced by Google in 2004.

• MapReduce was introduced to solve large-data computational problems, and is specifically designed to run on commodity hardware.

• It is based on divide-and-conquer principles — the input data sets are split into independent chunks, which are processed by the mappers in parallel.

MapReduce-Getting to Know

MapReduce-Execution Pipeline

MapReduce-Runtime Coordination and Task Management

Word Count Implementation-Map Phase
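
The original slide showed the map phase of the word count job; the following is a minimal sketch of that mapper using the standard Hadoop Java API, with the class name WordCountMapper chosen here for illustration.

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
  private static final IntWritable ONE = new IntWritable(1);
  private final Text word = new Text();

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    // The input key is the byte offset of the line; the value is the line itself.
    StringTokenizer tokenizer = new StringTokenizer(value.toString());
    while (tokenizer.hasMoreTokens()) {
      word.set(tokenizer.nextToken());
      context.write(word, ONE);    // emit (word, 1) for every token
    }
  }
}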

Word Count Implementation-Reduce Phase
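
Likewise, a minimal sketch of the reduce phase: all counts emitted for the same word arrive grouped at one reducer, which sums them.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
  @Override
  protected void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable count : values) {
      sum += count.get();
    }
    context.write(key, new IntWritable(sum));   // emit (word, total count)
  }
}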

Word Count Implementation-Driver
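
And a minimal sketch of the driver, which wires the mapper and reducer sketched above into a Job and submits it; the input and output paths are taken from the command line.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCountDriver.class);
    job.setMapperClass(WordCountMapper.class);
    job.setCombinerClass(WordCountReducer.class);   // optional local aggregation
    job.setReducerClass(WordCountReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));    // input directory in HDFS
    FileOutputFormat.setOutputPath(job, new Path(args[1]));  // output directory (must not exist)
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

With the three classes packaged into a JAR, the job would typically be launched with hadoop jar, passing the input and output directories as the two arguments.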

Designing MapReduce Implementations

Necessary questions to reformulate the initial problem in terms of MapReduce

• How do you break up a large problem into smaller tasks? More specifically, how do you decompose the problem so that the smaller tasks can be executed in parallel?

• Which key/value pairs can you use as inputs/outputs of every task?

• How do you bring together all the data required for calculation? More specifically, how do you organize the processing so that all the data necessary for calculation is in memory at the same time?

Simple Data Processing with MapReduce

Inverted Indexes Example
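
As a sketch of this example, the mapper below emits (term, document) pairs, using the input file name as the document identifier, and the reducer collects the set of documents for each term. The class names and the choice of the file name as document id are assumptions for illustration.

import java.io.IOException;
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.StringTokenizer;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

public class InvertedIndex {

  public static class IndexMapper extends Mapper<LongWritable, Text, Text, Text> {
    private final Text term = new Text();
    private final Text docId = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      // Use the input file name as the document identifier.
      docId.set(((FileSplit) context.getInputSplit()).getPath().getName());
      StringTokenizer tokenizer = new StringTokenizer(value.toString());
      while (tokenizer.hasMoreTokens()) {
        term.set(tokenizer.nextToken().toLowerCase());
        context.write(term, docId);             // emit (term, document)
      }
    }
  }

  public static class IndexReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
        throws IOException, InterruptedException {
      Set<String> docs = new LinkedHashSet<>(); // de-duplicate document ids per term
      for (Text doc : values) {
        docs.add(doc.toString());
      }
      context.write(key, new Text(String.join(",", docs)));
    }
  }
}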

Building Joins with MapReduce

• Two “standard” implementations exist for joining data in MapReduce:

• Reduce-side join

• Map-side join

• The most common implementation of a join is a reduce-side join (a sketch follows below).

• A map-side join works very well in the case of one-to-one joins, where at most one record from every data set has the same key.
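
The following is a minimal sketch of a reduce-side join, under the assumption of pipe-delimited input lines where each record carries a tag for its source data set ("U" for users, "O" for orders) and a shared user id as the join key; the reducer buffers both sides for a key and emits their pairing.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class ReduceSideJoin {

  public static class TaggingMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      // Assumed line formats: "U|userId|name" or "O|userId|orderId".
      String[] fields = value.toString().split("\\|");
      String tag = fields[0];
      String joinKey = fields[1];
      context.write(new Text(joinKey), new Text(tag + "|" + fields[2]));
    }
  }

  public static class JoinReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
        throws IOException, InterruptedException {
      List<String> users = new ArrayList<>();
      List<String> orders = new ArrayList<>();
      for (Text v : values) {                   // records from both data sets share this key
        String[] parts = v.toString().split("\\|");
        if (parts[0].equals("U")) { users.add(parts[1]); } else { orders.add(parts[1]); }
      }
      for (String user : users) {               // emit the pairing per key
        for (String order : orders) {
          context.write(key, new Text(user + "\t" + order));
        }
      }
    }
  }
}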

Road Enrichment Example

A simplified road enrichment algorithm

1. Find all links connected to a given node. For example, node N1 has links L1, L2, L3, and L4, while node N2 has links L4, L5, and L6.

2. Based on the number of lanes for every link at the node, calculate the road width at the intersection.

3. Based on the road width, calculate the intersection geometry.

4. Based on the intersection geometry, move the road’s end point to tie it to the intersection geometry.

Algorithm assumptions

• A node is described with an object N with the key NN1… NNm. For example, node N1 can be described as NN1 and N2 as NN2. All nodes are stored in the nodes input file.

• A link is described with an object L with the key LL1… LLm. For example, link L1 can be described as LL1, L2 as LL2, and so on. All the links are stored in the links source file.

• Also introduce an object of the type link or node (LN), which can have any key.

• Finally, it is necessary to define two more types — intersection (S) and road (R).

Phase 1: Calculation of Intersection Geometry and Moving the Road’s End Points Job

Phase 2: Merge Roads Job

Links Elevation Example

• This problem can be defined as follows: given a links graph and a terrain model, convert two-dimensional (x, y) links into three-dimensional (x, y, z) links. This process is called link elevation.

Simplified link elevation algorithm

1. Split every link into fixed-length fragments (for example, 10 meters).

2. For every piece, calculate heights (from the terrain model) for both the start and end points.

3. Combine the pieces together into the original links.

Phase 1: Split Links into Pieces and Elevate Each Piece Job
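
A rough sketch of what this Phase 1 mapper could look like, assuming a simplified input line format "linkId x1 y1 x2 y2" and a stubbed terrain-model lookup; each elevated piece is keyed by the original link id so that the Phase 2 job can reassemble the links. This is not the original implementation; it only illustrates the decomposition.

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class LinkElevationMapper extends Mapper<LongWritable, Text, Text, Text> {
  private static final double PIECE_LENGTH_M = 10.0;

  // Placeholder for the terrain-model lookup; a real job would query a terrain store.
  private double heightAt(double x, double y) {
    return 0.0; // stub
  }

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    // Assumed line format: "linkId x1 y1 x2 y2".
    String[] f = value.toString().split("\\s+");
    String linkId = f[0];
    double x1 = Double.parseDouble(f[1]), y1 = Double.parseDouble(f[2]);
    double x2 = Double.parseDouble(f[3]), y2 = Double.parseDouble(f[4]);

    double length = Math.hypot(x2 - x1, y2 - y1);
    int pieces = Math.max(1, (int) Math.ceil(length / PIECE_LENGTH_M));
    for (int i = 0; i < pieces; i++) {
      // Step 1: cut the link into fixed-length fragments by linear interpolation.
      double sx = x1 + (x2 - x1) * i / pieces,       sy = y1 + (y2 - y1) * i / pieces;
      double ex = x1 + (x2 - x1) * (i + 1) / pieces, ey = y1 + (y2 - y1) * (i + 1) / pieces;
      // Step 2: elevate both end points of the piece.
      double zs = heightAt(sx, sy), ze = heightAt(ex, ey);
      // Key by the original link id so the Phase 2 reducer sees all pieces of one link.
      context.write(new Text(linkId),
          new Text(i + " " + sx + " " + sy + " " + zs + " " + ex + " " + ey + " " + ze));
    }
  }
}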

Phase 2: Combine Link’s Pieces into Original Links Job
