THE GOOGLE FILE SYSTEM CS 595 LECTURE 8 3/2/2015


Page 1

THE GOOGLE FILE SYSTEM

CS 595

LECTURE 8

3/2/2015

Page 2

TYPES OF CLOUDS

• Public, Private, Hybrid Clouds

• Names do not necessarily dictate location

• Type may depend on whether temporary or permanent

Page 3

SEND ME INTERESTING LINKS ABOUT CLOUDS

Page 4

PAPER TO READ

• Read the paper on GFS: Evolution on Fast-Forward

• Also linked: a longer paper on GFS – the original paper from 2003

• I assume you are reading papers as specified in the class schedule

Page 5

THE ORIGINAL GOOGLE FILE SYSTEM (GFS)

Page 6

• During the lecture, you should point out problems with GFS design decisions

Page 7

COMMON GOALS OF GFS AND MOST DISTRIBUTED FILE SYSTEMS

• Performance

• Reliability

• Scalability

• Availability

Page 8

GFS DESIGN CONSIDERATIONS

• Component failures are the norm rather than the exception.

• The file system consists of hundreds or even thousands of storage machines built from inexpensive commodity parts.

• Files are Huge. Multi-GB files are common.

• Each file typically contains many application objects such as web documents.

• Append, Append, Append.

• Most files are mutated by appending new data rather than overwriting existing data.

• Co-Designing

• Co-designing the applications and the file system API benefits the overall system by increasing flexibility

Page 9

GFS

• Why assume hardware failure is the norm?

• The number of layers in a distributed system means a failure in any one of them could contribute to data corruption:

• Application
• OS
• Memory
• Disk
• Network
• Physical connections
• Power

• It is cheaper to assume frequent failure of inexpensive hardware and account for it than to invest in expensive hardware and still experience occasional failure.

• The quantity and quality of the components guarantee that some are not functional at any given time.

• Constant monitoring, error detection, fault tolerance, and automatic recovery must exist in GFS

Page 10

INITIAL ASSUMPTIONS

• System is built from inexpensive commodity components that fail

• Modest number of files:

• a few million files, typically 100 MB or larger.

• Multi-GB files are common.

• Didn’t optimize for smaller files

• 2 kinds of reads:

• large streaming reads (~1 MB per operation)

• small random reads (batch and sort)

• Well-defined semantics:

• Master/slave, producer/consumer, and many-way merge workloads; one producer per machine appends to a file.

• Atomic R/W

• High sustained bandwidth chosen over low latency (difference?)

Page 11

HIGH BANDWIDTH VERSUS LOW LATENCY

• Example:

• An airplane flying across the country filled with backup tapes has very high bandwidth because it delivers all of the data to the destination faster than any existing network

• However, each individual piece of data has high latency
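The tradeoff can be made concrete with back-of-the-envelope numbers (the tape count, capacity, and flight time below are hypothetical, chosen only for illustration):

```python
# Hypothetical numbers: a sketch of the airplane-of-tapes comparison.
TAPES = 10_000
TAPE_CAPACITY_GB = 1_000          # 1 TB per tape (assumed)
FLIGHT_HOURS = 5                  # cross-country flight (assumed)

total_gb = TAPES * TAPE_CAPACITY_GB
# Effective bandwidth: total data divided by transfer time.
bandwidth_gbps = total_gb * 8 / (FLIGHT_HOURS * 3600)   # gigabits per second
# Latency: the first byte still takes the whole flight to arrive.
latency_seconds = FLIGHT_HOURS * 3600

print(f"bandwidth: {bandwidth_gbps:.0f} Gbit/s, latency: {latency_seconds} s")
```

Thousands of Gbit/s of throughput, yet hours of latency per byte: exactly the profile GFS optimizes for with large streaming reads.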

Page 12

INTERFACE

• GFS – familiar file system interface

• Files organized hierarchically in directories, path names

• Create, delete, open, close, read, write

• Snapshot and record append (allows multiple clients to append simultaneously)

• This means atomic read/writes – not transactions!

Page 13

MASTER/SERVERS (SLAVES)

• Single master, multiple chunkservers

• Each file is divided into fixed-size chunks of 64 MB

• Typical file system block size: 4 KB

• Chunks stored by chunkservers on local disks as Linux files

• Immutable and globally unique 64 bit chunk handle (name or number) assigned at creation

• Size of namespace: 2^64 chunk handles

• Size of storage capacity: 2^64 × 64 MB

• In practice limited by the master's capacity to hold metadata
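A quick sanity check on these limits (the constants come straight from the slide; the conclusion is that the handle space, not the math, is never the bottleneck):

```python
CHUNK_HANDLE_BITS = 64
CHUNK_SIZE_MB = 64

num_handles = 2 ** CHUNK_HANDLE_BITS          # distinct chunk names available
capacity_mb = num_handles * CHUNK_SIZE_MB     # upper bound on addressable storage

# Roughly 1.2e21 MB -- far beyond any practical deployment, so the real
# constraint is the master's ability to hold metadata, not the namespace.
print(f"{num_handles} handles, {capacity_mb:.3e} MB addressable")
```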

Page 14

Page 15

MASTER/SERVERS

• Reads or writes of chunk data are specified by a chunk handle and byte range

• Each chunk replicated on multiple chunkservers – default is 3

• Why?

Page 16

MASTER/SERVERS

• Master maintains all file system metadata

– Namespace, access control info, mapping from files to chunks, location of chunks

– Controls garbage collection of chunks

– Reclaim physical storage when files are deleted

– Communicates with each chunkserver through HeartBeat messages

– Gives instructions and collects state

– Clients interact with the master for metadata; chunkservers do the rest, e.g. R/W on behalf of applications

– No caching of file data

• For clients: working sets are too large to cache, and not caching simplifies coherence

• For chunkservers: chunks are already stored as local files, and Linux caches frequently used data in memory

Page 17

HEARTBEATS

• What do we gain from Heartbeats?

• Not only does the master learn the current state of each remote system; heartbeats also reveal failures.

• Any system that fails to respond to a heartbeat message is assumed dead. This information allows the master to update its metadata accordingly.

• This also cues the master to create more replicas of the lost data.
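The detect-and-re-replicate logic can be sketched as a toy model (not real GFS code; the timeout value, class, and server names are invented for illustration):

```python
import time

class Master:
    """Toy heartbeat tracker: mark servers dead after a silence, queue repairs."""
    TIMEOUT = 10.0  # seconds without a heartbeat before a server is presumed dead

    def __init__(self):
        self.last_seen = {}       # chunkserver id -> time of last heartbeat
        self.rereplicate = []     # dead servers whose chunks need new replicas

    def heartbeat(self, server_id, now=None):
        self.last_seen[server_id] = time.time() if now is None else now

    def check(self, now=None):
        now = time.time() if now is None else now
        for sid, t in list(self.last_seen.items()):
            if now - t > self.TIMEOUT:
                # Presumed dead: drop it and queue its chunks for re-replication.
                del self.last_seen[sid]
                self.rereplicate.append(sid)

m = Master()
m.heartbeat("cs1", now=0.0)
m.heartbeat("cs2", now=0.0)
m.heartbeat("cs1", now=8.0)   # cs1 keeps reporting; cs2 goes silent
m.check(now=12.0)             # cs2 is 12 s stale -> presumed dead
```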

Page 18

CLIENT

• Client translates an offset in the file into a chunk index within the file

• Send master request with file name/chunk index

• Master replies with chunk handle and location of replicas

• Client caches info using file name/chunk index as key

• Client sends request to one of the replicas (closest)

• Further reads of same chunk require no interaction

• Can ask for multiple chunks in same request
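The read path above can be sketched in a few lines (illustrative Python; the class names are invented, and the master is faked so the caching effect is visible):

```python
CHUNK_SIZE = 64 * 2**20   # 64 MB chunks, per the slides

def chunk_index(offset):
    """Translate a byte offset within a file into a chunk index."""
    return offset // CHUNK_SIZE

class Client:
    """Sketch of the GFS client read path (names are illustrative)."""
    def __init__(self, master):
        self.master = master
        self.cache = {}   # (filename, chunk index) -> (handle, replica locations)

    def locate(self, filename, offset):
        key = (filename, chunk_index(offset))
        if key not in self.cache:              # contact the master only on a miss
            self.cache[key] = self.master.lookup(*key)
        return self.cache[key]

class FakeMaster:
    """Stand-in for the master; counts lookups to show caching."""
    def __init__(self):
        self.calls = 0
    def lookup(self, filename, index):
        self.calls += 1
        return (f"handle-{index}", ["cs1", "cs2", "cs3"])

master = FakeMaster()
c = Client(master)
c.locate("web.log", 0)
c.locate("web.log", 1024)                             # same chunk: cache hit
handle, replicas = c.locate("web.log", 128 * 2**20)   # chunk 2: new lookup
```

Two lookups serve three reads: further reads of a cached chunk require no master interaction, which is exactly why the single master avoids becoming a bottleneck.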

Page 19

MASTER OPERATIONS

• Master executes all namespace operations

• Manages chunk replicas

• Makes placement decision

• Creates new chunks (and replicas)

• Coordinates various system-wide activities to keep chunks fully replicated

• Balance load

• Reclaim unused storage

Page 20

• Do you see any problems?

• Do you question any design decisions?

Page 21

MASTER - JUSTIFICATION

• Single Master –

• Simplifies design

• Placement, replication decisions made with global knowledge

• Does not handle file data R/W, so it is not a bottleneck

• Client asks master which chunkservers to contact

Page 22

CHUNK SIZE - JUSTIFICATION

• 64 MB, larger than the typical file system block size

• Replica stored as plain Linux file, extended as needed

• Lazy space allocation

• Reduces interaction of client with master

• R/W on the same chunk requires only one request to the master

• Mostly R/W large sequential files

• Likely to perform many operations on given chunk (keep persistent TCP connection)

• Reduces size of metadata stored on master

Page 23

CHUNK PROBLEMS

• But –

• If small file – one chunk may be hot spot

• Can fix this with a higher replication factor and by staggering batch application start times

Page 24

Page 25

METADATA

• 3 types:

• File and chunk namespaces

• Mapping from files to chunks

• Location of each chunk’s replicas

• All metadata is kept in the memory of the master

• The first two types are stored in logs for persistence (on the master's local disk and replicated remotely)

Page 26

METADATA

• The master does not persistently keep track of chunk location info

• Instead it polls each chunkserver for which replicas it holds

• The master controls all chunk placement

• Disks may go bad, chunkservers may err, etc., so the chunkservers themselves are the authority on what they store

Page 27

METADATA - JUSTIFICATION

• In memory – fast

• Periodically scans state

• garbage collect

• Re-replication if chunkserver failure

• Migration to load balance

• Master maintains < 64 B data for each 64 MB chunk

• File namespace data: < 64 B per file

• To support larger file systems:

• Add extra RAM/Disks to the master (cheap)
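Plugging in a hypothetical cell size shows why this is cheap (the 1 PB figure is invented for illustration; the 64 B and 64 MB constants are from the slides):

```python
CHUNK_SIZE = 64 * 2**20        # 64 MB per chunk
METADATA_PER_CHUNK = 64        # < 64 B of metadata per chunk, per the slides

def master_metadata_bytes(total_storage_bytes):
    """Upper-bound estimate of chunk metadata held in master memory."""
    chunks = total_storage_bytes // CHUNK_SIZE
    return chunks * METADATA_PER_CHUNK

# A hypothetical 1 PB cell needs only ~1 GiB of chunk metadata in RAM:
meta = master_metadata_bytes(2**50)
print(f"{meta / 2**30:.0f} GiB of metadata for 1 PiB of data")
```

A gigabyte of RAM per petabyte of storage is why "add extra RAM to the master" scales the file system so cheaply.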

Page 28

CHUNK SIZE (AGAIN)- JUSTIFICATION

• 64 MB is large – think of typical size of email

• Why Large Files?

• METADATA!

• Every file in the system adds to the total overhead metadata that the system must store.

• More individual files mean more data about the data is needed.

Page 29

OPERATION LOG

• Historical record of critical metadata changes

• Provides logical time line of concurrent ops

• Log replicated on remote machines

• Flush record to disk locally and remotely

• Log kept small – checkpoint when it exceeds a size threshold

• Checkpoint in B-tree form

• New checkpoint built without delaying mutations (takes about 1 min for 2 M files)

• Only keep latest checkpoint and subsequent logs
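The flush-then-checkpoint flow might be sketched like this (a toy model: the threshold is arbitrary, the lists stand in for disks, and a dict stands in for the real B-tree checkpoint):

```python
class OperationLog:
    """Toy write-ahead log with checkpointing (illustrative, not GFS code)."""
    CHECKPOINT_THRESHOLD = 3   # checkpoint once the log grows past this

    def __init__(self):
        self.local_log = []     # stands in for the master's local disk
        self.remote_log = []    # stands in for the remote log replicas
        self.checkpoint = {}    # latest checkpointed state
        self.state = {}         # in-memory metadata

    def append(self, key, value):
        record = (key, value)
        # Flush the record locally AND remotely before applying it,
        # so a crash never loses an acknowledged mutation.
        self.local_log.append(record)
        self.remote_log.append(record)
        self.state[key] = value
        if len(self.local_log) > self.CHECKPOINT_THRESHOLD:
            self._checkpoint()

    def _checkpoint(self):
        # Persist the full state, then discard the now-redundant log records;
        # only the latest checkpoint and subsequent log entries are kept.
        self.checkpoint = dict(self.state)
        self.local_log.clear()
        self.remote_log.clear()

log = OperationLog()
for i in range(5):
    log.append(f"/file{i}", "chunk-handles")
```

Recovery would load the latest checkpoint and replay the (short) remaining log, which is why checkpointing keeps restart time bounded.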

Page 30

SNAPSHOT

• Snapshot makes copy of file

• Used to create checkpoint or branch copies of huge data sets

• First revokes leases on chunks

• Newly created snapshot points to same chunks as source file

• After snapshot, client sends request to master to find lease holder

• Master gives the lease to a new copy of the chunk
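The copy-on-write behavior behind snapshots can be sketched as follows (illustrative; the handle names and reference-count mechanics are simplified stand-ins for the real protocol):

```python
class Namespace:
    """Toy snapshot-by-reference with copy-on-write (illustrative sketch)."""
    def __init__(self):
        self.files = {}     # filename -> list of chunk handles
        self.leases = {}    # chunk handle -> current lease holder
        self.refcount = {}  # chunk handle -> number of files referencing it

    def create(self, name, handles):
        self.files[name] = list(handles)
        for h in handles:
            self.refcount[h] = self.refcount.get(h, 0) + 1

    def snapshot(self, src, dst):
        # First revoke leases so no client keeps writing through stale handles...
        for h in self.files[src]:
            self.leases.pop(h, None)
        # ...then share the chunks instead of copying any data.
        self.create(dst, self.files[src])

    def write(self, name, index):
        h = self.files[name][index]
        if self.refcount[h] > 1:          # chunk is shared: copy before writing
            self.refcount[h] -= 1
            h2 = h + "'"                  # invented name for the new copy
            self.files[name][index] = h2
            self.refcount[h2] = 1
            h = h2
        self.leases[h] = name             # grant a fresh lease on the copy
        return h

ns = Namespace()
ns.create("/data", ["c1", "c2"])
ns.snapshot("/data", "/data.snap")
new_handle = ns.write("/data", 0)   # triggers a copy; the snapshot keeps c1
```

The snapshot itself is nearly free; the cost is deferred to the first write against each shared chunk.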

Page 31

SHADOW MASTER

• Master Replication

• Replicated for reliability

• Not mirrors, so may lag primary slightly (fractions of second)

• A shadow master reads a replica of the operation log and applies the same sequence of changes to its data structures as the primary does

Page 32

SHADOW MASTER

• If the Master fails:

• Start a shadow instantly

• Read-only access to the file system continues even when the primary master is down

• If machine or disk fails, monitor outside GFS starts new master with replicated log

• Clients only use canonical name of master

Page 33

CREATION, RE-REPLICATION, REBALANCING

• Master creates chunks

• Place replicas on chunkservers with below-average disk utilization

• Limit number of recent creates per chunkserver

• New chunks may be hot

• Spread replicas across racks

• Re-replicate

• When number of replicas falls below goal

• Chunkserver unavailable, corrupted, etc.

• Replicate based on priority (fewest replicas)

• Master limits number of active clone ops
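The placement rules above can be turned into a small heuristic sketch (an invented implementation that follows the bullets; the thresholds, field names, and data layout are all assumptions):

```python
def place_replicas(servers, n=3, max_recent_creates=2):
    """Pick n chunkservers for a new chunk (illustrative heuristic).

    servers: list of dicts with 'id', 'utilization', 'rack', 'recent_creates'.
    Prefers below-average disk utilization, caps recent creates per server
    (since new chunks may be hot), and spreads replicas across racks.
    """
    avg = sum(s["utilization"] for s in servers) / len(servers)
    candidates = sorted(
        (s for s in servers
         if s["utilization"] <= avg and s["recent_creates"] < max_recent_creates),
        key=lambda s: s["utilization"],
    )
    chosen, racks = [], set()
    for s in candidates:                   # prefer one replica per rack
        if s["rack"] not in racks:
            chosen.append(s["id"])
            racks.add(s["rack"])
        if len(chosen) == n:
            break
    return chosen

servers = [
    {"id": "cs1", "utilization": 0.30, "rack": "r1", "recent_creates": 0},
    {"id": "cs2", "utilization": 0.35, "rack": "r1", "recent_creates": 0},
    {"id": "cs3", "utilization": 0.40, "rack": "r2", "recent_creates": 0},
    {"id": "cs4", "utilization": 0.90, "rack": "r3", "recent_creates": 0},
    {"id": "cs5", "utilization": 0.45, "rack": "r3", "recent_creates": 0},
]
picked = place_replicas(servers)
```

Note cs2 is skipped despite low utilization because cs1 already covers rack r1, and the nearly full cs4 is excluded outright.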

Page 34

CREATION, RE-REPLICATION, REBALANCING

• Rebalance

• Periodically moves replicas for better disk space and load balancing

• Gradually fills up new chunkserver

• Removes replicas from chunkservers with below-average free space
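A single rebalancing step might look like this toy sketch (the move policy and replica counts are invented; it shows how a newly added, empty chunkserver fills up gradually, one move at a time):

```python
def rebalance_step(servers):
    """One illustrative rebalance move: shift a replica from the fullest
    chunkserver (least free space) to the emptiest one."""
    src = max(servers, key=lambda s: servers[s])
    dst = min(servers, key=lambda s: servers[s])
    if servers[src] - servers[dst] > 1:    # only move while imbalanced
        servers[src] -= 1
        servers[dst] += 1
    return src, dst

# replica counts per server; "cs_new" was just added and is empty
load = {"cs1": 120, "cs2": 118, "cs_new": 0}
for _ in range(50):                        # gradual: one replica per step
    rebalance_step(load)
```

Moving one replica per step (rather than flooding the new server) mirrors the slide's point that rebalancing fills up a new chunkserver gradually.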