OpenNebula Techday Ceph Introduction (2014-06-24) – Jean-Charles Lopez



Page 1:

OpenNebula Techday

Ceph Introduction (2014-06-24) – Jean-Charles Lopez

Page 2:

Presentations Objectives

© 2012-2014 Inktank, Inc.

By the end of this presentation, you should be able to:

•  Understand Ceph
   –  Project history
   –  Concepts
   –  Versions

•  Understand Ceph components
   –  RADOSGW
   –  MONs, OSDs & MDSs
   –  RBD
   –  CRUSH
   –  CephFS
   –  Clients

•  Know about the MIMO project

Page 3:

STORAGE LANDSCAPE & CEPH

OpenNebula Techday

Page 4:

Storage challenges today


•  Money
   –  More data, same budgets
   –  Need a low cost per gigabyte
   –  No vendor lock-in

•  Time
   –  Ease of administration
   –  No manual data migration or load balancing
   –  Seamless scaling, both expansion and contraction

Page 5:

Ceph - Foundations


•  A new philosophy
   –  Open source
   –  Community-focused, which makes for a strong, sustainable ecosystem

•  A new design
   –  Scalable (every component)
   –  No single point of failure
   –  Software-based, runs on commodity hardware
   –  Self-managing
   –  Flexible
   –  Unified

Page 6:

CEPH COMPONENTS

OpenNebula Techday

Page 7:

Ceph Storage Cluster Components


•  They include:
   –  OSDs (Object Storage Devices)
   –  Monitors (cluster map management)
   –  MDSs (Metadata Servers)
   –  RADOSGW (S3/Swift-compatible gateway)
   –  CRUSH (data placement algorithm)
   –  Placement groups
   –  Pools
   –  The Ceph journal

Page 8:

Ceph - Monitors


•  What do they do?
   –  They maintain the cluster map
   –  They provide consensus for distributed decision-making

•  What do they NOT do?
   –  They don't serve stored objects to clients

•  How many do we need?
   –  There must be an odd number of MONs
   –  3 is the minimum number of MONs

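The odd-number rule can be sketched in a few lines. This is an illustrative sketch, not Ceph code: the monitors reach consensus through an algorithm (Paxos) that requires a strict majority, and the function names below are hypothetical.

```python
# Illustrative sketch (not Ceph code): why monitor counts are odd.
# Quorum requires a strict majority of the configured monitors.

def quorum_size(n_mons: int) -> int:
    """Smallest strict majority of n_mons monitors."""
    return n_mons // 2 + 1

def failures_tolerated(n_mons: int) -> int:
    """How many monitors can fail while a majority survives."""
    return n_mons - quorum_size(n_mons)

for n in (1, 2, 3, 4, 5):
    print(n, quorum_size(n), failures_tolerated(n))
# 3 and 4 monitors both tolerate a single failure, so a fourth
# (even) monitor adds load without adding resilience.
```

This is why 3 is the practical minimum: it is the smallest count that keeps quorum through one monitor failure.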

Page 9:

Ceph - OSD Daemon

[Diagram: a Ceph OSD daemon running on top of a filesystem (FS) on a disk]

•  Ceph Object Storage Device Daemon
   –  Intelligent storage servers
   –  Serve stored objects to clients

•  An OSD is primary for some objects
   –  Responsible for replication
   –  Responsible for coherency
   –  Responsible for re-balancing
   –  Responsible for recovery

•  An OSD is secondary for some objects
   –  Under control of the primary
   –  Capable of becoming primary

•  Supports extended object classes
   –  Atomic transactions
   –  Synchronization and notifications
   –  Send computation to the data

Page 10:

Ceph – Storage Node

[Diagram: a storage node hosting several disks; each disk carries a filesystem (btrfs, xfs, or ext4) with a Ceph OSD daemon on top, alongside the cluster monitors]

Page 11:

CEPH DATA PLACEMENT

OpenNebula Techday

Page 12:

Ceph - CRUSH


•  CRUSH Algorithm
   –  Data is first split into a certain number of sections (placement groups)
   –  The number of placement groups (PGs) is configurable
   –  Placement rules determine the target PG in the cluster
   –  This is a pseudo-random but deterministic calculation
   –  A new map is retrieved every time changes occur
   –  A new map is retrieved before any operation is performed
   –  The CRUSH map is maintained by the Monitors
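The calculation can be sketched as follows. This is an illustrative sketch only, NOT the real CRUSH algorithm: `object_to_pg` and `pg_to_osds` are hypothetical names, and md5 plus a seeded `random.sample` stand in for Ceph's own hash and bucket-selection rules. What it does show is the key property from the bullets above: placement is computed, not looked up, so every client reaches the same answer independently.

```python
# Illustrative sketch -- NOT the real CRUSH algorithm.
# Placement is a deterministic, pseudo-random calculation that any
# client can repeat; no central table maps objects to locations.
import hashlib
import random

def object_to_pg(obj_name: str, pg_num: int) -> int:
    """Hash the object name onto one of pg_num placement groups."""
    h = int(hashlib.md5(obj_name.encode()).hexdigest(), 16)
    return h % pg_num

def pg_to_osds(pg: int, osds: list, replicas: int = 3) -> list:
    """Pick `replicas` distinct OSDs, seeded by the PG id."""
    rng = random.Random(pg)  # same PG id -> same OSD set, on every client
    return rng.sample(osds, replicas)

osds = list(range(9))                       # nine hypothetical OSD ids
pg = object_to_pg("myobject", pg_num=128)   # which PG holds "myobject"
targets = pg_to_osds(pg, osds)              # which OSDs hold that PG
assert pg_to_osds(pg, osds) == targets      # recomputing is identical
```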

Page 13:

Ceph – Placement Groups

Page 14:

Ceph – From Object To OSD

Page 15:

Ceph – Hierarchy In Action

ceph@daisy:~$ sudo ceph osd tree
# id  weight    type name       up/down  reweight
-5    0.03      root ssd
-4    0.03        host frank
 2    0.009995      osd.2       up       1
 4    0.009995      osd.4       up       1
 8    0.009995      osd.8       up       1
-1    0.06      root default
-2    0.03        host daisy
 0    0.009995      osd.0       up       1
 3    0.009995      osd.3       up       1
 6    0.009995      osd.6       up       1
-3    0.03        host eric
 1    0.009995      osd.1       up       1
 5    0.009995      osd.5       up       1
 7    0.009995      osd.7       up       1

Page 16:

Ceph - Pools


•  Pools
   –  "Split" a RADOS object store into numerous logical parts
   –  Are not a contiguous storage area on the OSDs
   –  Are just a "name tag" for grouping objects
   –  Have their own set of permissions
      o  Let you choose which user can access which pool
      o  Let you choose whether that access is read or write
      o  So not every user can access every pool
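The "name tag" idea can be modeled in a few lines. This is an illustrative sketch, not Ceph's actual auth model: the store, the capability table, and every name in it are hypothetical. It shows the two points above: objects live in one flat store where the pool is just part of the key, and a per-user, per-pool capability decides read vs. write access.

```python
# Illustrative sketch (not Ceph's auth model): a pool as a "name tag"
# on objects, plus per-user, per-pool read/write capabilities.
store = {}  # (pool, object_name) -> data: pools are not separate areas

caps = {
    "app-writer": {"images": "rw"},
    "app-reader": {"images": "r"},
}

def put(user: str, pool: str, name: str, data: bytes) -> None:
    if "w" not in caps.get(user, {}).get(pool, ""):
        raise PermissionError(f"{user} may not write to pool {pool}")
    store[(pool, name)] = data

def get(user: str, pool: str, name: str) -> bytes:
    if "r" not in caps.get(user, {}).get(pool, ""):
        raise PermissionError(f"{user} may not read pool {pool}")
    return store[(pool, name)]

put("app-writer", "images", "vm-disk-1", b"disk-bytes")
assert get("app-reader", "images", "vm-disk-1") == b"disk-bytes"
```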

Page 17:

Ceph - Replication

[Diagram: objects are written synchronously from the placement group's primary copy on one OSD to the placement group's replicas on other OSDs]
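The synchronous flow above can be sketched as follows. This is an illustrative sketch, not Ceph code, and every class and function name is hypothetical: the client writes to the PG's primary OSD, the primary copies the object to each replica, and the client is acknowledged only once all copies exist.

```python
# Illustrative sketch (not Ceph code) of synchronous replication:
# the ack is returned only after every replica holds the object.

class OSD:
    def __init__(self, name: str):
        self.name = name
        self.objects = {}

    def store(self, key: str, data: bytes) -> None:
        self.objects[key] = data

def client_write(primary: OSD, replicas: list, key: str, data: bytes) -> str:
    primary.store(key, data)      # 1. the primary copy is written first
    for r in replicas:            # 2. the primary replicates synchronously
        r.store(key, data)
    return "ack"                  # 3. ack only after all copies exist

primary = OSD("osd.0")
replicas = [OSD("osd.3"), OSD("osd.7")]
assert client_write(primary, replicas, "obj1", b"data") == "ack"
assert all("obj1" in o.objects for o in [primary] + replicas)
```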

Page 18:

CEPH UNIFIED STORAGE

OpenNebula Techday

Page 19:

Ceph - Unified Storage

Page 20:

Ceph - LIBRADOS

•  Ceph offers access to the RADOS object store through
   –  a C API (librados.h)
   –  a C++ API (librados.hpp)

•  A documented and simple API
•  Bindings for various languages
•  Does not implement striping

Page 21:

Ceph – RADOS Gateway

[Diagram: S3 clients, Swift clients, and the RadosGW admin API speak RESTful HTTP to radosgw, which translates each request into Ceph operations (librados) against the Ceph cluster]

Page 22:

CephFS

[Diagram: a client obtains maps from the monitors, then stores file data and metadata in the cluster]

The Ceph File System (CephFS) is a parallel file system that provides a massively scalable, single-hierarchy, shared disk. At the time of this presentation, CephFS is not recommended for production environments, and it should not be mounted on a host that is a node in the Ceph Object Store.

Page 23:

Ceph – RADOS Block Device

•  Block-storage interfaces are the most common way to store data on disk

•  Allows storage of virtual disks in the Ceph Object Store

•  Allows decoupling of VMs and containers from their storage

•  High performance is gained from striping the data across the cluster

•  Boot support in QEMU, KVM, and OpenStack (Cinder)

•  Mount support in the Linux kernel
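The striping point deserves a sketch. This is an illustrative sketch with a hypothetical helper function: an RBD image is chopped into fixed-size RADOS objects (4 MiB by default), so a large sequential read or write touches many objects, and therefore many OSDs, in parallel.

```python
# Illustrative sketch: striping an RBD image across the cluster.
# Each fixed-size chunk of the image is a separate RADOS object,
# placed independently, so large I/Os hit many OSDs in parallel.
ORDER = 22                      # RBD default object size: 2**22 = 4 MiB

def byte_range_to_objects(offset: int, length: int, order: int = ORDER) -> list:
    """Object indexes covering [offset, offset + length) of the image."""
    size = 2 ** order
    first = offset // size
    last = (offset + length - 1) // size
    return list(range(first, last + 1))

# A 16 MiB read starting at offset 2 MiB spans five 4 MiB objects,
# so up to five OSDs can serve it concurrently:
print(byte_range_to_objects(2 * 2**20, 16 * 2**20))  # [0, 1, 2, 3, 4]
```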

Page 24:

Ceph RBD – Native Mount

Machines (even those running on bare metal) can mount an RBD image using native Linux kernel drivers

Page 25:

Ceph RBD - Virtual Machines

[Diagram: a VM inside a virtualization container accesses its virtual disk through LIBRBD on top of LIBRADOS, backed by the Ceph cluster and its monitors]

Page 26:

CEPH RBD FEATURES

OpenNebula Techday

Page 27:

Snapshots

•  How do they work?
   –  Snapshots are created instantly
   –  A snapshot is a read-only copy of the data at the time of its creation
   –  No space is consumed until the original data changes
   –  The snapshot never changes
   –  Incremental RBD snapshots are supported
   –  Unchanged data is read from the original
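The bullets above describe copy-on-write, which can be sketched in a few lines. This is an illustrative sketch, not Ceph internals, and the `Image` class is hypothetical: creating a snapshot copies nothing; only when an original block is overwritten is its old content preserved for the snapshot, and unchanged blocks are still read from the original.

```python
# Illustrative sketch (not Ceph internals): why a snapshot is instant
# and free until the original data changes.

class Image:
    def __init__(self):
        self.blocks = {}   # live image data, block index -> bytes
        self.snaps = {}    # snapshot name -> preserved old blocks

    def snapshot(self, name: str) -> None:
        self.snaps[name] = {}          # instant: nothing is copied yet

    def write(self, blk: int, data: bytes) -> None:
        for snap in self.snaps.values():
            if blk in self.blocks and blk not in snap:
                snap[blk] = self.blocks[blk]   # preserve old data first
        self.blocks[blk] = data

    def read_snap(self, name: str, blk: int) -> bytes:
        # The snapshot never changes: preserved copy if the block was
        # overwritten, otherwise fall through to the original data.
        return self.snaps[name].get(blk, self.blocks.get(blk))

img = Image()
img.write(0, b"v1")
img.snapshot("s1")                      # consumes no space
img.write(0, b"v2")                     # old block copied into the snap
assert img.read_snap("s1", 0) == b"v1"  # snapshot still sees v1
assert img.blocks[0] == b"v2"           # live image sees v2
```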

Page 28:

Clones

•  Create a clone by:
   –  creating a snapshot
   –  protecting the snapshot
   –  cloning the snapshot

•  A clone behaves exactly like any other Ceph block device image; you can:
   –  read from it
   –  write to it
   –  clone it
   –  resize it

Page 29:

OpenNebula Techday

Ceph Introduction (2014-06-24) – Thank You