
Red Hat Enterprise Linux Cluster and Storage Solutions

Joachim Schröder <joachim.schroeder@redhat.com>
Red Hat GmbH


Red Hat Clustering Architecture

Red Hat Cluster Suite provides:
● Application failover
● Improved application availability
● Included with GFS
● Core services for enterprise cluster configurations

Red Hat Global File System (GFS):
● Cluster-wide concurrent read-write file system
● Improves cluster availability, scalability, and performance
● Includes the Cluster Logical Volume Manager (CLVM)

[Architecture diagram: Red Hat Enterprise Linux with single-node LVM2 at the base; Red Hat Cluster Suite layers HA services (failover), IP load balancing, and core services (DLM, Connection Manager, Service Manager, I/O fencing, heartbeats, management GUI) on top; Red Hat Global File System (GFS) adds the Cluster Logical Volume Manager and the cluster file system.]


Red Hat Cluster Suite Overview

Provides two major technologies:
● High Availability failover – suitable for unmodified applications
● IP Load Balancing – enables network server farms to share IP load

New with Cluster Suite v4:
● Elimination of the requirement for shared storage
● Significantly reduces the cost of high-availability clustering
● A shared quorum partition is no longer required
● Service state, previously stored in the quorum partition, is now distributed across the cluster
● Online resource management modification – allows services to be updated without shutting down where possible (see the sketch below)
● Additional fencing agents
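A minimal sketch of online service management with the Cluster Suite command-line tools (the service and node names are hypothetical):

    # enable an HA service, then relocate it to another member
    # without taking the cluster down
    clusvcadm -e webservice
    clusvcadm -r webservice -m node2.example.com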


Red Hat Cluster Suite Architecture for HA and Load Balancing with Piranha

[Diagram: a primary and a backup director exchange heartbeats and share a virtual IP; requests from the Internet arrive at the active director, which distributes them to the real servers RS0, RS1, and RS2.]


Real Server Failure

[Diagram: when a real server fails, the director detects the failure and distributes incoming traffic across the remaining real servers.]


Director Failure

[Diagram: when the primary director fails, the backup director detects the lost heartbeat and takes over the virtual IP, so the real servers remain reachable.]


Red Hat Cluster Suite: Core Cluster Services

Core functionality for both Clustering and GFS is delivered in Red Hat Cluster Suite:
● Membership management
● I/O fencing
● Lock management
● Heartbeats
● Management GUI

Support for up to 300 nodes; a sketch of checking cluster state follows
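A minimal sketch of checking these core services from any node (both commands ship with Red Hat Cluster Suite):

    # membership, quorum, and heartbeat state
    cman_tool status
    # cluster members and the services running on them
    clustat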


Reduce Complexity with Red Hat GFS

[Diagram: without GFS, each server keeps its own copy of the data on the SAN and replicates between copies; with GFS, all servers share a single storage pool on the SAN.]

● Eliminates data replication through shared, concurrent access
● Increases throughput and performance
● Ensures continuous data availability
● Simplifies data management and lowers cost


Red Hat Global File System

Provides two major technologies:
● GFS cluster file system – concurrent file system access for database, web serving, NFS file serving, HPC, and similar environments
● CLVM cluster logical volume manager

● Fully POSIX compliant
● Maximum file size and file system size: 16 TB on 32-bit systems, 8 EB on 64-bit systems
● Online file system expansion (see the sketch below)
● Supports SAN, iSCSI, and GNBD
● Supports x86, x86-64, and ia64 in the same cluster – platform-independent metadata
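A minimal sketch of online expansion (the volume and mount point names are hypothetical; the file system stays mounted throughout):

    # extend the underlying clustered logical volume by 100 GB
    lvextend -L +100G /dev/vg_cluster/lv_gfs
    # grow the mounted GFS file system into the new space
    gfs_grow /mnt/gfs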


Red Hat Global File System

● Data and metadata journaling (per-node journals, cluster-wide recovery)
● Dynamic inode allocation
● Context-Dependent Path Names (CDPN) – based on hostname, OS, uid, gid, sys, mach (see the sketch below)
● ACL support
● Quota support
● Freeze/unfreeze
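A minimal sketch of a CDPN (the mount point and directory names are hypothetical): a symlink pointing at the @hostname variable resolves differently on each node, so every node writes to its own directory through one shared path.

    # per-node log directories behind a single path
    mkdir /mnt/gfs/node1 /mnt/gfs/node2
    ln -s @hostname /mnt/gfs/log
    # on node1, /mnt/gfs/log resolves to /mnt/gfs/node1; on node2, to node2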


Cluster Logical Volume Manager (CLVM)

CLVM builds upon LVM2 and the kernel device mapper component included in the 2.5 and 2.6 Linux kernels. It is essentially a cluster-aware version of LVM 2.x: commands, features, and functions all work in a cluster, and any Linux server may mount any volume.

Provides (see the sketch below):
● Cluster-safe volume operations
● Cluster-wide concatenation and striping of volumes
● Dynamic volume resizing
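A minimal sketch of creating a clustered volume with the standard LVM tools (the device and volume names are hypothetical; clvmd must be running on all nodes):

    # create a physical volume and a cluster-aware volume group
    pvcreate /dev/sdb
    vgcreate -cy vg_cluster /dev/sdb
    # create a logical volume that every cluster node can see
    lvcreate -L 200G -n lv_gfs vg_cluster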


Red Hat Global File System v6.1

GFS 6.1 Software Architecture

[Diagram: GFS plugs into the kernel VFS layer through the core services interface; lock management is pluggable (CMAN/DLM, SLM/RLM, or NoLock for single-node use); CLVM runs in user space, and the nodes communicate over an IP network.]


GFS for Oracle RAC environments

Red Hat Enterprise Linux 5: Storage Virtualization


Virtualization with Red Hat

1. Server/operating system virtualization
● Integrated into the kernel and OS platform
2. Storage virtualization: global data
● Red Hat Global File System/CLVM
3. System management, resource management, provisioning
● Red Hat Network
4. Application environment consistency with non-virtualized environments


No Virtualization


Using Virtualization


Cluster Suite Enables Fast Recovery


Virtualization Enables Easy Maintenance


New Fence Agents

● Xen fencing: the fence_xvm agent on cluster nodes and the fence_xvmd daemon on the hosts (see the sketch below)
● fence_scsi, based on SCSI reservations (see man fence_scsi)
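A minimal sketch of fencing a Xen guest by hand (the domain name is hypothetical; fence_xvmd must already be running on the physical hosts):

    # ask fence_xvmd on the hosts to destroy the named domain
    fence_xvm -H guest1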


OpenAIS Cluster Framework

● Eases creation of cluster-aware applications
● Faster, more stateful failover
● The openais project is based on the SA Forum specifications
● Common cluster infrastructure: checkpointing, events, messaging
● Framework registration
● Monitoring
● Distributed Lock Manager (DLM) library
● Fast cluster resource protection


OpenAIS Component Architecture


OpenAIS Impact on Cluster Stack

● Improved support capabilities
● All components that can be in user space are in user space
● Protocol implementation based upon 15 years of academic research
● Each RHCS component includes debugging technologies for field failures


Conga Project

● The Conga project is a unified management platform for all of your cluster needs
● It is a web-based project with two components, ricci and luci
● Allows customers to easily deploy clusters
● Contains an RPM module that installs the cluster configuration and storage configuration RPMs


When adding a new node...

● All necessary cluster packages are downloaded and installed
● The new node is added to the cluster configuration file, which is propagated to the cluster as well as to the new node (a sketch of such an entry follows)
● The new node is rebooted and then given the directive to join the cluster
● Progress is shown graphically at any point in the process
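A minimal sketch of the kind of /etc/cluster/cluster.conf entry that gets propagated (the cluster and node names are hypothetical, and a real file also carries fence and service definitions):

    <?xml version="1.0"?>
    <cluster name="mycluster" config_version="2">
      <clusternodes>
        <clusternode name="node1.example.com" nodeid="1"/>
        <clusternode name="node2.example.com" nodeid="2"/>
        <clusternode name="node3.example.com" nodeid="3"/>
      </clusternodes>
    </cluster>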


Ricci

● Ricci is an agent that runs on any machine or cluster node to be administered
● Once ricci is installed on a machine, luci can make use of it to manage clusters
● Ricci is the glue that makes the Conga project work


Upstream

● Both DLM and GFS2 have now been accepted into the upstream kernel!
● As of the 2.6.19 kernel
● Other distributions are starting to pick it up


Changes to GFS from 1.0 to 2.0

● GFS 2.0 yields higher performance than GFS 1.0, most notably for synchronous writes and writes within a single directory
● The metadata structure of GFS has changed in 2.0: journals are now files (not metadata), and you can convert GFS 1.0 file systems to GFS 2.0 (see the sketch below)
● GFS 2.0 is in the upstream Linux kernel
● Root file systems (with Anaconda support)
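A minimal sketch of an in-place conversion (the device path is hypothetical; the file system must be unmounted, clean, and backed up first):

    # check the GFS 1.0 file system, then convert it to GFS2 in place
    gfs_fsck /dev/vg_cluster/lv_gfs
    gfs2_convert /dev/vg_cluster/lv_gfs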


Why use single-node GFS2?

● GFS2 works well for large file systems
● If you are an ISV, you might think of moving to a clustered environment later
● If you are already running GFS2, then your transition to RHCS will be easier (a single-node sketch follows)
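A minimal sketch of a single-node GFS2 file system (the device and mount point are hypothetical): the lock_nolock manager removes any cluster dependency, and the same on-disk format can later be mounted cluster-wide with lock_dlm.

    # one journal, no cluster lock manager required
    mkfs.gfs2 -p lock_nolock -j 1 /dev/vg0/lv_gfs2
    mount -t gfs2 /dev/vg0/lv_gfs2 /mnt/gfs2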


SELinux and GFS

● Both GFS and GFS 2.0 will have SELinux support in RHEL5
● Expect to see people using SELinux with GFS


GFS and the Journal

● With GFS 2.0, you no longer have to grow the partition beyond the size of the file system to make room for the journal
● GFS 2.0 does not need space beyond the file system for the journal, because journals are ordinary files within it


Device Mapper / Volume Management

● MPIO support introduced in RHEL 4 Update 2 (see the sketch below)
● Supported by EMC for CLARiiON and DMX
● Adding support for other active/passive devices
● DM/RAID 1 added in RHEL 4.4
● CLVM RAID 1 to be introduced in RHEL 4.5 and 5.1
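A minimal sketch of inspecting MPIO state (assuming the device-mapper-multipath package is installed and multipathd is running):

    # show the discovered multipath topology: paths, states, priorities
    multipath -ll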


QDisks

● Adds an external determinant for quorum
● Targeted at Oracle clusters in 1-to-2-node configurations, for use as a tie-breaker during a network partition
● Packaged in cman
● Qdisks solve the problem in which it is desirable to sustain a majority node failure of a cluster without introducing the need for asymmetric cluster configurations


QDisks

● The qdisk quorum daemon requires a shared block device with concurrent read/write access from all nodes in the cluster (the sketch below shows the setup)
● The shared block device can be a multi-port SCSI RAID array, a Fibre Channel RAID SAN, a RAIDed iSCSI target, or even GNBD
● The quorum daemon uses O_DIRECT to write to the device
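A minimal sketch of initializing a quorum disk on the shared device (the device path and label are hypothetical):

    # write the qdisk header and label onto the shared block device
    mkqdisk -c /dev/sdc -l myqdisk
    # verify that the label is visible (run this from every node)
    mkqdisk -L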

Questions?

www.redhat.com
