
EMC VPLEX - 1

EMC VPLEX - 2

Virtualization is considered the first step toward cloud computing. By virtualizing and aggregating computing resources within and among data centers into a single pool, cloud computing creates a virtual image of those resources. Users see only this virtual view of the distributed computing resources; the virtual-to-physical resource mapping remains hidden from them. Cloud computing can map a virtual image onto any server in the environment, connected to the appropriate storage and accessible from anywhere within the IT infrastructure, based on optimal data placement and service requirements. The virtual image can be migrated from one location to another within a data center, or between data centers, to meet SLA requirements. This requires federation of computing resources and transparent movement of data within and among data centers.

EMC VPLEX is an important component of the cloud infrastructure. It removes physical barriers within a single data center and across multiple virtualized data centers, and enables a single copy of data to be shared, accessed, and relocated over distance with no application downtime. VPLEX provides application mobility and enhanced business continuity across data centers.

EMC VPLEX - 3

EMC VPLEX is a next-generation architecture for data mobility and information access. It is the first platform in the world to deliver both local and distributed federation. Local federation provides the transparent cooperation of physical elements within a site; distributed federation extends access between two locations across distance.

VPLEX resides between the servers and heterogeneous storage assets and uses a unique clustering architecture that allows servers at multiple data centers to have read/write access to shared block storage devices. It combines scale-out clustering and advanced data caching with unique distributed cache coherence intelligence to deliver radically new and improved approaches to storage management.

EMC AccessAnywhere, available with VPLEX, is a breakthrough technology from EMC that enables a single copy of data to be shared, accessed, and relocated over distance.

EMC VPLEX - 4

The VPLEX family includes two products available today: VPLEX Local and VPLEX Metro. VPLEX Local is implemented for managing data mobility and access within a data center; VPLEX Metro provides mobility and access across locations over synchronous distances. VPLEX Metro leverages AccessAnywhere to enable a single copy of data to be shared, accessed, and relocated over distance.

VPLEX Geo, planned for 2011, will add support for data mobility and access over extended asynchronous distances. VPLEX Global, planned for a future release, will enable data mobility and AccessAnywhere across multiple locations and service providers at unlimited distances.

EMC VPLEX - 5

The basic building block of a VPLEX system is the Engine. Multiple engines can be configured to form a single VPLEX cluster for scalability. Each Engine includes two high-availability Directors with front-end and back-end Fibre Channel ports for integration with the customer's fabrics. Directors within a cluster communicate with each other via redundant, private Fibre Channel links called LCOM links.

Each cluster includes a 1U Management Server with a public IP port for system management and administration over the data center's management IP network. The Management Server also has private, redundant IP network connections to each director within the cluster.

VPLEX implementation fundamentally involves three tasks:

• Presenting SAN volumes from back-end arrays to VPLEX engines via each director's back-end ports
• Packaging these into sets of VPLEX virtual volumes with the desired configurations and protection levels
• Presenting virtual volumes to production hosts in the SAN via the VPLEX front-end

Currently a VPLEX system can support a maximum of two clusters. In a dual-cluster implementation, the two sites must be less than 100 km apart, with a round-trip latency of 5 ms or less on the FC links. VPLEX clusters communicate via FC over the directors' FC MAN ports. VPLEX implements a VPN tunnel between the Management Servers of the two clusters, which enables each Management Server to communicate with directors in either cluster via the private IP networks.
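As a rough sanity check on these limits, light in optical fiber propagates at roughly two-thirds the speed of light in vacuum, about 200,000 km/s. The short Python sketch below computes pure propagation delay under that assumption; real links add switching, protocol, and equipment latency on top, and actual fiber routes run longer than the straight-line distance.

# Back-of-the-envelope check of the VPLEX Metro distance budget.
# Assumes light propagates through optical fiber at roughly 2/3 the
# speed of light in vacuum (~200,000 km/s); real links add switching
# and protocol latency on top of pure propagation delay.

FIBER_SPEED_KM_PER_MS = 200.0   # ~200,000 km/s, expressed in km per millisecond
RTT_BUDGET_MS = 5.0             # VPLEX Metro round-trip latency limit

def propagation_rtt_ms(distance_km: float) -> float:
    """Round-trip propagation delay over a fiber path of the given length."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

for km in (50, 100, 250):
    rtt = propagation_rtt_ms(km)
    verdict = "within" if rtt <= RTT_BUDGET_MS else "exceeds"
    print(f"{km:4d} km -> {rtt:.1f} ms RTT ({verdict} the 5 ms budget)")

At 100 km the propagation round trip is only about 1 ms, which is why the 5 ms budget leaves headroom for equipment latency and fiber paths longer than the site-to-site distance.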

EMC VPLEX - 6

The VPLEX engine is designed with a highly available hardware architecture based on Intel multi-core processors, and is responsible for virtualization of the I/O stream. Each engine hosts two directors with a total of 32 Fibre Channel ports at 8 Gb/s (16 front-end and 16 back-end). The engine is built for performance with a large cache (64 GB), and has fully redundant power supplies, battery backups, and EMC Call Home capabilities to align with support best practices.
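The engine specifications above can be captured in a small data model. This is purely illustrative: the class and field names are not part of any VPLEX API, and the even split of the 16 front-end and 16 back-end ports across the two directors is an assumption.

from dataclasses import dataclass

# Illustrative summary of the engine hardware described above; names
# are not a VPLEX API, and the 8+8 per-director port split is assumed.

@dataclass(frozen=True)
class Director:
    frontend_ports: int = 8     # 16 FE ports per engine, assumed split evenly
    backend_ports: int = 8      # 16 BE ports per engine, assumed split evenly
    port_speed_gbps: int = 8

@dataclass(frozen=True)
class Engine:
    directors: tuple = (Director(), Director())
    cache_gb: int = 64
    redundant_power: bool = True
    battery_backup: bool = True
    call_home: bool = True

engine = Engine()
total_ports = sum(d.frontend_ports + d.backend_ports for d in engine.directors)
assert total_ports == 32        # matches the 32 x 8 Gb/s FC ports per engine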

GeoSynchrony is the operating environment that provides VPLEX features and functionality. It provides both the foundational federation capability (VPLEX Local) and the distributed federation capability (VPLEX Metro).

EMC VPLEX - 7

The VPLEX Local configuration starts small, with a single engine supporting up to 8,000 virtualized LUNs in a modular, cost-effective package that meets basic migration requirements. Up to four engines can be configured in a VPLEX Local deployment to scale performance and add resiliency. VPLEX Local is appropriate when virtual storage capabilities such as workload relocation, workload resiliency, and simplified storage management are desired within a single data center, and the scaling capacity of VPLEX Local is sufficient to meet the needs of that data center.

If a larger scale is needed, consider deploying VPLEX Metro, or deploying multiple instances of VPLEX Local.

EMC VPLEX - 8

With VPLEX distributed federation, it becomes possible to present shared volumes to hosts that are in different sites or failure domains. A VPLEX Metro configuration adds a second cluster of up to four engines, with support for AccessAnywhere: shared access to a virtualized LUN by hosts connected to either cluster. Clusters can be located within the same data center or in different data centers over synchronous distances of approximately 100 kilometers. Each cluster can support up to 8,000 LUNs, so two clusters provide two pools of 8,000 LUNs, or 16,000 LUNs in total. Each LUN shared between clusters, however, counts against both clusters' totals. For example, if 2,000 LUNs are shared between the clusters, each cluster can support 6,000 non-shared LUNs.
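The LUN accounting reduces to simple arithmetic; a minimal sketch, assuming only the 8,000-LUN per-cluster limit stated above:

# Each cluster supports 8,000 LUNs; every LUN shared between the two
# clusters is subtracted from both clusters' totals.

LUNS_PER_CLUSTER = 8_000

def non_shared_capacity(shared_luns: int) -> int:
    """Non-shared LUNs each cluster can still support."""
    return LUNS_PER_CLUSTER - shared_luns

print(non_shared_capacity(0))       # 8000 per cluster, 16000 system-wide
print(non_shared_capacity(2_000))   # 6000, matching the example above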

Sharing LUNs between VPLEX clusters can be combined with host-based cluster technologies to transparently move or relocate applications between host clusters or locations. VMware support enables VMotion over distance between ESX clusters.

A deployment of VPLEX Metro between two data centers is appropriate when the additional workload resiliency benefits of having an application's data present in both data centers are desired. This deployment is also desirable when applications in one data center need to access data in the other, when one wants to redistribute workloads between the two data centers, or when one data center has run out of space, power, or cooling.

EMC VPLEX - 9

To begin using a VPLEX cluster, you must provision and export storage so that hosts and applications can use the storage. Provisioning and exporting storage refers to the tasks required to take a storage volume from a storage array and make it visible to a host. This process consists of the tasks listed on the slide.

Starting from the bottom, the figure shows the storage volumes that are claimed. These volumes are divided into multiple extents; however, you can create a single full-size extent that uses the entire capacity of the storage volume. Devices are then created to combine extents, or other devices, into one large device. From this large device, a virtual volume is created.

The virtual volume is presented to the host through a storage view. A storage view defines which hosts access which virtual volumes on which VPLEX family ports. It consists of the following components:

• Registered initiators (hosts) to access the storage
• VPLEX family ports (front-end) to export the storage
• One or more virtual volumes to export

Typically, one storage view is created for all hosts that require access to the same storage.
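The hierarchy this slide walks through can be sketched as a simple object model. The class names below mirror the slide's terminology but are not actual VPLEX CLI objects, and the array, initiator, and port names are hypothetical.

from dataclasses import dataclass
from typing import List

# Provisioning hierarchy described above, bottom to top:
# storage volume -> extent(s) -> device -> virtual volume -> storage view.

@dataclass
class StorageVolume:            # claimed LUN from a back-end array
    name: str
    capacity_gb: int

@dataclass
class Extent:                   # slice of a storage volume (may be full-size)
    source: StorageVolume
    capacity_gb: int

@dataclass
class Device:                   # combines extents (or other devices)
    extents: List[Extent]

@dataclass
class VirtualVolume:            # what the host ultimately sees
    device: Device
    name: str

@dataclass
class StorageView:              # ties initiators, FE ports, and volumes together
    initiators: List[str]       # registered host initiators
    frontend_ports: List[str]   # VPLEX front-end ports that export the storage
    volumes: List[VirtualVolume]

# One full-size extent over the whole claimed volume, as the slide allows.
sv = StorageVolume("array01_lun17", capacity_gb=500)
extent = Extent(sv, sv.capacity_gb)
volume = VirtualVolume(Device([extent]), "app_data_01")
view = StorageView(["host01_hba0"], ["engine1_fe_port0"], [volume])

Reading the construction bottom-up mirrors the figure: claim a storage volume, carve an extent from it, wrap it in a device, surface the device as a virtual volume, and export that volume to registered initiators through a storage view.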

EMC VPLEX - 10

The VPLEX environment is very dynamic and uses a hierarchy to keep track of where I/O goes. An I/O request can come in from anywhere and will be serviced by any available engine in the VPLEX cluster. VPLEX abstracts the ownership model into a high-level directory that is updated for every I/O and shared across all engines. The directory uses a small amount of metadata to tell all other engines in the cluster, at 4 KB granularity, which block of data is owned by which engine and as of what time.

This model also enables VPLEX to stretch the cluster: the directory can be distributed between clusters and therefore between sites. Overall, VPLEX has minimal overhead, is very efficient, and keeps communication over distance simple.
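The directory idea can be illustrated with a toy sketch: one record per 4 KB block naming the engine that last took ownership and when, updated on every I/O. This is a deliberate simplification; VPLEX's actual distributed cache coherence is far more involved, and the sketch shows only the bookkeeping the paragraph describes.

import time

BLOCK_SIZE = 4096   # ownership is tracked at 4 KB granularity

class OwnershipDirectory:
    """Toy block-ownership directory: block index -> (engine, timestamp)."""

    def __init__(self):
        self._owners = {}

    def record_io(self, offset: int, engine_id: str) -> None:
        """Update ownership for the block touched by an I/O at `offset`."""
        block = offset // BLOCK_SIZE
        self._owners[block] = (engine_id, time.time())

    def owner(self, offset: int):
        """Return (engine, timestamp) for the block at `offset`, if known."""
        return self._owners.get(offset // BLOCK_SIZE)

directory = OwnershipDirectory()
directory.record_io(offset=8192, engine_id="engine-2")   # I/O serviced by engine 2
print(directory.owner(8192))    # ('engine-2', <timestamp>)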