
EMC® ViPR™ Version 1.1.0

Concepts Guide

302-000-482

02

Copyright © 2013-2014 EMC Corporation. All rights reserved. Published in USA.

Published February, 2014

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners.

For the most up-to-date regulatory document for your product line, go to EMC Online Support (https://support.emc.com). For documentation on EMC Data Domain products, go to the EMC Data Domain Support Portal (https://my.datadomain.com).

EMC Corporation
Hopkinton, Massachusetts 01748-9103
1-508-435-1000 In North America 1-866-464-7381
www.EMC.com


CONTENTS

Chapter 1  ViPR Introduction  7
    About  8
    Introduction to ViPR Controller  8
        Discover physical infrastructure  9
        Define ViPR abstractions  10
        Provision storage  10
    Introduction to ViPR Data Services  11
    Interfaces  12
        API  12
        UI  13
        CLI  13
    ViPR integration with compute stacks  13

Chapter 2  ViPR-specific Resources  17
    ViPR resource conceptual overview  18
    Virtual data center  19
    Virtual array  19
    Virtual pool  22
    Tenant  24
    Project  25
    Service catalog  25

Chapter 3  Storage Resources Managed by ViPR  27
    Storage system  28
    Storage pool  28
    SAN switch (fabric manager)  28
    Network  29
    Volume  30
    Volume export  30
    File system  31
    Snapshot  31
    Host  31
    vCenter Server/cluster  32
    Data store  32
    Object  33
    Bucket  34

Chapter 4  User Roles and Access Control Lists (ACLs)  37
    User roles  38
    Access Control Lists (ACLs)  40
    Adding users into ViPR  41
    User role resource views  42

FIGURES

1   Virtual pools  10
2   ViPR abstraction of underlying physical storage  11
3   ViPR resource conceptual overview  18
4   Virtual arrays in the ViPR virtual data center  20
5   Virtual array partitioning  21
6   Manual and automatic port assignments in shared network among virtual arrays  22
7   Creating a block virtual pool  23
8   ViPR data services virtual pool and related resources  24
9   Network connectivity in the ViPR virtual data center  29
10  Data store mapping to physical file shares  33
11  Using buckets for object storage and ingesting file share data  34

CHAPTER 1

ViPR Introduction

The purpose of this guide is to explain the high-level architecture, core concepts, and entities associated with ViPR. To use ViPR effectively, you should understand the concepts introduced in this guide. This chapter contains the following topics.

u About  8
u Introduction to ViPR Controller  8
u Introduction to ViPR Data Services  11
u Interfaces  12
u ViPR integration with compute stacks  13


About

EMC® ViPR™ is a software-defined platform that abstracts, pools, and automates a data center's underlying physical storage infrastructure. It provides data center administrators with a single control plane for heterogeneous storage systems.

ViPR enables software-defined data centers by providing:

u Storage automation capabilities for heterogeneous block and file storage (control path).

u Object data management and analytic capabilities through Data Services that create a unified pool (bucket) of data across file shares (data path).

u Integration with VMware and Microsoft compute stacks to enable higher levels of compute and network orchestration.

u A comprehensive RESTful interface for integrating ViPR with management and reporting applications.

u A web-based User Interface (UI) for configuring and monitoring ViPR, through which enterprise users can also perform self-service storage provisioning.

u Comprehensive and customizable platform reporting capabilities, including capacity metering, chargeback, and performance monitoring through the included ViPR SolutionPack.

Introduction to ViPR Controller

ViPR solves a difficult problem faced by enterprise IT departments and data center administrators today: how to achieve a highly efficient, cloud-style operating model in a multi-vendor storage environment, managing and sharing storage from a central view while still using and maximizing existing storage system capabilities.

ViPR software-defined storage is advantageous in the following enterprise scenarios:

u Enterprises experiencing explosive storage growth with flat IT budgets that want to consistently plan, configure, protect, migrate, and manage heterogeneous storage resources (data center storage automation).

u Enterprises transforming their IT infrastructure to provide storage-as-a-service (STaaS) for IT-as-a-service (ITaaS).

u Enterprises deploying private and hybrid clouds to facilitate the delivery of higher-value services.

The business problems that ViPR solves are summarized below.

u Existing: Many storage control points within a data center.
  With ViPR: One storage control point per physical data center (the equivalent of a ViPR virtual data center). One ViPR instance controls all the storage resources within the virtual data center.

u Existing: Disparate, complex storage management abstractions.
  With ViPR: Simple, user- and developer-friendly storage management abstractions.

u Existing: Complex IT processes needed to deliver storage.
  With ViPR: Administrators set policies and definitions (on virtual arrays, pools, and storage services in the UI) once; storage provisioning is automated thereafter, and developers have self-service access to storage.

u Existing: Lack of visibility into storage usage, availability, performance, and overall health for data center administrators due to storage silos.
  With ViPR: Clear visibility into storage usage, availability, performance, and overall health for data center administrators.

u Existing: End result is that delivery is too slow and the cost of storage operations is too high.
  With ViPR: End result is agile and efficient storage operations.

The ViPR Controller accomplishes the above by taking over the storage control plane and simplifying/automating the management of block and file storage. It allows users to operate at the level of ViPR virtual abstractions, so that ViPR administrators can define the policies for storage consumption in the ViPR software and provide services to their end users.

ViPR aggregates and pools physical storage systems and pools into virtual arrays and pools in a manner similar to VMware vCenter. Policy and management functions are applied in the aggregate, thus simplifying the process.

The ViPR Controller allows you to:

u discover physical block and file storage to bring under ViPR management.

u abstract block and file physical storage into defined ViPR abstractions such as virtual arrays and virtual pools (also referred to as Virtual Storage Arrays and Virtual Storage Pools).

u automate storage tasks and deliver storage through a self-service catalog in the ViPR UI to enable self-service storage provisioning.

u centralize management and file/block provisioning operations across physical and virtual environments.

Discover physical infrastructure

Once ViPR is deployed in a data center, ViPR discovers the physical infrastructure, including storage systems, Fibre Channel storage area networks (SANs), and hosts, so that ViPR can understand the full topology of the data center. ViPR discovers the physical storage pools and the storage ports for each storage system registered with ViPR. The physical pools and ports on the storage systems are used by ViPR to make the storage devices visible to the hosts.

ViPR discovers and manages the block and file storage systems listed below.

Storage system       Block storage   File storage
EMC VPLEX            X
EMC Symmetrix VMAX   X
EMC VNX              X               X
Isilon                               X
NetApp                               X


Define ViPR abstractions

After discovering the physical infrastructure, ViPR administrators define the following ViPR abstractions:

u Virtual arrays
u Virtual pools
u End user services using virtual arrays and pools

These abstractions are key to enabling software-defined storage, and provide ViPR administrators a way to implement easy-to-understand policies for managing storage.

Virtual arrays are logical groupings of underlying physical storage systems, and virtual pools contain one or more underlying physical storage pools. A virtual pool defines a set of storage capabilities that describe the quality of storage, such as type of disks, thick/thin devices, and protection features such as snapshots, replication, and high availability.
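As an illustration, the capability set a virtual pool captures can be pictured as a simple record that matching logic compares against discovered physical pools. The field names and the matching rule below are an illustrative sketch, not the actual ViPR data model.

```python
# Sketch of a virtual pool's capability set, based on the attributes the
# guide lists (disk type, thick/thin provisioning, protection features).
# All names here are hypothetical, not the real ViPR schema.

MISSION_CRITICAL = {
    "name": "Mission Critical",
    "drive_type": "FC",          # quality of storage: type of disks
    "provisioning": "thin",      # thick or thin devices
    "protection": {"snapshots": True, "replication": True,
                   "high_availability": True},
}

def pool_matches(physical_pool, virtual_pool):
    """Return True if a discovered physical pool can back the virtual pool.

    ViPR matches physical pools to a virtual pool by comparing capabilities;
    this is a simplified stand-in for that matching.
    """
    return (physical_pool["drive_type"] == virtual_pool["drive_type"]
            and virtual_pool["provisioning"] in physical_pool["supported_provisioning"])

# Two hypothetical discovered physical pools:
vmax_pool = {"drive_type": "FC", "supported_provisioning": {"thick", "thin"}}
sata_pool = {"drive_type": "SATA", "supported_provisioning": {"thick"}}
```

Here `pool_matches(vmax_pool, MISSION_CRITICAL)` is `True` while `pool_matches(sata_pool, MISSION_CRITICAL)` is `False`: only the FC pool qualifies to back the Mission Critical virtual pool.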

The ViPR Controller manages storage at the virtual level through virtual arrays and pools. This automates storage provisioning tasks and allows users to provision their own storage for hosts and applications. With ViPR, the block and file data paths are unchanged; data is still managed directly by the underlying block and file arrays. This allows users to continue to take full advantage of the underlying intelligent storage technologies, without introducing I/O latencies.

ViPR services provide a user-friendly and highly customizable experience for the self-service user. ViPR administrators define the services for the end users based on the defined virtual arrays and virtual pools as well as other discovered entities like hosts and vCenter servers. An example of a ViPR service is the Create Block Volume for Host service, which allows the end user to specify a host for which storage must be provisioned, and a virtual array and pool from which the block storage has to be derived.

Figure 1 Virtual pools

Provision storage

ViPR simplifies the provisioning of block and file storage. End users consume storage from virtual pools of storage made available to them by a System Administrator. When provisioning storage, all end users need to know is the type of storage (virtual pool) and the host/cluster to which the storage should be attached; they do not have to specify detailed storage parameters. Users such as server/virtual infrastructure administrators know this information and can therefore perform self-service provisioning using ViPR.

The figure below shows how self-service provisioning works: an end user could create a volume using the Mission Critical virtual pool (which has premium storage quality, including high levels of data protection and redundancy) and then use this volume in applications that are critical to business operation. The volume (or the file system, as the case may be) is created from the underlying physical storage, without showing the underlying complexity to the end user.

The end storage users in the organization can use ViPR's service catalog to execute storage provisioning operations (for example, create block volume for host) without having to know anything about the underlying storage infrastructure.

ViPR's comprehensive REST API also allows higher level orchestration engines to deliver richer value-added services to end users.

ViPR not only provides the virtual abstractions, which are a key component in enabling software-defined storage, but also provides comprehensive value-added automation. For example, when a user executes a service to Create Block Volume for Host, ViPR provisions and formats the volume, creates a file system on it, exports it to the hosts, programs the SAN fabric, and forces a host-level I/O scan to mount the volume, thereby providing an end-to-end service to the user.
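A script driving that kind of end-to-end service through the REST API might look like the following minimal sketch. The endpoint path, field names, host name, and credentials are invented for illustration; consult the ViPR REST API reference for the actual resource model.

```python
import json

# Hypothetical ViPR virtual appliance endpoint (illustrative only).
VIPR = "https://vipr.example.com:4443"

def create_volume_request(name, size_gb, varray, vpool, project):
    """Build the URL and JSON body for a hypothetical block-volume create call.

    A single request like this would stand in for the whole chain of manual
    steps: ViPR itself handles zoning, export, and the host I/O rescan.
    """
    body = {
        "name": name,
        "size": f"{size_gb}GB",
        "varray": varray,    # virtual array to provision from
        "vpool": vpool,      # virtual pool defining the storage capabilities
        "project": project,
    }
    return f"{VIPR}/block/volumes", json.dumps(body)

url, payload = create_volume_request(
    "app01_data", 100, "varray-east", "Mission Critical", "ERP")
# In practice the POST would return an asynchronous task to poll for completion.
```

The caller specifies only the virtual array, virtual pool, and project, mirroring what the end user selects in the service catalog.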

Figure 2 ViPR abstraction of underlying physical storage

The figure shows ViPR clients (including the ViPR CLI client) connecting to a ViPR virtual appliance in a 3-VM configuration within the ViPR Virtual Data Center. Through storage discovery of heterogeneous physical storage, physical storage pools A and B, with their ports, are abstracted into the "Mission Critical" virtual pool. End users only see the Mission Critical virtual pool for self-service provisioning, not the underlying physical storage system, which is visible only to a user with a ViPR System Administrator or System Monitor role.

Introduction to ViPR Data Services

ViPR aggregates multi-vendor heterogeneous storage into a unified storage platform. This storage platform forms the underlying infrastructure for hosting a range of data services to support the storage and manipulation of object data, and to support collecting, managing, and utilizing unstructured content at massive scale. ViPR Data Services are critical enablers for Cloud and Big Data applications.

ViPR currently supports the following Data Services:


Object Data Service
Provides the ability to store, access, and manipulate unstructured data as objects on file-based storage systems, such as EMC VNX, Isilon, and NetApp. The Object Data Service is compatible with existing EMC Atmos, Amazon S3, and OpenStack Swift APIs. The Object Data Service also provides the ability to access a set of objects as files directly on the underlying file storage device, with native file system performance, enabling in-place access to the data for file-based applications.

HDFS Data Service
Provides Hadoop Distributed File System (HDFS) support, enabling existing file-based storage to be leveraged to build robust, scalable data processing environments based on the Hadoop framework.

The HDFS Data Service allows organizations to analyze existing data on file arrays without moving it to a separate repository. With the HDFS Data Service, the ViPR virtualized storage environment can be used as a Big Data repository against which Hadoop analytic applications can be run.

ViPR Data Services allow you to:

u Ingest data from existing file systems, store the data as objects, and make it available for use by ViPR object and HDFS services. The object data can be accessed as files using file access mode or the HDFS service. In both cases the directory structure of the ingested data is preserved.

u Access and manipulate the same data using the Object and HDFS services. Object containers created using the object APIs can be made available as HDFS storage, enabling data brought into ViPR as objects, or created in ViPR as objects, to be used as the target for Hadoop map and reduce jobs, whilst still being available for update by the object services.

u Use existing Hadoop infrastructure with ViPR-managed data without any changes by simply referencing ViPR HDFS as the data source.
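For instance, referencing ViPR HDFS as the data source could amount to pointing the cluster's default file system at ViPR. The fragment below is a hypothetical core-site.xml sketch: `fs.defaultFS` is a standard Hadoop property, but the `viprfs://` URI shown is purely illustrative; consult the ViPR HDFS configuration documentation for the actual client settings.

```xml
<!-- Hypothetical core-site.xml fragment. fs.defaultFS is a real Hadoop
     property; the viprfs:// authority below is illustrative only. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>viprfs://mybucket.mytenant.vipr-data.example.com/</value>
  </property>
</configuration>
```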

Interfaces

ViPR includes three management interfaces to provide maximum user flexibility:

u A RESTful application programming interface (API)

u A command line interface (CLI)

u A graphical user interface (GUI) called the ViPR Admin and Self-Service UI (referred to as the UI)

API
ViPR provides a powerful REST API. Through the API, you can create, delete, modify, monitor, and meter logical storage resources, and extract information about them.

All data and resources managed by ViPR are accessible via the API. The ViPR API allows developers to write applications without regard to the underlying hardware and software.

The ViPR REST API for the Object Data Service supports the Amazon Simple Storage Service (S3), OpenStack Swift, and EMC Atmos storage APIs. Developers can write applications to multiple cloud APIs and execute those workloads on ViPR in an enterprise data center or a service provider's cloud. The open API enables both enterprises and service providers to build developer communities that attract developers and independent software vendors to create value-adding data services.
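Because the Object Data Service speaks the S3 API, a client written for S3 can target ViPR by changing only its endpoint. The sketch below builds classic AWS-signature-v2 headers for an object PUT using nothing but the standard library; the endpoint, credentials, and bucket name are invented for illustration, and a real deployment would use its own keys.

```python
import base64
import hashlib
import hmac
from email.utils import formatdate

# Hypothetical ViPR Object Data Service endpoint and credentials
# (illustrative only, not from the guide).
ENDPOINT = "https://vipr-data.example.com:9021"
ACCESS_KEY = "user@example.com"
SECRET_KEY = "secret"

def s3_put_headers(bucket, key, body, date=None):
    """Build AWS-signature-v2 headers for a PUT, as a classic S3 client would.

    The request shape is unchanged from ordinary S3: only the Host
    (the ViPR endpoint) differs.
    """
    date = date or formatdate(usegmt=True)
    resource = f"/{bucket}/{key}"
    string_to_sign = f"PUT\n\n\n{date}\n{resource}"
    sig = base64.b64encode(
        hmac.new(SECRET_KEY.encode(), string_to_sign.encode(),
                 hashlib.sha1).digest()).decode()
    return {
        "Host": ENDPOINT.split("://", 1)[1],
        "Date": date,
        "Authorization": f"AWS {ACCESS_KEY}:{sig}",
        "Content-Length": str(len(body)),
    }

headers = s3_put_headers("mybucket", "reports/q1.csv", b"a,b\n1,2\n",
                         date="Sat, 01 Feb 2014 00:00:00 GMT")
```

The same header-construction code works against Amazon S3 itself, which is the point of the compatible API: existing S3 tooling carries over to ViPR unmodified.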


UI
ViPR includes a built-in management UI that provides a browser-based interface for the capabilities available in ViPR.

This includes adding physical storage resources such as arrays and switches into ViPR, creating virtual storage resources such as virtual arrays and pools from the physical resources, and managing these resources from one central control plane. The UI simplifies configuring the ViPR virtual data center and managing the diverse underlying physical storage infrastructure, without reducing the strength of EMC array features. Storage provisioning is simplified, but users still have access to array-specific features such as snapshots and RecoverPoint and VPLEX data protection features.

The UI presents the most common storage use cases as services through the service catalog. A user with a System Administrator or Tenant Administrator role can configure the service catalog to customize the services offered and to control access to the services. Users can then perform self-service provisioning by choosing the appropriate storage service from the service catalog and choosing the virtual pool and array that meets their storage needs.

CLI
The ViPR CLI provides an interface for developers and data center administrators to manage storage resources.

The CLI software is packaged with ViPR and is installed on a stand-alone client machine.

ViPR integration with compute stacks

ViPR-managed storage integrates with several VMware and Microsoft applications so that storage can be seamlessly provisioned, managed, and monitored within these applications.

ViPR integration with VMware and Microsoft applications provides the following benefits:

u Streamlines the interaction between data center administrators and server/virtual infrastructure administrators to increase operational efficiency.

u Allows the ViPR administrators to define the storage policies and boundaries and then delegate authority to the server/virtual infrastructure administrators to self-manage their storage.

u Provides the server/virtual infrastructure administrators the ability to do this self-management from their native tools (VMware vCenter and Microsoft System Center Virtual Machine Manager (SCVMM)).

ViPR has a built-in storage provider for VMware vCenter and installable components that allow ViPR data and resources to be managed and monitored through the following VMware and Microsoft applications.

u EMC ViPR Storage Provider for VMware vCenter (a service built into the ViPR virtual appliance) integrates with VMware vSphere/vCenter Server.

u EMC ViPR Plug-in for VMware vCenter Orchestrator integrates with the VMware vCenter Orchestrator client or REST API, VMware vSphere/vCenter Server, and VMware vCloud Automation Center.

u EMC ViPR Analytics Pack for VMware vCenter Operations Management Suite integrates with VMware vCenter Operations Management Suite.

u EMC ViPR Add-in for Microsoft System Center Virtual Machine Manager integrates with Microsoft System Center Virtual Machine Manager.

u EMC Virtual Storage Integrator (VSI) for VMware vSphere Web Client v6.0 integrates with VMware vSphere/vCenter Server.

EMC ViPR Storage Provider for VMware vCenter
The ViPR Storage Provider for VMware vCenter integrates ViPR with the vCenter Server and allows vCenter administrators to view ViPR virtual pools and use them to select the appropriate storage when creating new virtual machines. It also reports events and alarms originating from ViPR.

Refer to the EMC ViPR Storage Provider for VMware vCenter Registration and Usage Guide for more information.

EMC ViPR Plug-in for VMware vCenter Orchestrator
The EMC plug-in for vCenter Orchestrator provides an orchestration interface to the ViPR software platform.

The ViPR plug-in has pre-packaged workflows, consisting of both building-block workflows that provide more granular operations and higher-level workflows that carry out common activities such as provisioning storage for an entire cluster.

The ViPR plug-in must be installed and configured from the vCenter Orchestrator Configuration interface. Once it is installed, the functionality provided with the ViPR plug-in can be invoked through:

u vCenter Orchestrator client, web client, or REST API

u VMware vSphere Web Client

u VMware vCloud Automation Center with the EMC ViPR Enablement Kit for vCloud Automation Center. Refer to the Cloud Automation with EMC ViPR Enablement Kit for vCloud Automation Center White Paper for complete details.

Refer to the EMC ViPR Plug-in for VMware vCenter Orchestrator Installation and Configuration Guide and the workflow documentation, which can be generated from vCenter Orchestrator, for further information.

EMC ViPR Analytics Pack for VMware vCenter Operations Management SuiteThe EMC ViPR Analytics Pack for VMware vCenter Operations Management Suite isdesigned to:

u Import ViPR inventory, metering, and event data to VMware vCenter Operations Management Suite.

u Provide pre-configured dashboards for troubleshooting issues in ViPR.

u Provide a collection of volume, storage port, storage system, and virtual pool data for computing key resource status scores used in ViPR.

u Present dashboard views that summarize resource details, the behavior of individual metrics, and ViPR event alerts.

u Improve the health scores of ViPR resources by utilizing performance data from VNX/VMAX adapters.


Refer to the EMC ViPR Analytics Pack for VMware vCenter Operations Management Suite Installation and Configuration Guide for more information.

EMC ViPR Add-in for Microsoft System Center Virtual Machine Manager (SCVMM)
The EMC ViPR Add-in for Microsoft SCVMM integrates prepackaged services with SCVMM for allocating and managing ViPR storage with the SCVMM hypervisors and hosts through the System Center Virtual Machine Manager console (VMM console).

The prepackaged services allow you to perform the following functions on ViPR storage:

u Provision a volume for a cluster.

u Expand a volume for a cluster.

u Delete a volume.

u Create pass-through disks.

u Expand a pass-through disk.

Refer to the EMC ViPR Add-in for Microsoft Virtual Machine Manager Installation and Configuration Guide and the EMC ViPR Add-in for Microsoft Virtual Machine Manager Online Help for more information.

EMC VSI for VMware vSphere Web Client version 6.0
EMC VSI allows you to easily provision and manage ViPR storage for ESX/ESXi hosts. VSI is an architecture that enables the features and functionality of ViPR storage. Tasks that you can perform with VSI include storage provisioning, storage mapping, and viewing information such as capacity utilization. VSI interfaces with vCenter to access ViPR storage.

Refer to the EMC VSI for VMware vSphere Web Client Version 6.0 Product Guide for more information.


CHAPTER 2

ViPR-specific Resources

This chapter contains the following topics.

u ViPR resource conceptual overview  18
u Virtual data center  19
u Virtual array  19
u Virtual pool  22
u Tenant  24
u Project  25
u Service catalog  25


ViPR resource conceptual overview

A high-level conceptual diagram of the ViPR virtual data center with its primary storage resources is shown below.

Figure 3 ViPR resource conceptual overview

The figure shows the ViPR Virtual Data Center, comprising the Block Control Service, File Control Service, and Data Services, layered over the data center infrastructure of physical storage systems and physical pools. Tenants and projects provide multi-tenant Storage as a Service (STaaS), with self-service provisioning, metering, and monitoring; role-based authorization; programmable, elastic operation; different virtual pools; policy-based management; integration into the VMware cloud stack and Microsoft Hyper-V; and reporting on storage capacity, allocation, and health. Block, file, and object storage are exposed as volumes, file systems, and buckets. Host ports, networks, storage ports, and physical pools are partitioned into virtual arrays, each containing virtual pools, for fault tolerance across millions of storage resources on heterogeneous storage, with ESX hosts attached to the virtual arrays.

The concepts in ViPR relate to the ViPR-defined logical abstractions (or resources) that are unique to ViPR and the physical resources that are managed by ViPR. These are explained in detail in the following sections.

ViPR-specific resources

u Virtual data center on page 19

u Virtual array on page 19

u Virtual pool on page 22

u Tenant on page 24

u Project on page 25

u Service catalog on page 25


Physical resources managed by ViPR

u Storage system on page 28

u Storage pool on page 28

u SAN switch on page 28

u Network on page 29

u Volume on page 30

u Volume export on page 30

u File system on page 31

u Snapshot on page 31

u Host on page 31

u vCenter server/cluster on page 32

u Data store on page 32

u Object on page 33

u Bucket on page 34

Virtual data center
The virtual data center represents the ViPR storage control point in a physical data center.

The virtual data center is a collection of storage infrastructure that is managed as a cohesive unit. Geographical co-location of storage systems in a virtual data center is not required. However, high bandwidth and low latency are assumed within the virtual data center.

One ViPR instance can control all the storage resources within the virtual data center.Typically one ViPR virtual appliance is deployed for each physical data center.

Note

In future releases, it will be possible to deploy ViPR as a geographically-dispersedconfiguration, where several ViPR instances control multiple data centers in differentlocations. In this future configuration, ViPR instances could be implemented as a loosely-coupled federation of autonomous virtual data centers. However, federation ofgeographically-dispersed virtual data centers is not supported in the ViPR 1.0 release.

Storage resources – volumes, file systems, and objects – are provisioned into the virtualdata center. All ViPR resources are contained and managed within the virtual data center;the virtual data center is the top-level resource in ViPR.

Virtual arrayA virtual array is an abstract or logical array that is created by a System Administrator topartition a virtual data center into a group of connected compute, network, and storageresources. A virtual data center is typically partitioned into virtual arrays for purposes offault tolerance, network isolation, or tenant isolation.

A virtual array can span multiple physical arrays and conversely a physical array can bepartitioned into multiple virtual arrays. Virtual arrays can also be connected throughdisaster recovery and high-availability links in environments using RecoverPoint andVPLEX Metro configurations.


Figure 4 Virtual arrays in the ViPR virtual data center

[Figure: A ViPR virtual data center spanning the Hopkinton, MA and Westboro, MA data centers. Each site's EMC/NetApp storage is partitioned into a virtual array containing networks and virtual pools, and the two virtual arrays are connected by DR and HA links.]

A virtual array is essentially defined by network connectivity and includes:

u SAN switches/fabric managers within the networks

u IP networks connecting the storage systems and hosts

u Host and storage ports connected to the networks

u ViPR virtual pools (In addition to network connectivity, a virtual array is associated with one or more virtual pools.)

In a data center environment, examples of virtual arrays could be large-scale enterprise SANs or computing fabric pods. Although end users provisioning storage may be aware of virtual arrays, they are not aware of the underlying infrastructure components (such as shared SANs or computing fabrics). This information is only available to the System Administrator.

Tenant access to each virtual array is controlled by an Access Control List (ACL). Only tenants that are included in the virtual array's ACL are permitted provisioning access to that virtual array. The ACL for a virtual array is set by the System or Security Administrator. If no ACL is set, the virtual array can be used for provisioning by all tenants in the ViPR virtual data center.
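This ACL rule can be pictured as a small check. The sketch below is illustrative only; the function and tenant IDs are hypothetical and not part of any ViPR API:

```python
def tenant_can_provision(tenant_id, acl):
    """Return True if the tenant may provision against a virtual array.

    acl is the set of tenant IDs on the virtual array's ACL; None or an
    empty set means no ACL is set, so all tenants may provision.
    """
    if not acl:
        return True
    return tenant_id in acl

# No ACL set: open to every tenant in the virtual data center.
print(tenant_can_provision("company-xyz", None))             # True
# ACL set: only the listed tenants are permitted.
print(tenant_can_provision("company-xyz", {"company-pdq"}))  # False
```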

Consider the example shown in the figure below. Virtual Array 1 includes (or is mapped to) Networks A and B with their collection of connected host and storage ports, as well as Virtual Pool A on the array. All physical devices within a virtual array must be able to communicate with each other for ViPR to manage the storage infrastructure properly. When a user requests a volume or file system export from an array to make the storage visible on a host, ViPR determines which network in the virtual array contains the desired host initiator ports and storage ports, and then selects the switch that can manage that network.


Figure 5 Virtual array partitioning

[Figure: Virtual Array 1 contains Networks A and B (each a VSAN), and Virtual Array 2 contains Networks C and D. Each network connects host initiator ports to storage ports on a physical storage system, and physical pools back Virtual Pool A and Virtual Pool B. Arrows show the volume/file system export path from storage port to host initiator port.]

The above figure depicts a scenario where there are no shared networks among virtual arrays. In this scenario, all the physical storage ports and pools associated with a network are included in the network's assigned virtual array. For example, all the physical ports and pools in Network A are included in Virtual Array 1.

However, a network (SAN fabric or IP network) can be assigned to more than one virtual array. When a network is assigned to multiple virtual arrays, all the storage ports and pools associated with the network are automatically added to each assigned virtual array. However, you may choose to manually select a subset of ports or pools from a network to associate with a virtual array. Consider the following scenario where Network A is assigned to two virtual arrays: Virtual Array 1 and Virtual Array 2. Network A has four storage ports in it. By default, all four storage ports are automatically added to both Virtual Array 1 and Virtual Array 2. But the System Administrator decides that he or she wants to manually assign two ports (ports 1 and 2) to Virtual Array 1, as shown in the figure below. These two ports are now exclusively included in Virtual Array 1; they cannot be assigned to another virtual array. So after the manual assignment of ports 1 and 2 to Virtual Array 1:

u Virtual Array 1 includes ports 1, 2, 3 and 4. (Ports 1 and 2 were manually assigned, and ports 3 and 4 were automatically assigned to this virtual array when the network was assigned.)

u Virtual Array 2 includes ports 3 and 4.
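The automatic and manual assignment rules above can be modeled in a few lines. This is a hypothetical sketch, not ViPR code: ports manually assigned to one virtual array become exclusive to it, while the remaining ports of a shared network are added to every virtual array the network is assigned to.

```python
def virtual_array_ports(network_ports, assigned_arrays, manual):
    """Compute each virtual array's port set for one shared network.

    network_ports: all storage ports in the network, e.g. {"p1", "p2"}
    assigned_arrays: the virtual arrays the network is assigned to
    manual: maps a port to the array it was manually (exclusively) assigned to
    """
    result = {}
    for array in assigned_arrays:
        ports = set()
        for p in network_ports:
            if p in manual:
                if manual[p] == array:   # exclusive: only its own array sees it
                    ports.add(p)
            else:
                ports.add(p)             # automatic: shared by all assigned arrays
        result[array] = ports
    return result

arrays = virtual_array_ports(
    {"p1", "p2", "p3", "p4"},
    ["VA1", "VA2"],
    {"p1": "VA1", "p2": "VA1"},
)
print(sorted(arrays["VA1"]))  # ['p1', 'p2', 'p3', 'p4']
print(sorted(arrays["VA2"]))  # ['p3', 'p4']
```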


Figure 6 Manual and automatic port assignments in a shared network among virtual arrays

[Figure: Network A contains storage ports p1, p2, p3, and p4. Ports p1 and p2 are manually assigned to Virtual Array 1; ports p3 and p4 are automatically assigned to both Virtual Array 1 and Virtual Array 2.]

Virtual pool

There are three types of virtual pools in ViPR: block, file, and data services.

Block and file virtual pools are sets of block and file storage capabilities that are created to meet various storage performance and cost needs. Rather than provisioning capacity on storage systems, the System Administrator can give users the ability to use block and file virtual pools that meet their unique requirements.

Data services virtual pools are used to store object data and are backed by storage on underlying ViPR-managed file arrays.

The System Administrator is responsible for creating and configuring block, file, and data services virtual pools within a virtual data center.

Block and file virtual pools

For block and file virtual pools, the System Administrator defines a set of storage service capabilities, such as type of storage (file or block), protocol (FC, iSCSI, CIFS, NFS), storage system type (VPLEX, VMAX, VNX block or file, Isilon, NetApp), protection, and performance characteristics. The System Administrator then associates the virtual pool with physical storage pools on the ViPR-managed storage systems.

ViPR automatically matches existing physical pools on the ViPR-managed storage systems to the virtual pool characteristics specified by the System Administrator. The System Administrator has the option of allowing ViPR to automatically associate the matching physical pools to the virtual pool that he or she is creating, or the System Administrator can manually select a subset of the matching physical pools to associate with the virtual pool. This is an important step; a System Administrator must carefully set up the block and file virtual pools, because the virtual pools are the driver for all future block and file provisioning tasks performed by end users.

Once the System Administrator has created the virtual pools, users can create file and block storage using the virtual pool that has the storage characteristics they require.

The file and block stores provide a set of built-in virtual pool capabilities and enable the System Administrator to define custom capabilities, if desired. For example, the System Administrator could define two virtual pools in the file store:

u A “Tier 1” virtual pool with high-speed I/O and data protection capabilities optimized to minimize disruption to database access for critical business applications.

u A “Tier 2” virtual pool with lower I/O speed and no data protection capabilities that will be used for internal development and testing purposes.

In the figure below, the System Administrator creates a block virtual pool named Mission Critical. The defined set of storage capabilities indicates that the Mission Critical block virtual pool is associated with the VMAX_VA_1 virtual array, the storage system type is EMC VMAX, the protocol is Fibre Channel, the data protection is RecoverPoint, and so on.
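The Mission Critical definition can be summarized as a plain key/value record. This is an illustrative sketch only; the attribute names simply mirror the figure's labels and are not a documented ViPR schema:

```python
# Hypothetical representation of the block virtual pool from Figure 7.
mission_critical_pool = {
    "Name": "Mission Critical",
    "Storage Type": "Block",
    "Provisioning Type": "Thick",
    "Virtual Array": "VMAX_VA_1",
    "Protocol": "Fibre Channel",
    "Number of Paths": 4,
    "Drive Type": "SSD",
    "System Type": "EMC VMAX",
    "Data Protection": "RecoverPoint",
    "AutoTiering": "FAST support",
}

# An end user provisioning a volume picks the pool by name only; the
# underlying physical pools stay hidden.
print(mission_critical_pool["Name"])  # Mission Critical
```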


When an end user wants to create a volume with these storage characteristics, he or she simply selects the Mission Critical block virtual pool when the volume is created.

Figure 7 Creating a block virtual pool

[Figure: The System Administrator creates a virtual pool named “Mission Critical” with a defined set of storage quality parameters: Name: Mission Critical; Storage Type: Block; Provisioning Type: Thick; Virtual Array: VMAX_VA_1; Protocol: Fibre Channel; Number of Paths: 4; Drive Type: SSD; System Type: EMC VMAX; Data Protection: RecoverPoint; AutoTiering: FAST support. The System Administrator associates physical storage pools A and B on a VMAX storage system to the virtual pool; these physical pools are virtualized into the Mission Critical virtual pool. End users only see the Mission Critical virtual pool for self-service provisioning, not the underlying physical storage pools and storage system.]

Note

When a virtual pool is created, a virtual array must be defined. A virtual pool can be associated with more than one virtual array.

Tenant access to each virtual pool is controlled by an Access Control List (ACL). Only tenants that are included in the virtual pool's ACL are permitted access to that virtual pool. The ACL for a virtual pool is set by the System or Security Administrator. If no ACL is set, the virtual pool can be accessed by all tenants. Refer to ACLs for more information.

A virtual pool has a maximum total storage capacity (quota) associated with it that cannot be exceeded. The used and maximum total capacity values for a virtual pool are included in ViPR metering records.
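The quota behavior can be sketched as a pre-provisioning check. The function below is hypothetical and for illustration only; ViPR's actual enforcement is internal to the controller:

```python
def check_quota(used_gb, request_gb, quota_gb):
    """Reject a request that would push used capacity past the quota."""
    if used_gb + request_gb > quota_gb:
        raise ValueError(
            f"request of {request_gb} GB would exceed the {quota_gb} GB quota "
            f"({used_gb} GB already used)"
        )
    return used_gb + request_gb

print(check_quota(used_gb=80, request_gb=15, quota_gb=100))  # 95
```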

Data services virtual pools

To create a data services virtual pool, the System Administrator only needs to specify the name, description, and type of the data services virtual pool. The type can be Object, HDFS, or Object and HDFS, depending on whether the System Administrator wants to set up the data services virtual pool to provision object storage using the Object Data Service, the HDFS Data Service, or both.

The data services virtual pool is backed by storage from ViPR data stores. Each data store maps to a physical file share, and the storage quality of the data store is defined by the storage capabilities of the associated file virtual pool. Refer to data stores for more information.

The figure below shows a high-level view of how object storage is provisioned in ViPR. In this example, Tenants 1 and 2 set up default projects in which to store objects, and they specify a ViPR data services virtual pool as the default virtual pool to provision their object storage. The data services virtual pool is provisioned with two ViPR data stores that were created by a System Administrator from a ViPR file virtual pool. The file virtual pool maps to a physical file storage pool on an Isilon filer.


Figure 8 ViPR data services virtual pool and related resources

[Figure: Tenant 1 (default project 1) and Tenant 2 (default project 2) provision 1 GB and 3 GB of object storage, respectively, from ViPR data services virtual pools (object, HDFS, or object + HDFS). The data services virtual pools are backed by ViPR data stores created from a 10 GB ViPR file virtual pool, which maps through physical file shares to a physical file storage pool on a ViPR-managed filer.]

Tenant

A tenant represents an organization or company operating within the ViPR virtual data center. Tenants are created in the ViPR virtual data center to isolate organizations from each other in a cloud service provider infrastructure.

As an example, imagine Company PDQ and Company XYZ are two tenants sharing the same infrastructure within the ViPR virtual data center: security isolation between the tenants ensures that no one from Company PDQ can know of or affect anything in Company XYZ, and vice versa. When using ViPR Data Services, users from different tenants see a tenant URL namespace that is limited to the information associated with their tenancy. For example, users in the Company XYZ tenant only see information related to Company XYZ.

Each tenant is configured with its own list of mapped users who are authenticated to perform provisioning operations within that tenant. ViPR is designed to operate in a multi-tenant environment where each tenant has its own list of authenticated users. When a Tenant Administrator creates a tenant, he or she maps users into the tenant by specifying the user domain(s), user attribute(s), or group membership(s) that exist in the ViPR virtual data center.

These domains and user groups are available in the ViPR virtual data center because the Security Administrator adds users into the system by accessing user domains/groups in existing Active Directory (AD) or Lightweight Directory Access Protocol (LDAP) accounts. These AD/LDAP groups, domains, and attributes are specified in the authentication providers set up by the Security Administrator to bring users into the entire virtual data center. The Tenant Administrator can then use the existing user domains and related user attributes to specify which groups of users he or she wants to map into the tenant.
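A user-mapping rule of this kind can be sketched as follows. The field names below are hypothetical; ViPR's actual mapping attributes may differ:

```python
def maps_into_tenant(user, rule):
    """Check whether a user matches a tenant's mapping rule.

    user: dict with 'domain', 'groups', and 'attributes'
    rule: the domains, groups, and attribute values the Tenant
          Administrator specified when mapping users into the tenant
    """
    if user["domain"] not in rule["domains"]:
        return False
    if rule["groups"] and not (set(user["groups"]) & set(rule["groups"])):
        return False
    for key, value in rule.get("attributes", {}).items():
        if user.get("attributes", {}).get(key) != value:
            return False
    return True

alice = {"domain": "corp.example.com", "groups": ["storage-admins"],
         "attributes": {"dept": "IT"}}
rule = {"domains": ["corp.example.com"], "groups": ["storage-admins"],
        "attributes": {"dept": "IT"}}
print(maps_into_tenant(alice, rule))  # True
```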


The ViPR virtual data center is organized such that there is a root tenant called the provider tenant. The provider tenant would be the cloud service provider in a public cloud deployment or an entire enterprise IT organization in a private cloud deployment.

In an enterprise private cloud scenario, there is only the provider tenant, with no underlying tenants (subtenants). In the cloud service provider scenario, a user in the provider tenant with the Tenant Administrator role may create tenants under the provider root tenant (for example, the Company PDQ and XYZ tenants), where each tenant is configured with its own list of mapped users who are allowed to operate within the tenant. Note that in the API and CLI, the tenants are referred to as subtenants.

A tenant has a maximum total storage capacity (quota) associated with it that cannot be exceeded. The used and maximum total capacity values for a tenant are included in ViPR metering records.

Tenant access to virtual arrays and virtual pools can be controlled by an Access Control List (ACL). Note that virtual arrays and virtual pools are accessible to all tenants by default. However, a System or Security Administrator can assign an ACL to a virtual array or virtual pool to restrict its use to specified tenants only. Refer to ACLs on page 40 for more information.

Project

A ViPR project is a grouping of resources mapped to applications, virtual data centers, departments, or other entities meaningful to the user.

Users can create their own projects within their tenant, and they can provision multiple storage resources from different data services to their projects (for example, storage volumes, file systems, or objects such as files or images). Resources from one project can be shared between users under the same tenant. Examples of using projects could be:

u A user creates a project for a Photo Album application and provisions one block volume for a user account database and one data store for storing the pictures.

u A user creates a project called VDC Data Stores and provisions multiple volumes into it for use by their ESX cluster.

u A Tenant Administrator creates a project for use by a specific department.

A user must be assigned the Tenant Administrator or Project Administrator role in order to create projects. One level of projects can be created under a tenant. A Tenant Administrator has full management access to all projects within the tenant. A Project Administrator can only create projects; they cannot manage the resources within the project.

A project has a maximum total storage capacity (quota) associated with it that cannot be exceeded. The used and maximum total capacity values for a project are included in ViPR metering records.

Service catalog

The service catalog in the ViPR UI enables users to select a pre-configured service that is appropriate to the storage operation they want to perform. Services encapsulate the most common storage operations that ViPR provisioning users will want to perform. Users run services from the service catalog in order to create and manage block and file storage.

The services are grouped logically into categories such as:

u Block storage services


u Block protection services

u File storage services

u File protection services

u Block storage services for Linux

u Block storage services for Windows

u Block storage services for VMware

u File services for VMware

Examples of services in the block storage services category include:

u Create block volume for a host

u Export volume to a host

u Expand block volume

u Remove block volumes

u Change virtual array

u Change virtual pool


CHAPTER 3

Storage Resources Managed by ViPR

This chapter contains the following topics.

u Storage system on page 28

u Storage pool on page 28

u SAN switch (fabric manager) on page 28

u Network on page 29

u Volume on page 30

u Volume export on page 30

u File system on page 31

u Snapshot on page 31

u Host on page 31

u vCenter Server/cluster on page 32

u Data store on page 32

u Object on page 33

u Bucket on page 34

Storage Resources Managed by ViPR 27

Storage system

The first step in configuring ViPR after deploying the virtual appliance is to add physical storage systems into the ViPR virtual data center.

Adding storage systems to ViPR can only be performed by a user with a System Administrator role. When storage systems are added, ViPR automatically discovers the storage pools and ports on each storage system and adds them into the ViPR virtual data center.

When storage systems with their associated storage pools and ports are added into ViPR, they are automatically registered, meaning they can be managed by ViPR. The System Administrator can unregister storage systems, pools, and ports if he or she does not want these physical storage resources managed by ViPR.

Only a System Administrator has visibility and control over the physical storage components in ViPR. (A System Monitor can also see the physical storage components, but has read-only access.) Other users in the ViPR virtual data center do not see the underlying physical storage infrastructure.

ViPR supports EMC VPLEX, VMAX, VNX, Isilon, and NetApp storage systems for block and file storage provisioning. For object storage, ViPR supports VNX file, Isilon, and NetApp filers. If a System Administrator tries to add a storage system into ViPR that does not meet the minimum version supported by ViPR, the storage system will be identified as Incompatible, and will not be discovered and managed by ViPR.

Refer to the EMC ViPR Data Sheet and Compatibility Matrix on support.EMC.com for the specific storage system models and versions supported.

Storage pool

Physical storage pools on a physical storage system are automatically discovered and added into ViPR when a System Administrator adds the physical storage system into the ViPR virtual data center.

The System Administrator can associate ViPR-managed physical storage pools with a virtual pool. This virtual pool is what is presented to end users for self-service provisioning; they do not see the underlying physical storage pools. Virtual pools simplify storage provisioning tasks for the end user by abstracting the underlying physical storage pools. (Refer to virtual pools on page 22 for more information.)

Note that a physical storage pool can be associated with more than one virtual pool. A physical storage pool has to be associated with a virtual pool and a virtual array by the System Administrator before it can be used for self-service provisioning by end users.

SAN switch (fabric manager)

After adding storage systems into ViPR, a System Administrator adds SAN switches into ViPR. SAN switches are called fabric managers in the UI.

When a Fibre Channel switch (such as a Cisco or Brocade switch) is added to ViPR, ViPR discovers the Fibre Channel SAN (that is, its connected host and storage ports), and one or more networks are automatically created that contain the switch’s connected ports. One network is created for each Cisco VSAN or Brocade fabric that is discovered on the SAN switch. For example, on a Cisco switch, if there is a VSAN named “VSAN 3180” with several host and storage ports, a network named “VSAN_3180” is automatically created with the appropriate endpoints populated.
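The discovery naming described above can be illustrated with a small sketch. The data shapes and endpoint names are hypothetical, not ViPR's internal model:

```python
def network_name(vsan_or_fabric):
    """Derive a network name from a discovered VSAN/fabric name,
    e.g. "VSAN 3180" becomes "VSAN_3180"."""
    return vsan_or_fabric.replace(" ", "_")

def discover_networks(switch_topology):
    """switch_topology maps each discovered VSAN/fabric to its connected
    endpoints. One network is created per VSAN/fabric, with the
    appropriate endpoints populated."""
    return {
        network_name(vsan): sorted(endpoints)
        for vsan, endpoints in switch_topology.items()
    }

nets = discover_networks({"VSAN 3180": ["host_hba0", "array_sp_a0"]})
print(list(nets))  # ['VSAN_3180']
```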


Fibre Channel networks are automatically discovered and created from SAN switches, but the networks are not initially associated with a virtual array. The System Administrator must assign the discovered networks to virtual arrays when he or she sets up the ViPR virtual data center.

Network

Networks are located within virtual arrays and represent connectivity within the virtual arrays.

A network maps to physical switch connectivity. One network maps to each VSAN/fabric discovered on a switch, with its collection of endpoints; that is, the storage system ports and host ports (initiator ports on a host ESX server) to which the switch is connected.

Networks control the paths of storage resources. A network is important for exporting volumes and file systems – it is the physical connection path that a volume/file system takes when it is exported from a storage system to make it visible on the host to the user. Note that networks for VNX block storage systems require an SP-A and SP-B port pair in each network.

There are two types of networks in ViPR: SAN and IP networks. SAN networks control the paths of volumes, and IP networks control the paths of file systems and object data requests.

IP networks must be created manually by the System Administrator and then assigned to a virtual array for file data service operations. The System Administrator can also assign an IP network to the Object/HDFS Data Service, if object data path operations will be performed in the ViPR virtual data center.

The figure below shows SAN network connectivity in a virtual data center with typical Fibre Channel multi-pathing using two SAN networks per virtual array.

Figure 9 Network connectivity in the ViPR virtual data center

[Figure: Two virtual arrays in the ViPR virtual data center. Each virtual array contains two SAN networks (VSANs) connecting host initiator ports to storage ports on a physical storage system, with volumes provisioned from Virtual Pool A and Virtual Pool B. Blue dotted lines represent the path a volume takes through the SAN when it is exported from a storage system to make it visible on an ESX host. Networks can be configured for automatic SAN zoning, meaning that ViPR automatically picks the best path a volume takes through the SAN.]


Each virtual array is shown with two SAN networks; however, there is no limit to the number of networks in a virtual array. Note that on a single physical storage system, the storage ports can be connected to, and participate in, networks located in two virtual arrays for high availability, while the initiator ports on the ESX hosts are connected to, and participate in, networks located in only one virtual array. A physical storage system's membership in a virtual array is established through network connectivity, and one physical storage system may participate in many virtual arrays and many networks.

The diagram also shows that a specific volume exported from Virtual Pool A to a specific host initiator port can take two potential paths, through either Network A or Network B within Virtual Array 1. That is because ViPR allocates at least one path in each of the networks to which the target host is connected. In this case, the target host is connected to two networks, so there are two volume export paths, one through each network connected to the host.
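The path-allocation rule (at least one path per network the target host is connected to) can be sketched like this. The port-selection policy shown is deliberately simplistic and hypothetical:

```python
def export_paths(host_networks, network_storage_ports):
    """Allocate one export path in each network the host is connected to."""
    paths = []
    for net in host_networks:
        ports = network_storage_ports.get(net, [])
        if ports:
            # Real placement logic weighs load and redundancy; picking
            # the first port keeps the sketch simple.
            paths.append((net, ports[0]))
    return paths

paths = export_paths(
    host_networks=["Network A", "Network B"],
    network_storage_ports={"Network A": ["sp_a0"], "Network B": ["sp_b1"]},
)
print(len(paths))  # 2: one export path through each connected network
```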

Volume

A volume is a unit of block storage capacity that has been allocated by a user to a project.

Volume provisioning operations are performed after the System Administrator has set up the ViPR virtual data center. Volumes are provisioned within the context of a project by users who meet one of the following criteria:

u have a Tenant Administrator role.

u are a project owner.

u have an ALL access control list (ACL) permission on the project.

See ACLs on page 40 for more information on project ACL permissions.

When a volume is created, the user must select the project and virtual array in which the volume will reside, and the virtual pool that will define the volume's storage performance characteristics. Once the volume is created, it can be exported (within its virtual array) to multiple hosts. In this way, a host cluster can be provisioned with shared block storage.

Volume export

Exports are used to export a volume to one or multiple host initiator ports, thereby making the volume visible on one or multiple hosts. A volume can only be exported to hosts that reside in the same virtual array.

Volume exports are useful for setting up shared storage within a cluster. Shared storage can be a list of volumes and volume snapshots previously provisioned within a virtual array. When the export is created, a list of initiators from the hosts in the cluster is specified; the shared storage will be visible to these hosts.

Once a volume export is created, incremental changes can be made to it, such as:

u Add volume or volume snapshot to the shared storage pool.

u Remove volume or volume snapshot from the shared storage pool.

u Add new host to the cluster by adding the initiator port from that host to the export.

u Remove visibility of shared storage to a host by removing initiator ports from the export.

Similar to block storage volume provisioning, exports are also created within the scope of a virtual array. Therefore, volumes and snapshots that are added to an export must belong to the same virtual array. Host initiator ports must be part of SANs belonging to the same virtual array as the export.


If fabric managers (switches) are discovered, and the initiators are Fibre Channel endpoints, SAN zones will be created on the switch as a side effect of creating the export if:

u At least one of the fabric managers can provision the VSAN or fabric in which each endpoint exists, and

u The virtual array has Auto SAN zoning set.

Each SAN zone consists of a host initiator port and a selected storage port. The number of SAN zones created is determined by the number of required initiator port-to-storage port communication paths.
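The zone count can be illustrated with a pairing sketch. The round-robin port choice below is an assumption for illustration, not ViPR's documented placement algorithm:

```python
import itertools

def san_zones(initiator_ports, storage_ports, paths_per_initiator=1):
    """Pair each host initiator port with a storage port; one zone is
    created per required initiator-port/storage-port path."""
    port_cycle = itertools.cycle(storage_ports)
    zones = []
    for initiator in initiator_ports:
        for _ in range(paths_per_initiator):
            zones.append((initiator, next(port_cycle)))
    return zones

zones = san_zones(["hba0", "hba1"], ["sp_a0", "sp_b0"])
print(len(zones))  # 2 zones for 2 required paths
```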

File system

A file system is a unit of file storage capacity that has been allocated by a user to a project.

File system provisioning operations are performed after the System Administrator has set up the ViPR virtual data center. File systems are provisioned within the context of a project by users who meet one of the following criteria:

u have a Tenant Administrator role.

u are a project owner.

u have an ALL access control list (ACL) permission on the project.

See ACLs on page 40 for more information on project ACL permissions.

When a file system is created, the user must select the project and virtual array in which the file system will reside, and the virtual pool that will define the file system's storage performance characteristics. Once the file system is created, it can be exported to multiple hosts. In this way, a host cluster can be provisioned with shared file storage.

Unlike block volumes, there is no virtual array restriction when exporting file systems. File systems can be exported from any virtual array specified by the user. If no virtual array is selected for the file system export, the ViPR file store will pick the virtual array.

Snapshot

A snapshot is a point-in-time copy of a volume or a file system. Snapshots are intended for short-term operational recovery.

Snapshots have the following properties:

u A snapshot can be exported/unexported to a host, and you can delete it.

u A snapshot’s lifetime is tied to the original volume/file system: when the original volume/file system is deleted, all of its snapshots will also be deleted.

u A volume/file system may be restored in place based on a snapshot.

u A snapshot is associated with the same project as the original volume/file system.

u A new volume/file system may be created using a snapshot as a template.

u Multi-volume consistent snapshots can be used to snapshot multiple volumes at once.

Host

Hosts are computers that use software including Windows, Linux, and VMware for their operating system. In ViPR, hosts are tenant resources like volumes, file systems, and

Storage Resources Managed by ViPR

File system 31

buckets. Unlike those resources, however, hosts are imported and discovered rather thanprovisioned by ViPR.

Hosts must be imported into ViPR by the Tenant Administrator before storage may be exported and attached to them. By default, hosts are not assigned to a project, which means only the Tenant Administrator may see them and export/attach storage to them. If further delegation is required, the Tenant Administrator may assign a host to a project. Anyone who has privileges to manage resources in that project may then see and export/attach storage to that host.
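The delegation rule above reduces to a short check. The helper below is an illustrative assumption (not ViPR code): a host with no project is usable only by the Tenant Administrator, while a project-assigned host is also usable by anyone who can manage resources in that project.

```python
def can_export_to_host(host_project, manageable_projects, is_tenant_admin):
    """May this user see and export/attach storage to the host?

    host_project        -- project the host is assigned to, or None (the default)
    manageable_projects -- set of projects the user may manage resources in
    is_tenant_admin     -- whether the user holds the Tenant Administrator role
    """
    if is_tenant_admin:
        return True                      # Tenant Administrator always may
    return host_project is not None and host_project in manageable_projects
```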

Hosts are not explicitly associated with virtual arrays. The host-to-virtual array association is implied based on network connectivity.

vCenter Server/cluster

Like hosts, vCenter Servers and the ESX hosts and clusters within them are tenant resources. The Tenant Administrator must register the vCenter Server in ViPR before exporting or attaching storage to any of its ESX hosts. The hosts and clusters present in the vCenter Server are automatically imported and registered.

The Tenant Administrator may assign ESX clusters or individual ESX hosts to projects to delegate self-service for storage. By assigning a cluster to a project, the Tenant Administrator is implicitly giving the rights to export/attach storage to all hosts in the cluster.

Data store

A data store provides file-based storage for the data services virtual pool. It is a ViPR virtual resource that is mapped to an underlying physical file share on a ViPR-managed file array used for object storage. Data stores are owned by the Object/HDFS Data Service, which uses them to place all of its data.

A data store points to physical storage on a specific file array and is based on the file storage capabilities of an underlying file virtual pool: the storage capabilities defined in the file virtual pool provide the storage quality for the data store.

A ViPR data store is created on a file share by the System Administrator. Parameters for creating a data store on a file share include name, description, size, virtual array, data services virtual pool name, and file virtual pool name. The file virtual pool specified during data store creation determines:

- which ViPR-managed file array should be used to create the file share for the data store.

- the storage quality for the data store.

For example, if the System Administrator creates a data store with the following characteristics:

name = Payroll

description = object storage for Payroll dept

size = 3 GB

virtual array = Boston

data services virtual pool = EMC

file virtual pool = VNX Tier 1


The created data store named Payroll provides file-based storage for the EMC data services virtual pool and will have the file storage capabilities defined in the VNX Tier 1 file virtual pool. The 3 GB file share on which the data store resides was created on a ViPR-managed VNX file array associated with the VNX Tier 1 file virtual pool.
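The Payroll example above can be assembled programmatically. The field names and the `build_data_store_request` helper below are illustrative assumptions, not the actual ViPR API schema; the sketch only shows which parameters the System Administrator supplies and what each one determines.

```python
def build_data_store_request(name, description, size_gb, virtual_array,
                             data_services_vpool, file_vpool):
    """Assemble data store creation parameters (field names are illustrative)."""
    if size_gb <= 0:
        raise ValueError("size must be positive")
    return {
        "name": name,
        "description": description,
        "size_gb": size_gb,
        "virtual_array": virtual_array,
        # The data services virtual pool this data store will provide storage for:
        "data_services_vpool": data_services_vpool,
        # The file virtual pool that picks the backing file array and storage quality:
        "file_vpool": file_vpool,
    }

payroll = build_data_store_request(
    name="Payroll",
    description="object storage for Payroll dept",
    size_gb=3,
    virtual_array="Boston",
    data_services_vpool="EMC",
    file_vpool="VNX Tier 1",
)
```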

The figure below shows that the EMC data services virtual pool is provisioned by data stores that map to physical file shares on a ViPR-managed file array.

Figure 10 Data store mapping to physical file shares


Object

An object is the entity being stored (for example, a document, image, video, and so on).

Object data operations are performed by tenant users after:

- Object storage has been set up in the ViPR virtual data center by the System Administrator.

- An object namespace has been assigned to the tenant by the System Administrator. (The tenant namespace includes default parameters for the data store and project where objects will be stored.)

When an object (or bucket) is created, it is stored in the default data services virtual pool and project associated with the tenant namespace, unless otherwise specified by the user. Refer to data services virtual pool on page 23 and project on page 25 for more information on these virtual object resources.

Tenant users must have a secret key (user-specific password) to authenticate themselves when performing ViPR object data operations (managing objects and buckets) using the Amazon S3, OpenStack Swift, or EMC Atmos APIs. Each tenant user can generate a maximum of two secret keys.
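The two-key limit exists to support key rotation: a user generates a second key, moves clients over to it, then deletes the first. The `SecretKeyStore` class below is a hypothetical sketch of that policy, not ViPR's key-management implementation.

```python
class SecretKeyStore:
    """Illustrative model of per-user secret keys for object data access."""

    MAX_KEYS = 2  # each tenant user may hold at most two secret keys

    def __init__(self):
        self._keys = {}  # user name -> list of active keys

    def generate(self, user, key):
        keys = self._keys.setdefault(user, [])
        if len(keys) >= self.MAX_KEYS:
            raise RuntimeError("a user may hold at most two secret keys")
        keys.append(key)
        return key

    def delete(self, user, key):
        # Deleting the old key after rotation frees a slot for a future key.
        self._keys[user].remove(key)
```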

Local object storage users who are not part of a tenant must receive a secret key from the System Administrator before they can perform object data operations within ViPR. These users have not been mapped into a tenant via AD/LDAP authentication providers and can only access the ViPR Data Services (not the block and file storage services).

Bucket

A bucket is a container for objects. It is a logical grouping of objects that uses a specific data services virtual pool, but can span multiple data stores and hence multiple file storage arrays. Buckets are to objects as directories are to files. Note that object names are unique only within the bucket in which they are created.

A tenant user who wants to use the ViPR Data Services to store objects would create a bucket to hold them. When a tenant user creates a bucket, he or she can specify a specific data services virtual pool and project in which to locate the bucket. If no data services virtual pool or project is specified during bucket creation, ViPR uses as defaults the data services virtual pool and project that were previously set for the tenant namespace.
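The default-resolution rule for bucket placement is easy to state in code. This helper is an illustrative assumption, not a ViPR API call; it only shows that explicit choices win and namespace defaults fill any gaps.

```python
def resolve_bucket_placement(namespace_defaults, vpool=None, project=None):
    """Where a new bucket lands: explicit choices win; otherwise the
    defaults set on the tenant namespace apply."""
    return {
        "data_services_vpool": vpool or namespace_defaults["data_services_vpool"],
        "project": project or namespace_defaults["project"],
    }
```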

Buckets contain objects that can span multiple data stores within the data services virtual pool associated with the bucket. In the figure below, objects in the Accounts bucket span the payroll and dev data stores, while the objects within the Test Plans bucket reside on only the dev data store.

Figure 11 Using buckets for object storage and ingesting file share data



Using buckets to ingest data from a file share

New buckets for Data Services can be created to ingest existing data from a file share. As shown in the figure, a VNX/Isilon/NetApp file array that is not managed by ViPR can be added into ViPR; ViPR discovers the filer and its file shares and brings them under ViPR management. To ingest the data from a file share on this filer, the Tenant Administrator must first:

- create an empty bucket where he or she wants to migrate the file share data.

- ensure that the file share has no current exports.

In the example shown in the figure, the Tenant Administrator wants to ingest the 4 GB of data on the /corp file share into ViPR Data Services. To do this, he or she must specify:

- the name of the new data store to be created (corp)

- the project (project 2)

- the name of the existing empty destination bucket (Corporate)

The /corp file share data is now ingested into the Corporate bucket and can be managed by ViPR Data Services. File share data can now be accessed as object data.
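The ingest workflow above has two preconditions (an empty destination bucket and an unexported file share). The sketch below is a hypothetical helper that checks those preconditions and models the ingest; it is not the ViPR API, and the dictionary shapes are assumptions for illustration.

```python
def ingest_file_share(bucket, file_share, data_store_name, project):
    """Sketch of the ingest preconditions and result described above.

    bucket     -- dict with an "objects" list (must be empty before ingest)
    file_share -- dict with "exports" and "files" lists
    """
    if bucket["objects"]:
        raise ValueError("destination bucket must be empty")
    if file_share["exports"]:
        raise ValueError("file share must have no current exports")
    # A new data store is created over the share, and the share's data
    # becomes addressable as objects in the destination bucket.
    bucket["objects"] = list(file_share["files"])
    return {"data_store": data_store_name, "project": project, "bucket": bucket}
```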


CHAPTER 4

User Roles and Access Control Lists (ACLs)

This chapter contains the following topics.

- User roles

- Access Control Lists (ACLs)

- Adding users into ViPR

- User role resource views


User roles

There are seven possible user roles in ViPR: Security Administrator, System Administrator, System Monitor, System Auditor, Tenant Administrator, Tenant Approver, and Project Administrator. Roles can be generally categorized into two groups: roles that exist at the ViPR virtual data center level, and roles that exist at the tenant level.

Virtual data center-level roles

Security Administrator, System Administrator, System Monitor, and System Auditor are virtual data center-level roles. Virtual data center-level roles can only be assigned to users/groups from the provider tenant. These roles define what the user can do at the virtual data center level. The following table lists the authorized actions for each user role at the virtual data center level.

Security Administrator

- Manages the authentication provider configuration for the ViPR virtual data center to identify and authenticate users. Authentication providers are configured to add specified users into ViPR using existing Active Directory/Lightweight Directory Access Protocol (AD/LDAP) user accounts/domains.

- Assigns roles and project ACLs to users or groups.

- Configures ACLs for virtual arrays and virtual pools to control which tenants may use them.

- Restores access to tenants and projects, if needed. (For example, in the event that the Tenant Administrator locks himself/herself out, the Security Administrator can reset user roles to restore access.)

- Changes ViPR virtual data center properties.

- Shuts down/reboots/restarts ViPR services.

- Manages ViPR virtual data center software and license updates.

System Administrator

- Sets up the physical storage infrastructure of the ViPR virtual data center and carves the physical storage into two types of virtual resources: virtual arrays and virtual pools. Authorized actions include:

  - Adding physical storage resources into ViPR such as arrays, ports, pools, switches, and networks.

  - Creating virtual pools; defining storage capabilities and assigning physical storage pools to virtual pools.

  - Creating virtual arrays; partitioning storage into discrete pods of compute, network, and storage resources to control file system/volume/object pathing through SAN/IP networks.

- Sets up object storage in the ViPR virtual data center; this includes creating the object virtual pool and data stores, assigning an IP network to the Object Data Service, and assigning an object namespace to the tenant.

- Configures Access Control Lists (ACLs) for virtual arrays and virtual pools to control which tenants may use them.

- Manages the ViPR virtual data center resources that are not managed by tenants.

- Retrieves ViPR virtual data center status and health information.

- Retrieves bulk event and statistical records for the ViPR virtual data center.

System Monitor

- Has read-only access to all resources in the ViPR virtual data center.

- Retrieves bulk event and statistical records for the ViPR virtual data center.

- Does not have visibility into security-related resources, such as authentication providers, ACLs, and role assignments.

- Retrieves ViPR virtual data center status and health information.

System Auditor

- Has read-only access to the ViPR virtual data center audit logs.

Tenant-level roles

Tenant Administrator, Tenant Approver, and Project Administrator are tenant-level roles. Tenant roles can be assigned to users/groups from the corresponding tenant. These roles define what the user can do at the tenant level. The following table lists the authorized actions for each user role at the tenant level.

Tenant Administrator

- In a multi-tenancy environment, users mapped to the provider tenant and assigned the Tenant Administrator role are able to create other tenants. (Note that users mapped to tenants other than the Provider Tenant cannot create other tenants.)

- In a single-tenant enterprise private cloud environment, there is only one tenant, the Provider Tenant, and the Tenant Administrator(s) has access to all projects.

- Maps AD/LDAP users into their tenant to define who can log in to the tenant.

- Creates, modifies, and deletes projects in their tenant.

- Manages tenant resources, such as hosts, vCenters, and projects.

- Configures ACLs for projects and the Service Catalog in their tenant.

- Assigns roles to tenant users. (Can assign Tenant Administrator or Project Administrator roles to other users.)

Tenant Approver

- Approves or rejects Service Catalog orders in their tenant.

- Views all approval requests in their tenant.

Project Administrator

- Creates projects in their tenant, getting an OWN ACL on the created project.


Access Control Lists (ACLs)

An Access Control List (ACL) is a list of permissions attached to a ViPR resource (such as a virtual array, virtual pool, or project) that specifies which users are authorized to access a given resource as well as what operations are allowed on it.

For each virtual array, virtual pool, or project resource, there is a limit of 100 ACLs. In other words, there is a maximum of 100 user/group assignments for projects and 100 tenant assignments for a virtual array or virtual pool.

Virtual array and virtual pool ACLs

Virtual arrays and virtual pools are public by default; they are accessible to all tenants. A System or Security Administrator can assign an ACL to a virtual pool or virtual array to restrict its use to specified tenants only. In this way, the System or Security Administrator can make certain virtual arrays or pools private (that is, accessible to only specific tenants).

The ACL permission associated with virtual arrays and pools is of the type USE. If a specific tenant has a USE ACL on a virtual pool, this means that all the users who are mapped to that tenant will be allowed to use that virtual pool in their provisioning operations. The USE ACL permission is a parameter that displays in some of the API calls.

All newly created virtual arrays and pools will have an empty ACL. It is the responsibility of the System or Security Administrator to manage the ACL. If no ACLs are set, the virtual arrays and pools will remain accessible to the provider tenant and all tenants in the ViPR system.
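The USE-ACL semantics above amount to one rule: an empty ACL means public, and a non-empty ACL restricts use to the listed tenants. The function below is an illustrative sketch of that rule, not a ViPR API.

```python
def tenant_can_use(resource_acl, tenant):
    """USE-ACL check for a virtual array or virtual pool.

    resource_acl -- set of tenant names granted USE (empty = no ACLs set)
    """
    if not resource_acl:        # no ACLs set: accessible to all tenants
        return True
    return tenant in resource_acl
```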

Project ACLs

A Tenant Administrator can access all projects for their tenant, but a Project Administrator can only access the projects for which he or she is the owner.

The ACL permission associated with projects can be of two types: ALL or BACKUP. There are also two internal ACLs of the type ANY and OWN. The project ACLs are assigned to those self-service provisioning end users who are allowed access to a particular project. The project ACLs can be created or modified by a Tenant Administrator, a Security Administrator, or a user with an internal OWN ACL (that is, the project owner).

A description of the project ACL permissions is provided in the table below.

ALL

The user can manage the resources in the project (that is, perform create, read, update, and delete (CRUD) operations on file systems, volumes, snapshots, exports, and buckets).

BACKUP

The user has read-only access to the first-level resources under the project (that is, volumes, file systems, and buckets) and full access to snapshot operations (can create/delete/export snapshots).

OWN

OWN is an internal ACL for identifying the user as a project owner. The internal OWN ACL on a project is modified by editing a project's properties, not the project's ACL. (In the API, the project update API call is used, not the update project ACL call.) A user with an OWN ACL can:

- perform CRUD operations on project resources.

- set ACLs on the project. (This includes making another user the owner of the project by setting him or her as the new owner using the project update API.)

- delete the project.

- set project properties such as the project name and owner.

ANY

ANY is an internal ACL for identifying users with any of the above ACLs on a project. The internal ANY ACL cannot be modified by using the project ACL assignment API.

Newly created projects will have an empty ACL. It is the responsibility of the Tenant Administrator, the Security Administrator, or a user with an OWN ACL on the project (that is, the project owner) to manage the ACL.
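The project ACL table above can be summarized as a permission map. The operation names below are illustrative labels chosen for this sketch, not API constants; BACKUP is simplified to "read plus snapshot operations".

```python
# Operations allowed by each project ACL (a simplified reading of the table).
PROJECT_ACL_RIGHTS = {
    # Full CRUD on project resources:
    "ALL": {"create", "read", "update", "delete"},
    # Read-only on first-level resources, plus full snapshot operations:
    "BACKUP": {"read", "snapshot"},
    # Owner: CRUD plus project-level management:
    "OWN": {"create", "read", "update", "delete",
            "set_acl", "delete_project", "set_properties"},
}

def allowed(acl, operation):
    """Does this project ACL permit the operation?"""
    return operation in PROJECT_ACL_RIGHTS.get(acl, set())
```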

Adding users into ViPR

A user with the Security Administrator role is responsible for adding users into the ViPR virtual data center via authentication providers. User authentication is done through authentication providers that have been added to ViPR.

After ViPR is initially deployed, there is a built-in local administrative root user. The root user includes all the privileges associated with the Security Administrator, System Administrator, System Monitor, and Tenant Administrator roles. The root user (or any user with the Security Administrator role) can add users into the ViPR virtual data center from existing Active Directory (AD) or Lightweight Directory Access Protocol (LDAP) accounts.

Authentication providers can be added in the Administrator view of the UI, or in the API and CLI interfaces. When adding an authentication provider, the Security Administrator specifies parameters such as whether it is an AD or LDAP source, the domains, the server URLs, and the group attribute and whitelist values.

Note

AD and LDAP are directory services that provide information on user lists and permissions.

When users are added into ViPR via authentication providers, a domain (such as mycompany.com) is specified which is used for the users to log in. The login for users is of the form username@domain.

All AD/LDAP users are mapped to the provider tenant unless there is an explicit user mapping on tenants (subtenants) that restricts the use of that tenant to only the specified users mapped into the tenant. The procedure to map users into multiple tenants is described in the EMC ViPR Installation and Configuration Guide. AD/LDAP users can be mapped into tenants in three ways: by using a domain, group, or AD/LDAP attribute specified in the authentication provider. Note that adding users into tenants must be done through the API or CLI interfaces; it is not supported in the UI.

As an example, if a Tenant Administrator wanted to map all the users from the mycompany.com domain into the tenant named Company A, he or she would specify this domain as the user mapping when the Company A tenant is created. If the Tenant Administrator did not want all the users from a domain to map into one tenant, but into multiple tenants, he or she could specify the user attributes or group memberships associated with that domain to map the users into the tenants.
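The fall-through behavior described above (explicit mappings win, everyone else lands in the provider tenant) can be sketched as follows. This helper is an illustrative assumption that reduces the user mapping to a simple domain match; real mappings can also use groups or AD/LDAP attributes.

```python
def map_user_to_tenant(login, tenant_mappings, provider_tenant="Provider Tenant"):
    """Resolve which tenant an AD/LDAP user lands in (simplified sketch).

    login           -- of the form username@domain
    tenant_mappings -- tenant name -> mapped domain
    """
    domain = login.split("@", 1)[1]
    for tenant, mapped_domain in tenant_mappings.items():
        if domain == mapped_domain:
            return tenant
    # No explicit mapping matched: the user maps to the provider tenant.
    return provider_tenant
```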


Local users

In addition to the local root user there are three local system user accounts (svcuser, sysmonitor, and proxyuser), but these are used mostly internally by ViPR. The svcuser and sysmonitor users have the System Monitor role and can access the UI. The svcuser is used for read-only support. The sysmonitor user can perform read-only monitoring of the ViPR virtual data center; this user account is used by SolutionPack to collect ViPR data. The proxyuser is an internal application user who can run service operations on behalf of other users.

Other than the special built-in administrative users (root, svcuser, sysmonitor, and proxyuser), there are no local users in ViPR. Users who can log in, and who are assigned roles or ACLs, must be found through an authentication provider added to ViPR.

User role resource views

The ViPR resources visible to users depend on their user role.

System Administrator view

A user with the System Administrator role is able to view all ViPR virtual data center-level resources, but not the tenant-level resources such as projects and their associated resources (volumes, file systems, buckets, snapshots, exports, and consistency groups). A System Administrator can manage (that is, perform read/write operations on) physical resources such as storage systems, pools, ports, and networks, and virtual resources such as virtual arrays and virtual pools.

System Monitor view

A user with the System Monitor role is able to view all ViPR virtual data center-level resources, including the tenant-level resources. A System Monitor can see all resources in the ViPR virtual data center, but has only read-only access to those resources.

Tenant Administrator view

A user with a Tenant Administrator role can view all tenant-level resources in ViPR.

Project Administrator view

A user with a Project Administrator role is able to see and manage all projects that they created and the resources under these projects. The Project Administrator can transfer the ownership of any of the projects they created to a different user, giving up their ownership of that project.
