EMC VNX2 Technical Deep Dive Workshop


EMC VNX Technical Overview
Last updated: June 2015

EMC CONFIDENTIAL - INTERNAL USE ONLY

Note to Presenter: This technical VNX deck complements the modular, business-oriented decks available on EMC ONE. It can be used to add technical depth to those decks when customers need a deeper dive on the product-level details behind the solution-oriented discussion. Introduce your agenda and validate the level of knowledge of your audience.


Student Guide & Workshop & Internal Training & Confidential Update Dailyhttps://goo.gl/VVmVZ0

#EMC CONFIDENTIALINTERNAL USE ONLYEMC CONFIDENTIALINTERNAL USE ONLY

This technical deck is used to supplement the business-specific EBC decks:
VNX Hardware Technical Overview
VNX Management and Integration
Advanced NAS Functionality
Core Block and File Architecture
Data Protection

Winning with VNX and VNXe transition slide.

This deck is used to supplement the modular EBC decks available on EMC ONE. The sections can be pulled into the other decks to provide a complete and technically detailed overview.

THE VNX FAMILY: THE RIGHT PLATFORM FOR ALL YOUR CUSTOMERS' NEEDS

Scales up to 1,500 drives
Up to 1,000,000 IOPS

Starts under $12K
Up to 100,000 IOPS
VNXe3200, VNX5200, VNX5400, VNX5600, VNX5800, VNX7600, VNX8000

Software-defined VNX: Community Edition, free download!

NEW

With VNX, you have the flexibility to start small and scale up to 1,500 drives or 6 PB.

Newly introduced at EMC World is the vVNX Community Edition, a software-defined version that is freely downloadable for use cases where agility and rapid deployment of test/dev environments are critical.

We are also announcing a new VNXe3200 All-Flash Array with 3 TB of flash, starting at a $25K street price and rated at 75K IOPS (1:1 R/W, 8K block size).


Storage Connectivity Profile Is Changing
Increasing emphasis on Ethernet-based connectivity options
EMC offers all major storage connectivity options today
Source: IDC Worldwide Disk Storage System Forecast, March 2014

2010-2018 CAGR:
Fibre Channel SAN: 2.7%
NAS + iSCSI + FCoE: 6.8%
Network-attached NAS: 5.8%
iSCSI SAN: 6.4%
External DAS: -4.5%
Fibre Channel over Ethernet: 70.9%
Switched SAS: 14.9%

Worldwide External Storage (US $B)


How storage is deployed has been going through a process of change for a number of years now, and that change will continue in the coming years. Some interesting things to note from this slide: Fibre Channel revenues are still a strong force in the industry and, having been flat for some time, are now on an upward trajectory; the storage market continues to embrace this technology. The main area of growth is in the Ethernet space. Note to Presenter: Click to highlight the Ethernet technologies. iSCSI and NAS will continue to grow aggressively as the connectivity options of choice in the sub-$75K storage space due to simplicity and the ubiquity of the IP network, and NAS is gaining traction in transactional use cases (virtualization and database). The highest growth rates are seen in the markets for the newer connectivity options, Fibre Channel over Ethernet (FCoE) and Switched Serial Attached SCSI (SAS). Although starting from a small base, these technologies will become more relevant (particularly FCoE) starting in 2014. Over the past few years, the market-share loser in the storage connectivity options market has been external DAS. While customers recognize that a fluid and flexible storage connectivity model is required to truly enable the virtualized data center, a number of applications have recommended direct-attached or internal storage (e.g., Exchange). The main point to remember is that EMC is the leader or second in all the markets in this chart except the DAS market. EMC is committed to fully supporting all these technologies on the VNX platform, today or as the market demands.

Powerful, Flexible Modular Architecture
More processing power. Self-optimizing pools. Any network.

HARDWARE

Multi-core and multi-node scale: MCx technology
MCx technology extends performance to unprecedented levels
Add X-Blades for the right amount of file sharing power
Scales to 96 CPU cores and 4,000 drives (multiple back ends)

Self-optimizing storage pools
Active data is automatically moved to Flash for fastest performance
Inactive data is automatically moved out of Flash to large disks for lowest capacity cost
Fully automated. Always on. No management intervention needed. Set-it-and-forget-it.
Lowest transaction cost and lowest capacity cost simultaneously!

Unified multi-protocol
Full support for any network
Unified block, file and object
Share volumes and files
Fully provisioned LUNs

[Diagram: shared STORAGE POOL (SSD and HDD) serving multiple storage processors (SP) and Data Movers]

Future proof
Flexible IO options
Plug and play
Low-end DC Unified options available*

SAN (BLOCK): iSCSI, FC, FCoE
NAS (FILE): CIFS, NFS, pNFS, MPFS
CLOUD (OBJECT): REST, SOAP

* DC NEBS Systems planned for Mid Q3

The VNX series is based on an industry-leading architecture that allows you to configure purpose-built components designed specifically for the different workloads required. For the different connectivity options - SAN (block connectivity with iSCSI, Fibre Channel or Fibre Channel over Ethernet), NAS (CIFS, NFS with pNFS, or Multi-Path File System) or Cloud (Atmos for REST or SOAP) - the VNX platform addresses each with its purpose-built modular architecture, simultaneously. The benefits of the modular unified architecture include: a modular design with controllers optimized for the protocols and workloads to be served; the ability to add and scale out Data Movers independently and without impacting the overall system; and both controllers benefiting from a central storage pool for LUN provisioning, ensuring no stranded, unused resources. Frequently accessed data is automatically moved to high-performance Flash drives, and infrequently accessed data is moved to high-capacity/low-cost disk drives. With the next-gen VNX systems, the power of the platform has been radically improved, so the overall performance and scalability of the system has never been greater. However, if more performance is required, another advantage of the modular architecture is that EMC has packaged the Data Movers into a NAS gateway model. This gateway supports FC SAN connectivity to EMC block storage (Symmetrix, VNX and CLARiiON) and scales with support for up to four storage arrays. So, if you are looking for more storage scale, more memory, or more drives over and above what is typically supported, add a gateway in front of four VNX5800s and you can scale up to 8 Data Movers, up to 8 storage processors and up to 4,000 drives. All the power you need from the VNX series modular architecture. In August 2014, we will support DC NEBS-certified 5200/5400/5600 systems. Note to Presenter: The graphic depicting eight storage processors is based on a configuration that includes a VNX gateway front-ending four VNX Series storage arrays. Note to Presenter: Click to show the red box, indicating that the following slides (9-48) cover hardware and base software for the File and Block components of the VNX solution.
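The FAST behavior described above (active data promoted to Flash, inactive data demoted to capacity drives) can be illustrated with a small sketch. This is illustrative pseudologic only, not EMC's FAST VP implementation; the function, field and threshold names are hypothetical, and the tier names simply mirror the slide.

```python
# Illustrative sketch of FAST-style auto-tiering: rank pool slices by recent
# activity and pack the hottest into the fastest tier first. Names and the
# relocation policy are hypothetical; this is not EMC's implementation.
from dataclasses import dataclass

@dataclass
class Slice:
    lun: str
    offset_gb: int
    recent_iops: float          # activity gathered during the monitoring window

def plan_relocation(slices, tier_capacity_gb, slice_size_gb=1):
    """Return {tier_name: [slices]} with hot data on the fastest tiers."""
    ordered = sorted(slices, key=lambda s: s.recent_iops, reverse=True)
    plan, i = {}, 0
    # Tiers are assumed to be listed fastest first, e.g. Flash, SAS, NL-SAS.
    for tier, capacity_gb in tier_capacity_gb.items():
        room = int(capacity_gb // slice_size_gb)
        plan[tier] = ordered[i:i + room]
        i += room
    return plan

pool = [Slice("LUN_1", off, iops) for off, iops in
        [(0, 900.0), (1, 5.0), (2, 450.0), (3, 0.2), (4, 120.0)]]
tiers = {"Flash": 2, "SAS_15K": 2, "NL-SAS": 10}   # usable GB per tier (toy numbers)
for tier, placed in plan_relocation(pool, tiers).items():
    print(tier, [f"{s.lun}@{s.offset_gb}GB" for s in placed])
```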

New VNX Clean-Sheet Design: Unlocking the Power of FLASH

[Diagram: SP A and SP B connected by 32 lanes of CMI; 32 CPU cores; 160 lanes of PCIe Gen 3; 22 PCIe IO slots for FC, iSCSI, NFS, pNFS & CIFS; dual SAS 2.0 back ends to up to 1500* SSDs or HDDs]

So how do you design a system to take advantage of the new technologies and unlock the power of Flash? With the first-generation VNX, we had the capabilities of a Ferrari, but the technology, while highly capable, was constrained and could not take advantage of the large core counts becoming available today. It was like driving your Ferrari on a single-lane road. With the next-gen VNX, we have cleared the road so that the power of the system can be given its full opportunity to excel. Note to Presenter: Click now in Slide Show mode for animation. First, start with a large number of flash drives, up to 1,500. Note to Presenter: Click now in Slide Show mode for animation. Next, use the latest Intel multi-core technology with up to 32 CPU cores. Note to Presenter: Click now in Slide Show mode for animation. To avoid any bottlenecks moving data to and from the drives, you need multiple high-speed lanes: 160 with PCI Express Gen 3. Note to Presenter: Click now in Slide Show mode for animation. With all this horsepower, you need a lot of connectivity to the servers: up to 22 I/O slots for FC, FCoE, iSCSI, and NAS. Note to Presenter: Click now in Slide Show mode for animation. Finally, you need to add hard drives and FAST to handle the inactive data. This clean-sheet design of the new VNX Series is the key to unlocking the power of Flash.

Dynamic Multicore Optimization vs. Static Core Utilization
Breakthrough Midrange Innovation!

[Diagram: CPU core utilization by service - RAID, I/O, DRAM Cache, FAST Cache, Data Services, Management, Available]

Note to Presenter: View in Slide Show mode for animation. In addition to the new hardware, the other key to the new VNX performance is multi-core optimization, or MCx. With the previous VNX operating environment based on FLARE, key services such as RAID were run on a specific CPU core. However, as Intel increased the number of cores, this strategy became a bottleneck and limited the overall array performance. Now, with MCx, you can see that all services can be spread across all the cores. This is why VNX with MCx can deliver up to 3 to 5 times the number of transactions.
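The contrast between pinning services to fixed cores and spreading them across all cores can be shown with a toy sketch. The task mix and both scheduling policies are illustrative only; this is not the actual FLARE or MCx scheduler.

```python
# Toy contrast between static core affinity (FLARE-era) and spreading work
# across all cores (MCx-style). The task mix and policies are illustrative only.
from itertools import cycle

TASKS = ["RAID", "I/O", "DRAM Cache", "FAST Cache", "Data Services", "Management"] * 4

def static_affinity(tasks, cores):
    """Pin each service to one fixed core: a few hot cores become the bottleneck."""
    load = [0] * cores
    pin = {"RAID": 0, "I/O": 1, "DRAM Cache": 1, "FAST Cache": 2,
           "Data Services": 2, "Management": 3}
    for t in tasks:
        load[pin[t] % cores] += 1
    return load

def dynamic_spread(tasks, cores):
    """Round-robin every service instance across all available cores."""
    load = [0] * cores
    for core, _ in zip(cycle(range(cores)), tasks):
        load[core] += 1
    return load

print("static :", static_affinity(TASKS, 8))   # a few busy cores, rest idle
print("dynamic:", dynamic_spread(TASKS, 8))    # load evenly balanced
```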


VNX Software Suites

VNX Operating Environment (REQUIRED): VNX OE for File (base file services) and VNX OE for Block (base block services). This includes protocols, thin provisioning, block deduplication, block compression, file deduplication and compression, SAN Copy, and the ODX Enabler.
VNX Unisphere Management Suite (REQUIRED): Unisphere (Unisphere for Block, Unisphere for File, or Unisphere for Unified), Unisphere Central, Unisphere Analyzer, Unisphere QoS, and VNX Family Monitoring and Reporting (storage-only version of ViPR SRM).
EMC Storage Analytics: VMware vRealize Operations Manager for VNX and the EMC Adapter for VNX.
EMC Data at Rest Encryption: EMC Data at Rest Encryption.
VNX Encryption and Retention Suite: File Level Retention; Common Event Enabler (CEE) with Common AV Agent and Common Event Publishing Agent.
VNX FAST Suite: FAST Cache and FAST VP.
VNX Local Protection Suite: SnapSure, SnapView, VNX Snapshots, RecoverPoint SE CDP.
VNX Remote Protection Suite: Replicator, MirrorView A/S, RecoverPoint SE CRR.
VNX Application Protection Suite: AppSync.

Packages: these suites roll up into the Total Efficiency Pack, the Software Essentials Pack*, and the Total Protection Pack (see the speaker notes below).

* The Software Essentials Pack includes (15) RP VM licenses and (5) VPLEX VE licenses.

From a software packaging standpoint, the new VNX platform maintains the software suites (grouped software titles) that roll up into three main software packs, namely the Total Protection Pack, the Total Efficiency Pack, and the new Software Essentials Pack, which is a combination of the most popular titles for VNX2 systems at an even more attractive price. So, let's start at the top and work our way down. The VNX OE and the Unisphere Management Suite are the two required items; they will automatically be added to every array order. As you can see, VNX OE includes VNX OE for File and VNX OE for Block, all protocols, thin provisioning, block compression, file deduplication and compression, SAN Copy, and the ODX Enabler. The Unisphere Management Suite includes Unisphere (Unisphere for Block, Unisphere for File or Unisphere for Unified), Unisphere Central, Unisphere Analyzer, Unisphere QoS, and VNX Monitoring and Reporting (the storage-only version of Watch4Net). Note to Presenter: Software enablers are required to be installed on the system in order for certain functionality to be used. The traditional ODX, Unisphere Analyzer and SAN Copy enablers are installed in the factory (and do not consume DRAM memory). The FAST Enabler, the Block Compression Enabler, the Virtual Provisioning (Thin) Enabler and the Block Deduplication Enabler are installed in the field (and will consume some DRAM memory). Storage Analytics for VNX is a standalone software title and comprises VMware vCenter Operations Manager for VNX and the EMC Adapter for VNX. Note that this is not required software. Now let's look at the software suites. VNX Encryption and Retention Suite: includes File Level Retention and the Common Event Enabler (CEE) with the Common Anti-Virus Agent and Common Event Publishing Agent. VNX FAST Suite: includes FAST Cache and FAST VP. VNX Local Protection Suite: includes SnapSure, SnapView, VNX Snapshots, and RecoverPoint SE CDP. VNX Remote Protection Suite: includes Replicator, MirrorView A/S, and RecoverPoint SE CRR. VNX Application Protection Suite: includes AppSync.

NOTE TO PRESENTER: New with the next-gen VNX: The FAST Suite now contains just FAST VP and FAST Cache, and the individual titles are no longer available a la carte. The VNX Management Suite is significantly broadened to include the Monitoring and Reporting suite as well as other complementary components such as Analyzer, Unisphere Central and Unisphere QoS. Also please notice that the default block provisioning methodology is thinly provisioned LUNs.

To be used with the Overview, Continuous Operations, and VNX for NAS modules:
VNX Hardware Technical Overview
VNX Management and Integration
Advanced NAS Functionality
Core Block and File Architecture
Data Protection

VNX Hardware transitional slide.

VNX: Modular Unified and Gateway

UNIFIED STORAGE (VNX series):
Easy to deploy, simple to manage
Scale capacity at good performance
Multi-protocol
File: NFS (including pNFS), CIFS, MPFS
Block: iSCSI, Fibre Channel, FCoE
Object: REST, SOAP

GATEWAY:
Leverage existing storage investment
Maximum scale of performance and capacity
Shared storage
Add to an existing block implementation
File: NFS (including pNFS), CIFS, MPFS
Object: REST, SOAP

[Diagram: servers access File, Object and Block through the VNX series (unified storage), and File/Object through a VNX Gateway over a Fibre Channel SAN to Symmetrix/VMAX or VNX Series back ends]

HARDWARE

EMC offers two types of storage implementation options in the VNX series. VNX unified storage platforms support the NAS protocols (CIFS for Windows and NFS for UNIX/Linux, including pNFS, which is part of NFS v4.1), the patented Multi-Path File System (MPFS), as well as native block protocols (iSCSI and Fibre Channel), all included at no additional charge. In addition, object storage (the REST and SOAP protocols) is available as a solution offering leveraging Atmos VE connected to single or multiple VNX systems (via FC, iSCSI or NFS). VNX unified storage platforms are the right choice when you are looking for an easy-to-deploy integrated platform with advanced functionality that is flexible and scalable. VNX Series gateway platforms are NAS heads only. These platforms access external storage such as block-based VNX, CLARiiON, Symmetrix, or combinations of these platforms for optimal performance or TCO. VNX gateways allow the back-end storage to be pooled among the NAS, MPFS and FC/iSCSI SAN, which improves storage usage and consolidates management. Gateways are ideal for environments with existing Fibre Channel/iSCSI SANs. A VNX gateway is the best choice when you require both performance and capacity scaling. A VNX gateway supports up to four back-end arrays concurrently, delivering increased I/O bandwidth to the front-end Data Movers. NOTE TO PRESENTER: With the introduction of the next generation of the VNX Unified systems, the gateways will continue to leverage the legacy VG2 and VG8 technology.

VNX System Architecture

[Diagram: clients, database servers, Exchange servers, application servers and VMware/Hyper-V hosts connect over the LAN and SAN (FC, iSCSI, FCoE, 10Gb Ethernet; Object via Atmos VE) to VNX Unified Storage: VNX Data Movers running VNX OE for File with failover, and VNX SPs running VNX OE for Block with SP failover, backed by Flash, SAS and Near-Line SAS drives, LCCs, power supplies and SPSs]

HARDWARE

The servers at the top of the slide reflect the true value of unified storage by enabling servers to connect to VNX via SAN (iSCSI, Fibre Channel or Fibre Channel over Ethernet) and NAS. This picture illustrates a unified storage product with scalable Data Movers. The VNX5200 supports up to 3 Data Movers, the VNX5400/5600 support up to 4 Data Movers, the VNX5800 supports up to 6 Data Movers, and the VNX7600/8000 support up to 8 Data Movers. Each system can be configured with at least one X-Blade as a failover standby blade (the 5200, 5400 and 5600 can be ordered with a single X-Blade if File HA is not critical). The 5400 and above can also be configured with more than one standby blade, if desired. This is useful if you're running two separate applications on different networks, or if the environment is very critical and the pool of standby blades provides continued access even in the extremely unlikely event of a double blade failure. The failover between Data Movers is controlled by the Control Station (not shown). The Control Station is used for configuring the system, monitoring the health of the primary blades, and initiating failover to the standby blade if a primary blade fails. The VNX Series supports 1 or 2 Control Stations for increased availability. The VNX includes a fully integrated block processing component, the Storage Processor. The new VNX models offer up to five times the performance of the previous-generation VNX (VNX8000 compared to VNX7500). Back-end storage connectivity is via Serial Attached SCSI (SAS) over a 4-lane 6 Gbit connection. The SAS implementation also employs point-to-point bus technology for improved performance and resiliency. The internal SP architecture uses PCI-E Gen 3 up to 16x. Memory sizes have increased to up to 128GB per SP and leverage DDR3 technology up to 1330MHz. VNX uses the latest Intel Xeon processors (Sandy Bridge) and leverages turbo functionality to provide increased performance beyond their specification when conditions allow. The CPU and memory in VNX have been physically separated from the I/O complex, which makes servicing and upgrading the systems much easier, and also provides the basis for the VNX UltraFlex technology. The disk technology used is both 2.5" and 3.5" and includes Flash drives and 7.2K, 10K and 15K rpm SAS drive types, connecting natively to the SAS interface. The 7,200 rpm high-capacity drives are also referred to as Near-Line SAS. Note to Presenter: On this slide, SPS = secondary power supply, and LCC = link controller card. Note to Presenter: On the schematic in the slide, the Flex IO options are a logical view; in actuality, FC, FCoE and iSCSI modules are exposed from the SPs, and the 10GbE and 1GbE for NAS are exposed from the Data Movers. Note to Presenter: The secondary power supplies (SPS) in the diagram refer to the VNX8000 (SPE-based), which uses independent (2x2U) SPSs, whereas the other platforms use a DPE that has the SPS built into the VNX SP (Battery on Board).

VNX Series Details

Models (Modular Unified): VNX5200 / VNX5400 / VNX5600 / VNX5800 / VNX7600 / VNX8000; (Gateway): VG2 / VG8

Max. drives: 125 / 250 / 500 / 750 / 1000 / 1500; VG2: 4000, VG8: 4000
Max FAST Cache (GB): 600 / 1000 / 2000 / 3000 / 4200 / 4200; gateways: N/A
Drive types: 2.5" and 3.5"; Flash, SAS and NL-SAS; gateways: back-end dependent

FILE
Configurable I/O slots per DM: 3 / 3 / 3 / 4 / 4 / 5; VG2: 3, VG8: 5
Data Movers (DM): 1 to 3 / 1 to 4 / 1 to 4 / 2 to 6 / 2 to 8 / 2 to 8; VG2: 1 or 2, VG8: 2 to 8
Capacity per Data Mover: 256 TB (VNX5200-VNX5800, VG2); 512 TB (VNX7600, VNX8000, VG8)
CPU / cores / memory (per blade): 2.13GHz/4/6GB / 2.13GHz/4/6GB / 2.13GHz/4/12GB / 2.4GHz/4/12GB / 2.8GHz/6/24GB / 2.8GHz/6/24GB; VG2: 2.4GHz/4/6GB, VG8: 2.8GHz/6/24GB
Protocols: NFS, CIFS, pNFS

BLOCK
Configurable I/O slots per SP: 3 / 4 / 5 / 5 / 5 / 11; gateways: N/A
Embedded I/O ports: 2 back-end SAS ports (VNX5200-VNX7600); 0 (VNX8000); gateways: N/A
CPU / cores / memory (per SP): 1.2GHz/4/16GB / 1.8GHz/4/16GB / 2.4GHz/4/24GB / 2.0GHz/6/32GB / 2.2GHz/8/64GB / 2x 2.7GHz/8/128GB; gateways: N/A
Protocols: FC, iSCSI, FCoE; gateways: N/A

HARDWARE

Here are the technical specifications for the VNX product line; take a minute to review this information. Let's take a moment to look at the specs of the systems in the family. Note how the hardware scales with the systems, and performance will align with the physical hardware configurations. Also note that FAST Cache sizes are increased relative to their predecessors; for example, on the high end, the 2.1TB maximum is increased to a 4.2TB maximum (NOTE: maximum FAST Cache capacities require 200GB FAST Cache optimized SSDs). Note to Presenter: Here are some additional considerations around the next-gen VNX hardware that may be useful for answering questions. They are included here but are not expected to be presented. Some other points to note around the release of VNX NG: Ease of use and reliability are significantly improved with one-button shutdown within the Unisphere System List tab. Similar to VNXe, the single software button will gracefully shut down the whole system (File only, Block only or Unified) in one action. Built-in future proofing: 4 drives x 300GB capacity are required for the vault on all next-gen VNX systems. The space includes VNX OE, vault for write cache, new MCx structures, as well as reserved space for future leverage of new functionality that may consume vault space. Investment protection: drives and DAEs (not data) from VNX can be re-used for NG VNX (early VNX drives MAY require new firmware to handle MCR functionality). Cost-effective and optimized components: no SLIC ports are reserved for File on Block-only systems, although you need to plan accordingly if you are looking at adding File later to a Block-only system; bear in mind that it is always more cost effective to buy Unified than to add it later.


System Limits

(values for VNX5200 / VNX5400 / VNX5600 / VNX5800 / VNX7600 / VNX8000)
Max pools: 15 / 20 / 40 / 40 / 60 / 60
Max pool LUNs: 1,000 / 1,000 / 1,100 / 2,100 / 3,000 / 4,000
Max Classic LUNs: 2,048 / 2,048 / 4,096 / 4,096 / 8,192 / 8,192
Max storage groups: 256 / 512 / 1,024 / 2,048 / 4,096 / 4,096
Max VNX Snapshots: 8,000 / 8,000 / 8,200 / 16,400 / 32,000 / 32,000
Max VNX Snapshot consistency groups: 128 / 128 / 128 / 256 / 256 / 256

Several system limits have been increased in the Salmon River release (33 xxx.096), bringing all VNX2 systems on par with the VNX1 systems they replaced in terms of pool and LUN limits.
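The limits in the table above lend themselves to a simple pre-sales sanity check. The sketch below encodes the published numbers and validates a proposed layout against them; the checking logic and function name are ours, while the values come straight from the table.

```python
# Sketch: validate a proposed configuration against the published VNX2 system
# limits (numbers taken from the table above; the checking logic is ours).
LIMITS = {
    #            pools  pool_luns  classic_luns  storage_groups  vnx_snaps  snap_cgs
    "VNX5200": (15, 1000, 2048, 256, 8000, 128),
    "VNX5400": (20, 1000, 2048, 512, 8000, 128),
    "VNX5600": (40, 1100, 4096, 1024, 8200, 128),
    "VNX5800": (40, 2100, 4096, 2048, 16400, 256),
    "VNX7600": (60, 3000, 8192, 4096, 32000, 256),
    "VNX8000": (60, 4000, 8192, 4096, 32000, 256),
}
FIELDS = ("pools", "pool_luns", "classic_luns", "storage_groups",
          "vnx_snaps", "snap_cgs")

def check_config(model, **proposed):
    """Return a list of limit violations for the proposed configuration."""
    limits = dict(zip(FIELDS, LIMITS[model]))
    return [f"{k}: {v} exceeds {model} limit of {limits[k]}"
            for k, v in proposed.items() if v > limits[k]]

print(check_config("VNX5400", pools=25, pool_luns=800))           # pools over the limit
print(check_config("VNX7600", pool_luns=2500, vnx_snaps=10000))   # within limits -> []
```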

Data Mover Capacity (as of Salmon River)

(values for VNX5200 / VNX5400 / VNX5600 / VNX5800 / VNX7600 / VNX8000)
Max drive slots: 125 / 250 / 500 / 750 / 1000 / 1500
Data Movers @ initial GA: 1 or 2 / 1 or 2 / 1 or 2 / 1-3 / 2-4 / 2-8
Planned Data Mover upgrade (Q1 14): 1-3 / 1-4 / 1-4 / 2-6 / 2-8 / 2-8
Current max file capacity per system w/ failover: 256 TB / 256 TB / 256 TB / 512 TB / 768 TB / 1.8 PB
Planned max file capacity w/ failover: ~500 TB / 768 TB / 768 TB / 1.25 PB / 3.5 PB* / 3.5 PB*

* Maximum configuration based on 512 TB per Data Mover for these systems only.

# Copyright 2015 EMC Corporation. All rights reserved.Speaking PointsThe main goal of these changes is to ensure that should a customer require to leverage the VNX platform as an all File based implementation they will be able to reach near maximum capacity and leverage ALL the performance of the back end.The 5200, 5400 and 5600 all support up to 4 data movers max (these platforms all can operate with a single (unprotected) Data Mover, where the 5800 and above have to have a failover Data MoverIncreasing the 5800 up to 6 data movers max; 5 active and 1 (minimum) standbyIncreasing the 7600 up to 8 data movers max; 7 active and 1 (minimum) standbyThe Planned Max capacity is a factor of the maximum physical capacity supported in the platform and the maximum addressable capacity of the supported X-Blades. In the case of the VNX5200 and 5400, the system is limited by the max drive slots and in the others by the addressable capacity of the max number of X-Blades. NOTE TO PRESENTER: In Q1, we increased DM counts and in Salmon river release in Q1 15 we will be able to support 512TB per X-Blade on the VNX7600 and VNX8000 platforms which have the most powerful Data Movers. This will double the max file capacity for systems that are DM capacity restricted e.g. VNX8000 will scale all the way to 3.5PB#TITLE
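The relationship described above (max file capacity bounded by both the back end and the addressable capacity of the active X-Blades) can be written as a one-line formula. A minimal sketch follows; the function name and the example inputs (drive size, standby count) are illustrative assumptions, not a sizing tool.

```python
# Minimal sketch of the relationship described above: usable file capacity is
# bounded both by the back end (drive slots x drive size) and by the Data
# Movers (active DMs x addressable TB per DM). Inputs are illustrative only.
def max_file_capacity_tb(drive_slots, avg_drive_tb, dm_count, standby_dms, tb_per_dm):
    backend_tb = drive_slots * avg_drive_tb
    active_dms = dm_count - standby_dms
    dm_addressable_tb = active_dms * tb_per_dm
    return min(backend_tb, dm_addressable_tb)

# e.g. a VNX8000-class system: 1500 slots of 4 TB NL-SAS, 8 DMs (1 standby),
# 512 TB addressable per DM -> limited by the DMs, ~3.5 PB as on the slide.
print(max_file_capacity_tb(1500, 4, 8, 1, 512))   # 3584 TB
```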

VNX Highlights: New Flexible Platforms
Broad Set of Capacity, Firepower & Connectivity Choices

Mid-Market Platform (front view / rear view)

Performance Platform (front view / rear view)

Up to 6 PB and 1.1M IOPS
>200,000 IOPS in 3U
Unified across the family
Up to 32 cores per system
Optimized for applications

Up to 72 FC ports
PCIe Gen 3

HARDWARE

So what do these new systems look like? Here's a peek.

For the VNX Series we're refreshing the entire line, incorporating the newest Intel Sandy Bridge CPU with a PCIe Gen 3 backplane. We'll offer a DPE (Disk Processor Enclosure; 25 drives with the Storage Processors) for the VNX5200 through VNX7600 systems and an SPE (Storage Processor Enclosure) for the new VNX8000 system. As you can see on the right, the Performance Platform has a few more IO slots than we've offered before: up to 9 available SLIC slots supporting FC, FCoE, and iSCSI, as well as the 2 slots reserved for back-end connectivity.

Additional Notes

VNX8000 PERFORMANCE PLATFORM
The VNX8000 is the new high-end VNX modular platform. It stands on its own as the highest-performing dual-node scale-up storage platform on the planet. The controller housing the dual Storage Processors is a 4U unit (block only) that can be configured in EMC's standard 40U rack or a customer's standard Telco rack. The VNX8000 will be new territory for those who know the VNX7500 that preceded it, as it takes a huge leap in terms of raw firepower. Processing is handled by dual Intel Xeon Sandy Bridge processors running at 2.7GHz, for a total of 32 cores across the 2 SPs (compare to 12 for the VNX7500). The system comes standard with 128 GB of 1600MHz DDR3 memory per SP for a total of 256GB (compare to 96GB for the VNX7500). The system supports 16 DIMMs per processor, so memory configurations can be expanded in the future as needed. The system is fully PCIe Gen 3 capable and supports up to 96 lanes of PCIe Gen 3 per SP, 192 lanes total (96 lanes of PCIe Gen 3 = 96GB/s; note to presenter: we will not see this level of throughput through the system due to SLIC (Flex IO Module) constraints, but it indicates the system is future-proofed for 40Gb Ethernet and 16Gb FC). The system ships by default with 2 x 4-port SAS SLICs per SP (a total of 16 SAS ports per system) for back-end disk connectivity and a single front-end block SLIC. Customers will then add the combination of SLICs they require for their specific environment; port limits are discussed below. File connectivity is provided by the proven legacy Data Movers, familiar from the original VNX platform. Existing File scalability and cost reductions meant Sandy Bridge-based Data Movers were unnecessary. New to the File space, however, is a new Control Station, which provides improved administrative performance for file-based Unisphere management. FC is the only protocol supported for connectivity to the next-gen SPs. Also new to the VNX next-gen line are lithium-ion battery based uninterruptible power supplies. In the case of the VNX8000, 2 x 2U battery shelves are used, one for the VNX8000 SPs and the other for the vault pack in the first disk shelf. Li-Ion batteries are lighter and longer-lasting than lead-acid batteries.

MID-MARKET PLATFORM
This is the new mid-range and entry-level VNX modular platform. It is a flexible and compact, yet very high-performing, storage solution designed for Flash SSDs. The controller housing the dual Storage Processors is a 3U Disk Processor Enclosure (DPE) unit that can be configured in EMC's standard 40U rack or a customer's standard Telco rack. The DPE-based platforms come in 5 scalable family options (125-, 250-, 500-, 750- and 1000-drive versions) and can support up to 25 2.5" drives directly in the DPE (note for the blue box call-out: 25 SSDs running at 7,000 IOs per second provide ~200,000 IOPS). Processing is handled by Intel Xeon Sandy Bridge processors with varying numbers of cores, running at varying speeds depending on the model type:
VNX7600 - 8-core 2.20GHz CPU and 64GB of 1600MHz DDR3 memory
VNX5800 - 6-core 2.0GHz CPU and 32GB of 1333MHz DDR3 memory
VNX5600 - 4-core 2.40GHz CPU and 24GB of 1066MHz DDR3 memory
VNX5400 - 4-core 1.80GHz CPU and 16GB of 1066MHz DDR3 memory
VNX5200 - 4-core 1.20GHz CPU and 16GB of 1066MHz DDR3 memory
The systems will be data-in-place upgradeable to larger systems (including to the VNX8000) some time after GA. The system is fully PCIe Gen 3 capable. The system ships by default with 2 onboard SAS ports per SP, although all systems other than the 5200 and 5400 can have an additional SAS SLIC added for high-performance workloads. Customers will then add the combination of SLICs they require for their specific environment; port limits are discussed below. File connectivity is provided by the proven Argonaut Data Movers, familiar from the original VNX platform, with the following specifications:
VNX7600 - 2 to 8 blades, 2.80GHz Westmere CPU, 24GB memory
VNX5800 - 2 to 6 blades, 2.4GHz Westmere CPU, 12GB memory
VNX5600 - 1 to 4 blades, 2.13GHz Westmere CPU, 12GB memory
VNX5400 - 1 to 4 blades, 2.13GHz Westmere CPU, 6GB memory
VNX5200 - 1 to 3 blades, 2.13GHz Westmere CPU, 6GB memory
The file implementation, like the VNX8000, uses the new Dobie Control Station. Also new to the DPE-based platforms are Battery-on-Board (BoB) lithium-ion battery based uninterruptible power supplies. The batteries are located in the DPE to save space, although Li-Ion batteries require special handling and shipping. Li-Ion batteries are lighter and longer-lasting than lead-acid batteries. Note to presenter: The DPE by default ships with a 240-volt power supply. With the Q1 release of VNX2, there is an option to order a 100V power supply (e.g., for Japan or the US). This power supply is only available for the 5200 and 5400 and does not support the 4-port 10GbE copper Flex IO modules.


VNX Unified Storage Components

Block components: storage or data processor enclosure*
VNX Operating Environment for Block
Dual active storage processors
Automatic failover
Flexible IO connectivity options
Standby power supplies (battery backup): SPE only; the DPE has its SPS within the DPE

File components: Data Mover Enclosure
VNX Operating Environment for File
From 2 to 8 Data Movers (DMs) supported with configurable failover options
Flexible IO connectivity options
Control Stations (1 or 2)
Disk array enclosures

* DPE contains disks (2.5" only); SPE does not contain disks

[Diagram: DPE (Disk Processor Enclosure, 3U), Control Station, Data Mover Enclosure (X-Blade enclosure), 25x 2.5" Disk Array Enclosure, 15x 3.5" & 2.5" Disk Array Enclosure, SPS (SPE only), Disk Processor Enclosure/Storage Processor Enclosure, optional Control Station]

Start with FC or iSCSI or NAS; add other protocols seamlessly, as needed.
Flexible IO options:
4-port 16Gb FC
4-port 8Gb FC
2-port optical 10GbE
2-port copper 10GbE
4-port copper 1GbE
2-port 10Gb FCoE
4-port 4-lane 6Gb SAS

HARDWARE
60x 3.5" & 2.5" Disk Array Enclosure
Simple upgradeability: Block/File to Unified**
** VNX2 File/Block to Unified conversions will be in a future release

The basic design principle for the VNX Series storage platform is to use existing hardware for the VNX for File (Data Mover or X-Blade) front end and the latest technology for the VNX for Block hardware on the storage-processor back end. The control flow is handled by the SP in block-only systems and by the Control Station in File-enabled systems. Note to Presenter: The diagram shows both Block and File components, as in a Unified configuration. Customers can also order block-only or file-only configurations. The Disk Processor Enclosure (DPE) or Storage Processor Enclosure (SPE) uses dual active Storage Processors (SPs) for disk I/O. These processors run the VNX OE for Block, a proven, robust RAID implementation. The SPE supports automatic failover should one of the SPs fail. Each SP supports UltraFlex I/O modules that can be populated with: four ports x 8 Gb Fibre Channel; four ports x 1 Gb BaseT (copper) iSCSI; two ports x 10 Gigabit Ethernet optical iSCSI; two ports x 10 Gb BaseT (copper) iSCSI; two ports x 10 Gigabit Ethernet Twinax iSCSI; two ports Fibre Channel over Ethernet; and a 4-port 6Gb x 4-lane SAS v2.0 module. The disk array enclosures are either 15 x 3.5" disk shelves (Flash, SAS and NL-SAS) or 25 x 2.5" disk shelves (SAS) for disk capacity. The SPE/DPE can also be configured to support Fibre Channel hosts, native iSCSI hosts or FCoE hosts. Fibre Channel hosts can attach directly by adding Fibre Channel I/O modules (four ports per Fibre Channel I/O module) or through a standard Fibre Channel switch. Native iSCSI hosts can attach through switches to either 1 or 10 Gigabit Ethernet ports. Fibre Channel over Ethernet can attach through supported CEE switches. The Data Mover Enclosure contains the file Data Movers (also known as X-Blades). To clients on the network, a VNX for File looks like any other file server. The Data Movers (DMs) feature EMC's VNX OE for File system software, which is optimized for file I/O. For the DMs, there are several UltraFlex I/O module options: four ports x 1 Gb BaseT; two ports x 1 Gb BaseT plus two ports 1 Gigabit Ethernet optical; two ports x 10 Gigabit Ethernet optical; two ports x 10 Gb BaseT (copper); two ports x 10 Gigabit Ethernet Twinax. DM types cannot be mixed in the same system. Each DM is configured with one 4-port 8Gb Fibre Channel I/O module for storage array connectivity and tape connectivity (for NDMP). The Control Station is used to configure, manage and upgrade the DMs, as well as to manage DM failover. A two-blade VNX is typically configured as primary/standby, that is, with one DM designated to act as standby. The standby waits, fully booted, for the primary DM to fail. Because of this wait, there is no performance degradation on failover. In the event that the primary DM fails, the standby DM will take the load from the primary, then present itself to the network as the failed DM. In the other two-blade option, called primary/primary, both DMs are active. Should a DM fail, it will quickly reboot and present itself back to the network. Multi-blade systems are typically configured with N+1 or N+M advanced failover (where N is the number of active DMs and M is a pool of standby DMs), where one DM is configured as standby or a number of DMs are configured as a pool of failover DMs for the active blades. Note to Presenter: Primary/primary DM configurations (supported on VNX5200 and VNX5400) are not recommended for critical data. During the reboot, the data will be inaccessible. If the DM fails due to a hardware problem, the data will be inaccessible until it is replaced. UPGRADES: A block-only-to-unified upgrade will require the addition of a DM enclosure with one or two DMs, one or two Control Stations and the Unisphere Block-to-Unified license. These upgrades will not be supported at the initial availability of the new-generation VNX (which will be September 2013). In addition to Unified upgrades, VNX also supports the addition of conventional drive/disk array enclosures (2.5" and 3.5"), X-Blade/Data Mover Enclosures, DM and array UltraFlex I/O modules, and additional software, which are supported at first availability with the next-gen VNX.


VNX Form Factors

Block Only Base:
DAE = drives only
DPE = storage processors + 2.5" drives
SPE = storage processors

Drives:
Add DAEs up to the maximum capacity allowed
Can mix drive types in the same DAE (e.g. 7.2K rpm + SSD + 15K rpm + NL-SAS)
Can mix different DAEs in a system (e.g. 15-drive, 25-drive and 60-drive DAEs)

File Only or Unified Base:
Need block hardware + file hardware

HARDWARE

[Diagram: VNX8000 configuration: SPE (Storage Processor Enclosure, 4U), 2x SPS (standby power supplies, 2x 2U), CS (Control Station, 1U), DME (Data Mover Enclosure, 2U), DAEs (15x 3.5" & 2.5" drives, 3U; 25x 2.5" drives, 2U). VNX5400 to VNX7600 configuration: DPE (Disk Processor Enclosure including 25x 2.5" drive slots, 3U), CS (Control Station, 1U), DME (Data Mover Enclosure, 2U), DAEs (25x 2.5" drives, 2U; 15x 3.5" & 2.5" drives, 3U)]

The VNX series ships as a block-only, file-only or unified file-and-block system. The file-only and unified systems ship with all the hardware indicated in the diagrams on the slide. The block-only DPE systems comprise only the Disk Processor Enclosure and drives (and potentially disk array enclosures, depending on the capacity required). A block-only SPE system (VNX8000) comprises only the Storage Processor Enclosure, standby power supplies (2x2U), the vault disk array enclosure, expansion disk array enclosures, and drives. The DPE-based systems hold 25 drives in the block controller for reduced cabinet footprint and reduced cabling. The VNX series DPE includes 4 built-in ports of 4-lane 6 Gb/s SAS back-end buses (2 ports per SP). SAS and NL-SAS drives are supported, and while optional, EMC maintains the recommendation that large-capacity (NL-SAS) drives be configured with RAID 6 to protect against the longer rebuild times associated with these drive types. VNX supports the 2U 25 x 2.5" drive DAE and DPE for increased density and energy efficiency, as well as 3U 15-drive 3.5" DAEs. Capacity per DM for all VNX series models is 256 usable TB, but larger capacities can be supported via RPQ.
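As a quick illustration of how the enclosure heights in this section add up inside a cabinet, here is a small sketch. The heights (DPE 3U, DME 2U, CS 1U, 25-drive DAE 2U, 15-drive DAE 3U, 60-drive DAE 4U) come from this deck; the packing logic and the 40U rack assumption are ours and ignore placement rules such as the 1U gap noted later for dense DAEs.

```python
# Quick rack-unit budget for a DPE-based system, using the enclosure heights
# listed in this section. Packing logic and the 40U rack are illustrative only.
HEIGHT_U = {"DPE": 3, "DME": 2, "CS": 1, "DAE25": 2, "DAE15": 3, "DAE60": 4}
DRIVES = {"DPE": 25, "DAE25": 25, "DAE15": 15, "DAE60": 60, "DME": 0, "CS": 0}

def rack_budget(components, rack_u=40):
    used = sum(HEIGHT_U[c] for c in components)
    drive_slots = sum(DRIVES[c] for c in components)
    return used, drive_slots, rack_u - used

config = ["DPE", "CS", "DME"] + ["DAE25"] * 10      # unified system plus 10 DAEs
used_u, slots, free_u = rack_budget(config)
print(f"{used_u}U used, {slots} drive slots, {free_u}U free")   # 26U used, 275 slots
```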


Easily upgrade to a higher-performance VNX platform; no lengthy data migrations
Cost-effective scaling: conversions across the VNX family reuse installed SAS DAEs, drives, and Flex IO modules
Considerations:
Conversions of DPE-based systems only (VNX5200 to VNX5800)
For a File or Unified platform, DM blades will need to be swapped coincident with the back-end change-out
Small restriction on upgrade paths: systems with NL-SAS as vault are not upgradeable to targets where that is not supported

In-Family Data-in-Place Conversions
Extend VNX Investment Protection; Scale with Minimal Impact
Upgrade any VNX to a more powerful VNX with data-in-place

HARDWARE

[Diagram: in-family conversion path VNX5400 to VNX5800 to VNX7600]

* Data in place conversions will be delivered in a future release.

There will be no MAJOR restrictions on upgrade paths (other than those covered on the slide). It is not necessary to do multi-hop upgrades; e.g., if going from a 5400 to a 5800, we do not have to go through the 5600 first. When available, upgrade times will vary but be comparable to earlier product upgrade time frames (12 hours for block-only systems, 16 hours for File/Unified). Note to Presenter: The older VNX Gen 1 platforms (5300/5500, etc.) cannot be upgraded to 2nd-gen VNX (5400/5600, etc.), but the DAEs and disks can be re-used, although this will only happen in a future release (H1 2014) and there may be restrictions on certain (older) drive re-use. Data-in-place upgrade caveats and considerations: if the source has NL-SAS vault drives, we will not support conversion to a platform that does not support NL-SAS as a vault, namely the VNX5800 and 7600.

Conversions to the VNX8000 are not supported because the port ordering scheme in a DPE is different and incompatible with the VNX8000 SPE.

MCx: New Storage Processor Architecture
VNX8000: Scaling UP to Leverage the Intel Roadmap

[Diagram: SP A and SP B, each with two CPU sockets (cores 0-7) linked by QPI, connected SP to SP by four 8-lane CMI links]

HARDWARE

So how do you design a system to take advantage of the new technologies and unlock the power of Flash? With the first-generation VNX, we had the capabilities of a Ferrari, but the technology, while highly capable, was constrained and could not take advantage of the large core counts becoming available today. It was like driving your Ferrari on a single-lane road. With the next-gen VNX, we have cleared the road so that the power of the system can be given its full opportunity to excel. We use the latest Intel multi-core technology with up to 32 CPU cores. To avoid any bottlenecks moving data to and from the drives, you need multiple high-speed lanes: 160 with PCI Express Gen 3. With all this horsepower, you need a lot of connectivity to the servers: up to 22 I/O slots for FC, FCoE, iSCSI, and NAS. Finally, you need to add hard drives and FAST to handle the inactive data. This clean-sheet design of the new VNX Series is the key to unlocking the power of Flash.

VNX Storage Processor: trusted, mature, high-performance block services

Block services:
Virtual pools with thick and thin LUNs
Flexible RAID options (1/0, 5, 6) provide optimal performance AND protection

Administration/management:
Through Storage Processor Ethernet ports
Aggregated to a single file/block view in Unisphere
Single point of management/control

High availability: true Active/Active controller failover
Connects to hosts via FC, FCoE, iSCSI for flexibility of connection
Connects to disk shelves via 4-lane 6Gbit SAS
Enabled for in-place encryption in secure environments

VNX Series

[Diagram: administrator manages the Storage Processors over a private network with Unisphere; hosts connect via the FC/iSCSI SAN]

HARDWARE

Active/Active controller design: The VNX platform storage processors by design operate in Active/Active mode. Active/Active implies that both controllers are active/online and receiving host I/O simultaneously for the back-end storage. VNX OE for Block with MCx runs on the SPs and has a LUN ownership model where a LUN is owned by either SP-A or SP-B, and both SPs serve I/O to their set of LUNs; hence the load is shared across the SPs. For most configurations, high availability is maintained via the ALUA (Asymmetric Logical Unit Access) failover mode (all pool-based LUNs require ALUA, and Classic LUNs have the option to use full Active/Active access). ALUA allows a host that is ALUA-aware (most modern OSs are) to send I/O for a LUN via either SP. For example, LUN 200 may be owned by SP-A, but the host can send I/O for this LUN via both SP-A and SP-B. SP-B will internally and transparently redirect the I/O over the high-speed inter-SP interconnect to SP-A and service it. The secondary path is only used in the event of a primary path failure, for whatever reason. The VNX platform has ALUA as the default failover mode, allowing MPIO-ready OSs to benefit from this Active/Active access out of the box with no special configuration. With the new Active/Active mode of MCx (introduced with the next-gen VNX line in September 2013), Classic LUNs can now be fully Active/Active. Unlike ALUA, this means that LUNs are NOT explicitly owned by an SP, and the system can dynamically handle I/O for the same LUN on both SPs at the same time, in the same way a VMAX would. Automatic load balancing: The VNX OE for Block software has been designed to ensure the I/O is well balanced between the two SPs. First, at the time of provisioning, the odd-numbered LUNs are owned by one SP and the even-numbered LUNs are owned by the other; this results in LUNs being evenly distributed between the two SPs (when using true Active/Active mode on Classic LUNs with MCx, the load is dynamically balanced across SPs for all A/A LUNs). In the event of a failover, LUNs would trespass (ALUA only) over to the alternate path/SP. This is where EMC PowerPath comes in: it restores the default path once the error condition is recovered, bringing the LUNs back into a balanced state between the SPs. The VNX2 system also supports full AES-256-bit encryption, either out of the box or as a transparent, data-in-place, aftermarket conversion.
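A small sketch of the ALUA behavior described above: a host multipathing layer prefers optimized paths (through the owning SP) and only falls back to non-optimized paths, which the array redirects internally. Class and method names are illustrative, not PowerPath or MCx code.

```python
# Illustrative ALUA path selection, as described above: prefer paths through
# the LUN's owning SP (active/optimized) and fall back to the peer SP
# (active/non-optimized) only when no optimized path is alive. Names are
# illustrative; this is not real PowerPath/MCx code.
import random

class AluaLun:
    def __init__(self, lun_id, owner_sp):
        self.lun_id = lun_id
        self.owner_sp = owner_sp                      # "SPA" or "SPB"
        self.paths = {"SPA": True, "SPB": True}       # path-alive flags

    def pick_path(self):
        optimized = [sp for sp, ok in self.paths.items() if ok and sp == self.owner_sp]
        non_optimized = [sp for sp, ok in self.paths.items() if ok and sp != self.owner_sp]
        if optimized:
            return random.choice(optimized), "active/optimized"
        if non_optimized:
            # I/O is accepted here too; the array redirects it over the
            # inter-SP interconnect to the owning SP.
            return random.choice(non_optimized), "active/non-optimized"
        raise IOError(f"no path available to LUN {self.lun_id}")

lun = AluaLun(200, owner_sp="SPA")
print(lun.pick_path())            # ('SPA', 'active/optimized')
lun.paths["SPA"] = False          # simulate losing the optimized path
print(lun.pick_path())            # ('SPB', 'active/non-optimized')
```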


VNX Data Movers: modular, scalable, runs the world's most mature NAS OS

Up to eight independent file servers contained in a single system
Scale by adding enclosures: 2 Data Movers per Data Mover Enclosure
Managed as one high-performance, high-availability server
Connects data to the network
VNX Operating Environment for File
No performance impact after failover
Concurrent Network File System (NFS), Common Internet File System (CIFS) and File Transfer Protocol (FTP)
Hot-pluggable
Flexible N-to-M failover options
Continues to operate even if the Control Station fails
No internal disks in the Gateway

[Diagram: Data Mover Enclosure with Data Movers and dual Control Stations]

[Diagram: network clients connect through the VNX series Data Movers to VNX block storage (or VMAX if Gateway)]

HARDWARE

Note to Presenter: View in Slide Show mode for animation. This slide shows a gateway system (external storage connected via switch), although the functionality is exactly the same as in the non-gateway VNX systems, except that the storage is connected directly to the VNX for File components. Data Movers are the muscle of VNX for File: they do all the work. Each Data Mover is an independent, autonomous file server that remains unaffected should a problem arise with another Data Mover. The multiple Data Movers (up to a maximum of eight in the VNX8000 and VG8 Gateway) are managed as a single physical entity. Data Movers are hot-pluggable and offer N+1 and N+M advanced failover. In addition, Data Movers will continue operation independent of any Control Station halts or restarts. The Data Movers run a mature EMC operating system called VNX OE for File, which is optimized to move data between the storage (the VNX for Block components in the case of an integrated VNX system, or a VMAX/CLARiiON/VNX* block platform for the gateway) and the IP network. * Note to Presenter: VNX2 is currently not supported as a back end to any VNX gateway.

VNX Control Station: secure management and control for VNX for File

Installation
Administration/management through Data Mover and Storage Processor Ethernet ports
Configuration changes
Monitoring and diagnostics
Heartbeat pulse of Data Movers
Monitors and manages Data Mover failover
Enterprise Linux-based
Initiates communications with Data Movers for greater security
Single point of management/control
Failover redundancy option

[Diagram: administrator manages the VNX series Control Stations over a private network with Unisphere]

HARDWARE


Note to Presenter: View in Slide Show mode for animation. The Control Station software provides VNX for File's controlling system; it runs an EMC value-added version of the industry-standard Red Hat Enterprise Linux V5 operating system. The Control Station also provides a secure user interface to all file-server components: a single point of management for the whole VNX solution, which can be isolated to a secure, private network. Control Station software is used to install, manage, and configure the Data Movers; monitor the environmental conditions and performance of all components; and implement the call-home and dial-in support features for all protocols on the Unified system. The unified user interface used to manage the VNX talks directly to the Control Station when managing file functionality. Typical administrative functions for File include managing volumes and file systems, configuring network interfaces, creating file systems, exporting file systems to clients, performing file-system consistency checks, and extending file systems (both manually and automatically). Control Station administrative functions are accessible via Unisphere, through a command-line interface over Telnet or secure shell, and via the VNX Startup Assistant and VNX Provisioning Wizard in the VNX installation toolbox. The Control Station requires Java (JRE V1.6 is required). The CS in the new VNX (5400 to 8000) uses a quad-core 3.1GHz Xeon with 8GB of memory (it only uses 4GB). The CS leverages a 32-bit RHEL5 kernel (2.6.18-308.1.1) with security patches applied.


VNX Data Mover Failover: high-availability architecture with no performance impact

Configurable Data Mover failover options: N-to-M; automatic, manual, or none
Failover triggers:
Software issue
Internal network failure
Power failure
Non-responsive Data Mover
Failed Data Mover is shut down to avoid split-brain syndrome
IP, Media Access Control (MAC), and Virtual LAN (VLAN) addresses are transferred
Automatic call-home of the event
No performance impact after failover

[Diagram: on failover, the data path is transferred from the failed Data Mover to a standby Data Mover; data remains accessible with no client performance impact; the Control Stations manage the failover across the VNX series Data Movers]

HARDWARE

Note to Presenter: View in Slide Show mode for animation. When a VNX is configured for File access, one or more Data Movers are designated to act as standby Data Movers. They wait, fully booted, for a primary Data Mover to fail. In most file-serving as well as design and manufacturing environments, customers have found it sufficient to have one failover Data Mover for every three or four active Data Movers. In financial, telecommunications and Internet environments, customers may elect to have a standby Data Mover for each active Data Mover. Failover can be selected to occur automatically, manually (operator intervention required), or never. Although very rare, failovers can occur because of software issues. Failover can also be triggered by failure of both internal networks, power failure, unrecoverable memory errors, or non-response. However, some customers have run for years without a Data Mover failover. The Control Station monitors Data Movers through a redundant internal network. If a Data Mover has a problem, the first thing the Control Station does is disable that Data Mover. This prevents the split-brain phenomenon, in which a processor may be sick enough to cause a failover to a unit that tries to take over and writes to the primary's volume; meanwhile, the sick primary, unaware that it is sick, continues to write to that same volume. When a sick primary and a failover unit write to the same volume, you cannot recover. VNX systems make such situations impossible by separating data flow from control flow. There is no performance degradation on failover, as the standby Data Mover is waiting, booted, to take the load from the primary. The VNX system's fine granularity makes this possible and prevents the propagation of faults. After the active Data Mover is shut down, the Control Station tells the standby Data Mover to take on the active Data Mover's network addresses (MAC [Media Access Control], IP, and VLAN [virtual local area network]). Then the standby Data Mover presents itself to the network as the failed Data Mover. Failover is transparent to UNIX clients, which do not maintain state. They may see a "Server Not Responding" message, but upon re-issuing the NFS request to the same address, they will get the information they want. Windows clients maintain state, so they will see a failover if they issue a CIFS request during a failover. Depending on the Windows application, the client may re-issue the request, need to restart the application, or reboot the client. Failover times vary based upon the size of the configuration. We test small, medium and large configs. Small and medium configs (up to 100 file systems, 400 snaps and 20 replication sessions) take up to 40 seconds to fail over. Large configs (close to 1,000 file systems) can take up to 90 seconds to fail over. Be aware that most OSs and virtualization hypervisors have adjustable timeout settings, so set the timeouts appropriately for the configuration in Windows and VMware implementations. Major improvements continue to be made in this area, so ensure customers are always on the latest levels of code.
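The failover sequence described above (missed heartbeat, fencing the failed blade to avoid split-brain, then transferring its IP/MAC/VLAN identity to a standby) can be sketched as simple orchestration logic. This is a conceptual outline under assumed names, not the Control Station's actual code.

```python
# Conceptual sketch of the Control Station failover sequence described above:
# detect missed heartbeats, power off (fence) the failed Data Mover to avoid
# split-brain, then have a standby assume its IP/MAC/VLAN identity. All names
# are illustrative; this is not the actual Control Station implementation.
from dataclasses import dataclass, field

@dataclass
class DataMover:
    name: str
    role: str                      # "primary" or "standby"
    identity: dict = field(default_factory=dict)   # {"ip":..., "mac":..., "vlan":...}
    powered_on: bool = True
    missed_heartbeats: int = 0

def control_station_tick(primaries, standbys, threshold=3):
    for dm in primaries:
        if dm.missed_heartbeats < threshold:
            continue
        dm.powered_on = False                 # fence first: no split-brain writes
        spare = next(s for s in standbys if s.role == "standby")
        spare.identity, spare.role = dm.identity, "primary"
        print(f"{spare.name} took over {dm.identity} from {dm.name}; call-home sent")

server_2 = DataMover("server_2", "primary", {"ip": "10.0.0.2", "mac": "aa:bb", "vlan": 100})
server_3 = DataMover("server_3", "standby")
server_2.missed_heartbeats = 3                # simulate a non-responsive blade
control_station_tick([server_2], [server_3])
```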

Dense Disk Array Enclosures

3U 120 x 2.5" drive tray
4U 60 x 3.5" drive tray


3U Disk Array Enclosure (DAE)
120 x 2.5" 6G SAS drives
6 drive banks (A-F), 20 drives per bank
10 fans: 5 in front (0-4), 5 in rear (5-9)
4 power supplies
4 power zones

120-Drive DAE Overview


VNX2 Drives

2.5" drives (15-drive DAE*, 25-drive DAE, 60-drive DAE*, 120-drive DAE):
FAST Cache Optimized SSD: 100 GB, 200 GB
FAST VP Optimized SSD: 100 GB, 200 GB, 400 GB, 800 GB, 1.6 TB
15K RPM SAS: 300 GB, 600 GB
10K RPM SAS: 600 GB, 900 GB, 1.2 TB
7.2K RPM NL-SAS: 1 TB

3.5" drives (15-drive DAE, 60-drive DAE):
FAST Cache Optimized SSD: 100 GB, 200 GB
15K RPM SAS: 300 GB, 600 GB
7.2K RPM NL-SAS: 2 TB, 3 TB, 4 TB

* Supported as vault drives; 2.5" drives supported in 3.5" carriers

Let's take a look at the drives currently supported on VNX2 systems. Drives shown in red require the VNX Block OE 33 .096 release, code-named Salmon River. As you can see from the tables, we continue to support 2.5" drives and 3.5" drives, although there are a number of new drives in the 2.5" form factor. Also note that we now offer 4 disk shelf (DAE) options, namely a 2U 25-drive DAE, a 3U 15-drive 3.5" DAE, a 4U 60-drive DAE and a 3U 120-drive 2.5" DAE (the 60-drive and 120-drive DAEs require the dense rack option). New to this latest VNX OE release are the 120-drive DAE, the 600GB 15K 2.5" drive and the 1.6TB 2.5" FAST VP SSD drive. FAST Cache optimized drives are the familiar SLC technology we use today and have the highest levels of endurance available in the marketplace. FAST VP optimized drives are enterprise-level MLC drives (eMLC) with slightly lower endurance and performance, but they are more competitively priced on a per-GB basis. Be aware that FAST Cache SSDs can also be used in FAST VP pools (although FAST VP drives will not be able to be deployed in the FAST Cache use case). Lower-endurance eMLC technologies (around 10 writes per day compared to SLC's 30 writes per day) still provide up to 3,500 writes per second per drive for up to 5 years and will be used where data change rates are more moderate, i.e., in FAST VP. This table also clearly articulates which drive shelves the various drives fit into. The 2.5" form factor has become the industry standard, and you can expect more, denser offerings in this space. We are still in the process of qualifying more 2.5" offerings for the 120-drive tray, so this will be updated in the coming weeks. The 15- and 60-drive DAEs support 3.5" drives, although 2.5" drives will typically be supported in them with the use of a special carrier.
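The endurance figures quoted above (roughly 10 drive writes per day for eMLC versus 30 for SLC, and ~3,500 writes per second sustained for up to 5 years) translate into total-bytes-written budgets. A small worked sketch follows; the arithmetic is ours, the inputs come from the notes, and the 8 KB block size used in the sanity check is an assumption.

```python
# Worked sketch of the endurance figures quoted above: drive-writes-per-day
# (DWPD) translated into a total-bytes-written budget, plus a sanity check of
# the ~3,500 writes/sec figure. Arithmetic is ours; inputs come from the notes.
def total_writes_tb(capacity_gb, dwpd, years):
    """Total data that can be written over the drive's service life, in TB."""
    return capacity_gb * dwpd * 365 * years / 1000.0

print(total_writes_tb(200, dwpd=10, years=5))   # eMLC 200 GB -> 3650 TB (~3.65 PB)
print(total_writes_tb(200, dwpd=30, years=5))   # SLC  200 GB -> 10950 TB (~11 PB)

# Sanity check: 3,500 writes/sec of 8 KB (assumed block size) on a 200 GB drive
# is roughly (3500 * 8 KB * 86400 s) / 200 GB, i.e. on the order of 11-12 drive
# writes per day, in the same ballpark as the ~10 DWPD eMLC figure.
implied_dwpd = 3500 * 8 / 1024 / 1024 * 86400 / 200
print(round(implied_dwpd, 1))
```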


Flexible Storage Tiers

SAS back-end connect for performance and reliability
Up to 24 Gb (4 x 6 Gb) per SAS bus
Point-to-point, robust interconnect

Flash (SSD) options (highest-performing drives):
FAST Cache SSDs (SLC): 2.5" in 15- and 25-drive DAEs; 100 GB, 200 GB; ~5,000 IOs per second
FAST VP SSDs (eMLC): 2.5"; 100 GB, 200 GB, 400 GB, 800 GB, 1.6 TB; ~3,500 IOs per second

Optimize TCO with tiered service levels

[Diagram: VIRTUAL STORAGE POOL with AUTOMATIC DATA OPTIMIZATION across tiers: Flash (highest performance), SAS 10K/15K rpm (good performance), Near-Line SAS 7.2K rpm (highest capacity)]

SAS (HDD) options:
3.5" drives (195 drives/rack with 3U DAE, 555 drives/rack with 4U DAE): 600 GB, 900 GB, 1.2 TB 10K; 300 GB, 600 GB 15K; ~140 (10K) to ~180 (15K) IOs per second
2.5" drives (500 drives/rack): 600 GB and 900 GB 10K RPM; 300 GB and 600 GB 15K RPM; ~140 IO/s (10K), ~180 IOs per second (15K)

Near-Line SAS (HDD) options:
3.5" drives (195 drives/rack): 1 TB, 2 TB, 3 TB and 4 TB 7.2K RPM; ~90 IOs per second
2.5" drives (500 drives/rack): 1 TB 7.2K RPM; ~90 IOs per second

HARDWARE

SAN LUNs

Host LUNs (SAN); NAS volumes (NFS/CIFS)

# Copyright 2015 EMC Corporation. All rights reserved.
The VNX implements SAS disk connectivity across the series. This provides improved throughput (6 Gb x 4 lanes = 24 Gb per SAS bus) as well as improved reliability and robustness due to:
- Point-to-point topology
- Fast fault isolation and identification of faulty back-end components
- 4-lane SAS cables offering an inherently robust interconnect; all components continue to run even with 3 out of 4 lanes damaged or non-functional
VNX supports two disk formats, 3.5" and 2.5". Both are available in 10K and 15K rpm, NL-SAS and Flash drive options. The 2.5" 2U DAE carries up to 25 drives, with up to 500 drives per rack. The 3.5" DAE is 3U. Dense 3.5" DAE options are also available with 60 drives per 4U DAE and a maximum configuration of 555 drives per cabinet (this is made up of nine 4U DAEs and one 3U DAE; note that a 1U space is required beneath the first 4U DAE, so we are not able to configure ten 4U DAEs in a single cabinet). Both 2.5" and 3.5" drives are supported in the 3U DAE (2.5" drives in 3.5" carriers), and all drive types are supported in the same DAEs, e.g. the 3U DAE can be configured with Flash, 10K and 15K SAS or 7.2K NL-SAS, and combinations. 3.5" drives cannot fit in the 2U DAE.
2.5" drive technology provides significant density and power improvements over 3.5" technology, and we expect EMC and the industry to move toward 2.5" due to power efficiency; in fact our DPEs support only 2.5". For example, a system with 480 3.5" drives consumes 3 racks and 100U, while a new VNX system with 500 2.5" drives consumes 2 racks and 44U and uses 53% less power and cooling.
All the mixed drive combinations shown on the slide are supported in any of the VNX series systems. System TCO and performance can be concurrently optimized when implementing the FAST Software Suite, which ensures the highest-activity data is placed on the fastest possible disks (Flash) and dormant data is dynamically moved to the capacity drives (NL-SAS).
Note to Presenter: DPEs in the new VNX support ONLY 2.5" drives, so NL-SAS as a vault is restricted to the 1 TB 2.5" NL-SAS drive, and that is only an option on the DPEs (all VNX2 except VNX8000). VNX8000 does not support NL-SAS as a vault in either 2.5" or 3.5" formats.
Note to Presenter: There is a recommendation that any NL-SAS drive greater than 1 TB be configured with RAID 6 to provide additional protection in situations where a long rebuild might occur.
Note to Presenter: The VNX OE SP3 release (a.k.a. Thompson River, Q1 2014) provides support for 1.2 TB 10K SAS drives. Based on the faster drive speed and greater reliability, we still recommend these drives be configured with RAID 5. Of course the customer can choose RAID 6 should they feel the need for the added security.
Note to Presenter: The VNX OE SP4 release (a.k.a. Snake River, Q3 2014) provides support for 800 GB eMLC drives for use with FAST VP.
Note to Presenter: We have RAID recommendations based on error rates and rebuild times. All NL-SAS drives are recommended to use a RAID 6 configuration. All SAS drives are recommended to use a RAID 5 configuration, although other RAID options can be configured depending on customer performance/availability requirements. (An illustrative RAID-selection sketch follows these notes.)
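A minimal, hypothetical helper that encodes the RAID guidance from these notes (RAID 6 for NL-SAS, RAID 5 as the default recommendation for SAS and flash). It is illustrative only and does not reflect an actual Unisphere policy engine.

# Illustrative encoding of the RAID guidance in these presenter notes.
def recommended_raid(drive_type: str) -> str:
    # NL-SAS -> RAID 6 (protects against long rebuilds on large drives);
    # SAS and flash -> RAID 5 by default, with RAID 6 remaining a valid choice
    # where the customer wants extra protection.
    return "RAID 6" if drive_type == "nl_sas" else "RAID 5"

print(recommended_raid("nl_sas"))   # RAID 6
print(recommended_raid("sas_10k"))  # RAID 5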

#TITLE

Designed for reliability, data integrity and performance

Anatomy of an Enterprise Flash Drive
End-to-end CRC

HARDWARE

SAS or SATA ports

DRAM

SLC/eMLC NAND Flash

Controller

# Copyright 2015 EMC Corporation. All rights reserved.
In many ways, Flash drives are superior to mechanical hard drives in terms of performance, power use, and availability. Until recently, they were typically used in harsh environments and environments where physical space is at a premium. For example, they are embedded in industrial computers and military/aerospace applications. Recently, due to technological advances resulting in greater capacity and lower cost, these drives are used in commercial and consumer-grade computers, including enterprise computer environments and high-performance computer workstations. In the past, the small size and low power usage of Flash drives were unimportant in commercial environments. That is also changing; now, high-density storage capacity, low power usage, and low heat generation are important data center requirements. The near-RAM speed of the drives' response time is also important. It is an advantage not to have to rely on slower mechanical storage for data when large datasets (databases) are being manipulated or when the lowest application or user response time is needed. Flash drives are especially well suited for low-latency applications that require consistent, low (less than 1 ms) read/write response times. Since there is no rotational or seek latency in Flash drives, the greatest throughput occurs with small-block, highly concurrent, random-read workloads. The elimination of mechanical overhead and data placement latency greatly improves application performance and efficiency. All of these factors help Flash drives deliver very high IOPS with very low response times. In addition to the industry-standard techniques used in all VNX storage devices (such as DRAM memory between the array and the storage media, a backup power circuit for that DRAM, error detection and correction, and bad block management), the VNX series supports both Single Level Cell (SLC) and Enterprise Multi-Level Cell (eMLC) technologies.
#TITLE

EMC Flash = Enterprise-Grade Flash
Flash Drive Reliability and Data Integrity

HARDWARE

Single Level Cell and Multi-Level Cell Technology
- Over 100K write cycles
- Cell overprovisioning, for recovery of worn cells; cell-level sparing
- Load wearing balances cell updates; writes are spread across all available cells
- MTBF > 2 million hours (HDD ~1.5M hours)
- SSD DRAM implements write amplification

# Copyright 2015 EMC Corporation. All rights reserved.
Note to presenter: This slide is a build and looks peculiar if not seen in presentation mode.
SSDs wear out (actually so do HDDs, although SSD cell wearing is much more widely understood and very predictable, and it sometimes generates anxiety in our customers!). So EMC uses the most robust Flash technology, purpose-built for the enterprise use case, providing improved reliability and ensuring that each flash cell can be written to hundreds of thousands of times. Even so, we have to make sure that we avoid hot spots and build in high-availability features to ensure the drives at least match our exacting MTBF standards. Let's consider the technologies leveraged by EMC's Flash drives:
NAND reserve cells. All enterprise SSDs keep spare cells on each chip in reserve for when a given cell on that chip becomes worn. The SSD controller automatically identifies the failing cell and re-maps the address to the spare area. Think of this as cell-level hot sparing. Such re-mapping happens automatically and 100% transparently to the SSD user. Enterprise SSDs typically have 20%-25% of the NAND chips' address space held in reserve.
Load wearing. This mechanism automatically sprays new writes across all available NAND chips on the SSD, which protects against hot-spotting any one NAND chip. It also has the nice side effect that the more chips in the SSD, the more total writes the SSD can absorb before the whole SSD needs to be changed.
Write amplification. By keeping very hot data in DRAM on the SSD, in combination with lazy writes to the NAND, the life of the SSD is extended. To protect the data in flight in DRAM before it gets written to the NAND, the SSD implements a power reservoir in the form of a couple of high-capacity capacitors (SuperCaps) that act as batteries with enough power to run the SSD and drain any outstanding IO in case of a power failure or unplugging event.
All this gives the enterprise SSD an MTBF greater than 2 million hours, which is better than the 1M to 1.5M hours for 15K HDDs. These design principles apply whether we use eMLC or SLC technology.
Proactive sparing. Adding to standalone flash drive reliability, this is a system-level function where data is proactively copied from a potentially failing SSD to a sparing SSD automatically and without user intervention; the system monitors drive inventories and maintains a cadre of unbound drives to spare any failures. This dramatically speeds rebuild times and lowers system overhead. (A worked endurance calculation follows these notes.)
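The endurance figures quoted in this deck (roughly 30 full-drive writes per day for SLC and 10 for eMLC) translate into a sustained write rate. The sketch below is a minimal conversion for a 100 GB drive and 8 KB writes, matching the worked numbers used later in these notes; it is an arithmetic illustration, not a vendor endurance specification.

# Convert a drive-writes-per-day (DWPD) endurance rating into a sustained
# 8 KB write rate, as discussed in these notes for a 100 GB drive.
SECONDS_PER_DAY = 86_400

def sustained_writes_per_sec(capacity_gb: float, dwpd: float, io_kb: int = 8) -> float:
    blocks_per_drive = capacity_gb * 1_000_000 / io_kb   # 100 GB / 8 KB = 12.5M blocks
    return dwpd * blocks_per_drive / SECONDS_PER_DAY

print(round(sustained_writes_per_sec(100, 30)))  # ~4,340 writes/s (SLC, FAST Cache)
print(round(sustained_writes_per_sec(100, 10)))  # ~1,447 writes/s (eMLC, FAST VP)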

#TITLE

FLASH Enhancements with VNX2
Enterprise Multi-Level Cell Flash Drives
- Multiple bits can be stored per cell
- eMLC can be manufactured with the endurance of SLC
- We will initially use mixed read-write eMLC for FAST VP
- Lower cost and endurance than SLC (10 vs. 30 writes per day)
XtremSW Cache Management Integration
- Manage VFCache and VNX from a single pane of glass
Improved FAST VP Granularity
- 256 MB segment size (a slice-count sketch follows below)
- More efficient use of system resources and performance benefits
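To illustrate what the 256 MB FAST VP granularity means in practice, here is a small sketch that computes how many relocatable slices a LUN of a given size would be tracked as. The LUN sizes are arbitrary examples, and the simple ceiling division is an illustration rather than a description of the internal slice metadata.

import math

SLICE_MB = 256  # FAST VP relocation granularity on VNX2

def slices_for_lun(lun_size_gb: float) -> int:
    """Number of 256 MB slices FAST VP would track for a LUN of this size."""
    return math.ceil(lun_size_gb * 1024 / SLICE_MB)

for size_gb in (100, 500, 2048):
    print(f"{size_gb} GB LUN -> {slices_for_lun(size_gb)} slices")
# 100 GB -> 400 slices, 500 GB -> 2000 slices, 2 TB -> 8192 slices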

Enterprise Multi-Level Cell Technology

HARDWARE

# Copyright 2015 EMC Corporation. All rights reserved.
For a very long time, the market has talked about MLC technology and its cost advantage and reliability disadvantage compared to SLC, and indeed many of our competitors have implemented it. In reality, the difference between SLC and MLC is not so clear cut; in fact eMLC can be architected to provide comparable reliability and performance. The SSD marketplace tends to use the term "endurance" to reflect the reliability of Flash, measured in writes per day. High-end SSD technology can deliver around 30 writes per day per drive, meaning each cell can be rewritten 30 times a day for 5 years. Also bear in mind that writes are cycled around all the cells, so for a 100 GB drive (able to store 12.5 million 8 KB blocks), 30 full-drive writes a day in 8 KB blocks equates to roughly 4,300 writes per second per drive, constantly for 5 years, before you start to see cell fall-out. These drives will be used where data change rates are the highest, i.e. in FAST Cache (and are referenced in the earlier slide as FAST Cache SSDs). Lower-endurance eMLC technologies (around 10 writes per day) still provide up to 1,500 writes per second per drive and will be used where data change rates are more moderate, i.e. in FAST VP (and are referenced in the earlier slide as FAST VP SSDs).
#TITLE

VNX All Flash Pricing

Q4 F7000 expansion to 172TB via Viking dense platform

# Copyright 2015 EMC Corporation. All rights reserved.

#TITLE

Lead with VNX-F when:
- Need high availability
- Need lower $/GB or lower entry price
versus
Lead with XtremIO:
- Best-in-class EMC flash array
- Advanced data services (in-line dedupe, thin)
- 0.5 ms latency

The Right Solution For The Job
- VNX-F All Flash Array (block only): lower entry price, lower $/raw GB without dedupe
- XtremIO All Flash Array (scale-out flash): greater scale, more performance, more features

# Copyright 2015 EMC Corporation. All rights reserved.
Our all-flash array offerings are truly built to address the different price points and requirements of the all-flash array market. XtremIO is our flagship all-flash solution, with scale-out performance and optimal efficiency technologies (a great and cost-effective solution for highly de-dupable environments such as VDI). VNX-F is the price leader for all-flash storage solutions in the marketplace today. When competing with Violin, TMS, etc., where the requirements are cheap raw capacity and minimal features, VNX-F is the ideal solution. Lead with XtremIO, but be aware that there is a powerful and cost-effective all-flash tool in the VNX-F, which is an easy shift (with a management experience similar to their existing storage infrastructure).

#TITLE

VNX-VSS

VNX-VSS: Video Surveillance Storage at the Edge
- Optimized for highly distributed environments
- Based on the industry-leading, enterprise-proven VNX
- Plug & play simplicity
- Available in 2 configurations (24 TB & 120 TB)
- Aggressive $/GB

# Copyright 2015 EMC Corporation. All rights reserved.

#TITLE

INDUSTRY LEADING VIRTUALIZATION INTEGRATION

VNX-VSS: Video Surveillance Storage (VSS) - redefining video surveillance storage at the edge
- Award-winning Unisphere management; remote management
- Proven 5-9s availability
- Starting configurations: 24 TB & 120 TB
- Connect the edge using iSCSI or FC
- Auto-configurable; auto load balancing
- Delivers up to 500 MB/s; runs 100s of cameras at multiple bit rates
(A bandwidth sizing sketch follows below.)
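A back-of-the-envelope sizing sketch for the "hundreds of cameras at multiple bit rates" claim: it simply sums camera bitrates and compares them against the quoted ~500 MB/s. The camera mix and bitrates are arbitrary example values, not a validated EMC sizing.

# Rough video-surveillance bandwidth check against the ~500 MB/s figure above.
ARRAY_MBPS = 500  # megabytes per second

def required_mbps(cameras: dict) -> float:
    """cameras maps a bitrate in Mbit/s to a camera count; returns MB/s needed."""
    return sum(mbit / 8 * count for mbit, count in cameras.items())

mix = {4: 150, 8: 100, 16: 25}   # 150 cameras @ 4 Mb/s, 100 @ 8 Mb/s, 25 @ 16 Mb/s
need = required_mbps(mix)
print(f"{need:.0f} MB/s needed; fits: {need <= ARRAY_MBPS}")  # 225 MB/s; fits: True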

# Copyright 2015 EMC Corporation. All rights reserved.

#TITLE

Storage Processor Port Types and Limits
- PCIe Gen 3, up to 10 GB/s
- 6 Gb x 4 ports x 4 lanes OR 6 Gb x 2 ports x 8 lanes (with Y cable); the 2x8 option is supported on the 60-drive DAE and delivers ~2 GB/s from a single DAE

Port limits per SP:
                          VNX8000  VNX7600  VNX5800  VNX5600  VNX5400*  VNX5200*
Max SAS ports (BE)            16        6        6        6        2         2
Max FC ports (FE)             36       20       20       20       16        12
Max FCoE ports (FE)           18       10       10       10        8         6
Max iSCSI ports (FE)**        16       16       16       16       16        12
Max 10Gb iSCSI ports**        16       10       10       10        8         6
Max TOTAL FE ports            36       20       20       20       16        12
  (derived from limits above)
Max SLICs per SP              11        5        5        5        4         3

* Onboard ports (no SAS SLIC support)
** Mix and match subject to SLIC slot availability (4-port 1 GbE and 2-port 10 GbE cards)
(A configuration-check sketch follows below.)
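The table above lends itself to a simple configuration check. The sketch below encodes the per-SP limits and validates a proposed front-end port count; the dictionary values are transcribed from the table, but the validation logic itself is illustrative, not an EMC tool.

# Front-end/back-end port limits per SP, transcribed from the table above.
PORT_LIMITS = {
    "VNX8000": dict(sas=16, fc=36, fcoe=18, iscsi=16, iscsi10g=16, fe=36, slics=11),
    "VNX7600": dict(sas=6,  fc=20, fcoe=10, iscsi=16, iscsi10g=10, fe=20, slics=5),
    "VNX5800": dict(sas=6,  fc=20, fcoe=10, iscsi=16, iscsi10g=10, fe=20, slics=5),
    "VNX5600": dict(sas=6,  fc=20, fcoe=10, iscsi=16, iscsi10g=10, fe=20, slics=5),
    "VNX5400": dict(sas=2,  fc=16, fcoe=8,  iscsi=16, iscsi10g=8,  fe=16, slics=4),
    "VNX5200": dict(sas=2,  fc=12, fcoe=6,  iscsi=12, iscsi10g=6,  fe=12, slics=3),
}

def check_fe_config(model: str, fc: int = 0, fcoe: int = 0, iscsi: int = 0) -> bool:
    """Illustrative check that a proposed per-SP front-end layout fits the limits."""
    lim = PORT_LIMITS[model]
    return (fc <= lim["fc"] and fcoe <= lim["fcoe"] and iscsi <= lim["iscsi"]
            and fc + fcoe + iscsi <= lim["fe"])

print(check_fe_config("VNX5600", fc=12, iscsi=8))   # True  (within the 20 FE-port limit)
print(check_fe_config("VNX5200", fc=12, iscsi=4))   # False (exceeds 12 FE ports total)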

Flexibility and Future Proofing

HARDWARE

# Copyright 2015 EMC Corporation. All rights reserved.
VNX Next Gen leverages and extends the scale of Flex IO modules (or SLICs), which allow the system to be easily customized for specific customer environments, and allow EMC to simply and seamlessly enable connectivity to any new connectivity option that may come along. The SLICs (Flex IO modules) supported on VNX Next Gen are mostly the same SLICs as for VNX1. There are two major differences:
- There is a new SAS SLIC (physically denoted as "6Gb SAS v3" on the SLIC) which, while still supporting 4 x 4-lane 6 Gb SAS ports, is now fully PCIe Gen 3 capable, enabling full-bandwidth operation on all 4 lanes and all 4 ports, as well as a 2 x 8-lane 6 Gb SAS option for Voyager 60-drive DAE connectivity.
- The El Nino SLIC (10GbE optical dual port, supported only on the File Data Mover in the old VNX) is now supported on the Block SPs in all next-gen VNX platforms for iSCSI connectivity. This SLIC also supports Twin-Ax copper connectivity. Each port operates at full 10 Gb/s speeds, which was not possible on the old block iSCSI SLIC.
A special Y-cable is required to leverage the 2x8x6Gb connectivity to the 60-drive DAE; it is needed in high-bandwidth workload scenarios to enable full-speed streaming from all 60 drives in the DAE. PCIe Gen 3 is vital to the architecture of VNX2, as it enables us to simply embrace 40GbE and 16Gb Fibre Channel at full port speeds when the market is ready for it.

With the current release (Salmon River), each iSCSI port can support up to 16 VLANs, with a maximum of 128 VLANs per system.
#TITLE

VG2 and VG8 Gateway Components: Architecture and Packaging
File implementation: Data Mover enclosure
- VNX operating environment for File
- Configurable blade options: two to eight
- Primary/standby with automatic failover; primary/primary with quick reboots for entry platforms; N+M advanced failover for four+ blade systems (a toy failover sketch follows below)
- Flexible IO connectivity options: four x 1Gb Base-T copper Ethernet ports; two x Gigabit Ethernet optical plus two x 10/100/1000 copper ports; two x 10 Gigabit Ethernet optical ports; two x 10Gb Base-T copper Ethernet ports
- Fibre Channel or FCoE connectivity to storage
Control Stations (one or two)
- Configuration and management
- Reliability, availability, serviceability
Back-end storage: Symmetrix, VMAX, VNX, or CLARiiON
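A toy sketch of the N+M blade-failover idea described on this slide: primary Data Movers are backed by a pool of fully booted standbys, and a failed primary is taken over by a free standby. This is a conceptual illustration only, not the Control Station's actual failover logic; the Data Mover names are arbitrary.

# Toy model of N+M Data Mover failover: M standbys back N primaries.
class DataMoverCluster:
    def __init__(self, primaries, standbys):
        self.primaries = list(primaries)      # e.g. ["dm2", "dm3", ...]
        self.standby_pool = list(standbys)    # fully booted, waiting standbys
        self.takeovers = {}                   # failed primary -> standby serving it

    def fail(self, primary: str) -> str:
        if not self.standby_pool:
            raise RuntimeError("no standby Data Mover available")
        standby = self.standby_pool.pop(0)
        self.takeovers[primary] = standby     # standby assumes the failed DM's identity
        return standby

cluster = DataMoverCluster(primaries=["dm2", "dm3", "dm4", "dm5"],
                           standbys=["dm6", "dm7"])       # 4+2, i.e. N+M
print(cluster.fail("dm3"))   # dm6 takes over dm3
print(cluster.fail("dm5"))   # dm7 takes over dm5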

CS (Control Station): 1U; DME (Data Mover Enclosure): 2U

SAN back end: VNX1 series, VMAX 10K/20K/40K, CLARiiON

File: VNX VG2/VG8

HARDWARE

# Copyright 2015 EMC Corporation. All rights reserved.
The high-end VNX gateway systems (NAS head only) are called the VG2 and VG8, which connect to the standard VNX for Block system as well as CLARiiON or Symmetrix/VMAX back-end storage through a Fibre Channel SAN. The basic design principle of the VG8 is, through Data Movers and Control Stations, to separate data flow from control flow, which is handled by the Control Station. VG8 supports up to eight Data Movers with advanced failover (N+1 or N+M) under a single point of management and control: all the benefits of a true cluster, without the management aggravation. Each VG8 Data Mover uses the UltraFlex I/O module concept, allowing up to five I/O modules per Data Mover. The I/O module options are four 10/100/1000 Ethernet (copper) ports, two 10/100/1000 Ethernet (copper) ports plus two Gigabit Ethernet optical ports, or two 10 Gigabit Ethernet ports (either 10/100/1000/10000 copper or optical). Each Data Mover is configured with one 4-port Fibre Channel I/O module for storage array connectivity and tape connectivity (for NDMP), or one dual-port FCoE I/O module for storage connectivity. The Fibre Channel module can be either 4 Gb/s or 8 Gb/s, and the FCoE I/O module is 10 Gb/s optical.
Note to Presenter: Data Mover configuration types cannot be mixed in a VG8 system, although the Flex IO modules can be added in any combination (the first slot must contain a 4-port FC module), allowing Data Mover configurations with combinations of all possible connectivity types: copper Ethernet, copper plus optical Ethernet, and optical 10 Gigabit Ethernet.
The Data Movers feature EMC's VNX OE for File embedded system software, which is optimized for file I/O. The Control Station is used to configure, manage, upgrade, and fail over the Data Movers. The VG2 offers a single blade with fast reboot or dual blades with N+1 failover, and the VG8 offers N+1 and N+M advanced failover. N+M advanced failover can be thought of as "blade RAID." Typical configurations implement N+1 (like RAID 5) with one standby for up to seven primary blades; for the highest availability, it is possible to configure multiple standby blades that act as a pool (i.e., N+M where M is the pool of failover blades) to cater to multiple concurrent blade failures, for example six primary plus two standby (like RAID 6) or even four primary plus four standby (like RAID 1). The standby blades wait, fully booted, for a primary Data Mover to fail. There is no performance degradation on failover, and the standby Data Mover presents itself to the network as the failed Data Mover. VNX VG2 and VG8, like all members of the VNX Series, have the high-availability features you expect from EMC, including dual power supplies, dual fans, full battery backup, and the call-home card for remote and predictive diagnostics. The VG2 and VG8 also offer dual Control Stations for Control Station failover as an option.
Note to Presenter: In general the gateways support new back-end platforms within 1-3 months of GA.
See the NAS Support Matrix on Powerlink for the current list of supported back-end storage: http://powerlink.emc.com/km/live1/en_US/Offering_Technical/Interoperability_Matrix/NASSupportMatrix.pdf
Note to Presenter: With the availability of the next-gen VNX, we will not immediately offer a new gateway. We will continue to sell the VG2 and VG8 for a period post-GA, and they will support the new VNX as a back end only in a post-GA VNX OE for File software release some time in the second half of 2014 (Inyo SP5).
#TITLE

High Availability Path Management for Mission-Critical Block Applications
Standardize path management
- Deploy common technology across physical and virtual environments

Optimize data path pools
- Leverage algorithms designed to optimize data paths to VNX series arrays
Automate multi-pathing policies
- Simplify path failover and recovery
- Optimize load balancing

All data paths are active, optimized for load balancing

EMC PowerPath

HARDWARE

# Copyright 2015 EMC Corporation. All rights reserved.Adding automated data path management, failover and recovery, and optimized load balancing further improves efficiency and performance, while simplifying management. Host-based PowerPath for Windows, Linux, and UNIX, and PowerPath/VE for vSphere and Hyper-V take the complexity out of data path failover and recovery and balancing workloads over active data paths to provide high-availability and better application performance. With support for both physical and virtual environments, PowerPath provides a single tool set for dynamic data centers to eliminate I/O bottlenecks that could cripple mission-critical applications and limit growth. PowerPath includes patented algorithms designed to automate and optimize a pool of data paths to VNX series arrays. It also supports other EMC and non-EMC storage. By optimizing data paths, data centers can get the most out of available compute, network, and storage resources to realize greater growth from existing investments.#TITLE

- PowerPath/VE's intelligent load balancing recognizes that the paths are not equal
- PowerPath/VE redirects more I/O to the less busy paths
- Optimizing the I/O paths results in overall greater throughput for the PowerPath/VE host
- MPIO with Round Robin continues to use all paths equally, resulting in longer I/O completion times and less throughput
(A simplified path-selection sketch follows below.)
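A simplified sketch contrasting round-robin path selection with a "least busy" policy of the kind described here: it picks the path with the fewest outstanding I/Os. This is a conceptual illustration only, not PowerPath/VE's patented algorithm; the path names and queue depths are arbitrary example values.

import itertools

# Example live view of outstanding I/Os (queue depth) per path.
outstanding_ios = {"path_A": 2, "path_B": 14, "path_C": 3, "path_D": 9}

rr_cycle = itertools.cycle(outstanding_ios)     # round-robin ignores path load

def round_robin_path() -> str:
    return next(rr_cycle)

def least_busy_path() -> str:
    # Conceptual stand-in for load-aware selection (not PowerPath's actual logic).
    return min(outstanding_ios, key=outstanding_ios.get)

print([round_robin_path() for _ in range(4)])   # A, B, C, D regardless of load
print(least_busy_path())                        # path_A (fewest outstanding I/Os)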

PowerPath/VE for VNX: Increased Application Performance and Availability
PowerPath/VE compared to VMware Native Multipathing
Source: ESG Lab: EMC PowerPath/VE - Automated Path Optimization for VMware Virtual Environments, April 2012

# Copyright 2015 EMC Corporation. All rights reserved.
PowerPath/VE is the industry's leading multipathing solution, using patented algorithms to intelligently and efficiently balance loads across VMs while also orchestrating path failover and failback for added resilience. PowerPath/VE decreases latency and increases resilience for better application availability in growing virtual environments.

In a typical SAN configuration, large or small, paths will rarely be perfectly balanced. PowerPath/VE's intelligent load balancing recognizes that the paths are not equal. It redirects more I/O to less busy paths while maintaining statistics on all of them. Optimizing the I/O paths results in greater throughput. By avoiding the busy paths, PowerPath/VE can get I/Os completed more quickly. Round Robin continues to use all paths equally. PowerPath/VE will reroute I/Os, while Round Robin doesn't recognize the differences among path states.
#TITLE

Specialized 3rd Party Hardware Solutions
Integrated Hardware Offerings

VSPEX

VNX for SAP HANA

VCE Vblock

HARDWARE

# Copyright 2015 EMC Corporation. All rights reserved.

#TITLE

VNX Object Support: Atmos VE on VNX
- Storage: vSphere-supported FC/iSCSI/NFS; storage can be from more than one array
- vSphere: vSphere HCL supported servers; minimum 2 servers required
- Virtual machines: Atmos SW is installed on the VMs; access methods are configured on the VMs
- Atmos access/integration layer: customer web application using the Atmos REST/SOAP API, or pre-integrated ISV application (e.g. Documentum)

HARDWARE
IP/FC

Atmos Virtual Edition
- Custom web application, Atmos ISV application, or file system access
- Policies automate data services
- Multi-tenancy securely segregates data
- Global-scale namespace spans locations
- REST and SOAP access methods
(A deployment-rule sketch follows below.)
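A small, hypothetical validation of the deployment rules listed for Atmos VE on VNX (at least two vSphere servers, two Atmos node VMs per server, per the notes on this slide); the data structure and function names are illustrative only.

# Hypothetical check of the Atmos VE on VNX deployment rules on this slide:
# at least 2 vSphere servers, each running 2 Atmos node VMs.
def valid_atmos_ve_layout(esx_hosts: dict) -> bool:
    """esx_hosts maps an ESX server name to its list of Atmos node VMs."""
    return len(esx_hosts) >= 2 and all(len(vms) == 2 for vms in esx_hosts.values())

layout = {
    "esx-01": ["atmos-node-1", "atmos-node-2"],
    "esx-02": ["atmos-node-3", "atmos-node-4"],
}
print(valid_atmos_ve_layout(layout))  # True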

[Diagram: Atmos VE on VNX - four ESX servers, each hosting two Atmos node VMs, backed by VNX5400/VNX5600/VNX5800/VNX7600/VNX8000 modular unified storage]

# Copyright 2015 EMC Corporation. All rights reserved.
Customers can deploy Atmos VE (Virtual Edition) with VNX in a virtualized infrastructure. As shown, Atmos VE on VNX has four key components, namely the back-end VNX storage, vSphere, the Atmos VMs, and the application integration points or access methods.
Let's start with the bottom of the stack. First, storage: Atmos VE via vSphere supports FC/iSCSI/NFS storage, with the storage capacity coming from one or a number of arrays. However, storage has to be provisioned to all vSphere servers, and the LUN or share capacity needs to be in line with the recommended constraints.
Second, vSphere: Atmos VE supports any vSphere HCL-listed servers, and vSphere 4.0 or greater is required. If the customer already has a VMware vSphere farm, they can use it to deploy Atmos Virtual Edition as well.
Third, the Atmos VMs: each physical machine must have two virtual machines configured, and Atmos software is installed on each of the VMs. Each virtual machine on the vSphere server acts as a virtual Atmos node and is configured for various access methods.
Fourth, the integration layer: most often with Atmos VE on VNX, REST and SOAP will be the access methods configured on the Atmos VMs. Atmos nodes configured as access nodes can be used for custom or packaged application access. Atmos VE access points are also compatible with Atmos ISV solutions.
In short, applications can consume the underlying storage using different access methods and from various access nodes. The key here is that applications consuming storage from any VM will see the system as one large object store with a unified namespace.
#TITLE

VSPEX: Three Paths To Cloud Infrastructure
- Best-of-breed infrastructure components (EMC products)
- Proven infrastructure (VSPEX)
- Converged infrastructure (VCE Vblock)
Simpler, faster, lower TCO

# Copyright 2015 EMC Corporation. All rights reserved.
Over time, we expect infrastructure to move from left to right, with more and more infrastructure being delivered as converged infrastructure. IT organizations will concern themselves less and less with building or operating infrastructure and more and more with differentiating their business with new applications.
#TITLE

VSPEX

[Chart: VSPEX proven infrastructures spanning private cloud (virtual machines), end-user computing/VDI (virtual desktops), and virtualized applications, with EMC backup/data protection; configuration points range from 50 up to 2,000 virtual desktops/virtual machines]

# Copyright 2015 EMC Corporation. All rights reserved.This slide shows the continued investment in the VSPEX family through 2013.The addition of the new VNX and VNXe platforms will allow greater scale and better performance.Our application solutions will continue to grow with the addition of SharePoint and Exchange 2013.Customers can also leverage complementary technologies such as the Xtrem Family which are proven by EMC in VSPEX for EUC and Applications.

#TITLE

SYSTEM 100: UCS C220 M3 compute; Catalyst 3750-X network; VNXe 3150 / VNXe 3300 storage
SYSTEM 200: UCS C220 M3 compute; Nexus 5548UP, 1000v network; VNX 5300 storage
SYSTEM 340*: UCS 5108 compute; Nexus 5548UP, 1000v, 3048 and MDS 9148 network; VNX5400 / VNX5600 / VNX5800 / VNX7600 / VNX8000 storage
SYSTEM 720*: UCS 5108 compute; Nexus 7010, 5548UP, 5596UP, 1000v and MDS 9148 network; VMAX 10K / 20K / 40K storage

All systems: Vision Intelligent Operations software; virtualization with vSphere ESXi & vCenter Server

* Foundation for Specialized Systems

# Copyright 2015 EMC Corporation. All rights reserved.

#TITLE

SYSTEM 100: UCS C220 M3 compute; Catalyst 3750-X network; VNXe 3150 / VNXe 3300 storage
SYSTEM 200: UCS C220 M3 compute; Nexus 5548UP, 1000v network; VNX 5300 storage
SYSTEM 340*: UCS 5108 compute; Nexus 5548UP, 1000v, 3048 and MDS 9148 network; VNX5400 / VNX5600 / VNX5800 / VNX7600 / VNX8000 storage
SYSTEM 740*: UCS 5108 compute; Nexus 7010, 5548UP, 5596UP, 1000v and MDS 9148 network; VMAX 100K / 200K / 400K storage

All systems: Vision Intelligent Operations software; virtualization with vSphere ESXi & vCenter Server

* Foundation for Specialized Systems

# Copyright 2015 EMC Corporation. All rights reserved.

#TITLE

Match to Your Business Requirements

SYSTEM 340 - Performance and Scale: business critical, high performance, data protection, secure, open systems
SYSTEM 740 - Highest Service Levels: mission critical, fault-tolerant performance, highest availability, encrypted & secure, open systems and mainframe

Application-Optimized / Specialized:
- Vblock Specialized System for High Performance Databases
- Vblock Specialized System for SAP HANA
- Vblock Specialized System for Extreme Applications

# Copyright 2015 EMC Corporation. All rights reserved.

#TITLE

SAP HANA Delivery Models

SAP Certified HANA Appliance Model
- Advantages: simple (fast deployment of defined configurations); single contact for support; SAP-certified at installation
- Server, network and storage are SAP-certified together, with scalability chosen by the server vendor
- SAP HANA software is installed by the server vendor
- The appliance is installed and supported by the server vendor
- All infrastructure is dedicated to a single HANA system

Tailored Datacenter Integration (TDI) Model
- Advantages: ROI (use existing storage investment); low IT impact (use existing procedures); greater scalability
- Storage is certified separately; choice of server vendor
- Customer integrates SAP-certified components on-site and loads the SAP HANA software
- Customer retains responsibility for validation and support of the configuration
- Infrastructure can be shared with non-HANA workloads
(A decision-helper sketch follows below.)
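A small decision helper that encodes the trade-offs listed above (appliance vs. TDI). The criteria names are simplified paraphrases of the slide, and real delivery-model decisions involve SAP certification requirements not captured here.

# Simplified paraphrase of the appliance-vs-TDI trade-offs on this slide.
def suggest_hana_delivery_model(reuse_existing_storage: bool,
                                share_with_non_hana: bool,
                                want_single_support_contact: bool) -> str:
    if reuse_existing_storage or share_with_non_hana:
        return "Tailored Datacenter Integration (TDI)"
    if want_single_support_contact:
        return "SAP-certified appliance"
    return "either model; compare certification and support requirements"

print(suggest_hana_delivery_model(True, False, True))    # TDI
print(suggest_hana_delivery_model(False, False, True))   # appliance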

# Copyright 2015 EMC Corporation. All rights reserved.
SAP Certified Appliances: SAP's initial deployments used the appliance model, in which server vendors integrate servers, networks, and storage into an optimized hardware platform. The appliance is certified by SAP to meet functional and performance criteria. It is delivered with SAP HANA pre-installed and is supported by the server vendor. EMC has partnered with Cisco and VCE to deliver a range of scale-out appliances from Starter (up to 4 nodes/2TB) through Enterprise (up to 16 nodes/8TB). While easy to deploy, the appliance model has limitations in the choice of server and storage hardware.

New Tailored Datacenter Integration (TDI): The TDI model allows more flexibility in the choice of hardware, allowing customers to use existing hardware and operational processes. The first phase of TDI opens up the storage layer: specific storage configurations are certified by SAP, allowing customers to mix and match SAP-certified servers/networks with SAP-certified storage. EMC has SAP certification, with well-documented configuration recommendations, for the VMAX 10K, 20K, and 40K. Customers with these arrays, appropriately configured, can deploy SAP HANA on their installed VMAX, taking advantage of existing hardware investments and established operational procedures. The customer has greater responsibilities, including integration, software installation by an SAP Certified Technology Specialist, configuration validation with SAP, and ongoing support.

#TITLE

SAP Certified Cisco, VCE, EMC Appliances

Real-Time Data Access

SAP HANA Infrastructure

[Diagram: SAP HANA in-memory database nodes scaled out on Cisco UCS blades with EMC VNX block access]

- Flexible, multi-purpose, data-source agnostic, in-memory appliance
- An integrated stack = SAP in-memory software + Cisco UCS blades + EMC storage infrastructure (VNX)
- Real-time analysis on Big Data files for real-time decision-making

# Copyright 2015 EMC Corporation. All rights reserved.
Let's look at the EMC and Cisco appliance for SAP HANA in more detail.

The first available Cisco and EMC solution for HANA can scale from 4 nodes to 16 nodes, and is based on the EMC VNX5300 platform. Moving forward, the vision for SAP customers is that HANA will not just support analytics environments, but also traditional SAP transactional applications like ERP.

HANA requires high-performance scale-out infrastructure, and Cisco UCS blade servers on EMC VNX with FAST VP deliver the self-optimizing server and storage infrastructure needed across block or file.