The Data Infrastructure Software Company
The DataCore Server
DataCore™ Hyper-converged Virtual
SAN Best Practices Guide
February 2017
Table of Contents

Overview
DataCore Hyper-converged and Virtual SAN Deployment Options on Windows
DataCore Hyper-converged and Virtual SAN Deployment Options on ESXi
SANsymphony vs Virtual SAN Licensing
Configuring Windows Server 2012 R2 for SANsymphony
  Example 1: Single Node with DataCore Loopback Ports
  Example 2: Single Node with two iSCSI loopbacks on two virtual NICs
  Example 3: DataCore Loopback Ports and FC HBAs with direct connect cables
  Example 4: iSCSI with a switch (Windows Server Failover Cluster)
  Example 5: iSCSI with direct connect (Windows Server Failover Cluster)
SANsymphony in a guest VM on ESXi 6.0 Update 2 or newer
  Example 6: Single Node Configuration
  Example 7: Two Node Configuration
Known Issues
Appendix A: Useful Resources
Appendix B: Deployment tools
Previous Changes
Page | 3 The DataCore Server – DataCore™ Hyper-converged Virtual Best Practices Guide
Overview
The DataCore Hyper-converged solution uses either the DataCore SANsymphony or the DataCore Virtual SAN package to create a high-performing hyper-converged infrastructure using DAS or internal storage. For the purposes of this design document, both types of installation are referred to as the 'DataCore Server' after the difference is explained initially. This document covers the best-practice design guidelines for configuring a DataCore Hyper-converged configuration.
DataCore Hyper-converged and Virtual SAN Deployment Options on Windows
Physical Windows Server (no server hypervisor installed)

The DataCore Server runs directly on top of the Windows Server operating system. All local block storage devices that are not initialized are automatically detected as suitable for the pool. All existing filesystems can be used as pass-through disks. An application such as Microsoft Exchange or SQL Server may be installed alongside the DataCore software. This is a typical Virtual SAN deployment that allows the running application to take full advantage of DataCore caching and storage capabilities. Microsoft Failover Cluster or another clustering technology can be used to provide application failover between servers.
Physical Windows Server with Hyper-V

The DataCore Server runs in the root partition (also referred to as the parent partition) on top of the Windows Server operating system. All local block storage devices that are not initialized are automatically detected as suitable for the pool. The Microsoft Hyper-V hypervisor role is installed alongside SANsymphony. DataCore does not recommend installing SANsymphony in a Hyper-V guest VM, as doing so introduces virtualization-layer overhead and prevents the DataCore software from directly accessing CPU, RAM and storage.
DataCore Hyper-converged and Virtual SAN Deployment Options on ESXi
DataCore Server in a Windows Guest VM on ESXi
Backend disk storage configuration options:

- Assign uninitialized storage devices from the server hypervisor to the DataCore Server virtual machine as raw storage devices (physical RDMs in ESXi).
- Present VMDK virtual disks that reside on VMFS datastores to the DataCore Server virtual machine.
- Use DirectPath I/O to map the storage controller (RAID or PCIe flash device) directly to the DataCore virtual machine.
Using Fibre Channel connections in a virtual machine (VMware ESXi only):

The DataCore Server must be running in a virtual machine with the HBAs set up in VMDirectPath I/O mode (see http://kb.vmware.com/kb/1010789). This assigns the physical HBA directly to the virtual machine and allows the DataCore Fibre Channel driver to bind to it. For the Fibre Channel HBAs supported by VMware using VMDirectPath I/O, see http://www.vmware.com/resources/compatibility/search.php.

The DataCore Fibre Channel driver cannot bind to VMware NPIV Fibre Channel adapters, so these cannot be used for any SANsymphony Front-End or Mirror ports.
SANsymphony vs Virtual SAN Licensing
There are two main differences between regular SANsymphony licenses and Virtual SAN licenses.

Capacity

Virtual SAN capacity is licensed per DataCore Server. Virtual SAN DataCore Servers do not utilize group-level capacity and do not share capacity with other DataCore Servers. Regular SANsymphony licenses allow sharing DataCore Server and group-level capacity across all DataCore Servers in the same Server Group.

Serving Virtual Disks

Virtual SAN DataCore Servers can only serve virtual disks to themselves and one registered host. DataCore Servers using regular SANsymphony licenses have no restrictions on serving virtual disks.
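The serving restriction above can be expressed as a simple rule. The sketch below is illustrative only; the function and host names are hypothetical and not part of any DataCore API.

```python
def can_serve(license_type, datacore_server, target_host, registered_host=None):
    """Check whether a DataCore Server may serve a virtual disk to target_host.

    license_type: "sansymphony" (regular) or "virtual_san".
    A Virtual SAN license permits serving only to the DataCore Server
    itself and to its one registered host; a regular SANsymphony
    license imposes no such restriction.
    """
    if license_type == "sansymphony":
        return True
    return target_host == datacore_server or target_host == registered_host

# A Virtual SAN node may serve itself and its single registered host:
print(can_serve("virtual_san", "NodeA", "NodeA"))           # True
print(can_serve("virtual_san", "NodeA", "Host1", "Host1"))  # True
print(can_serve("virtual_san", "NodeA", "Host2", "Host1"))  # False
print(can_serve("sansymphony", "NodeA", "Host2"))           # True
```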
Configuring Windows Server 2012 R2 for SANsymphony
DataCore Server in a Root Partition of Windows 2012 R2

Installing the DataCore Server in a root partition of Windows 2012 R2 means that the software is installed directly on the system and not in a guest VM. In this type of deployment, the DataCore Server runs alongside other Windows applications and roles such as Hyper-V, MS SQL, MS Exchange, etc.
Single Node Configuration Examples
Configuration: DataCore Loopback Ports
Performance: Highest performance.
Cost: Free - no additional hardware required.
Synchronous Mirroring: Will require an FC HBA for failover when setting up a mirrored pair.

Configuration: iSCSI loopback on a virtual NIC
Performance: Throughput is currently limited by CPU and VM Queue to 350-400 MB/s per port; setting up two virtual NICs doubles the throughput. Adding more than two virtual NIC ports does not increase throughput.
Cost: Free - no additional hardware required.
Synchronous Mirroring: Will require a NIC for failover when setting up a mirrored pair.
Example 1: Single Node with DataCore Loopback Ports
In this configuration a virtual disk is served over the DataCore Loopback Port. The Loopback Port is installed with the DataCore software. The end result is a very reliable and extremely fast I/O path that takes advantage of RAM caching and the entire suite of DataCore Server features. The pool disk can be any raw disk available to the Windows OS. The application running alongside the DataCore Server can be anything, including SQL, Exchange or Hyper-V. Another DataCore node can be added to create synchronous mirrors, and FC HBAs can be installed to achieve multipath I/O. For more details please see Example 3.
Example 2: Single Node with two iSCSI loopbacks on two virtual
NICs
Add the Hyper-V role in order to create the virtual NICs, even if there are no plans to use virtual machines. In this example the MS iSCSI initiator and the DataCore target share the same IP address. On the first path the initiator logs in to and from 10.0.0.1/24; on the second path the initiator logs in to and from 10.0.2.1/24, effectively creating two iSCSI loopback connections. MS MPIO is set up with the default path policy of Round Robin, which helps utilize both paths. This configuration can be scaled to provide multipath I/O by adding another DataCore node. To take advantage of synchronous mirrors, add physical NICs; for more details please see Example 6.
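The throughput scaling described above can be sketched numerically. The per-port figure below uses the midpoint of the 350-400 MB/s range quoted in this guide; the function name and the midpoint choice are illustrative assumptions, not DataCore specifications.

```python
def loopback_throughput_mbs(num_virtual_nics, per_port_mbs=375):
    """Estimate aggregate iSCSI loopback throughput in MB/s.

    Each virtual NIC port is limited by CPU and VM Queue to roughly
    350-400 MB/s; a second port doubles throughput, but ports beyond
    two add nothing, so the effective port count is capped at two.
    """
    effective_ports = min(num_virtual_nics, 2)
    return effective_ports * per_port_mbs

print(loopback_throughput_mbs(1))  # 375
print(loopback_throughput_mbs(2))  # 750
print(loopback_throughput_mbs(4))  # 750 (no gain beyond two ports)
```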
Multi Node Configuration Examples
Configuration: DataCore Loopback Ports and FC HBA
Performance: Highest performance.
Cost: Requires FC HBAs for Front-End ports and Mirror ports.

Configuration: iSCSI with a switch
Performance: Good performance.
Cost: Requires a network switch.

Configuration: iSCSI with direct connect cable (must use a virtual NIC for at least one Front-End port)
Performance: Good performance. Virtual NIC throughput is currently limited by CPU and VM Queue to 350-400 MB/s per port; setting up two virtual NICs doubles the throughput. Adding more than two virtual NIC ports does not increase throughput.
Cost: A network switch is not required.
Example 3: DataCore Loopback Ports and FC HBAs with direct
connect cables
Two physical servers are configured as a synchronous DataCore Server pair with DataCore Loopback Ports and FC HBAs for multipath I/O. Mirrored vDisk1 is served to Host1 and mirrored vDisk2 is served to Host2. The LUNs are not shared but are accessed locally by their respective hosts. DataCore mirrors the data at the vDisk level between DataCore Servers A and B. Mirroring is done over two redundant MR ports configured in initiator/target mode, which permits bidirectional I/O flow.
Example 4: iSCSI with a switch (Windows Server Failover Cluster)
This configuration is an iSCSI implementation with redundant switches to avoid a single point of failure. The MS iSCSI configuration is as follows:

Initiator Host  Initiator IP  Target IP  Port Role  Target Host
Host1           10.0.1.1      10.0.1.2   MR         Host2
Host1           10.0.2.1      10.0.2.2   MR         Host2
Host1           10.0.3.1      10.0.3.1   FE         Host1
Host1           10.0.4.1      10.0.4.1   FE         Host1
Host1           10.0.3.1      10.0.3.2   FE         Host2
Host1           10.0.4.1      10.0.4.2   FE         Host2
Host2           10.0.1.2      10.0.1.1   MR         Host1
Host2           10.0.2.2      10.0.2.1   MR         Host1
Host2           10.0.3.2      10.0.3.2   FE         Host2
Host2           10.0.4.2      10.0.4.2   FE         Host2
Host2           10.0.3.2      10.0.3.1   FE         Host1
Host2           10.0.4.2      10.0.4.1   FE         Host1
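The login plan above is symmetric: each node logs into the partner's MR addresses, its own FE addresses (loopback), and the partner's FE addresses. That symmetry can be generated programmatically, as in the sketch below. The addressing convention (10.0.<subnet>.<node>, with MR on subnets 1-2 and FE on subnets 3-4) is taken from this example; the function name is illustrative only.

```python
def iscsi_sessions(nodes=("Host1", "Host2"), mr_subnets=(1, 2), fe_subnets=(3, 4)):
    """Build the full MS iSCSI login plan for a two-node mirrored pair.

    Addressing convention: node k owns 10.0.<subnet>.<k> on every subnet.
    MR ports log into the partner only; FE ports log into both the local
    node (loopback) and the partner, giving redundant front-end paths.
    """
    ip = lambda subnet, k: f"10.0.{subnet}.{k}"
    sessions = []
    for k, node in enumerate(nodes, start=1):
        p = 2 if k == 1 else 1               # partner's node index
        partner = nodes[p - 1]
        for s in mr_subnets:                 # mirror paths to the partner
            sessions.append((node, ip(s, k), ip(s, p), "MR", partner))
        for s in fe_subnets:                 # local front-end loopback paths
            sessions.append((node, ip(s, k), ip(s, k), "FE", node))
        for s in fe_subnets:                 # remote front-end paths
            sessions.append((node, ip(s, k), ip(s, p), "FE", partner))
    return sessions

table = iscsi_sessions()
print(len(table))  # 12 sessions, matching the table above
print(table[0])    # ('Host1', '10.0.1.1', '10.0.1.2', 'MR', 'Host2')
```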
Example 5: iSCSI with direct connect (Windows Server Failover
Cluster)
With direct connect cables between servers, at least one virtual NIC per DataCore instance is required. This ensures that at least one local I/O path stays online when the partner server is shut down, rebooted or otherwise unavailable. If the virtual NIC for the local FE connection is not configured, the link-down condition on the direct-connected FE physical NIC will take down the iSCSI network on the surviving node and I/O will stop.

The MS iSCSI configuration is as follows:

Initiator Host  Initiator IP  Target IP  Port Role          Target Host
Host1           10.0.1.1      10.0.1.2   MR                 Host2
Host1           10.0.2.1      10.0.2.2   MR                 Host2
Host1           10.0.4.1      10.0.4.1   FE (Virtual NIC)   Host1
Host1           10.0.3.1      10.0.3.2   FE                 Host2
Host2           10.0.1.2      10.0.1.1   MR                 Host1
Host2           10.0.2.2      10.0.2.1   MR                 Host1
Host2           10.0.4.2      10.0.4.2   FE (Virtual NIC)   Host2
Host2           10.0.3.2      10.0.3.1   FE                 Host1
SANsymphony in a guest VM on ESXi 6.0 Update 2 or newer
Configure the ESXi host and the DataCore Server guest VM for low latency:

Physical NIC
- Disable interrupt moderation. See page 6 of Best Practices for Performance Tuning of Latency-Sensitive Workloads in vSphere VMs (VMware).

DataCore guest VM
- CPU: Set to High Shares. Reserve at least 4x the core CPU frequency; for example, with a physical CPU at 2 GHz, set the reservation to 8 GHz.
- NUMA / vCPU affinity: See pages 4 and 11 of Best Practices for Performance Tuning of Latency-Sensitive Workloads in vSphere VMs (VMware) and Configure Processor Scheduling Affinity in the vSphere Web Client.
- Memory: Set to High Shares. Reserve the entire amount of memory assigned to the guest VM.
- Disk: Set to High Shares for all disks attached to the DataCore guest VM.
- Virtual SCSI controller: Change from LSI Logic Parallel to VMware Paravirtual. See https://kb.vmware.com/kb/1010398.
- Virtual NIC: Use VMXNET3. Disable interrupt coalescing (see page 6 of the VMware latency-tuning guide) and reduce idle-wakeup latencies (see page 11).

ESXi software iSCSI VMkernel
- Disable Delayed ACK. See the VMware article "ESX/ESXi hosts might experience read or write performance issues with certain storage arrays".
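The CPU and memory reservations above can be computed mechanically. A minimal sketch, assuming the 4x-core-frequency rule stated in the table; the function names are illustrative and not part of any VMware or DataCore tooling.

```python
def cpu_reservation_mhz(core_freq_ghz, multiplier=4):
    """CPU reservation for the DataCore guest VM, per the 4x-core rule.

    Example from the guide: a 2 GHz physical core yields an 8 GHz
    (8000 MHz) reservation.
    """
    return int(core_freq_ghz * multiplier * 1000)

def memory_reservation_mb(assigned_mb):
    """Memory must be fully reserved: reservation equals assignment."""
    return assigned_mb

print(cpu_reservation_mhz(2.0))      # 8000 (i.e. 8 GHz)
print(memory_reservation_mb(32768))  # 32768 (reserve all assigned memory)
```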
Example 6: Single Node Configuration
Example 7: Two Node Configuration
The VMkernel on host ESXi A (10.0.1.3) is configured to log in to two targets: the first path is to the local DataCore Server A (10.0.1.1) and the second path is to DataCore Server B (10.0.1.2). The VMkernel on host ESXi B (10.0.1.4) is set up to log in to two targets as well: the first path is to the local DataCore Server B (10.0.1.2) and the second path is to DataCore Server A (10.0.1.1).

In this example there is only a single FE port per DataCore Server; this was done for clarity, to avoid cluttering the diagram. Two or four FE ports per DataCore Server can be configured for increased throughput.

Do not configure VMkernel iSCSI port binding, as the DataCore iSCSI target does not support multi-session iSCSI. For more information please see Answer 1556 - VMware ESXi Configuration Guide.
Pool Disk configuration on ESXi
It is best to have dedicated physical RDMs designated as pool disks for the DataCore guest VM, but that is not always possible because of SCSI array controller limitations. In that case, set up a dedicated datastore and use a VMDK virtual disk as the pool disk. For best pool disk performance, set up one VMDK per datastore. The VMDK should be provisioned as Eager Zeroed Thick.
Known Issues

The preferred server is not honored for virtual disks served via iSCSI to the DataCore Server itself. This issue was addressed in SANsymphony 10.0 PSP6; however, if the mirrored virtual disks are served outside of the mirrored pair, the workaround below is still required.

Currently the ALUA protocol is not supported when serving virtual disks to the DataCore Server via iSCSI in a loopback fashion. This applies only to serving virtual disks to the DataCore Server itself, typically when using a Virtual SAN license.

Use the following workaround to control which paths are used as Active/Optimized: on each DataCore virtual disk served via iSCSI, set the remote server paths to Standby under Microsoft MPIO.

To find the remote paths, use the MS iSCSI initiator to identify the disk target:

a) Open the MS iSCSI initiator and, under Targets, select the remote target IQN address.
b) Click Devices and identify the device target port with the disk index to update.

Additional information: https://technet.microsoft.com/en-us/library/ee338480(v=ws.10).aspx

Then navigate to MS MPIO and view the DataCore virtual disk:

c) Select a path and click Edit.
d) Verify the correct target device.
e) Set the path state to Standby and click OK to save the change.

Additional information: https://technet.microsoft.com/en-us/library/ee619752(v=ws.10).aspx, section "Configure the MPIO Failback policy setting".
Known Issues – Continued
Citrix XenServer Considerations
We are aware of a limitation on XenServer regarding automatic reattachment of Storage Repositories (SRs) after a reboot when the DataCore Server VM resides on the same XenServer to which the SRs are served. Therefore, only serve storage from the DataCore Server's virtual machine to the other XenServers, not to virtual machines that reside on the same XenServer on which the DataCore Server's virtual machine is running. Please consult Citrix for more information.
Appendix A
Useful Resources
- Use Answer 1626 - iSCSI Best Practices Guide to optimize TCP/IP and NIC settings for low latency.
- Answer 1556 - VMware ESXi Configuration Guide.
- Best Practices for Performance Tuning of Latency-Sensitive Workloads in vSphere VMs. (Note: although VMware recommends enabling Turbo Boost when tuning latency-sensitive workloads, it is recommended to keep it disabled to prevent disruption to the CPU.)
- Answer 1348 - DataCore's Best Practice guidelines.
Appendix B
Deployment tools
To ease the deployment of hyper-converged solutions, DataCore Software provides the following installation packages:

- DataCore Hyper-converged Virtual SAN for Windows
- DataCore Hyper-converged Virtual SAN for vSphere

Software download link: https://datacore.custhelp.com/app/downloads/downloads.
Previous Changes

2016 September
- New document created.

2017 February
- Updated the Virtual SAN license feature to include 1 registered host.
- Removed the example with MS Cluster using Loopback Ports (currently not supported).
- Updated the DataCore Server in Guest VM on ESXi table (corrected errors, added references to VMware documentation and added Disable Delayed ACK).
- Removed ESXi 5.5 and added a requirement of at least ESXi 6.0 Update 2: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2129176
COPYRIGHT Copyright © 2017 by DataCore Software Corporation. All rights reserved. DataCore, the DataCore logo and SANsymphony are trademarks of DataCore Software Corporation. Other DataCore product or service names or logos referenced herein are trademarks of DataCore Software Corporation. All other products, services and company names mentioned herein may be trademarks of their respective owners. ALTHOUGH THE MATERIAL PRESENTED IN THIS DOCUMENT IS BELIEVED TO BE ACCURATE, IT IS PROVIDED “AS IS” AND USERS MUST TAKE ALL RESPONSIBILITY FOR THE USE OR APPLICATION OF THE PRODUCTS DESCRIBED AND THE INFORMATION CONTAINED IN THIS DOCUMENT. NEITHER DATACORE NOR ITS SUPPLIERS MAKE ANY EXPRESS OR IMPLIED REPRESENTATION, WARRANTY OR ENDORSEMENT REGARDING, AND SHALL HAVE NO LIABILITY FOR, THE USE OR APPLICATION OF ANY DATACORE OR THIRD PARTY PRODUCTS OR THE OTHER INFORMATION REFERRED TO IN THIS DOCUMENT. ALL SUCH WARRANTIES (INCLUDING ANY IMPLIED WARRANTIES OF MERCHANTABILITY, NON-INFRINGEMENT, FITNESS FOR A PARTICULAR PURPOSE AND AGAINST HIDDEN DEFECTS) AND LIABILITY ARE HEREBY DISCLAIMED TO THE FULLEST EXTENT PERMITTED BY LAW. No part of this document may be copied, reproduced, translated or reduced to any electronic medium or machine-readable form without the prior written consent of DataCore Software Corporation