Spark the future.
May 4 – 8, 2015, Chicago, IL
The Power of the Windows Server Software Defined Datacenter in Action
Philip Moss, Managing Partner IT - NTTX
BRK2469
Domain Controllers
DNS (internal and public)
Exchange
SharePoint
Lync
SQL
WDS
File Servers
App-V
UE-V
RDSH
VDI
DPM
DHCP
MDM
Bespoke Client Line of Business Applications
Today's workloads
Engineering goals
Support for multiple diverse workloads
Full end-to-end high-availability
100% virtualisation
100% automation
Sub-system scale-out: storage, networking, compute
Cost to serve reduction
Removal of middleware
Hardware platform agnostic
Use of commodity hardware
Just in time hardware provisioning
Architecture
Architecture – software defined datacentre
Storage
Networking
Compute
SOFS and Storage spaces
SMB 3.0 and Software defined networking
Hyper-V clustering, HNV
Core Platform AD, DNS, DHCP, WSUS
Services RDS, VDI, DPM
Productivity Applications Exchange, SharePoint, Lync
Storage
Storage SOFS and Storage spaces
Networking
Compute
SMB 3.0 and Software defined networking
Hyper-V clustering
Core Platform AD, DNS, DHCP, WSUS
Services RDS, VDI, DPM
Productivity Applications Exchange, SharePoint, Lync
Why software defined storage?
Deliver a high-performance, scalable platform for virtual machine virtual hard disks using commodity hardware
Avoid dedicated hardware solutions: SANs, direct-attached hardware RAID
Use a common, industry standard data transport
Implement storage optimisation and management within software
Drive down deployment and operational cost
Data delivery – Scale-Out File Server
Scale-Out File Server: Storage Spaces, with Windows Server as the storage controller and SMB 3 as the data transport
Replaces iSCSI and Fibre Channel; uses cheap generic JBODs
Multi-point highly available, continuous availability, full scale out
Removes the requirement for a SAN
JBOD
SoFS node  SoFS node  SoFS node  SoFS node
JBOD JBOD JBOD
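As a rough sketch of how this layer is stood up, the following PowerShell (run on a node of an existing failover cluster that already has a CSV built from a Storage Spaces pool; all names and groups are illustrative, not from the session) adds the Scale-Out File Server role and publishes a continuously available SMB 3 share for VHDx files:

```powershell
# Assumes the FailoverClusters and SmbShare modules are present and a CSV
# (C:\ClusterStorage\Volume1) already exists - names are examples only.
Import-Module FailoverClusters

# Add the Scale-Out File Server role to the existing cluster
Add-ClusterScaleOutFileServerRole -Name "SOFS01"

# Create a folder on the CSV to hold virtual machine VHDx files
New-Item -Path "C:\ClusterStorage\Volume1\VMs" -ItemType Directory -Force

# Publish it as a continuously available SMB 3 share for the Hyper-V hosts
# (the groups below would contain the host computer accounts and admins)
New-SmbShare -Name "VMs" -Path "C:\ClusterStorage\Volume1\VMs" `
    -FullAccess "CONTOSO\Hyper-V Hosts", "CONTOSO\Hyper-V Admins" `
    -ContinuouslyAvailable $true

# Hyper-V hosts then reference \\SOFS01\VMs as the VHDx location
```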
Spaces 2012 R2 tiering
Speeds up data reads and writes; introduced in 2012 R2
SSD tier used for high-IO data; data moved to SSD via "heat" logic
1MB data chunks – not all of a large file needs to fit on SSD
Gained a write-back cache
Pinning allows files to be locked onto the SSD tier
Interoperation with the CSV cache: heat logic does not work with the CSV cache and does not work with redirected IO
Planning considerations: a tiered Space without the CSV cache could be slower than a non-tiered Space using the CSV cache; you can still pin files to SSD
Spaces 2012 R2 write-back cache on SSD: dramatically increases write performance; use 1GB
It is possible to set it higher; do not do this
Dynamic rebuild using spare capacity: no longer a requirement for a dedicated hot-spare; simply leave unallocated headroom in the disk pool
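A minimal sketch of the tiering guidance above, assuming an existing pool named "Pool01" and illustrative sizes and paths (not from the session):

```powershell
# Create SSD and HDD tiers on the pool, then a tiered mirror space with the
# recommended 1GB write-back cache.
$ssdTier = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "SSDTier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "HDDTier" -MediaType HDD

New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "VMData01" `
    -ResiliencySettingName Mirror -NumberOfDataCopies 2 `
    -StorageTiers $ssdTier, $hddTier -StorageTierSizes 200GB, 4TB `
    -WriteCacheSize 1GB     # keep at 1GB as recommended above

# Once the volume is formatted and in use, pin a hot file onto the SSD tier
Set-FileStorageTier -FilePath "E:\VMs\GoldImage.vhdx" -DesiredStorageTier $ssdTier
```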
Storage Spaces – design considerations
Data integrity: 2-way mirror provides only limited disk failure protection
Suitable solution if using application HA
3 way mirror gives a good level of disk failure tolerance
Very costly in disk usage (66% raw capacity loss)
Parity Spaces now supported for clusters; performance is not good
Enclosure awareness provides protection against entire JBOD failure
Setup considerations: 3 JBODs for 2-way mirror, single enclosure failure; 3 JBODs for 3-way mirror, single enclosure failure; 5 JBODs for 3-way mirror, dual enclosure failure
Storage Spaces – 2012 R2 design considerations
Larger column counts are important: the column count defines how many disks are written across for any given write operation
Read operations use all copies of the data, so there is a significant performance increase
Column count 4, 2-way mirror: read = 8-disk performance; column count 4, 3-way mirror: read = 12-disk performance
Potential latency issues when using column counts of over 4
Column count is shared between SSD and HDD tiers; SSDs can become the limiting factor
Larger pools are more efficient; maximum pool size is 240 disks
Larger pools increase disk-failure planning complexity: do not exceed 80 disks per pool
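To show where the column count and enclosure awareness discussed above are actually expressed, here is a hedged sketch against the same hypothetical pool (sizes and names are examples only):

```powershell
# A 3-way mirror with column count 4, enclosure-aware so the three copies land
# in separate JBODs (requires at least three enclosures in the pool).
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "CriticalData" `
    -ResiliencySettingName Mirror -NumberOfDataCopies 3 `
    -NumberOfColumns 4 -IsEnclosureAware $true `
    -Size 2TB -ProvisioningType Fixed

# Reads are striped across every copy: 4 columns x 3 copies = 12-disk read performance
```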
De-duplication
Supported for VDI and DPM workloads
Tiering is key for de-dup deployments: IO against the de-dupped data (the chunk store) will be massive, and the chunk store cannot be pinned
Use heat logic built into tiering
CPU and RAM considerations: de-dup can now run on hot (open) VHDx files, which consumes resources; it runs per volume, so planning is required so that CPU and RAM are not exhausted
Chart: disk space used (GB) by 100 VDI clients, with and without de-duplication
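A minimal sketch of enabling de-dup on a VDI volume, assuming the Data Deduplication feature is installed (volume letter is illustrative):

```powershell
# Enable de-duplication with the Hyper-V (VDI) usage type, which allows
# optimisation of open VHDx files on 2012 R2.
Import-Module Deduplication

Enable-DedupVolume -Volume "E:" -UsageType HyperV

# Kick off an initial optimisation job and check the savings afterwards
Start-DedupJob -Volume "E:" -Type Optimization
Get-DedupStatus -Volume "E:" | Select-Object Volume, SavedSpace, OptimizedFilesCount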
Design considerations – SoFS nodes
SMB client connection redirection: reduces load and requirements for the CSV network; applies to 2012 R2 / Windows 8.1 and later; the incoming connection is "moved" to the node that owns the storage; careful planning is required if SoFS is to be used with a DFS namespace
Increased RAM and CPU overhead, driven by heat and de-dup overheads: in 2012 a single physical processor and 12GB of RAM was fine; in 2012 R2, dual CPUs and 128GB plus of RAM
Networking: plan for SMB multi-channel
If using RDMA there are no teaming options; as SoFS is a clustered solution, a separate IP is required for each NIC interface
Planning considerations on Hyper-V hosts: LACP is an option, however there are potential challenges
Distribution hash settings on switches
10TB is the maximum recommended volume size.
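Where a host has more interfaces than the dedicated storage NICs, SMB multi-channel can be constrained to the intended interfaces; a hedged sketch run on the Hyper-V host (the server and interface names are examples, not from the session):

```powershell
# Restrict SMB multi-channel traffic to the two dedicated storage interfaces
# when talking to the Scale-Out File Server.
New-SmbMultichannelConstraint -ServerName "SOFS01" -InterfaceAlias "Storage1", "Storage2"

# Verify which interfaces and connections SMB multi-channel is actually using
Get-SmbMultichannelConnection
```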
Storage
Networking
Compute
SOFS and Storage spaces
SMB 3.0 and Software defined networking
Hyper-V clustering
Core Platform AD, DNS, DHCP, WSUS
Services RDS, VDI, DPM
Productivity Applications Exchange, SharePoint, Lync
Network
Why software defined networking?
Simplification of the physical network topology
Utilisation of commodity switching and cabling
Reduction in NIC port and switch / core port requirements
Removal of dedicated hardware
Network “appliance” activities moved to virtual machines or software roles
Network performance optimisation and management performed within software
Network isolation and segmentation performed in software
Software defined networking 101
Decoupled data delivery: VHDx via a standard protocol, SMB 3.0
Physical load-balancing and failover: teaming (switch agnostic)
Load aggregation and balancing: SMB multi-channel
Commodity L2 switching: cost-effective networking (Ethernet), RJ45 / SFP+ / QSFP+
Quality of Service at multiple levels
Hyper-V host workload overhead reduction: RDMA
Easily Scale
Switch-agnostic NIC teaming
Integrated solution for network card resiliency and load balancing
Vendor agnostic and shipped inbox
Enables teams of up to 32 NICs
Aggregates bandwidth from multiple network adapters whilst providing traffic failover in the event of a NIC outage
Includes multiple modes: switch dependent and switch independent
Multiple traffic distribution algorithms: Hyper-V switch port, hashing and dynamic load balancing
NIC Teaming
Physical network adapters
Team network adapter
Team network adapter
Operating system
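A one-line sketch of creating such a team in the inbox LBFO stack (adapter and team names are illustrative):

```powershell
# Switch-independent team of two physical NICs using the Dynamic algorithm
# recommended later in this session.
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1", "NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

Get-NetLbfoTeam -Name "ConvergedTeam"   # confirm members and status
```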
RDMA considerations
Why RDMA: greatly reduces CPU overhead on Hyper-V and Scale-Out File Server hosts, leaving more resources for running VMs; improved data transport speed
Two main Ethernet-based options: RoCE and iWARP
Primary vendor options: Chelsio, Mellanox
Comparison: iWARP is routable; RoCE requires DCB (Data Centre Bridging), so confirm support is available from your switching platform
Which is better? There are pros and cons to both: operational throughput, setup complexity, deployment scenario requirements (is routing between subnets required, etc.)
InfiniBand is an option: if you have an existing investment, IB is an excellent route to take
Now is the time for RDMA:
Hardware is available, software is mature, vendors are on-board
To be, or not to be (converged) – that is the question
Converged networking = big wins: reduces complexity and cost, increases flexibility
Fully converged: single network, no dedicated service networks; use Windows networking capability to define the system
NIC teaming, Hyper-V vSwitch QoS, SMB QoS
Universal vSwitch binding; parent loopback for SMB data to the host; gain complete control through QoS; excellent resource utilisation, managing networking resources between workloads
SMB 3.0, VM traffic, live migration
Semi-converged: dedicated NICs for SMB 3.0, dedicated (teamed) NICs for VM traffic
Critical for RDMA deployments in 2012 R2: RDMA does not work via the vSwitch and has no teaming support
Parent OS
Switch agnostic team
Digging deeper - fully converged
pNIC pNIC pNIC pNIC
vSwitch
VM VM NIC
VM VM NIC
VM VM NIC
VM NIC
VM NIC
VM NIC
VM NIC
QoS
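A hedged sketch of the fully converged host configuration pictured above (switch name, vNIC names and weights are illustrative; only the ratios of the weights matter):

```powershell
# One vSwitch on the team, parent vNICs for host traffic via loopback, and
# relative bandwidth weights so no workload can starve the others.
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# Parent-partition vNICs over the vSwitch
Add-VMNetworkAdapter -ManagementOS -SwitchName "ConvergedSwitch" -Name "Management"
Add-VMNetworkAdapter -ManagementOS -SwitchName "ConvergedSwitch" -Name "SMB1"
Add-VMNetworkAdapter -ManagementOS -SwitchName "ConvergedSwitch" -Name "SMB2"
Add-VMNetworkAdapter -ManagementOS -SwitchName "ConvergedSwitch" -Name "LiveMigration"

# Relative QoS weights
Set-VMNetworkAdapter -ManagementOS -Name "Management"    -MinimumBandwidthWeight 5
Set-VMNetworkAdapter -ManagementOS -Name "SMB1"          -MinimumBandwidthWeight 30
Set-VMNetworkAdapter -ManagementOS -Name "SMB2"          -MinimumBandwidthWeight 30
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 15

# Everything else (VM traffic) shares the default flow weight
Set-VMSwitch -Name "ConvergedSwitch" -DefaultFlowMinimumBandwidthWeight 20
```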
Parent OS
Switch agnostic team
Digging deeper - semi converged
pNIC pNIC pNIC pNIC
vSwitch
VM VM NIC
VM VM NIC
VM VM NIC
QoS
RDMA
Network speed considerations
Gen 1: 1Gbps using multiple connections. Very cheap NICs and switch ports; attractive as teaming / SMB multi-channel in Windows made this viable; cabling nightmares. Not viable for next-generation solutions.
Gen 2: 10Gbps using multiple connections. Cost viable due to NIC and port cost reductions; significant throughput achievable with 4 connections in each server; cabling challenges remain; deployment over very cost-effective RJ45.
Gen 3: 40Gbps. NIC and port costs are still high, but the available speed makes the tradeoff acceptable. Requires QSFP+, with expensive cables and transceivers. Very high performance from only 2 ports; cabling issues mitigated. Avoids the requirement for VM teaming and mitigates many vRSS challenges (vRSS places significant overhead on the host); makes very high-performance VMs simpler to deploy with increased flexibility.
Compute
Storage
Networking
Compute
SOFS and Storage spaces
SMB 3.0 and Software defined networking
Hyper-V
Core Platform AD, DNS, DHCP, WSUS
Services RDS, VDI, DPM
Productivity Applications Exchange, SharePoint, Lync
Why software defined compute?
"Virtualize everything" provides enormous system benefits in terms of flexibility and scale: system portability, high-availability, DR, migration and upgrades
Manage quality of service and system "stress point" situations within software
Easily swap between scale up and scale out without changes in hardware
Achieve segmentation and resource isolation without investment in dedicated hardware
Hyper-V 2012 R2 – the basics
64-node clusters; the 8,000 VM limit prevents clusters from approaching this node count, as modern hardware allows a huge VM count per node
SMB 3.0 support, Dynamic RAM, vGPU support
Cross-version live migration (key for 2012 to 2012 R2 migrations), live migration compression, SMB prioritisation
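A small sketch of the live migration options called out above, with illustrative values; the bandwidth cap assumes the SMB Bandwidth Limit feature (FS-SMBBW) is installed on the host:

```powershell
# Choose the live migration transport behaviour on a 2012 R2 host
Set-VMHost -VirtualMachineMigrationPerformanceOption Compression

# Or, with RDMA-capable storage NICs, prefer the SMB transport instead:
# Set-VMHost -VirtualMachineMigrationPerformanceOption SMB

# Keep live migration from starving VHDx traffic on the shared SMB fabric
Set-SmbBandwidthLimit -Category LiveMigration -BytesPerSecond 750MB
```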
De-coupled data delivery: SMB 3.0 support, introduced in Server 2012
Access VHDx over file, not block, storage: \\servername\sharename; replaces iSCSI or FC solutions
Simplifies Hyper-V solution design: scale-out / scale-up requires less plumbing; supported from multiple vendors: MS Scale-Out File Server, SAN providers
Hyper-V 2012 R2 Generation 2 VMs: UEFI based, secure boot support, WDS support without using the legacy NIC, no IDE support (VHDx only)
Dynamic VHDx resize: enables dynamic increase or decrease in VHDx size without taking the VM offline; key feature for IaaS clients
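A one-line sketch of the online resize (path and size are examples; the VHDx must be attached to the VM's SCSI controller for an online operation):

```powershell
# Grow a data VHDx attached to a running VM's SCSI controller
Resize-VHD -Path "\\SOFS01\VMs\Tenant01-Data.vhdx" -SizeBytes 500GB

# The extra space is then claimed inside the guest (e.g. Resize-Partition)
```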
Dynamic quorum selection: introduced in 2012 R2, very useful for clusters that grow over time
Networking – Hyper-V
vRSS support now available on vNICs: addresses the limitation of a vNIC being bound to 1 CPU core and therefore maxing out; allows for very high performance VMs; vRSS puts significant load on the host CPU; no vRSS to the parent, so it is not viable to drive high network bandwidth into the parent
New teaming algorithm, Dynamic: combines Hyper-V port with address hash; recommended setting
VM-based NIC teaming: driving vNICs at above the wire speed of the physical host NIC is very difficult; avoid the requirement for teaming through the use of higher-speed physical NICs
SR-IOV – choices and trade-offs: key for low latency / high-performance VM workloads; limits VM deployment options as it requires a host with dedicated spare NICs; dedicated NIC requirements increase if the VM requires HA NIC capability
For a service provider, this level of rigidity creates considerable challenges
Quality of Service – Hyper-V
Storage QoS: define storage IOPS limits on a per-VM basis
The vSwitch is your friend: define QoS behaviours on a per-VM basis; if using parent loopback, define QoS to control SMB traffic; QoS applies only to outbound connections
QoS helps you deal with "noisy neighbour" syndrome; there is no solution for CPU or RAM challenges today
SMB contains its own channel prioritisation logic
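A hedged sketch of the per-VM "noisy neighbour" controls above, with illustrative values (note that the normalized IOPS are 8KB units and -MaximumBandwidth is specified in bits per second):

```powershell
# Storage QoS: cap a VM's disks at 500 normalized (8KB) IOPS, alert below 100
Get-VMHardDiskDrive -VMName "Tenant01" |
    Set-VMHardDiskDrive -MaximumIOPS 500 -MinimumIOPS 100

# vSwitch QoS: limit the same VM's outbound network traffic to roughly 1 Gbps
# and give it a modest share of the converged switch under contention
Set-VMNetworkAdapter -VMName "Tenant01" -MaximumBandwidth 1GB -MinimumBandwidthWeight 10
```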
Services
Virtual network isolation: software-based network isolation
Tenant "bring your own subnet": introduced in 2012; solves the requirement to do VLAN tagging
Mitigates the 4,096 VLAN ceiling
Multi-Tenant Site-to-Site gateway : 2012 R2
Physical server Physical network
Green virtual machine
Purple virtual machine
Purple network Green network
VIRTUALIZATION
Network virtualization gateway – bridge between VM networks and physical networks
Multi-tenant VPN gateway built in to Windows Server 2012 R2
Integral multitenant edge gateway for seamless connectivity
Guest clustering for high availability
BGP for dynamic route updates
Multitenant-aware NAT for Internet access
IPSEC VPN – 400Mbps; GRE – 2.4Gbps
Contoso Fabrikam
Resilient HNV Gateway
Resilient HNV Gateway
Internet
Resilient HNV Gateway
Service Provider
Hyper-V Host Hyper-V Host
Remote Desktop Services in Windows Server 2012: full VDI support, increased RDSH performance, high-availability broker, full automation support
Significant improvements in audio and video capabilities; hardware graphics acceleration
Remote desktop user experience: RDP 8.1, UDP support, vGPU
DirectX 11.1, audio / video, touch remoting, USB bus redirection
Touch and audio / video performance improvements; improved "region" detection and codecs
Connection / reconnection performance improvements; significant improvements in user experience when using RemoteApp
Screen / resolution dynamic resize: plug into an external monitor or change resolution and the connection automatically resizes
First-party clients available for multiple platforms: Windows, Windows Store, Mac, iOS, Android
Demo – The stack in action
Philip Moss
End-to-end stack
A user's personal virtual desktop running on clustered Hyper-V
VHDx over SMB 3.0, using a Storage Spaces / SoFS back end
Advanced graphics driven by vGPU
Services: Exchange, SharePoint, Lync, delivered from VMs running on clustered Hyper-V
VHDX over SMB 3.0
100% converged networking, with RDMA
Securely accessed via Remote Desktop Services over the Internet from the UK
Animation, video and 3D driven by RDP 8.1
Storage Spaces
Scale out file server
SMB 3.0
Hyper-V Cluster
HA VM File Server
VM – Windows
Client
Virtual GPU
Exchange
Lync
SharePoint
Remote Desktop Services
Summing Up – Part 1
Summary
How to use the Microsoft stack to build a software defined datacentre
Deploy a fault-tolerant, highly-available platform using commodity hardware
Storage – Scale-Out File Server; Network – SMB 3.0 and software defined optimization; Compute – Hyper-V clustering and Hyper-V features to optimize virtual machine performance
Drive down operational cost
Standardize on a single set of management and automation technologies: PowerShell
Deliver multi-tenant / isolation solutions: Hyper-V network virtualization
Take advantage of new generation Remote Desktop Services: provide immersive and flexible virtual desktop solutions
(Brief) questions
Building on the core
Architecture – software defined datacentre
Storage
Networking
Compute
SOFS and Storage spaces
SMB 3.0 and Software defined networking
Hyper-V clustering
Core Platform AD, DNS, DHCP, WSUS
Services RDS, VDI, DPM
Productivity Applications Exchange, SharePoint, Lync
Highly-available
Business continuity
HA and DR - Keeping the lights on
Hyper-V Clusters
Hyper-V Replica / Azure Site Recovery
Application level clusters • Single site • Multi-site
Native application scale-out HA
Your high-availability arsenal
Compute
VM-based clusters using shared VHDx: introduced in 2012 R2; enables a 100% VHDx-based VM storage solution; removes reliance on synthetic iSCSI or FC for shared storage in the VM
Primary workloads: HA file servers, legacy SQL servers, bespoke line of business applications requiring shared disk
Considerations: no support for Hyper-V Replica; stretch clusters cannot be created
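A hedged sketch of wiring up a shared VHDx for a two-node guest cluster on 2012 R2 (paths and VM names are examples; the shared disk must live on a CSV or SoFS share and attach to the VMs' SCSI controllers):

```powershell
# Create the shared data disk on the Scale-Out File Server share
New-VHD -Path "\\SOFS01\VMs\GuestClusterData.vhdx" -SizeBytes 200GB -Fixed

# Attach the same VHDx to both guest-cluster VMs with sharing enabled
Add-VMHardDiskDrive -VMName "FS-Node1" -ControllerType SCSI `
    -Path "\\SOFS01\VMs\GuestClusterData.vhdx" -SupportPersistentReservations
Add-VMHardDiskDrive -VMName "FS-Node2" -ControllerType SCSI `
    -Path "\\SOFS01\VMs\GuestClusterData.vhdx" -SupportPersistentReservations
```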
Hyper-V Cluster
VM Cluster
Scale Out File Server (Continuously Available)
VM A VM B
VHDx (VM A) VHDx (VM B) Shared VHDx
(cluster shared storage)
Hyper-V Cluster A  Hyper-V Cluster B
VM Cluster
Scale Out File Server (Continuously Available)
VM A VM B
VHDx (VM A) VHDx (VM B) Shared VHDx
(cluster shared storage)
Preventing all eggs in one (host) basket – VM affinity
Affinity controls define which VMs may co-exist on a single host
They prevent two related VMs, or VMs that must not fail at the same time, from being on the same host
Hyper-V cluster – without affinity setup
Hyper-V host A
Hyper-V host B
Hyper-V host C
VM B – VM
Cluster 1
VM B – VM
Cluster 2
VM A – VM
Cluster 1
VM A – VM
Cluster 2
Hyper-V cluster – with affinity setup
Preventing all eggs in one (host) basket – VM affinity
Affinity controls define which VMs may co-exist on a single host
They prevent two related VMs, or VMs that must not fail at the same time, from being on the same host
Hyper-V host A
Hyper-V host B
Hyper-V host C
VM B – VM
Cluster 1
VM B – VM
Cluster 2
VM A – VM
Cluster 1
VM A – VM
Cluster 1
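On a Hyper-V cluster this placement rule is expressed through the AntiAffinityClassNames property of the VM cluster groups; a hedged sketch with illustrative group and class names:

```powershell
# Keep the two nodes of a guest cluster on different Hyper-V hosts by giving
# their cluster groups the same anti-affinity class name.
$class = New-Object System.Collections.Specialized.StringCollection
$class.Add("GuestCluster1") | Out-Null

(Get-ClusterGroup -Name "VM A - Cluster 1").AntiAffinityClassNames = $class
(Get-ClusterGroup -Name "VM B - Cluster 1").AntiAffinityClassNames = $class

# Groups sharing a class name are placed on different hosts where possible
Get-ClusterGroup | Select-Object Name, AntiAffinityClassNames
```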
Cluster-aware updating – your HA secret weapon
Greatly simplifies updating clusters: removes the requirement for manual drain-stop / VM migrations
Drain-stops hosts in turn and migrates workloads to other nodes
Affinity rules are maintained and are invoked during drain-stop; rules can be soft or hard, and if hard rules cannot be complied with, prioritisation is applied
May be used for all cluster workloads: Hyper-V, SoFS
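A short sketch of enabling Cluster-Aware Updating on the Hyper-V cluster (cluster name and schedule are examples only):

```powershell
# Enable the self-updating CAU role with a monthly schedule
Add-CauClusterRole -ClusterName "HVCLUS" -DaysOfWeek Sunday -WeeksOfMonth 3 -Force

# Ad-hoc run from an admin workstation: drain, patch and resume each node in turn
Invoke-CauRun -ClusterName "HVCLUS" -MaxFailedNodes 1 -RequireAllNodesOnline -Force
```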
VM high-availability NICs: no requirement for multiple vNICs in the VM; the vSwitch takes care of vNIC-to-pNIC mapping / failover; multiple vNICs are only required to meet performance goals
Additional consideration should be applied to this configuration
SR-IOV considerations: key for high-performance and low latency applications; direct 1-to-1 mapping of pNIC to vNIC; no inherent failover on the vNIC if the pNIC fails
When using SR-IOV, multiple NICs must be exposed to the VM; set up a VM-based NIC team; with SR-IOV, pNICs on the host will be dedicated to the VM's use
Creates potential load and pNIC utilisation challenges on host
Consider using a non-SRIOV NIC as second vNIC
Provides fault tolerance and partially mitigates pNIC usage issues: the non-SR-IOV vNIC will automatically be moved to a working pNIC by the vSwitch, though performance degradation will occur
Hyper-V Replica (HVR)
Hyper-V Replica Overview
Simple Affordable Flexible
Inbox replication
Application agnostic
Storage agnostic
Hyper-V Replica: Hyper-V host to Hyper-V host VM replication solution; inbox failover and DR solution for VMs; supports point-in-time replication of VMs; planned and unplanned failover support; off-network target support; remote network IP injection (into the vNIC)
2012 R2 improvements: reduction in IO overhead, support for a tertiary replication location, choice of replication interval
IO overhead increases with replication interval frequency
Multi-VM applications are potentially complex to manage
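A hedged sketch of protecting one VM with Hyper-V Replica (host names, port and intervals are examples; the replica host must already accept replication via its replication server settings):

```powershell
# Replicate a VM to a second Hyper-V host every 5 minutes, keeping 4 hourly
# recovery points.
Enable-VMReplication -VMName "LOB-App01" -ReplicaServerName "HV-DR01" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos `
    -ReplicationFrequencySec 300 -RecoveryHistory 4

Start-VMInitialReplication -VMName "LOB-App01"

# A planned failover is later driven from the replica side with Start-VMFailover
```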
Azure Site Recovery (ASR)
Azure Site Recovery Overview
Azure service for managing cross-site protection & recovery
Multi-VM replication and failover solution, including automation and runbooks
Simple, at scale, configuration of VM protection
Reliable cloud based recovery plans
Consistent user experience for remote management
Extensible from the ground up
ASR Deployment Options
On-prem Hyper-V hosts
On-prem Hyper-V hosts
On-prem Hyper-V hosts
On-Prem to On-Prem
SC VMM required at all locations; direct routable access between each site (to allow HVR to replicate); secondary and tertiary replication targets supported; recovery plans managed by yourself; failover managed by yourself
ASR Deployment Options
On-prem Hyper-V hosts
Azure
On-Prem to Azure
ASR plug-in installed on all Hyper-V hosts to allow replication to and from Azure; recovery plans managed by yourself; failover managed by yourself
ASR Deployment Options
On-prem Hyper-V hosts
Service provider
On-Prem to validated service provider
Publishing of Hyper-V hosts required to allow replication; recovery plans managed by the service provider; failover managed by the service provider
ASR deployment option considerations
On-prem to on-prem:
• Great option if you already have more than one location (DC)
• Low cost – primary costs are the MS ASR fee
• Potentially complex creation of recovery plans and failover process
On-prem to Azure:
• Great if you do not have a second location (DC)
• Very simple initial setup and maintenance
• Costs are lower than many other in-market DR solutions
• Potential data sovereignty considerations (if no Azure DCs are in region)
• Potentially complex creation of recovery plans and failover process
On-prem to service provider:
• Great if you do not have a second location (DC)
• Potentially complex setup
• Excellent solution for meeting regulatory or data sovereignty requirements
• Fully managed experience, no recovery plan creation or failover planning required
Understanding ASR
Planning: registration, capacity planning, pre-reqs
Configure: cloud configuration, networks, storage
Protect: identify candidate apps, enable protection, recovery plans
Monitor: jobs, resources
Recovery: drill (DR testing), planned failover, unplanned failover
ASR workflow
Summing Up – Part 2
Summary
Making things available: how to create highly available VMs
Clustered VMs via shared VHDx; using Cluster-Aware Updating to simplify Hyper-V cluster management and maintenance; use Hyper-V host affinity to prevent all eggs in one basket
Staying calm when the lights go out: providing DR and failover solutions with Hyper-V Replica and Azure Site Recovery
(Brief) questions
Windows Server 2016 – Evolution of the software defined DC
Take the power and capability of Windows Server 2012 R2
Drive down costs
Reduce complexity and simplify management
Gain valuable new services
Storage – Windows Server 2016
Evolution of the Scale-Out File Server: Storage Spaces Direct
Direct-attached instead of shared disk; reduced disk costs
Less costly SATA SSDs can be used, reducing the requirement for SAS SSDs
In-place upgrade of a shared-SAS SoFS is supported, but there is no migration path from shared SAS to a Storage Spaces Direct SoFS
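A preview-era sketch of standing up a Storage Spaces Direct SoFS from the local disks of four nodes (node and cluster names are examples; cmdlet and parameter names may differ between Technical Preview builds):

```powershell
# Build the cluster without claiming any shared storage
New-Cluster -Name "S2DCLUS" -Node "Node1", "Node2", "Node3", "Node4" -NoStorage

# Claim the local (SATA/SAS/NVMe) disks in every node into one pooled fabric
Enable-ClusterStorageSpacesDirect -CimSession "S2DCLUS"

# From here, pools, virtual disks, CSVs and the SoFS role are created as with shared SAS
```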
Storage spaces direct
Shared SAS scale out file server
Windows Server nodes
Storage JBODs
Storage spaces direct
Storage Spaces Direct scale-out file server
Windows Server nodes
SMB 3.0
Storage JBODs
Storage spaces direct
Storage Spaces Direct scale out file server
Windows Server and storage inside the nodes
SMB 3.0
Storage Replica - Synchronous data replication between floors, buildings, campuses, cities...
Storage replica
Replication: block-level, volume-based; synchronous & asynchronous; SMB 3.1.1 transport
Flexibility: any Windows data volume, any fixed disk storage, any storage fabric
Management: Failover Cluster Manager, Windows PowerShell, WMI; end-to-end MS storage stack
Storage Replica: volume-to-volume replication solution; block level, not file level (not DFSR); volume agnostic
Supports sync and async replication; latency and bandwidth requirements affect sync capability
The destination volume is always dismounted: not a read-write or read-only destination
One to one: no A-B-C, A-B+A-C or one-to-many; you can still use other replication to add legs (e.g. Hyper-V Replica for A-B, SR for A-C)
SR is a great replication solution for Azure Site Recovery
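A preview-era sketch of creating a synchronous server-to-server partnership (server, replication-group and volume names are examples; Windows Server 2016 Storage Replica cmdlet names may change between builds):

```powershell
# Replicate volume E: on SRV1 to SRV2, with a dedicated log volume on each side
New-SRPartnership -SourceComputerName "SRV1" -SourceRGName "RG-Manhattan" `
    -SourceVolumeName "E:" -SourceLogVolumeName "L:" `
    -DestinationComputerName "SRV2" -DestinationRGName "RG-JerseyCity" `
    -DestinationVolumeName "E:" -DestinationLogVolumeName "L:" `
    -ReplicationMode Synchronous

Get-SRGroup | Select-Object Name, Replicas   # check replication state
```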
Storage Replica – scenarios
NODE1 in HVCLUS
SR over SMB3
NODE3 in HVCLUS
NODE2 in HVCLUS NODE4 in HVCLUS
Manhattan DC
Jersey City DC
Stretch Cluster
Cluster to cluster
NODE1 in FSCLUS NODE2 in DRCLUS
NODE3 in FSCLUS NODE4 in DRCLUS
NODE2 in FSCLUS
NODE4 in FSCLUS
NODE1 in DRCLUS
NODE4 in DRCLUS
SR over SMB3
Manhattan DC
Jersey City DC
SRV1
SR over SMB3
SRV2
Manhattan DC
Jersey City DC
SRV1
SR over SMB3
Two separate servers
Single server – volume to volume
Converged or disaggregated storage?
Should storage and compute be separated or kept together?
Data storage
Hypervisor hosts
Disaggregated   Converged
Data storage and hypervisor in one system
Disaggregated
Allows compute and storage to scale independently; removes the bottleneck of storage tied to a specific hypervisor; drives down operational costs as scale-out increases
Networking – Windows Server 2016
Physical networking – Windows Server 2016
RDMA support in a fully converged deployment
Parent OS
Switch agnostic team
RDMA support - today
pNIC pNIC pNIC pNIC
vSwitch
VM VM NIC
VM VM NIC
VM VM NIC
QoS
RDMA
RDMA requires dedicated NICs
Parent OS
Switch agnostic team
RDMA support in Server 2016 – fully converged
pNIC pNIC pNIC pNIC
vSwitch
VM VM NIC
VM VM NIC
VM VM NIC
vNIC RDMA
vNIC RDMA
vNIC RDMA
vNIC RDMA
QoS
Virtual networking – Windows Server 2016
Virtual networking
Network controller: centralized policy management
Service chaining: support for virtual appliances, firewalls
Scalable software load balancer: replaces NLB; full scale-out and distributed topology
Understanding the network controller
Bare Metal Compute
Windows Server Hyper-V
Virtual Switch
Virtual Networks
Distributed Router
Unified Edge
Software Load Balancing
Service Chaining
Distributed Firewall
Converged Nic with RDMA
Switching
Routing
Firewalling
Load balancing
VPN
Physical Network
Physical Network Devices
Microsoft Network Controller
Policy
Proven with Azure—scale out to many Multiplexer (MUX) instances
High-throughput between MUX and virtual networks
Reduced capex through multi-tenancy
Access to physical network resources from tenant virtual network
Centralized control and management through Network Controller
Easy fabric deployment through SCVMM
Integration with existing tenant portals via Network Controller—REST APIs or PowerShell
Scalable and available
Flexible and integrated
Easy management
Software Load Balancer (SLB): overview
Network Controller
Blue virtual
network
Purple virtual
network
Green virtual network
SLB MUX
SLB MUX
Edge routing infrastructure
Service Chaining
Gateway
VM
192.168.0.2
Problem: Tenant dependencies on 3rd party appliances. How to integrate them into Microsoft's SDN platform?
Solution: Enable tenants to bring any virtualized network function to their virtual networks
• No changes needed in the virtual appliance
• All major OSes supported – Linux, BSD, and Windows
• Policy-based ordering; support of pre-defined groups
• Easy management through SCVMM and Windows Azure
Hyper-V
Hyper-VHyper-V
Service Chaining
Gateway
3rd party Antivirus VM
SourceIP=Any, DestinationIP=192.168.0.0/24, Protocol=Any, SourcePort=Any, DestinationPort=Any
Element1=“3rd Party Antivirus VM”
Virtual Network=“MyNetwork”
+ +
Rule   Service Chain   Group
Network Controller
VM
192.168.0.2
Datacenter Firewall
Tenant0 Dip0 Tenant0 Dip1
Tenant1 Dip0 Tenant1 Dip1 Controller VM
Host
X
Traffic
FW Policy Update
X  Inbound block rule on TCP Port 80
REST API
Problem: East/West traffic security; flexibility and SDN integration
Solution: Multi-tenant Datacenter Firewall service
• Protect your workloads with dynamic firewall policy
• Group your workloads with network security groups
• Hybrid cloud consistency with Azure ACLs
• Easy management through SCVMM and Windows Azure
Compute – Windows Server 2016
Hyper-V: in-place cluster upgrade
Removes the requirement for the traditional drain / evict workflow; mixed-version clusters are supported
Allows VMs to live migrate and fail over between Hyper-V host versions
Loss of data path handling: improved behaviour on VHDx and configuration file loss; Hyper-V does not get confused over loss of the configuration file; makes recovery easier after a storage failure or a short period of "availability wobble"
Intelligent storage QoS: managed as a property of the VM, applied to the SoFS, and migrates with the VM as it moves from host to host
Support for vRSS on the vNIC into the parent: allows for much faster NIC speeds into the parent via the vSwitch
Shielded VM
What is a ‘Shielded VM’?
“A shielded VM is one that is protected from fabric-admins through virtualization based security and various cryptographic technologies.”
…Fixes the "we need to trust our fabric admin or hoster" problem…
A bit more specifically…
What is it and who’s it for?
A few highlights
As a hoster: "I can protect my tenants' VMs and their data from host administrators."
As a tenant: "I can run my workloads in the cloud while meeting regulatory/compliance requirements."
As an enterprise: "I can enforce strong separation of duties between Hyper-V administrators and sensitive VM workloads."
Hardware-rooted technologies that strictly isolate the VM-guest operating system from host administrators
A Host Guardian Service that is able to identify legitimate Hyper-V hosts and certify them to run a given shielded VM
Virtualized Trusted Platform Module (vTPM) support for generation 2 virtual machines
Nano Server
Nano Server: minimum-footprint infrastructure OS and application OS
New Windows Server installation option; 'cloud-first' refactoring
• Essential infrastructure OS requirements
• Essential application OS requirements
Server roles and features enabled:
• Hyper-V, clustering, storage
• Next-gen application platform, including runtime
• Windows Server Containers
• Hyper-V Containers
Nano Server
Server Core
Minimal Server Interface
GUI Shell
Windows Server 2016
Powers modern cloud infrastructure
• Faster time to value – order of magnitude quicker deployment and start-up time
• Enhanced productivity & lower downtime with much lower servicing footprint
• Enhanced protection with significantly lower attack surface
• Breakthrough efficiency with much lower resource consumption
Optimized for next-gen distributed applications
• Higher density and performance for container-based apps and micro-services
• Supports next-gen distributed app development frameworks
• Can interoperate with existing server applications (e.g., app front end running on Nano Server can work with SQL DB running on Server Core)
Nano Server
Understanding containers
A new approach to build, ship, deploy, and instantiate applications
Physical
Virtual
}
}
Apps traditionally tied to physical server
New apps required new servers for resource isolation
Higher consolidation ratios and better server utilization
High app compatibility
Physical/Virtual
Benefits
Enable modern app patterns
Empower dev-ops collaboration
Agility with resource-control
Package and run apps within containers
Why Containers?
Developers
‘Write-once, run-anywhere’ portability
Composable, lightweight micro-services deployed as IaaS | PaaS
Rapid scale-up and scale-down
Operations
Enhances familiar IT deployment models
Flexible levels of isolation
Higher compute density
DevOps
Agility/ productivity for developers
Flexibility and control for IT
Services – Windows Server 2016
Remote Desktop Services
vGPU enhancements – OpenGL and OpenCL support: provides support for a broad range of new graphics and compute applications
Dedicated vRAM allocation: important for certain application compatibility
Tested against leading industry applications: Adobe, Autodesk, Schlumberger
Support for vGPU in Gen 2 VMs
Support for Windows Server as the VDI OS: works correctly with the RD Broker; enhanced "client-like" end user experience; critical for hosters
Support for vGPU in the server OS allows vGPU to be deployed in more scenarios
Policy-based DNS
Support different DNS zone answers depending on incoming connection criteria: time of day, location, load / performance of internal systems
Traffic is directed at your network front-side: prevents load entering your network and removes certain requirements to use load-balancers
Great solution for load-balancing between on-prem and cloud based resources
Move load based on time of day to meet peak and trough requirements
Send connections to the datacentre nearest to them
Balance incoming connections based on internal system load
Redirect connections to another location during failover or system maintenance
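A hedged sketch using the Windows Server 2016 DNS policy cmdlets (subnets, scopes, addresses and the zone are all illustrative): answer queries for www.contoso.com from the datacentre nearest to the client.

```powershell
# Describe where clients come from
Add-DnsServerClientSubnet -Name "UKSubnet" -IPv4Subnet "10.10.0.0/16"
Add-DnsServerClientSubnet -Name "USSubnet" -IPv4Subnet "10.20.0.0/16"

# A zone scope per datacentre, each holding its own answer for www
Add-DnsServerZoneScope -ZoneName "contoso.com" -Name "UKScope"
Add-DnsServerZoneScope -ZoneName "contoso.com" -Name "USScope"

Add-DnsServerResourceRecord -ZoneName "contoso.com" -A -Name "www" `
    -IPv4Address "192.0.2.10" -ZoneScope "UKScope"
Add-DnsServerResourceRecord -ZoneName "contoso.com" -A -Name "www" `
    -IPv4Address "198.51.100.10" -ZoneScope "USScope"

# Direct each subnet to the answer in its nearest datacentre
Add-DnsServerQueryResolutionPolicy -Name "UKPolicy" -Action ALLOW `
    -ClientSubnet "eq,UKSubnet" -ZoneScope "UKScope,1" -ZoneName "contoso.com"
Add-DnsServerQueryResolutionPolicy -Name "USPolicy" -Action ALLOW `
    -ClientSubnet "eq,USSubnet" -ZoneScope "USScope,1" -ZoneName "contoso.com"
```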
Hardware planning
Hardware planning
Storage: look for vendor-approved solutions (applies to shared SAS and Storage Spaces Direct); use 4K disks; SSD – enterprise grade, high IO, read / write balanced
Networking: network speed – 10Gbps minimum; consider 40Gbps for storage; source RDMA NICs; network virtualisation offload on NICs (NVGRE, VXLAN)
Compute: make sure all systems have SLAT support on the CPU; TPM 2.0 support
Demo – VDI in Windows Server 2016
Philip Moss
Where to find me: Microsoft MSE booths – Hyper-V, Storage, CPS
BRK3503 - Best Practices for Deploying Disaster Recovery Services with Microsoft Azure Site Recovery
Questions
Learn more with FREE IT Pro Resources
Free technical training resources: On-demand online training: http://aka.ms/moderninfrastructure
Expand your Modern Infrastructure Knowledge
Free ebooks: Deploying Hyper-V with Software-Defined Storage & Networking: http://aka.ms/deployinghyperv
Microsoft System Center: Integrated Cloud Platform: http://aka.ms/cloud-platform-ebook
Join the IT Pro community: Twitter @MS_ITPro
Get hands-on: Free virtual labs: Microsoft Virtualization with Windows Server and System Center: http://aka.ms/virtualization-lab
Windows Azure Pack: Install and Configure: http://aka.ms/wap-lab
Visit Myignite at http://myignite.microsoft.com or download and use the Ignite Mobile App with the QR code above.
Please evaluate this session. Your feedback is important to us!
© 2015 Microsoft Corporation. All rights reserved.