Windows Server 2016 “Sneak Peek”
THOMAS MAURER
MVP Hyper-V
itnetx gmbh
Blog: www.thomasmaurer.ch
Twitter: @thomasmaurer
MICHAEL RUEEFLI
MVP Cloud & Datacenter Management
itnetx gmbh
Blog: www.miru.ch
Twitter: @drmiru
Session Objectives
Get a “sneak peek” of Windows Server 2016 on:
• Software defined Storage, the next level
• Hyper-V
• Automation
• Nano Server
Software Defined Storage (current)
I can’t implement a Microsoft Storage Solution because…
“Replication on Storage-Level is missing”
“I wanna do Hyper-Converged”
“I don’t trust Microsoft doing Storage”
Software Defined Storage v3.0
• Volume-based Storage Replication (sync / async)
• Storage Spaces direct
• Hyper-Converged
• ReFS
• Distributed Storage QoS
• Deduplication Enhancements
Current SDS with Server 2012 R2 (1 / 2)
[Diagram: Software Defined Storage System — four storage nodes attached to shared JBOD storage, accessed over SMB]
• Scale-Out File Server (\\FileServer\Share)
• Cluster Shared Volumes (C:\ClusterStorage)
• Storage Spaces (Storage Pool → Storage Space virtual disks)
• Hardware: shared JBOD storage
Storage Spaces Direct: logical view
[Diagram: Software Defined Storage System — four storage nodes connected over an SMB3 storage network fabric]
• Scale-Out File Server (\\FileServer\Share)
• Cluster Shared Volumes (C:\ClusterStorage)
• Storage Space virtual disks carved from a Storage Pool
• Software Storage Bus spanning the nodes’ local disks
• Local Storage (inline or JBOD)
• Support for SATA and NVMe SSD
• Fault Tolerance to disk and node failures
• Hyper-V VM Storage
• Backup Storage
• Converged
• Hyper-converged
Scenarios
Hyper-converged: a single Hyper-V cluster holds compute and storage resources together. Compute and storage scale and are managed together. Typically small to medium sized scale-out deployments.
Converged (disaggregated): Hyper-V cluster(s) connect to a Scale-Out File Server cluster over an SMB3 storage network fabric, keeping compute and storage resources separate. Compute and storage scale and are managed independently. Typically larger scale-out deployments.
Storage Spaces Direct - Data Placement
• Virtual Disks
  • Virtual disks consist of extents
  • Extents are 1 GB, so a 100 GB virtual disk has 100 extents
• Scale-Out
  • Extents are distributed across disks and servers
  • Rebalancing occurs when adding more nodes
• Resiliency
  • A 2nd copy of an extent is placed on a different server
  • A 3rd copy of an extent is placed on yet another server
[Diagram: virtual disk extents A, B, C in a 3-way mirror — each extent and its two copies (A/A’/A’’, B/B’/B’’, C/C’/C’’) are placed on different servers among Servers A–E]
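The placement rule above — no server holds two copies of the same extent — can be illustrated with a simple round-robin sketch in PowerShell. This is illustrative only; server names are placeholders and the actual Storage Spaces Direct placement algorithm is more sophisticated:

```powershell
# Illustrative sketch of 3-way mirror extent placement (NOT the actual algorithm).
$servers = 'ServerA','ServerB','ServerC','ServerD','ServerE'
$extents = 'A','B','C'
$copies  = 3   # 3-way mirror

$placement = [ordered]@{}
$i = 0
foreach ($extent in $extents) {
    # Pick the next $copies servers round-robin, so no server repeats per extent
    $placement[$extent] = 0..($copies - 1) | ForEach-Object {
        $servers[($i + $_) % $servers.Count]
    }
    $i++
}
$placement.GetEnumerator() | ForEach-Object {
    "Extent $($_.Key): $($_.Value -join ', ')"
}
```

Each extent lands on three distinct servers, so the loss of any single disk or node leaves two intact copies.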
Storage Spaces Direct: Requirements (TP2)
• 4 to 12 storage nodes
• 10 Gbps RDMA (RoCE)
• Min. 1 SSD per node
• Supported hardware model (VMs for lab use)
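On hardware meeting these requirements, a deployment can be sketched in PowerShell. Node, cluster, pool, and volume names below are placeholders:

```powershell
# Sketch only - names are placeholders; run from one of the storage nodes.
# Validate and build the cluster from the four storage nodes
Test-Cluster -Node Node01,Node02,Node03,Node04
New-Cluster -Name S2DCLUS -Node Node01,Node02,Node03,Node04 -NoStorage

# Enable Storage Spaces Direct, claiming the nodes' local SATA/NVMe disks
Enable-ClusterStorageSpacesDirect

# Create a pool from the poolable local disks and a mirrored CSV volume on top
New-StoragePool -FriendlyName S2DPool -StorageSubSystemFriendlyName '*Cluster*' `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
New-Volume -StoragePoolFriendlyName S2DPool -FriendlyName VMVol01 `
    -FileSystem CSVFS_ReFS -Size 1TB
```

The resulting CSV volume can then back Hyper-V VMs (hyper-converged) or be shared out via a Scale-Out File Server (converged).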
Storage Spaces Direct Development Partners
• Quanta D51PH
• HP Apollo 2000 System
• Dell PowerEdge R730xd
• Cisco UCS C3160 Rack Server
• Intel® Server Board S2600WT-based systems
• Lenovo System x3650 M5
• Fujitsu Primergy RX2540 M1
Storage Replica
BCDR
• Synchronous or asynchronous
• Cluster <-> Cluster
• Server <-> Server
• Microsoft Azure Site Recovery orchestration
Stretch Cluster
• Synchronous stretch clusters across sites for HA
Benefits
• Block-level, host-based, volume replication
• End-to-end software stack from Microsoft
• Works with any Windows volume
• Hardware agnostic; existing SANs work
• Uses SMB3 as transport
[Diagram: stretch cluster HVCLUS — NODE1 and NODE2 in the Manhattan DC, NODE3 and NODE4 in the Jersey City DC, replicating with SR over SMB3]
Stretch Cluster
• Single cluster
• Automatic failover
• Synchronous
Cluster to Cluster
• Two separate clusters
• Manual failover
• Synchronous or asynchronous
[Diagram: FSCLUS nodes in the Manhattan DC replicating with SR over SMB3 to DRCLUS nodes in the Jersey City DC]
Server to Server
• Two separate servers
• Manual failover
• Synchronous or asynchronous
[Diagram: SRV1 in the Manhattan DC replicating with SR over SMB3 to SRV2 in the Jersey City DC]
Storage Replica: Requirements (TP2)
• Any volume (SAS, SAN, iSCSI, local)
• <5 ms round trip between sites for synchronous mirroring (at 1472-byte non-fragmented packet size)
• RDMA NICs
• Identical size for source / target volumes
• SSDs recommended for log disks
• Identical physical disk geometry
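A server-to-server partnership meeting these requirements can be sketched with the Storage Replica cmdlets. Server, volume, and replication-group names are placeholders; D: holds the data and E: the SSD-backed log on both sides:

```powershell
# Sketch only - names and drive letters are placeholders.
Install-WindowsFeature -Name Storage-Replica -ComputerName SRV1 `
    -IncludeManagementTools -Restart

# Create a synchronous server-to-server replication partnership
New-SRPartnership -SourceComputerName SRV1 -SourceRGName RG01 `
    -SourceVolumeName D: -SourceLogVolumeName E: `
    -DestinationComputerName SRV2 -DestinationRGName RG02 `
    -DestinationVolumeName D: -DestinationLogVolumeName E: `
    -ReplicationMode Synchronous
```

With synchronous mode, writes are acknowledged only after they land in the log on both servers, which is why the <5 ms round-trip requirement matters.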
ReFS (Resilient File System)
• New default FS for VM workloads
• Metadata checksums
• User data checksums
• Checksum verification
• Autohealing of detected corruption
• Lightning-fast VM checkpoint creation / merging
• Ultrafast fixed VHDX creation
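The fixed-VHDX speedup is easy to try. A minimal sketch, assuming drive D: is free to format and the Hyper-V module is installed (paths and labels are placeholders):

```powershell
# Sketch only - drive letter, label, and path are placeholders.
Format-Volume -DriveLetter D -FileSystem ReFS -NewFileSystemLabel 'VMStore'
Get-Volume -DriveLetter D | Select-Object DriveLetter, FileSystem, Size

# On ReFS, creating a fixed VHDX completes almost instantly because the
# file system records metadata instead of zeroing every block up front
Measure-Command { New-VHD -Path 'D:\VMs\disk01.vhdx' -SizeBytes 100GB -Fixed }
```

The same `New-VHD` call on an NTFS volume writes out the full 100 GB, which is where the "ultrafast" claim comes from.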
Hyper-V
• Huge number of new features
  • Cluster Rolling Upgrades
  • Online resize of memory
  • Hot add / remove network adapters
  • Software Defined Networking / Network Controller
  • Improved Shared VHDX, Hyper-V Manager, backup…
• Security
• PowerShell Direct
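Two of these — online memory resize and NIC hot add/remove — can be sketched against a running VM. The VM and switch names are placeholders:

```powershell
# Sketch only - VM and switch names are placeholders; the VM must be running.
# Resize memory while the VM is online (no shutdown required)
Set-VMMemory -VMName 'VM01' -StartupBytes 4GB

# Hot add a network adapter to the running VM, then hot remove it again
Add-VMNetworkAdapter -VMName 'VM01' -SwitchName 'Virtual Switch' -Name 'App-NIC'
Get-VMNetworkAdapter -VMName 'VM01' -Name 'App-NIC' | Remove-VMNetworkAdapter
```

Both operations previously required the VM to be powered off (for Generation 2 VMs, hot add/remove applies to synthetic adapters).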
Challenges in protecting high-value assets
• Administrators of any seized or infected host can access its guest virtual machines
• Impossible to identify legitimate hosts without hardware-based verification
• Tenant VMs are exposed to storage and network attacks while unencrypted
[Diagram: fabric storage and hypervisor hosts, with the question “Legitimate host?” posed between the fabric and a customer guest VM]
Shielded VMs
[Diagram: the Host Guardian Service attests hosts — shielded virtual machines with encrypted virtual hard disks run on a host with a TPM, but not on a generic host without one]
Spotlight capabilities
• Shielded Virtual Machines can only run in fabrics that are designated as owners of that virtual machine
• Shielded Virtual Machines must be encrypted (by BitLocker or other means) to ensure that only the designated owners can run them
• You can convert a running Generation 2 virtual machine into a Shielded Virtual Machine
PowerShell Direct to the Guest OS
• You can now run PowerShell in the guest OS directly from the host OS
• No need to configure PowerShell Remoting, or even to have network connectivity
• You still need guest credentials:
• Enter-PSSession -VMName "VM01" -Credential $Cred
• Invoke-Command -VMName "VM01" -Credential $Cred -ScriptBlock { Get-Service }
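Put together, a typical session looks like this. The VM name is a placeholder; run on the Hyper-V host as an administrator with credentials valid inside the guest:

```powershell
# Sketch only - VM name is a placeholder; no network or WinRM setup needed.
$cred = Get-Credential   # an account that is valid inside the guest OS

# Run a one-off command inside the guest over the VMBus
Invoke-Command -VMName 'VM01' -Credential $cred -ScriptBlock {
    Get-Service | Where-Object Status -eq 'Running' |
        Select-Object -First 5 Name, Status
}

# Or open an interactive session into the guest
Enter-PSSession -VMName 'VM01' -Credential $cred
```

Because the connection goes over the VMBus rather than the network, this works even when the guest has no NIC configured at all.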
Network Adapter Identification
• You can name individual network adapters in the virtual machine settings – and see the same name inside the guest operating system.
• PowerShell in the host:
Add-VMNetworkAdapter -VMName "TestVM" -SwitchName "Virtual Switch" -Name "Fred" -Passthru | Set-VMNetworkAdapter -DeviceNaming On
• PowerShell in the guest:
Get-NetAdapterAdvancedProperty | ?{$_.DisplayName -eq "Hyper-V Network Adapter Name"} | select Name, DisplayValue
RemoteFX
• Support for the OpenGL 4.4 and OpenCL 1.1 APIs
• Larger and configurable dedicated VRAM
Automation
[Diagram: the automation stack]
• Partners: OEM management products, ISV automation products
• Windows Server: PowerShell, PowerShell Workflow, Just Enough Admin, Desired State Configuration
• Microsoft System Center: Service Management Automation, Orchestrator
• Azure: Azure Automation, Azure DSC
Automation vNext
• Hybrid Worker
• Graphical authoring
• Linux support (DSC / SSH)
• Azure DSC Pull Server
• Available on-premises as part of Azure Stack
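Desired State Configuration, the foundation of several of these features, is declarative PowerShell. A minimal sketch — node name, feature, and paths are placeholders:

```powershell
# Sketch only - node name and paths are placeholders.
# A minimal DSC configuration: IIS installed and a log folder present
Configuration WebServerBaseline {
    Node 'SRV01' {
        WindowsFeature IIS {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }
        File LogFolder {
            DestinationPath = 'C:\Logs'
            Type            = 'Directory'
            Ensure          = 'Present'
        }
    }
}

# Compile the configuration to a MOF and push it to the node
WebServerBaseline -OutputPath 'C:\DSC\WebServerBaseline'
Start-DscConfiguration -Path 'C:\DSC\WebServerBaseline' -Wait -Verbose
```

The same MOF could instead be published to an Azure DSC Pull Server, which is the model the vNext features above build on.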
Voice of the Customer (-> us!)
• Reboots impact my business
• Server images are too big
• Infrastructure requires too many resources
IT Pro Server Journey
• Windows NT to Windows Server 2003: Windows / Windows NT, server roles and features
• Windows Server 2008 and Windows Server 2008 R2: Server Core, Full Server
• Windows Server 2012 and Windows Server 2012 R2: Server Core, Minimal Server Interface, GUI Shell
Microsoft Cloud Journey
• Azure
  • Patches and reboots interrupt service delivery
  • (*VERY large # of servers) * (large OS resource consumption) => COGS
  • Provisioning large host images competes for network resources
• Cloud Platform System (CPS)
  • Cloud-in-a-box running on 1-4 racks using System Center & Windows Server
  • Setup time needs to be shortened
  • Patches and reboots result in service disruption
    • A fully loaded CPS would live-migrate > 16 TB for every host OS patch
    • Network capacity could otherwise have gone to business uses
    • Reboots: compute host ~2 minutes / storage host ~5 minutes
Nano Server - Next Step in the Cloud Journey
• A new headless, 64-bit only deployment option for Windows Server
• Deep refactoring focused on:
  • CloudOS infrastructure
  • Born-in-the-cloud applications
• Follows the Server Core pattern
[Diagram: Nano Server alongside Server Core, Server with Local Admin Tools, and the Basic Client Experience]
Nano Server - Roles & Features
• Zero-footprint model
  • Server roles and optional features live outside of Nano Server
  • Standalone packages that install like applications
• Key roles & features
  • Hyper-V, Storage (SoFS), and Clustering
  • Core CLR, ASP.NET 5 & PaaS
• Full Windows Server driver support
• Antimalware built in
• System Center and Apps Insight agents to follow
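Because roles ship as standalone packages, they are added to a Nano Server image offline. A sketch using the DISM cmdlets — image and package paths are placeholders, and the package name follows the NanoServer\Packages layout on the Technical Preview media:

```powershell
# Sketch only - image and package paths are placeholders.
# Mount the Nano Server VHD and add the Hyper-V ("Compute") package offline
Mount-WindowsImage -ImagePath 'C:\Nano\NanoServer.vhd' -Index 1 -Path 'C:\Mount'
Add-WindowsPackage -Path 'C:\Mount' `
    -PackagePath 'D:\NanoServer\Packages\Microsoft-NanoServer-Compute-Package.cab'
Dismount-WindowsImage -Path 'C:\Mount' -Save
```

The same pattern applies to the Storage, Clustering, and driver packages: only what you add ends up in the image, which is the zero-footprint model in practice.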
Remote Server Management Tools
• Web-based
• Includes replacements for local-only tools:
  • Task Manager
  • Registry Editor
  • Event Viewer
  • Device Manager
  • Sconfig
  • Control Panel
  • File Explorer
  • Performance Monitor
  • Disk Management
  • Users/Groups Manager
• Also manages Server Core and Server with GUI
Nano Server - Cloud Application Platform
• Born-in-the-cloud application support
  • Subset of Win32
  • CoreCLR, PaaS, and ASP.NET 5
• Available everywhere
  • Host OS for physical hardware
  • Guest OS in a VM
  • Windows Server containers
  • Hyper-V containers
• Future additions
  • PowerShell Desired State Configuration (DSC) & PackageManagement
  • Additional roles and application frameworks
Servicing Improvements*
[Bar charts comparing Nano Server / Server Core / Full Server]
• Critical bulletins: Nano Server 2, Server Core 8, Full Server 23
• Important bulletins: Nano Server 9, Server Core 23, Full Server 26
• Number of reboots: Nano Server 3, Server Core 6, Full Server 11
* Analysis based on all patches released in 2014
Security Improvements
[Bar charts comparing Nano Server / Server Core]
• Ports open: Nano Server 12, Server Core 31
• Services running: Nano Server 22, Server Core 46
• Drivers loaded: Nano Server 73, Server Core 98
• Also charted: Boot IO (MB)
Resource Utilization Improvements
[Bar charts comparing Nano Server / Server Core: Process Count, Kernel memory in use (MB), Setup Time (sec), Disk Footprint (GB)]
Deployment Improvements
[Bar chart comparing Nano Server / Server Core: VHD Size (GB) — Nano Server 0.41, Server Core 6.3]
Nano Server in Windows Server 2016
• An installation option, like Server Core
• Not listed in Setup because the image must be customized with drivers
  • Separate folder on the Windows Server media
• Available in the Windows Server Technical Preview 2, released this week
And more...
• PowerShell 5.0
  • DSC
  • PackageManagement (Find-Package *Office 2016* | Install-Package)
• IIS 10 (HTTP/2)
• Soft Restart (no initialization of hardware components)
• Windows Defender (Windows Server Antimalware is installed and enabled by default)
• Telnet Server (removed)
• ADFS
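The PackageManagement workflow mentioned above can be sketched end to end. The package name is a placeholder, and what `Find-Package` returns depends on which providers and sources are registered on the machine:

```powershell
# Sketch only - package name is a placeholder; results depend on the
# PackageManagement providers and sources registered locally.
Get-PackageProvider                      # which providers are available
Find-Package -Name 'SomePackage' |       # search the registered sources
    Install-Package -Force               # install the best match

Get-Package |                            # inventory installed packages
    Select-Object -First 5 Name, Version, ProviderName
```

One cmdlet surface works across providers (MSI, programs, gallery-style repositories), which is the point of PackageManagement.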
THANK YOU!
THOMAS MAURER
MVP Hyper-V
itnetx gmbh
Blog: www.thomasmaurer.ch
Twitter: @thomasmaurer
MICHAEL RUEEFLI
MVP Cloud & Datacenter Management
itnetx gmbh
Blog: www.miru.ch
Twitter: @drmiru